Dataset fields:
- paper: string (24 to 111 characters)
- paper_id: string (10 characters)
- table_caption: string (3 to 663 characters)
- table_column_names: sequence (2 to 14 items)
- table_content_values: sequence (1 to 49 items)
- text: string (116 to 2.01k characters)
- full_body_text: string (19.3k to 104k characters)
Neural Belief Tracker: Data-Driven Dialogue State Tracking
1606.03777
Table 2: DSTC2 and WOZ 2.0 test set performance (joint goals and requests) of the NBT-CNN model making use of three different word vector collections. The asterisk indicates statistically significant improvement over the baseline xavier (random) word vectors (paired t-test; p<0.05).
[ "[BOLD] Word Vectors", "[BOLD] DSTC2 [BOLD] Goals", "[BOLD] DSTC2 [BOLD] Requests", "[BOLD] WOZ 2.0 [BOLD] Goals", "[BOLD] WOZ 2.0 [BOLD] Requests" ]
[ [ "xavier [BOLD] (No Info.)", "64.2", "81.2", "81.2", "90.7" ], [ "[BOLD] GloVe", "69.0*", "96.4*", "80.1", "91.4" ], [ "[BOLD] Paragram-SL999", "[BOLD] 73.4*", "[BOLD] 96.5*", "[BOLD] 84.2*", "[BOLD] 91.6" ] ]
The NBT models use the semantic relations embedded in the pre-trained word vectors to handle semantic variation and produce high-quality intermediate representations. The table compares three word vector collections: 1) `random' word vectors initialised using the xavier initialisation; 2) distributional GloVe vectors, trained using co-occurrence information in large textual corpora; and 3) semantically specialised Paragram-SL999 vectors (Wieting et al.), obtained by injecting semantic similarity constraints from the Paraphrase Database into the distributional GloVe vectors. Paragram-SL999 vectors (significantly) outperformed GloVe and xavier vectors for goal tracking on both datasets. The gains are particularly robust for noisy DSTC2 data, where both collections of pre-trained vectors consistently outperformed random initialisation. The gains are weaker for the noise-free WOZ 2.0 dataset, which seems to be large (and clean) enough for the NBT model to learn task-specific rephrasings and compensate for the lack of semantic content in the word vectors. For this dataset, GloVe vectors do not improve over the randomly initialised ones. We believe this happens because distributional models keep related, yet antonymous words close together (e.g. north and south, expensive and inexpensive), offsetting the useful semantic content embedded in these vector spaces. The NBT-DNN model showed the same trends.
\documentclass[11pt,a4paper]{article} \usepackage[hyperref]{acl2017} \aclfinalcopy % Uncomment this line for the final submission \setcounter{dbltopnumber}{8} \setcounter{topnumber}{2} \setcounter{bottomnumber}{2} \setcounter{totalnumber}{4} \renewcommand{\topfraction}{0.85} \renewcommand{\bottomfraction}{0.85} \renewcommand{\textfraction}{0.15} \renewcommand{\floatpagefraction}{0.7} \DeclareMathOperator*{\argmax}{arg\,max} \newcommand\BibTeX{B{\sc ib}\TeX} \title{Neural Belief Tracker: Data-Driven Dialogue State Tracking} \author{Nikola Mrk\v{s}i\'c$^{\mathbf{1}}$, ~ Diarmuid {\'O S\'eaghdha}$^{\mathbf{2}}$ \\ \textbf{Tsung-Hsien Wen$^{\mathbf{1}}$, ~ {Blaise Thomson$^{\mathbf{2}}$, ~ Steve Young$^{\mathbf{1}}$}} \\ $^{\mathbf{1}}$ University of Cambridge \\ $^{\mathbf{2}}$ Apple Inc. \\ { \tt \{nm480, thw28, sjy\}@cam.ac.uk} \\ { \tt\{doseaghdha, blaisethom\}@apple.com}} \date{} \begin{document} \maketitle \begin{abstract} One of the core components of modern spoken dialogue systems is the \textit{belief tracker}, which estimates the user's goal at every step of the dialogue. However, most current approaches have difficulty scaling to larger, more complex dialogue domains. This is due to their dependency on either: \textbf{a)} Spoken Language Understanding models that require large amounts of annotated training data; or \textbf{b)} hand-crafted lexicons for capturing some of the linguistic variation in users' language. We propose a novel Neural Belief Tracking (NBT) framework which overcomes these problems by building on recent advances in representation learning. NBT models reason over pre-trained word vectors, learning to compose them into distributed representations of user utterances and dialogue context. Our evaluation on two datasets shows that this approach surpasses past limitations, matching the performance of state-of-the-art models which rely on hand-crafted semantic lexicons and outperforming them when such lexicons are not provided. \end{abstract} \section{Introduction} Spoken dialogue systems (SDS) allow users to interact with computer applications through conversation. Task-based systems help users achieve goals such as finding restaurants or booking flights. The \emph{dialogue state tracking} (DST) component of an SDS serves to interpret user input and update the \emph{belief state}, which is the system's internal representation of the state of the conversation \cite{young:10c}. This is a probability distribution over dialogue states used by the downstream \emph{dialogue manager} to decide which action the system should perform next \cite{su:2016:nnpolicy,Su:16}; the system action is then verbalised by the {natural language generator} \cite{wen:15a,wen:15b,Dusek:15}. The Dialogue State Tracking Challenge (DSTC) series of shared tasks has provided a common evaluation framework accompanied by labelled datasets \cite{Williams:16}. In this framework, the dialogue system is supported by a \emph{domain ontology} which describes the range of user intents the system can process. The ontology defines a collection of \emph{slots} and the \emph{values} that each slot can take. The system must track the search constraints expressed by users (\emph{goals} or \emph{informable} slots) and questions the users ask about search results (\emph{requests}), taking into account each user utterance (input via a speech recogniser) and the dialogue context (e.g., what the system just said). 
The example in Figure \ref{fig:example-dialogue} shows the true state after each user utterance in a three-turn conversation. As can be seen in this example, DST models depend on identifying mentions of ontology items in user utterances. This becomes a non-trivial task when confronted with lexical variation, the dynamics of context and noisy automated speech recognition (ASR) output. Traditional statistical approaches use separate Spoken Language Understanding (SLU) modules to address lexical variability within a single dialogue turn. However, training such models requires substantial amounts of {domain-specific} annotation. Alternatively, turn-level SLU and cross-turn DST can be coalesced into a single model to achieve superior belief tracking performance, as shown by \newcite{Henderson:14b}. Such coupled models typically rely on {manually constructed} semantic dictionaries to identify alternative mentions of ontology items that vary lexically or morphologically. Figure \ref{fig:sem_dict} gives an example of such a dictionary for three slot-value pairs. This approach, which we term \emph{delexicalisation}, is clearly not scalable to larger, more complex dialogue domains. Importantly, the focus on English in DST research understates the considerable challenges that morphology poses to systems based on exact matching in morphologically richer languages such as Italian or German (see \newcite{Vulic:2017}). In this paper, we present two new models, collectively called the {Neural Belief Tracker} (NBT) family. The proposed models couple SLU and DST, efficiently learning to handle variation without requiring {any} hand-crafted resources. To do that, NBT models move away from exact matching and instead reason entirely over pre-trained word vectors. The vectors making up the user utterance and preceding system output are first composed into intermediate representations. These representations are then used to decide which of the ontology-defined intents have been expressed by the user up to that point in the conversation. To the best of our knowledge, NBT models are the first to successfully use pre-trained word vector spaces to improve the language understanding capability of belief tracking models. In evaluation on two datasets, we show that: \textbf{a)} NBT models match the performance of delexicalisation-based models which make use of hand-crafted semantic lexicons; and \textbf{b)} the NBT models significantly outperform those models when such resources are not available. Consequently, we believe this work proposes a framework better-suited to scaling belief tracking models for deployment in real-world dialogue systems operating over sophisticated application domains where the creation of such domain-specific lexicons would be infeasible. % comparable task-specific requirements \section{Background} Models for probabilistic dialogue state tracking, or \emph{belief tracking}, were introduced as components of spoken dialogue systems in order to better handle noisy speech recognition and other sources of uncertainty in understanding a user's goals \cite{Bohus:06,Williams:07,young:10c}. Modern dialogue management policies can learn to use a tracker's distribution over intents to decide whether to execute an action or request clarification from the user. As mentioned above, the DSTC shared tasks have spurred research on this problem and established a standard evaluation paradigm \cite{Williams:13a,Henderson:14c,Henderson:14a}. 
In this setting, the task is defined by an \emph{ontology} that enumerates the goals a user can specify and the attributes of entities that the user can request information about. \iffalse\footnote{Alternative \emph{chat-bot} style systems do not make use of task ontologies or the pipeline model. Instead, these models learn to generate/choose system responses based on previous dialogue turns \cite{vinyals:15,Lowe:15,Serban:16,Serban:16b,Anjuli:17}. This means these models cannot interact with databases or react to user queries different from those encountered in their training data.} \fi Many different belief tracking models have been proposed in the literature, from generative \cite{Thomson:10} and discriminative \cite{Henderson:14b} statistical models to rule-based systems \cite{Wang:13}. To motivate the work presented here, we categorise prior research according to their reliance (or otherwise) on a separate SLU module for interpreting user utterances:\footnote{The best-performing models in DSTC2 all used both raw ASR output and the output of (potentially more than one) SLU decoders \cite{Williams:14,Williams:16}. This does not mean that those models are immune to the drawbacks identified here for the two model categories; in fact, they share the drawbacks of both.} \paragraph{Separate SLU} Traditional SDS pipelines use Spoken Language Understanding (SLU) decoders to detect slot-value pairs expressed in the Automatic Speech Recognition (ASR) output. The downstream DST model then combines this information with the past dialogue context to update the belief state \cite{Thomson:10,Wang:13,Lee:16,Perez:16,Perez:16b,Sun:16,Jang:16,Shi:2016,Dernoncourt:16a,Liu:2017,Vodolan:2017}. In the DSTC challenges, some systems used the output of template-based matching systems such as Phoenix \cite{Wang:94}. However, more robust and accurate statistical SLU systems are available. Many discriminative approaches to spoken dialogue SLU train independent binary models that decide whether each slot-value pair was expressed in the user utterance. Given enough data, these models can learn which lexical features are good indicators for a given value and can capture elements of paraphrasing \cite{Mairesse:09}. This line of work later shifted focus to robust handling of rich ASR output \cite{Henderson:12,Tur:13}. SLU has also been treated as a sequence labelling problem, where each word in an utterance is labelled according to its role in the user's intent; standard labelling models such as CRFs or Recurrent Neural Networks can then be used \cite[i.a.]{Raymond:07,Yao:14,Celikyilmaz:2015,Mesnil:15,Peng:15,Zhang:16,Liu:16a,Vu:2016,Liu:16b}. Other approaches adopt a more complex modelling structure inspired by semantic parsing \cite{Saleh:14,Vlachos:14}. One drawback shared by these methods is their resource requirements, either because they need to learn independent parameters for each slot and value or because they need fine-grained manual annotation at the word level. This hinders scaling to larger, more realistic application domains. \paragraph{Joint SLU/DST} Research on belief tracking has found it advantageous to reason about SLU and DST jointly, taking ASR predictions as input and generating belief states as output \cite{Henderson:14b,Sun:14,Zilka:15,Mrksic:15}. In DSTC2, systems which used no external SLU module outperformed all systems that only used external SLU features. 
Joint models typically rely on a strategy known as \emph{delexicalisation} whereby slots and values mentioned in the text are replaced with generic labels. Once the dataset is transformed in this manner, one can extract a collection of template-like $n$-gram features such as \textbf{[want \emph{tagged-value} food]}. To perform belief tracking, the shared model iterates over all slot-value pairs, extracting delexicalised feature vectors and making a separate binary decision regarding each pair. Delexicalisation introduces a hidden dependency that is rarely discussed: how do we identify slot/value mentions in text? For toy domains, one can manually construct \emph{semantic dictionaries} which list the potential rephrasings for all slot values. As shown by Mrk{\v{s}}i\'c et al.~\shortcite{Mrksic:16}, the use of such dictionaries is essential for the performance of current delexicalisation-based models. Again though, this will not scale to the rich variety of user language or to general domains. The primary motivation for the work presented in this paper is to overcome the limitations that affect previous belief tracking models. The NBT model efficiently learns from the available data by: \textbf{1)} leveraging semantic information from pre-trained word vectors to resolve lexical/morphological ambiguity; \textbf{2)} maximising the number of parameters shared across ontology values; and \textbf{3)} having the flexibility to learn domain-specific paraphrasings and other kinds of variation that make it infeasible to rely on exact matching and delexicalisation as a robust strategy. \section{Neural Belief Tracker} The Neural Belief Tracker (NBT) is a model designed to detect the slot-value pairs that make up the user's goal at a given turn during the flow of dialogue. Its input consists of the system dialogue acts preceding the user input, the user utterance itself, and a single candidate slot-value pair that it needs to make a decision about. For instance, the model might have to decide whether the goal \textsc{food=Italian} has been expressed in \emph{`I'm looking for good pizza'}. To perform belief tracking, the NBT model \emph{iterates} over {all} candidate slot-value pairs (defined by the ontology), and decides which ones have just been expressed by the user. Figure \ref{fig:sys_diagram} presents the flow of information in the model. The first layer in the NBT hierarchy performs representation learning given the three model inputs, producing vector representations for the user utterance ($\mathbf{r}$), the {current} candidate slot-value pair ($\mathbf{c}$) and the system dialogue acts ($\mathbf{t_{q}, t_{s}, t_{v}}$). Subsequently, the learned vector representations interact through the \emph{context modelling} and \emph{semantic decoding} submodules to obtain the intermediate \emph{interaction summary} vectors $\mathbf{d_{r}, d_{c}}$ and $\mathbf{d}$. These are used as input to the final \emph{decision-making} module which decides whether the user expressed the intent represented by the candidate slot-value pair. \subsection{Representation Learning} For any given user utterance, system act(s) and candidate slot-value pair, the representation learning submodules produce vector representations which act as input for the downstream components of the model. All representation learning subcomponents make use of pre-trained collections of word vectors. 
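To keep the overall control flow in view before the individual components are described, the following is a minimal Python sketch of the per-turn iteration over ontology-defined candidate slot-value pairs outlined above; the `nbt_score` function standing in for the full model, the threshold and the toy ontology are illustrative assumptions, not part of the released implementation.

```python
# Hedged sketch of the per-turn candidate iteration. `nbt_score` is a
# hypothetical stand-in for the full NBT model and is assumed to return
# the probability that (slot, value) was expressed in the current turn.

def track_turn(utterance, system_acts, ontology, nbt_score, threshold=0.5):
    """Return the slot-value pairs judged to have been expressed this turn."""
    detected = {}
    for slot, values in ontology.items():
        for value in values:
            # One independent binary decision per candidate slot-value pair.
            p = nbt_score(utterance, system_acts, slot, value)
            if p >= threshold:
                detected.setdefault(slot, []).append((value, p))
    return detected

# Toy ontology in the style of the restaurant domain used in this paper.
ontology = {"food": ["italian", "thai", "turkish"],
            "area": ["north", "south", "centre"],
            "price": ["cheap", "moderate", "expensive"]}
```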
As shown by Mrk{\v{s}}i\'c et al.~\shortcite{Mrksic:16}, specialising word vectors to express \emph{semantic similarity} rather than \emph{relatedness} is essential for improving belief tracking performance. For this reason, we use the semantically-specialised Paragram-SL999 word vectors \cite{Wieting:15} throughout this work. The NBT training procedure keeps these vectors fixed: that way, at test time, unseen words semantically related to familiar slot values (i.e.~\emph{inexpensive} to \emph{cheap}) will be recognised purely by their position in the original vector space (see also Rockt\"aschel et al.~\shortcite{rocktaschel:2016}). This means that the NBT model parameters can be shared across all values of the given slot, or even across all slots. Let $u$ represent a user utterance consisting of $k_u$ words $u_1, u_2, \ldots, u_{k_u}$. Each word has an associated word vector $\mathbf{u}_1, \ldots, \mathbf{u}_{k_u}$. We propose two model variants which differ in the method used to produce vector representations of $u$: \textsc{NBT-DNN} and \textsc{NBT-CNN}. Both act over the constituent $n$-grams of the utterance. Let $\mathbf{v}_{i}^{n}$ be the concatenation of the $n$ word vectors starting at index $i$, so that: \begin{equation} \mathbf{v}_{i}^{n} = \mathbf{u}_{i} \oplus \ldots \oplus \mathbf{u}_{i+n-1} \end{equation} \noindent where $\oplus$ denotes vector concatenation. The simpler of our two models, which we term \textsc{NBT-DNN}, is shown in Figure \ref{fig:nbt_dnn}. This model computes cumulative $n$-gram representation vectors $\mathbf{r}_{1}$, $\mathbf{r}_{2}$ and $\mathbf{r}_{3}$, which are the $n$-gram `summaries' of the unigrams, bigrams and trigrams in the user utterance:% For $n = 1,2,3$, these are expressed as: \begin{equation} \mathbf{r}_{n} = \sum_{i=1}^{k_u-n+1}{\mathbf{v}_{i}^{n}} %, ~~ \end{equation} \noindent Each of these vectors is then non-linearly mapped to intermediate representations of the same size: %For $n$-gram lengths of $1,2,3$, these are given by: \begin{equation} \mathbf{r}_{n}' = \sigma (W_{n}^{s}\mathbf{r}_{n} + b_{n}^{s}) \end{equation} \noindent where the weight matrices and bias terms map the cumulative $n$-grams to vectors of the same dimensionality and $\sigma$ denotes the sigmoid activation function. We maintain a separate set of parameters for each slot (indicated by superscript $s$). The three vectors are then summed to obtain a single representation for the user utterance: \begin{equation} \mathbf{r} ~ = ~ \mathbf{r}_{1}' + \mathbf{r}_{2}' +\mathbf{r}_{3}' \label{eqn:r} \\ \end{equation} The cumulative $n$-gram representations used by this model are just unweighted sums of all word vectors in the utterance. Ideally, the model should learn to recognise which parts of the utterance are more relevant for the subsequent classification task. For instance, it could learn to ignore verbs or stop words and pay more attention to adjectives and nouns which are more likely to express slot values. \paragraph{\textsc{NBT-CNN}} Our second model draws inspiration from successful applications of Convolutional Neural Networks (CNNs) for language understanding \cite{Collobert:11,Kalchbrenner:14,Kim:14}. These models typically apply a number of convolutional filters to $n$-grams in the input sentence, followed by non-linear activation functions and max-pooling. Following this approach, the \textsc{NBT-CNN} model applies $L=300$ different filters for $n$-gram lengths of $1,2$ and $3$ (Figure \ref{fig:nbt_cnn}). 
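Before completing the NBT-CNN description, a short numpy sketch of the NBT-DNN utterance representation just defined (Equations 1-4) may help; the weight shapes and random inputs below are illustrative assumptions, not values taken from the released code.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def nbt_dnn_utterance_repr(word_vecs, W, b):
    """Cumulative n-gram representation (Equations 1-4), for n = 1, 2, 3.

    word_vecs: array of shape (k_u, D), one pre-trained vector per word.
    W[n], b[n]: slot-specific parameters mapping the cumulative n-gram
                vector (of size n * D) back to dimensionality D.
    """
    k_u, D = word_vecs.shape
    r = np.zeros(D)
    for n in (1, 2, 3):
        # v_i^n: concatenation of n consecutive word vectors (Equation 1).
        v_n = [np.concatenate(word_vecs[i:i + n]) for i in range(k_u - n + 1)]
        r_n = np.sum(v_n, axis=0)                  # Equation 2: unweighted sum
        r_n_prime = sigmoid(W[n] @ r_n + b[n])     # Equation 3: non-linear map
        r += r_n_prime                             # Equation 4: r = r1' + r2' + r3'
    return r

# Illustrative shapes only (D = 300, as for the paper's word vectors).
D, k_u = 300, 6
rng = np.random.default_rng(0)
word_vecs = rng.normal(size=(k_u, D))
W = {n: rng.normal(scale=0.01, size=(D, n * D)) for n in (1, 2, 3)}
b = {n: np.zeros(D) for n in (1, 2, 3)}
r = nbt_dnn_utterance_repr(word_vecs, W, b)   # shape (300,)
```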
Let $F_{n}^{s} \in R^{L \times nD}$ denote the collection of filters for each value of $n$, where $D = 300$ is the word vector dimensionality. If $\mathbf{v}_{i}^{n}$ denotes the concatenation of $n$ word vectors starting at index $i$, let $\mathbf{m}_{n} = [\mathbf{v}_{1}^{n}; \mathbf{v}_{2}^{n}; \ldots; \mathbf{v}_{k_u-n+1}^{n}]$ be the list of $n$-grams that convolutional filters of length $n$ run over. The three intermediate representations are then given by: \begin{equation} {R}_{n} = F_n^s ~ \mathbf{m}_n \end{equation} Each column of the intermediate matrices ${R}_n$ is produced by a single convolutional filter of length $n$. We obtain summary $n$-gram representations by pushing these representations through a rectified linear unit (ReLU) activation function \cite{Nair:2010icml} and max-pooling over time (i.e.~columns of the matrix) to get a single feature for each of the $L$ filters applied to the utterance: \begin{equation} \mathbf{r}_{n}' = \mathtt{maxpool} \left( \mathtt{ReLU} \left( {R}_{n} + b_{n}^{s} \right) \right) %\\ \end{equation} \noindent where $b_{n}^{s}$ is a bias term broadcast across all filters. Finally, the three summary $n$-gram representations are summed to obtain the final utterance representation vector $\mathbf{r}$ (as in Equation \ref{eqn:r}). The \textsc{NBT-CNN} model is (by design) better suited to longer utterances, as its convolutional filters interact directly with subsequences of the utterance, and not just their noisy summaries given by the \textsc{NBT-DNN}'s cumulative $n$-grams. \subsection{Semantic Decoding} The NBT diagram in Figure \ref{fig:sys_diagram} shows that the utterance representation $\mathbf{r}$ and the candidate slot-value pair representation $\mathbf{c}$ directly interact through the \emph{semantic decoding} module. This component decides whether the user explicitly expressed an intent matching the current candidate pair (i.e.~without taking the dialogue context into account). Examples of such matches would be \emph{`I want Thai food'} with \texttt{food=Thai} or more demanding ones such as \emph{`a pricey restaurant'} with \texttt{price=expensive}. This is where the use of high-quality pre-trained word vectors comes into play: a delexicalisation-based model could deal with the former example but would be helpless in the latter case, unless a human expert had provided a {semantic dictionary} listing all potential rephrasings for each value in the domain ontology. Let the vector space representations of a candidate pair's slot name and value be given by $\mathbf{c_{s}}$ and $\mathbf{c_{v}}$ (with vectors of multi-word slot names/values summed together). The NBT model learns to map this tuple into a single vector $\mathbf{c}$ of the same dimensionality as the utterance representation $\mathbf{r}$. These two representations are then forced to interact in order to learn a similarity metric which discriminates between interactions of utterances with slot-value pairs that they either do or do not express: \begin{align} \mathbf{c} ~ &= \sigma \big( W_{c}^{s} (\mathbf{c_{s}} + \mathbf{c_{v}}) + b_{c}^{s} \big) \\ \mathbf{d} &= \mathbf{r} \otimes \mathbf{c} \end{align} \noindent where $\otimes$ denotes \emph{element-wise} vector multiplication. The dot product, which may seem like the more intuitive similarity metric, would reduce the rich set of features in $\mathbf{d}$ to a single scalar. 
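Continuing in the same sketch style, the NBT-CNN summary representation and the element-wise interaction of the semantic decoding step could look as follows; L = D = 300 mirrors the text, while the parameter names and shapes are assumptions made for illustration, not the released implementation.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def nbt_cnn_utterance_repr(word_vecs, F, b, L=300):
    """Summary n-gram representations via convolution, ReLU and max-pooling."""
    k_u, D = word_vecs.shape
    r = np.zeros(L)
    for n in (1, 2, 3):
        # m_n: all n-gram concatenations the length-n filters run over.
        m_n = np.stack([np.concatenate(word_vecs[i:i + n])
                        for i in range(k_u - n + 1)], axis=1)   # (n*D, k_u-n+1)
        R_n = F[n] @ m_n                                        # (L, k_u-n+1)
        r += np.max(relu(R_n + b[n][:, None]), axis=1)          # max-pool over time
    return r

def semantic_decoding(r, c_s, c_v, W_c, b_c):
    """Map the candidate slot-value pair to c and interact with r element-wise."""
    c = sigmoid(W_c @ (c_s + c_v) + b_c)
    d = r * c            # element-wise multiplication, not a dot product
    return d
```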
The element-wise multiplication allows the downstream network to make better use of its parameters by learning non-linear interactions between sets of features in $\mathbf{r}$ and $\mathbf{c}$.\footnote{We also tried to concatenate $\mathbf{r}$ and $\mathbf{c}$ and pass that vector to the downstream decision-making neural network. However, this set-up led to very weak performance since our relatively small datasets did not suffice for the network to learn to model the interaction between the two feature vectors.} \subsection{Context Modelling} This `decoder' does not yet suffice to extract intents from utterances in human-machine dialogue. To understand some queries, the belief tracker must be aware of \emph{context}, i.e.~the flow of dialogue leading up to the latest user utterance. While all previous system and user utterances are important, the most relevant one is the last system utterance, in which the dialogue system could have performed (among others) one of the following two \emph{system acts}: \begin{enumerate} \item \textbf{System Request}: The system asks the user about the value of a specific slot $T_{q}$. If the system utterance is: \emph{`what price range would you like?'} and the user answers with \emph{any}, the model must infer the reference to \emph{price range}, and not to other slots such as \emph{area} or \emph{food}. \item \textbf{System Confirm:} The system asks the user to confirm whether a specific slot-value pair $(T_{s}, T_{v})$ is part of their desired constraints. For example, if the user responds to \emph{`how about Turkish food?'} with \emph{`yes'}, the model must be aware of the system act in order to correctly update the belief state. \end{enumerate} If we make the Markovian decision to only consider the last set of system acts, we can incorporate context modelling into the NBT. Let $\mathbf{t_{q}}$ and $(\mathbf{t_{s}}, \mathbf{t_{v}})$ be the word vectors of the arguments for the system request and confirm acts (zero vectors if none). The model computes the following measures of similarity between the system acts, candidate pair $(\mathbf{c_{s}}, \mathbf{c_{v}})$ and utterance representation $\mathbf{r}$: \begin{align} \mathbf{m_{r}} &= ~ (\mathbf{c_{s}} \cdot \mathbf{t_{q}}) \mathbf{r} \\ \mathbf{m_{c}} &= ~ ( \mathbf{c_{s}} \cdot \mathbf{t_{s}} ) ( \mathbf{c_{v}} \cdot \mathbf{t_{v}} ) \mathbf{r} \end{align} \noindent where $\cdot$ denotes dot product. The computed similarity terms act as gating mechanisms which only pass the utterance representation through if the system asked about the current candidate slot or slot-value pair. This type of interaction is particularly useful for the confirm system act: if the system asks the user to confirm, the user is likely not to mention any slot values, but to just respond affirmatively or negatively. This means that the model must consider the \emph{three-way interaction} between the utterance, candidate slot-value pair and the slot value pair offered by the system. If (and only if) the latter two are the same should the model consider the affirmative or negative polarity of the user utterance when making the subsequent binary decision. \paragraph{Binary Decision Maker} The intermediate representations are passed through another hidden layer and then combined. 
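As a brief aside before the exact decision layer is given below, here is a minimal sketch of the context gating defined above; the system act vectors are taken to be zero vectors when the corresponding act is absent, and all names are illustrative rather than drawn from the released implementation.

```python
import numpy as np

def context_gating(r, c_s, c_v, t_q, t_s, t_v):
    """Gate the utterance representation by system-act / candidate similarity.

    r        : utterance representation.
    c_s, c_v : word vectors of the candidate slot name and value.
    t_q      : word vector of the slot in a system request act (or zeros).
    t_s, t_v : word vectors of the slot-value pair in a system confirm act
               (or zeros if the system did not issue a confirm).
    """
    m_r = np.dot(c_s, t_q) * r                      # request gate
    m_c = np.dot(c_s, t_s) * np.dot(c_v, t_v) * r   # confirm gate
    return m_r, m_c
```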
If $\phi_{dim}(\mathbf{x}) = \sigma (W \mathbf{x} + b)$ is a layer which maps input vector $\mathbf{x}$ to a vector of size $dim$, the input to the final binary softmax (which represents the decision) is given by: \begin{align*} \mathbf{y} &= \phi_{2} \big( \phi_{100}(\mathbf{d}) + \phi_{100}({\mathbf{m_r}}) + \phi_{100}({\mathbf{m_c}}) \big) \end{align*} \section{Belief State Update Mechanism} In spoken dialogue systems, belief tracking models operate over the output of automatic speech recognition (ASR). Despite improvements to speech recognition, the need to make the most out of imperfect ASR will persist as dialogue systems are used in increasingly noisy environments. In this work, we define a simple rule-based belief state update mechanism which can be applied to ASR $N$-best lists. For dialogue turn $t$, let $sys^{t-1}$ denote the preceding system output, and let $h^{t}$ denote the list of $N$ ASR hypotheses $h_{i}^{t}$ with posterior probabilities $p_{i}^{t}$. For any hypothesis $h^{t}_{i}$, slot $s$ and slot value $v \in V_{s}$, NBT models estimate $\mathbb{P}(s, v \mid h^{t}_{i}, sys^{t-1})$, which is the (turn-level) probability that $(s, v)$ was expressed in the given hypothesis. The predictions for $N$ such hypotheses are then combined as: \\ \begin{equation*} \mathbb{P}(s, v \mid h^{t}, sys^{t-1}) = \sum_{i=1}^{N} p_{i}^{t} ~ \mathbb{P}\left( s,v \mid h_{i}^{t}, sys^{t-1}\right) \end{equation*} This turn-level belief state estimate is then combined with the (cumulative) belief state up to time $(t-1)$ to get the updated belief state estimate: \begin{eqnarray*} \mathbb{P}(s, v \mid h^{1:t}, sys^{1:t-1}) ~=~ \lambda ~ \mathbb{P}\left(s, v \mid h^{t}, sys^{t-1}\right) \\ + ~ (1 - \lambda) ~ \mathbb{P}\left(s, v \mid h^{1:t-1}, sys^{1:t-2}\right) \end{eqnarray*} \noindent where $\lambda$ is the coefficient which determines the relative weight of the turn-level and previous turns' belief state estimates.\footnote{This coefficient was tuned on the DSTC2 development set. The best performance was achieved with $\lambda = 0.55$.} For slot $s$, the set of its \emph{detected values} at turn $t$ is then given by: \begin{equation*} V_{s}^{t} = \lbrace {v \in V_{s}} ~ \mid ~ {\mathbb{P}\left( s,v \mid h^{1:t}, sys^{1:t-1} \right) \geq 0.5} \rbrace \end{equation*} For informable (i.e.~goal-tracking) slots, the value in $V_{s}^{t}$ with the highest probability is chosen as the current goal (if $V_{s}^{t} \neq \emptyset$). For requests, all slots in $V_{req}^{t}$ are deemed to have been requested. As requestable slots serve to model single-turn user queries, they require no belief tracking across turns. \section{Experiments} \subsection{Datasets} Two datasets were used for training and evaluation. Both consist of user conversations with task-oriented dialogue systems designed to help users find suitable restaurants around Cambridge, UK. The two corpora share the same domain ontology, which contains three \emph{informable} (i.e.~goal-tracking) slots: \textsc{food}, \textsc{area} and \textsc{price}. The users can specify {values} for these slots in order to find restaurants which best meet their criteria. Once the system suggests a restaurant, the users can ask about the values of up to eight \emph{requestable} slots (\textsc{phone number, address}, etc.). The two datasets are: \begin{enumerate} \item \textbf{DSTC2}: We use the transcriptions, ASR hypotheses and turn-level semantic labels provided for the Dialogue State Tracking Challenge 2 \cite{Henderson:14a}.
The official transcriptions contain various spelling errors which we corrected manually; the cleaned version of the dataset is available at \url{mi.eng.cam.ac.uk/~nm480/dstc2-clean.zip}. The training data contains 2207 dialogues \iffalse (15,611 dialogue turns) \fi and the test set consists of 1117 dialogues. We train NBT models on transcriptions but report belief tracking performance on test set ASR hypotheses provided in the original challenge. \item \textbf{WOZ 2.0}: Wen et al.~\shortcite{Wen:16} performed a Wizard of Oz style experiment in which Amazon Mechanical Turk users assumed the role of the system or the user of a task-oriented dialogue system based on the DSTC2 ontology. Users typed instead of using speech, which means performance in the WOZ experiments is more indicative of the model's capacity for semantic understanding than its robustness to ASR errors. Whereas in the DSTC2 dialogues users would quickly adapt to the system's (lack of) language understanding capability, the WOZ experimental design gave them freedom to use more sophisticated language. We expanded the original WOZ dataset from Wen et al.~\shortcite{Wen:16} using the same data collection procedure, yielding a total of 1200 dialogues. \iffalse (5,012 turns). \fi We divided these into 600 training, 200 validation and 400 test set dialogues. The WOZ 2.0 dataset is available at \url{mi.eng.cam.ac.uk/~nm480/woz_2.0.zip}. \end{enumerate} \paragraph{Training Examples} The two corpora are used to create training data for two separate experiments. For each dataset, we iterate over all train set utterances, generating one example for \emph{each} of the slot-value pairs in the ontology. An example consists of a transcription, its context (i.e.~list of preceding system acts) and a candidate slot-value pair. The binary label for each example indicates whether or not its utterance and context express the example's candidate pair. For instance, `\emph{I would like Irish food}' would generate a positive example for candidate pair \textsc{food={Irish}}, and a negative example for every other slot-value pair in the ontology. \paragraph{Evaluation} We focus on two key evaluation metrics introduced in \cite{Henderson:14a}: \vspace{-0mm} \begin{enumerate} \item \textbf{Goals} (`joint goal accuracy'): the proportion of dialogue turns where all the user's search goal constraints were correctly identified; \vspace{-0mm} \item \textbf{Requests}: similarly, the proportion of dialogue turns where user's requests for information were identified correctly. \vspace{-0mm} \end{enumerate} \subsection{Models} We evaluate two NBT model variants: \textsc{NBT-DNN} and \textsc{NBT-CNN}. To train the models, we use the Adam optimizer \cite{Adam:15} with cross-entropy loss, backpropagating through all the NBT subcomponents while keeping the pre-trained word vectors fixed (in order to allow the model to deal with unseen words at test time). The model is trained separately for each slot. Due to the high class bias (most of the constructed examples are negative), we incorporate a fixed number of positive examples in each mini-batch.\footnote{Model hyperparameters were tuned on the respective validation sets. For both datasets, the initial Adam learning rate was set to $0.001$, and $\frac{1}{8}$th of positive examples were included in each mini-batch. The batch size did not affect performance: it was set to 256 in all experiments. Gradient clipping (to $\left[-2.0, 2.0\right]$) was used to handle exploding gradients. 
Dropout \cite{Srivastava:2014} was used for regularisation (with 50\% dropout rate on all intermediate representations). Both \textsc{NBT} models were implemented in TensorFlow \cite{tf:15}. } \paragraph{Baseline Models} For each of the two datasets, we compare the NBT models to: \begin{enumerate} \item A baseline system that implements a well-known competitive delexicalisation-based model for that dataset. For DSTC2, the model is that of Henderson et al.~\shortcite{Henderson:14d,Henderson:14b}. This model is an $n$-gram based neural network model with recurrent connections between turns (but not inside utterances) which replaces occurrences of slot names and values with generic delexicalised features. For WOZ 2.0, we compare the NBT models to a more sophisticated belief tracking model presented in \cite{Wen:16}. This model uses an RNN for belief state updates and a CNN for turn-level feature extraction. Unlike \textsc{NBT-CNN}, their CNN operates not over vectors, but over delexicalised features akin to those used by \newcite{Henderson:14d}. \item The same baseline model supplemented with a task-specific semantic dictionary (produced by the baseline system creators). The two dictionaries are available at \url{mi.eng.cam.ac.uk/\~nm480/sem-dict.zip}. The DSTC2 dictionary contains only three rephrasings. Nonetheless, the use of these rephrasings translates to substantial gains in DST performance (see Sect.~6.1). We believe this result supports our claim that the vocabulary used by Mechanical Turkers in DSTC2 was constrained by the system's inability to cope with lexical variation and ASR noise. The WOZ dictionary includes 38 rephrasings, showing that the unconstrained language used by Mechanical Turkers in the Wizard-of-Oz setup requires more elaborate lexicons. \end{enumerate} Both baseline models map exact matches of ontology-defined intents (and their lexicon-specified rephrasings) to one-hot delexicalised $n$-gram features. This means that pre-trained vectors cannot be incorporated directly into these models. \section{Results} \subsection{Belief Tracking Performance} Table \ref{tab:dstc2_performance} shows the performance of NBT models trained and evaluated on DSTC2 and WOZ 2.0 datasets. The NBT models outperformed the baseline models in terms of both {joint goal} and request accuracies. For goals, the gains are \emph{always} statistically significant (paired $t$-test, $p<0.05$). Moreover, there was no statistically significant variation between the NBT and the lexicon-supplemented models, showing that the NBT can handle semantic relations which otherwise had to be explicitly encoded in semantic dictionaries. While the NBT performs well across the board, we can compare its performance on the two datasets to understand its strengths. The improvement over the baseline is greater on WOZ 2.0, which corroborates our intuition that the NBT's ability to learn linguistic variation is vital for this dataset containing longer sentences, richer vocabulary and no ASR errors. By comparison, the language of the subjects in the DSTC2 dataset is less rich, and compensating for ASR errors is the main hurdle: given access to the DSTC2 test set transcriptions, the NBT models' goal accuracy rises to 0.96. This indicates that future work should focus on better ASR compensation if the model is to be deployed in environments with challenging acoustics. 
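Before turning to the word vector comparison, the rule-based belief state update of Section 5 can be sketched as follows; lambda = 0.55 is the value reported for DSTC2, while the dictionary representation of the belief state and the function signatures are assumptions made for this example rather than part of the released implementation.

```python
def update_belief(prev_belief, asr_hypotheses, nbt_turn_prob, lam=0.55):
    """Combine turn-level NBT estimates over the ASR N-best list with the
    cumulative belief state, following the rule-based update described above.

    prev_belief    : dict mapping (slot, value) -> cumulative probability.
    asr_hypotheses : list of (hypothesis_text, posterior_probability).
    nbt_turn_prob  : function (slot, value, hypothesis) -> turn-level probability,
                     assumed to already condition on the preceding system acts.
    """
    new_belief = {}
    for (slot, value), prev_p in prev_belief.items():
        # Turn-level estimate: posterior-weighted sum over the N-best list.
        turn_p = sum(p_i * nbt_turn_prob(slot, value, h_i)
                     for h_i, p_i in asr_hypotheses)
        # Interpolate with the previous cumulative estimate.
        new_belief[(slot, value)] = lam * turn_p + (1.0 - lam) * prev_p
    return new_belief

def detected_values(belief, slot, threshold=0.5):
    """Values of a slot whose cumulative probability meets the 0.5 threshold."""
    return {v: p for (s, v), p in belief.items() if s == slot and p >= threshold}
```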
\subsection{The Importance of Word Vector Spaces} The NBT models use the semantic relations embedded in the pre-trained word vectors to handle semantic variation and produce high-quality intermediate representations. Table \ref{tab:wv_comparison} shows the performance of \textsc{NBT-CNN}\footnote{The \textsc{NBT-DNN} model showed the same trends. For brevity, Table \ref{tab:wv_comparison} presents only the \textsc{NBT-CNN} figures. } models making use of three different word vector collections: \textbf{1)} `random' word vectors initialised using the \textsc{xavier} initialisation \cite{Glorot:2010aistats}; \textbf{2)} distributional GloVe vectors \cite{Pennington:14}, trained using co-occurrence information in large textual corpora; and \textbf{3)} \emph{semantically specialised} Paragram-SL999 vectors \cite{Wieting:15}, which are obtained by injecting \emph{semantic similarity constraints} from the Paraphrase Database \cite{ppdb:13} into the distributional GloVe vectors in order to improve their semantic content. The results in Table \ref{tab:wv_comparison} show that the use of semantically specialised word vectors leads to considerable performance gains: Paragram-SL999 vectors (significantly) outperformed GloVe and \textsc{xavier} vectors for goal tracking on both datasets. The gains are particularly robust for noisy DSTC2 data, where both collections of pre-trained vectors consistently outperformed random initialisation. The gains are weaker for the noise-free WOZ 2.0 dataset, which seems to be large (and clean) enough for the NBT model to learn task-specific rephrasings and compensate for the lack of semantic content in the word vectors. For this dataset, GloVe vectors do not improve over the randomly initialised ones. We believe this happens because distributional models keep related, yet antonymous words close together (e.g.~\emph{north} and \emph{south}, \emph{expensive} and \emph{inexpensive}), offsetting the useful semantic content embedded in these vector spaces. \section{Conclusion} In this paper, we have proposed a novel neural belief tracking (NBT) framework designed to overcome current obstacles to deploying dialogue systems in real-world dialogue domains. The NBT models offer the known advantages of coupling Spoken Language Understanding and Dialogue State Tracking, without relying on hand-crafted semantic lexicons to achieve state-of-the-art performance. Our evaluation demonstrated these benefits: the NBT models match the performance of models which make use of such lexicons and vastly outperform them when these are not available. Finally, we have shown that the performance of NBT models improves with the semantic quality of the underlying word vectors. To the best of our knowledge, we are the first to move past intrinsic evaluation and show that \emph{semantic specialisation} boosts performance in downstream tasks. In future work, we intend to explore applications of the NBT for multi-domain dialogue systems, as well as in languages other than English that require handling of complex morphological variation. \section*{Acknowledgements} The authors would like to thank Ivan Vuli\'{c}, Ulrich Paquet, the Cambridge Dialogue Systems Group and the anonymous ACL reviewers for their constructive feedback and helpful discussions. \clearpage \bibliographystyle{acl2017} \clearpage \end{document}
Neural Belief Tracker: Data-Driven Dialogue State Tracking
1606.03777
Table 1: DSTC2 and WOZ 2.0 test set accuracies for: a) joint goals; and b) turn-level requests. The asterisk indicates statistically significant improvement over the baseline trackers (paired t-test; p<0.05).
[ "[BOLD] DST Model", "[BOLD] DSTC2 [BOLD] Goals", "[BOLD] DSTC2 [BOLD] Requests", "[BOLD] WOZ 2.0 [BOLD] Goals", "[BOLD] WOZ 2.0 [BOLD] Requests" ]
[ [ "[BOLD] Delexicalisation-Based Model", "69.1", "95.7", "70.8", "87.1" ], [ "[BOLD] Delexicalisation-Based Model + Semantic Dictionary", "72.9*", "95.7", "83.7*", "87.6" ], [ "Neural Belief Tracker: NBT-DNN", "72.6*", "96.4", "[BOLD] 84.4*", "91.2*" ], [ "Neural Belief Tracker: NBT-CNN", "[BOLD] 73.4*", "[BOLD] 96.5", "84.2*", "[BOLD] 91.6*" ] ]
The NBT models outperformed the baseline models in terms of both joint goal and request accuracies. For goals, the gains are always statistically significant (paired t-test, p<0.05). Moreover, there was no statistically significant variation between the NBT and the lexicon-supplemented models, showing that the NBT can handle semantic relations which otherwise had to be explicitly encoded in semantic dictionaries.
\documentclass[11pt,a4paper]{article} \usepackage[hyperref]{acl2017} \aclfinalcopy % Uncomment this line for the final submission \setcounter{dbltopnumber}{8} \setcounter{topnumber}{2} \setcounter{bottomnumber}{2} \setcounter{totalnumber}{4} \renewcommand{\topfraction}{0.85} \renewcommand{\bottomfraction}{0.85} \renewcommand{\textfraction}{0.15} \renewcommand{\floatpagefraction}{0.7} \DeclareMathOperator*{\argmax}{arg\,max} \newcommand\BibTeX{B{\sc ib}\TeX} \title{Neural Belief Tracker: Data-Driven Dialogue State Tracking} \author{Nikola Mrk\v{s}i\'c$^{\mathbf{1}}$, ~ Diarmuid {\'O S\'eaghdha}$^{\mathbf{2}}$ \\ \textbf{Tsung-Hsien Wen$^{\mathbf{1}}$, ~ {Blaise Thomson$^{\mathbf{2}}$, ~ Steve Young$^{\mathbf{1}}$}} \\ $^{\mathbf{1}}$ University of Cambridge \\ $^{\mathbf{2}}$ Apple Inc. \\ { \tt \{nm480, thw28, sjy\}@cam.ac.uk} \\ { \tt\{doseaghdha, blaisethom\}@apple.com}} \date{} \begin{document} \maketitle \begin{abstract} One of the core components of modern spoken dialogue systems is the \textit{belief tracker}, which estimates the user's goal at every step of the dialogue. However, most current approaches have difficulty scaling to larger, more complex dialogue domains. This is due to their dependency on either: \textbf{a)} Spoken Language Understanding models that require large amounts of annotated training data; or \textbf{b)} hand-crafted lexicons for capturing some of the linguistic variation in users' language. We propose a novel Neural Belief Tracking (NBT) framework which overcomes these problems by building on recent advances in representation learning. NBT models reason over pre-trained word vectors, learning to compose them into distributed representations of user utterances and dialogue context. Our evaluation on two datasets shows that this approach surpasses past limitations, matching the performance of state-of-the-art models which rely on hand-crafted semantic lexicons and outperforming them when such lexicons are not provided. \end{abstract} \section{Introduction} Spoken dialogue systems (SDS) allow users to interact with computer applications through conversation. Task-based systems help users achieve goals such as finding restaurants or booking flights. The \emph{dialogue state tracking} (DST) component of an SDS serves to interpret user input and update the \emph{belief state}, which is the system's internal representation of the state of the conversation \cite{young:10c}. This is a probability distribution over dialogue states used by the downstream \emph{dialogue manager} to decide which action the system should perform next \cite{su:2016:nnpolicy,Su:16}; the system action is then verbalised by the {natural language generator} \cite{wen:15a,wen:15b,Dusek:15}. The Dialogue State Tracking Challenge (DSTC) series of shared tasks has provided a common evaluation framework accompanied by labelled datasets \cite{Williams:16}. In this framework, the dialogue system is supported by a \emph{domain ontology} which describes the range of user intents the system can process. The ontology defines a collection of \emph{slots} and the \emph{values} that each slot can take. The system must track the search constraints expressed by users (\emph{goals} or \emph{informable} slots) and questions the users ask about search results (\emph{requests}), taking into account each user utterance (input via a speech recogniser) and the dialogue context (e.g., what the system just said). 
The example in Figure \ref{fig:example-dialogue} shows the true state after each user utterance in a three-turn conversation. As can be seen in this example, DST models depend on identifying mentions of ontology items in user utterances. This becomes a non-trivial task when confronted with lexical variation, the dynamics of context and noisy automated speech recognition (ASR) output. Traditional statistical approaches use separate Spoken Language Understanding (SLU) modules to address lexical variability within a single dialogue turn. However, training such models requires substantial amounts of {domain-specific} annotation. Alternatively, turn-level SLU and cross-turn DST can be coalesced into a single model to achieve superior belief tracking performance, as shown by \newcite{Henderson:14b}. Such coupled models typically rely on {manually constructed} semantic dictionaries to identify alternative mentions of ontology items that vary lexically or morphologically. Figure \ref{fig:sem_dict} gives an example of such a dictionary for three slot-value pairs. This approach, which we term \emph{delexicalisation}, is clearly not scalable to larger, more complex dialogue domains. Importantly, the focus on English in DST research understates the considerable challenges that morphology poses to systems based on exact matching in morphologically richer languages such as Italian or German (see \newcite{Vulic:2017}). In this paper, we present two new models, collectively called the {Neural Belief Tracker} (NBT) family. The proposed models couple SLU and DST, efficiently learning to handle variation without requiring {any} hand-crafted resources. To do that, NBT models move away from exact matching and instead reason entirely over pre-trained word vectors. The vectors making up the user utterance and preceding system output are first composed into intermediate representations. These representations are then used to decide which of the ontology-defined intents have been expressed by the user up to that point in the conversation. To the best of our knowledge, NBT models are the first to successfully use pre-trained word vector spaces to improve the language understanding capability of belief tracking models. In evaluation on two datasets, we show that: \textbf{a)} NBT models match the performance of delexicalisation-based models which make use of hand-crafted semantic lexicons; and \textbf{b)} the NBT models significantly outperform those models when such resources are not available. Consequently, we believe this work proposes a framework better-suited to scaling belief tracking models for deployment in real-world dialogue systems operating over sophisticated application domains where the creation of such domain-specific lexicons would be infeasible. % comparable task-specific requirements \section{Background} Models for probabilistic dialogue state tracking, or \emph{belief tracking}, were introduced as components of spoken dialogue systems in order to better handle noisy speech recognition and other sources of uncertainty in understanding a user's goals \cite{Bohus:06,Williams:07,young:10c}. Modern dialogue management policies can learn to use a tracker's distribution over intents to decide whether to execute an action or request clarification from the user. As mentioned above, the DSTC shared tasks have spurred research on this problem and established a standard evaluation paradigm \cite{Williams:13a,Henderson:14c,Henderson:14a}. 
In this setting, the task is defined by an \emph{ontology} that enumerates the goals a user can specify and the attributes of entities that the user can request information about. \iffalse\footnote{Alternative \emph{chat-bot} style systems do not make use of task ontologies or the pipeline model. Instead, these models learn to generate/choose system responses based on previous dialogue turns \cite{vinyals:15,Lowe:15,Serban:16,Serban:16b,Anjuli:17}. This means these models cannot interact with databases or react to user queries different from those encountered in their training data.} \fi Many different belief tracking models have been proposed in the literature, from generative \cite{Thomson:10} and discriminative \cite{Henderson:14b} statistical models to rule-based systems \cite{Wang:13}. To motivate the work presented here, we categorise prior research according to their reliance (or otherwise) on a separate SLU module for interpreting user utterances:\footnote{The best-performing models in DSTC2 all used both raw ASR output and the output of (potentially more than one) SLU decoders \cite{Williams:14,Williams:16}. This does not mean that those models are immune to the drawbacks identified here for the two model categories; in fact, they share the drawbacks of both.} \paragraph{Separate SLU} Traditional SDS pipelines use Spoken Language Understanding (SLU) decoders to detect slot-value pairs expressed in the Automatic Speech Recognition (ASR) output. The downstream DST model then combines this information with the past dialogue context to update the belief state \cite{Thomson:10,Wang:13,Lee:16,Perez:16,Perez:16b,Sun:16,Jang:16,Shi:2016,Dernoncourt:16a,Liu:2017,Vodolan:2017}. In the DSTC challenges, some systems used the output of template-based matching systems such as Phoenix \cite{Wang:94}. However, more robust and accurate statistical SLU systems are available. Many discriminative approaches to spoken dialogue SLU train independent binary models that decide whether each slot-value pair was expressed in the user utterance. Given enough data, these models can learn which lexical features are good indicators for a given value and can capture elements of paraphrasing \cite{Mairesse:09}. This line of work later shifted focus to robust handling of rich ASR output \cite{Henderson:12,Tur:13}. SLU has also been treated as a sequence labelling problem, where each word in an utterance is labelled according to its role in the user's intent; standard labelling models such as CRFs or Recurrent Neural Networks can then be used \cite[i.a.]{Raymond:07,Yao:14,Celikyilmaz:2015,Mesnil:15,Peng:15,Zhang:16,Liu:16a,Vu:2016,Liu:16b}. Other approaches adopt a more complex modelling structure inspired by semantic parsing \cite{Saleh:14,Vlachos:14}. One drawback shared by these methods is their resource requirements, either because they need to learn independent parameters for each slot and value or because they need fine-grained manual annotation at the word level. This hinders scaling to larger, more realistic application domains. \paragraph{Joint SLU/DST} Research on belief tracking has found it advantageous to reason about SLU and DST jointly, taking ASR predictions as input and generating belief states as output \cite{Henderson:14b,Sun:14,Zilka:15,Mrksic:15}. In DSTC2, systems which used no external SLU module outperformed all systems that only used external SLU features. 
Joint models typically rely on a strategy known as \emph{delexicalisation} whereby slots and values mentioned in the text are replaced with generic labels. Once the dataset is transformed in this manner, one can extract a collection of template-like $n$-gram features such as \textbf{[want \emph{tagged-value} food]}. To perform belief tracking, the shared model iterates over all slot-value pairs, extracting delexicalised feature vectors and making a separate binary decision regarding each pair. Delexicalisation introduces a hidden dependency that is rarely discussed: how do we identify slot/value mentions in text? For toy domains, one can manually construct \emph{semantic dictionaries} which list the potential rephrasings for all slot values. As shown by Mrk{\v{s}}i\'c et al.~\shortcite{Mrksic:16}, the use of such dictionaries is essential for the performance of current delexicalisation-based models. Again though, this will not scale to the rich variety of user language or to general domains. The primary motivation for the work presented in this paper is to overcome the limitations that affect previous belief tracking models. The NBT model efficiently learns from the available data by: \textbf{1)} leveraging semantic information from pre-trained word vectors to resolve lexical/morphological ambiguity; \textbf{2)} maximising the number of parameters shared across ontology values; and \textbf{3)} having the flexibility to learn domain-specific paraphrasings and other kinds of variation that make it infeasible to rely on exact matching and delexicalisation as a robust strategy. \section{Neural Belief Tracker} The Neural Belief Tracker (NBT) is a model designed to detect the slot-value pairs that make up the user's goal at a given turn during the flow of dialogue. Its input consists of the system dialogue acts preceding the user input, the user utterance itself, and a single candidate slot-value pair that it needs to make a decision about. For instance, the model might have to decide whether the goal \textsc{food=Italian} has been expressed in \emph{`I'm looking for good pizza'}. To perform belief tracking, the NBT model \emph{iterates} over {all} candidate slot-value pairs (defined by the ontology), and decides which ones have just been expressed by the user. Figure \ref{fig:sys_diagram} presents the flow of information in the model. The first layer in the NBT hierarchy performs representation learning given the three model inputs, producing vector representations for the user utterance ($\mathbf{r}$), the {current} candidate slot-value pair ($\mathbf{c}$) and the system dialogue acts ($\mathbf{t_{q}, t_{s}, t_{v}}$). Subsequently, the learned vector representations interact through the \emph{context modelling} and \emph{semantic decoding} submodules to obtain the intermediate \emph{interaction summary} vectors $\mathbf{d_{r}, d_{c}}$ and $\mathbf{d}$. These are used as input to the final \emph{decision-making} module which decides whether the user expressed the intent represented by the candidate slot-value pair. \subsection{Representation Learning} For any given user utterance, system act(s) and candidate slot-value pair, the representation learning submodules produce vector representations which act as input for the downstream components of the model. All representation learning subcomponents make use of pre-trained collections of word vectors. 
As shown by Mrk{\v{s}}i\'c et al.~\shortcite{Mrksic:16}, specialising word vectors to express \emph{semantic similarity} rather than \emph{relatedness} is essential for improving belief tracking performance. For this reason, we use the semantically-specialised Paragram-SL999 word vectors \cite{Wieting:15} throughout this work. The NBT training procedure keeps these vectors fixed: that way, at test time, unseen words semantically related to familiar slot values (i.e.~\emph{inexpensive} to \emph{cheap}) will be recognised purely by their position in the original vector space (see also Rockt\"aschel et al.~\shortcite{rocktaschel:2016}). This means that the NBT model parameters can be shared across all values of the given slot, or even across all slots. Let $u$ represent a user utterance consisting of $k_u$ words $u_1, u_2, \ldots, u_{k_u}$. Each word has an associated word vector $\mathbf{u}_1, \ldots, \mathbf{u}_{k_u}$. We propose two model variants which differ in the method used to produce vector representations of $u$: \textsc{NBT-DNN} and \textsc{NBT-CNN}. Both act over the constituent $n$-grams of the utterance. Let $\mathbf{v}_{i}^{n}$ be the concatenation of the $n$ word vectors starting at index $i$, so that: \begin{equation} \mathbf{v}_{i}^{n} = \mathbf{u}_{i} \oplus \ldots \oplus \mathbf{u}_{i+n-1} \end{equation} \noindent where $\oplus$ denotes vector concatenation. The simpler of our two models, which we term \textsc{NBT-DNN}, is shown in Figure \ref{fig:nbt_dnn}. This model computes cumulative $n$-gram representation vectors $\mathbf{r}_{1}$, $\mathbf{r}_{2}$ and $\mathbf{r}_{3}$, which are the $n$-gram `summaries' of the unigrams, bigrams and trigrams in the user utterance:% For $n = 1,2,3$, these are expressed as: \begin{equation} \mathbf{r}_{n} = \sum_{i=1}^{k_u-n+1}{\mathbf{v}_{i}^{n}} %, ~~ \end{equation} \noindent Each of these vectors is then non-linearly mapped to intermediate representations of the same size: %For $n$-gram lengths of $1,2,3$, these are given by: \begin{equation} \mathbf{r}_{n}' = \sigma (W_{n}^{s}\mathbf{r}_{n} + b_{n}^{s}) \end{equation} \noindent where the weight matrices and bias terms map the cumulative $n$-grams to vectors of the same dimensionality and $\sigma$ denotes the sigmoid activation function. We maintain a separate set of parameters for each slot (indicated by superscript $s$). The three vectors are then summed to obtain a single representation for the user utterance: \begin{equation} \mathbf{r} ~ = ~ \mathbf{r}_{1}' + \mathbf{r}_{2}' +\mathbf{r}_{3}' \label{eqn:r} \\ \end{equation} The cumulative $n$-gram representations used by this model are just unweighted sums of all word vectors in the utterance. Ideally, the model should learn to recognise which parts of the utterance are more relevant for the subsequent classification task. For instance, it could learn to ignore verbs or stop words and pay more attention to adjectives and nouns which are more likely to express slot values. \paragraph{\textsc{NBT-CNN}} Our second model draws inspiration from successful applications of Convolutional Neural Networks (CNNs) for language understanding \cite{Collobert:11,Kalchbrenner:14,Kim:14}. These models typically apply a number of convolutional filters to $n$-grams in the input sentence, followed by non-linear activation functions and max-pooling. Following this approach, the \textsc{NBT-CNN} model applies $L=300$ different filters for $n$-gram lengths of $1,2$ and $3$ (Figure \ref{fig:nbt_cnn}). 
Let $F_{n}^{s} \in R^{L \times nD}$ denote the collection of filters for each value of $n$, where $D = 300$ is the word vector dimensionality. If $\mathbf{v}_{i}^{n}$ denotes the concatenation of $n$ word vectors starting at index $i$, let $\mathbf{m}_{n} = [\mathbf{v}_{1}^{n}; \mathbf{v}_{2}^{n}; \ldots; \mathbf{v}_{k_u-n+1}^{n}]$ be the list of $n$-grams that convolutional filters of length $n$ run over. The three intermediate representations are then given by: \begin{equation} {R}_{n} = F_n^s ~ \mathbf{m}_n \end{equation} Each column of the intermediate matrices ${R}_n$ is produced by a single convolutional filter of length $n$. We obtain summary $n$-gram representations by pushing these representations through a rectified linear unit (ReLU) activation function \cite{Nair:2010icml} and max-pooling over time (i.e.~columns of the matrix) to get a single feature for each of the $L$ filters applied to the utterance: \begin{equation} \mathbf{r}_{n}' = \mathtt{maxpool} \left( \mathtt{ReLU} \left( {R}_{n} + b_{n}^{s} \right) \right) %\\ \end{equation} \noindent where $b_{n}^{s}$ is a bias term broadcast across all filters. Finally, the three summary $n$-gram representations are summed to obtain the final utterance representation vector $\mathbf{r}$ (as in Equation \ref{eqn:r}). The \textsc{NBT-CNN} model is (by design) better suited to longer utterances, as its convolutional filters interact directly with subsequences of the utterance, and not just their noisy summaries given by the \textsc{NBT-DNN}'s cumulative $n$-grams. \subsection{Semantic Decoding} The NBT diagram in Figure \ref{fig:sys_diagram} shows that the utterance representation $\mathbf{r}$ and the candidate slot-value pair representation $\mathbf{c}$ directly interact through the \emph{semantic decoding} module. This component decides whether the user explicitly expressed an intent matching the current candidate pair (i.e.~without taking the dialogue context into account). Examples of such matches would be \emph{`I want Thai food'} with \texttt{food=Thai} or more demanding ones such as \emph{`a pricey restaurant'} with \texttt{price=expensive}. This is where the use of high-quality pre-trained word vectors comes into play: a delexicalisation-based model could deal with the former example but would be helpless in the latter case, unless a human expert had provided a {semantic dictionary} listing all potential rephrasings for each value in the domain ontology. Let the vector space representations of a candidate pair's slot name and value be given by $\mathbf{c_{s}}$ and $\mathbf{c_{v}}$ (with vectors of multi-word slot names/values summed together). The NBT model learns to map this tuple into a single vector $\mathbf{c}$ of the same dimensionality as the utterance representation $\mathbf{r}$. These two representations are then forced to interact in order to learn a similarity metric which discriminates between interactions of utterances with slot-value pairs that they either do or do not express: \begin{align} \mathbf{c} ~ &= \sigma \big( W_{c}^{s} (\mathbf{c_{s}} + \mathbf{c_{v}}) + b_{c}^{s} \big) \\ \mathbf{d} &= \mathbf{r} \otimes \mathbf{c} \end{align} \noindent where $\otimes$ denotes \emph{element-wise} vector multiplication. The dot product, which may seem like the more intuitive similarity metric, would reduce the rich set of features in $\mathbf{d}$ to a single scalar. 
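As an illustrative sketch (again with random stand-ins rather than learned parameters), the convolutional $n$-gram summaries and the utterance--candidate interaction described in this section can be written as:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
D, L, k_u = 300, 300, 6     # word dimensionality, filters per n-gram length, utterance length

def relu(x):
    return np.maximum(x, 0.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cnn_summary(U, n, F, b):
    # m_n: all n-gram concatenations; R_n = F m_n; then ReLU and max-pooling over time.
    grams = np.stack([np.concatenate(U[i:i + n])
                      for i in range(U.shape[0] - n + 1)], axis=1)
    return relu(F @ grams + b).max(axis=1)

U = rng.standard_normal((k_u, D))       # stand-in utterance word vectors
F = {n: 0.01 * rng.standard_normal((L, n * D)) for n in (1, 2, 3)}
r = sum(cnn_summary(U, n, F[n], 0.0) for n in (1, 2, 3))    # utterance representation r

# Semantic decoding: map the candidate slot-value pair to c, then interact with r element-wise.
c_s, c_v = rng.standard_normal(D), rng.standard_normal(D)   # e.g. vectors for "price", "expensive"
W_c, b_c = 0.01 * rng.standard_normal((L, D)), np.zeros(L)
c = sigmoid(W_c @ (c_s + c_v) + b_c)
d = r * c                                                   # element-wise interaction vector
print(d.shape)   # (300,)
\end{verbatim}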
The element-wise multiplication allows the downstream network to make better use of its parameters by learning non-linear interactions between sets of features in $\mathbf{r}$ and $\mathbf{c}$.\footnote{We also tried to concatenate $\mathbf{r}$ and $\mathbf{c}$ and pass that vector to the downstream decision-making neural network. However, this set-up led to very weak performance since our relatively small datasets did not suffice for the network to learn to model the interaction between the two feature vectors.} \subsection{Context Modelling} This `decoder' does not yet suffice to extract intents from utterances in human-machine dialogue. To understand some queries, the belief tracker must be aware of \emph{context}, i.e.~the flow of dialogue leading up to the latest user utterance. While all previous system and user utterances are important, the most relevant one is the last system utterance, in which the dialogue system could have performed (among others) one of the following two \emph{system acts}: \begin{enumerate} \item \textbf{System Request}: The system asks the user about the value of a specific slot $T_{q}$. If the system utterance is: \emph{`what price range would you like?'} and the user answers with \emph{any}, the model must infer the reference to \emph{price range}, and not to other slots such as \emph{area} or \emph{food}. \item \textbf{System Confirm:} The system asks the user to confirm whether a specific slot-value pair $(T_{s}, T_{v})$ is part of their desired constraints. For example, if the user responds to \emph{`how about Turkish food?'} with \emph{`yes'}, the model must be aware of the system act in order to correctly update the belief state. \end{enumerate} If we make the Markovian decision to only consider the last set of system acts, we can incorporate context modelling into the NBT. Let $\mathbf{t_{q}}$ and $(\mathbf{t_{s}}, \mathbf{t_{v}})$ be the word vectors of the arguments for the system request and confirm acts (zero vectors if none). The model computes the following measures of similarity between the system acts, candidate pair $(\mathbf{c_{s}}, \mathbf{c_{v}})$ and utterance representation $\mathbf{r}$: \begin{align} \mathbf{m_{r}} &= ~ (\mathbf{c_{s}} \cdot \mathbf{t_{q}}) \mathbf{r} \\ \mathbf{m_{c}} &= ~ ( \mathbf{c_{s}} \cdot \mathbf{t_{s}} ) ( \mathbf{c_{v}} \cdot \mathbf{t_{v}} ) \mathbf{r} \end{align} \noindent where $\cdot$ denotes dot product. The computed similarity terms act as gating mechanisms which only pass the utterance representation through if the system asked about the current candidate slot or slot-value pair. This type of interaction is particularly useful for the confirm system act: if the system asks the user to confirm, the user is likely not to mention any slot values, but to just respond affirmatively or negatively. This means that the model must consider the \emph{three-way interaction} between the utterance, candidate slot-value pair and the slot value pair offered by the system. If (and only if) the latter two are the same should the model consider the affirmative or negative polarity of the user utterance when making the subsequent binary decision. \paragraph{Binary Decision Maker} The intermediate representations are passed through another hidden layer and then combined. 
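Before giving that final layer, the gating computations above can be made concrete with a short sketch; the vectors are random stand-ins and a zero vector marks an absent system act:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
D = 300
r = rng.standard_normal(D)                  # utterance representation from the encoder above
c_s, c_v = rng.standard_normal(D), rng.standard_normal(D)   # candidate slot and value vectors

t_q = np.zeros(D)                           # no system request in this toy turn
t_s, t_v = c_s.copy(), c_v.copy()           # system confirm about the same slot-value pair

m_r = (c_s @ t_q) * r                       # zero here: the system did not request this slot
m_c = (c_s @ t_s) * (c_v @ t_v) * r         # large gate: the confirm matches the candidate pair
print(np.allclose(m_r, 0.0), m_c.shape)
\end{verbatim}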
If $\phi_{dim}(\mathbf{x}) = \sigma (W \mathbf{x} + b)$ is a layer which maps input vector $\mathbf{x}$ to a vector of size $dim$, the input to the final binary softmax (which represents the decision) is given by: \begin{align*} \mathbf{y} &= \phi_{2} \big( \phi_{100}(\mathbf{d}) + \phi_{100}({\mathbf{m_r}}) + \phi_{100}({\mathbf{m_c}}) \big) \end{align*} \section{Belief State Update Mechanism} In spoken dialogue systems, belief tracking models operate over the output of automatic speech recognition (ASR). Despite improvements to speech recognition, the need to make the most out of imperfect ASR will persist as dialogue systems are used in increasingly noisy environments. In this work, we define a simple rule-based belief state update mechanism which can be applied to ASR $N$-best lists. For dialogue turn $t$, let $sys^{t-1}$ denote the preceding system output, and let $h^{t}$ denote the list of $N$ ASR hypotheses $h_{i}^{t}$ with posterior probabilities $p_{i}^{t}$. For any hypothesis $h^{t}_{i}$, slot $s$ and slot value $v \in V_{s}$, NBT models estimate $\mathbb{P}(s, v \mid h^{t}_{i}, sys^{t-1})$, which is the (turn-level) probability that $(s, v)$ was expressed in the given hypothesis. The predictions for $N$ such hypotheses are then combined as: \\ \begin{equation*} \mathbb{P}(s, v \mid h^{t}, sys^{t-1}) = \sum_{i=1}^{N} p_{i}^{t} ~ \mathbb{P}\left( s,v \mid h_{i}^{t}, sys^{t-1}\right) \end{equation*} This turn-level belief state estimate is then combined with the (cumulative) belief state up to time $(t-1)$ to get the updated belief state estimate: \begin{eqnarray*} \mathbb{P}(s, v \mid h^{1:t}, sys^{1:t-1}) ~=~ \lambda ~ \mathbb{P}\left(s, v \mid h^{t}, sys^{t-1}\right) \\ + ~ (1 - \lambda) ~ \mathbb{P}\left(s, v \mid h^{1:t-1}, sys^{1:t-2}\right) \end{eqnarray*} \noindent where $\lambda$ is the coefficient which determines the relative weight of the turn-level and previous turns' belief state estimates.\footnote{This coefficient was tuned on the DSTC2 development set. The best performance was achieved with $\lambda = 0.55$.} For slot $s$, the set of its \emph{detected values} at turn $t$ is then given by: \begin{equation*} V_{s}^{t} = \lbrace {v \in V_{s}} ~ \mid ~ {\mathbb{P}\left( s,v \mid h^{1:t}, sys^{1:t-1} \right) \geq 0.5} \rbrace \end{equation*} For informable (i.e.~goal-tracking) slots, the value in $V_{s}^{t}$ with the highest probability is chosen as the current goal (if $V_{s}^{t} \neq \emptyset$). For requests, all slots in $V_{req}^{t}$ are deemed to have been requested. As requestable slots serve to model single-turn user queries, they require no belief tracking across turns. \section{Experiments} \subsection{Datasets} Two datasets were used for training and evaluation. Both consist of user conversations with task-oriented dialogue systems designed to help users find suitable restaurants around Cambridge, UK. The two corpora share the same domain ontology, which contains three \emph{informable} (i.e.~goal-tracking) slots: \textsc{food}, \textsc{area} and \textsc{price}. The users can specify {values} for these slots in order to find restaurants which best meet their criteria. Once the system suggests a restaurant, the users can ask about the values of up to eight \emph{requestable} slots (\textsc{phone number, address}, etc.). The two datasets are: \begin{enumerate} \item \textbf{DSTC2}: We use the transcriptions, ASR hypotheses and turn-level semantic labels provided for the Dialogue State Tracking Challenge 2 \cite{Henderson:14a}.
The official transcriptions contain various spelling errors which we corrected manually; the cleaned version of the dataset is available at \url{mi.eng.cam.ac.uk/~nm480/dstc2-clean.zip}. The training data contains 2207 dialogues \iffalse (15,611 dialogue turns) \fi and the test set consists of 1117 dialogues. We train NBT models on transcriptions but report belief tracking performance on test set ASR hypotheses provided in the original challenge. \item \textbf{WOZ 2.0}: Wen et al.~\shortcite{Wen:16} performed a Wizard of Oz style experiment in which Amazon Mechanical Turk users assumed the role of the system or the user of a task-oriented dialogue system based on the DSTC2 ontology. Users typed instead of using speech, which means performance in the WOZ experiments is more indicative of the model's capacity for semantic understanding than its robustness to ASR errors. Whereas in the DSTC2 dialogues users would quickly adapt to the system's (lack of) language understanding capability, the WOZ experimental design gave them freedom to use more sophisticated language. We expanded the original WOZ dataset from Wen et al.~\shortcite{Wen:16} using the same data collection procedure, yielding a total of 1200 dialogues. \iffalse (5,012 turns). \fi We divided these into 600 training, 200 validation and 400 test set dialogues. The WOZ 2.0 dataset is available at \url{mi.eng.cam.ac.uk/~nm480/woz_2.0.zip}. \end{enumerate} \paragraph{Training Examples} The two corpora are used to create training data for two separate experiments. For each dataset, we iterate over all train set utterances, generating one example for \emph{each} of the slot-value pairs in the ontology. An example consists of a transcription, its context (i.e.~list of preceding system acts) and a candidate slot-value pair. The binary label for each example indicates whether or not its utterance and context express the example's candidate pair. For instance, `\emph{I would like Irish food}' would generate a positive example for candidate pair \textsc{food={Irish}}, and a negative example for every other slot-value pair in the ontology. \paragraph{Evaluation} We focus on two key evaluation metrics introduced in \cite{Henderson:14a}: \vspace{-0mm} \begin{enumerate} \item \textbf{Goals} (`joint goal accuracy'): the proportion of dialogue turns where all the user's search goal constraints were correctly identified; \vspace{-0mm} \item \textbf{Requests}: similarly, the proportion of dialogue turns where user's requests for information were identified correctly. \vspace{-0mm} \end{enumerate} \subsection{Models} We evaluate two NBT model variants: \textsc{NBT-DNN} and \textsc{NBT-CNN}. To train the models, we use the Adam optimizer \cite{Adam:15} with cross-entropy loss, backpropagating through all the NBT subcomponents while keeping the pre-trained word vectors fixed (in order to allow the model to deal with unseen words at test time). The model is trained separately for each slot. Due to the high class bias (most of the constructed examples are negative), we incorporate a fixed number of positive examples in each mini-batch.\footnote{Model hyperparameters were tuned on the respective validation sets. For both datasets, the initial Adam learning rate was set to $0.001$, and $\frac{1}{8}$th of positive examples were included in each mini-batch. The batch size did not affect performance: it was set to 256 in all experiments. Gradient clipping (to $\left[-2.0, 2.0\right]$) was used to handle exploding gradients. 
Dropout \cite{Srivastava:2014} was used for regularisation (with 50\% dropout rate on all intermediate representations). Both \textsc{NBT} models were implemented in TensorFlow \cite{tf:15}. } \paragraph{Baseline Models} For each of the two datasets, we compare the NBT models to: \begin{enumerate} \item A baseline system that implements a well-known competitive delexicalisation-based model for that dataset. For DSTC2, the model is that of Henderson et al.~\shortcite{Henderson:14d,Henderson:14b}. This model is an $n$-gram based neural network model with recurrent connections between turns (but not inside utterances) which replaces occurrences of slot names and values with generic delexicalised features. For WOZ 2.0, we compare the NBT models to a more sophisticated belief tracking model presented in \cite{Wen:16}. This model uses an RNN for belief state updates and a CNN for turn-level feature extraction. Unlike \textsc{NBT-CNN}, their CNN operates not over vectors, but over delexicalised features akin to those used by \newcite{Henderson:14d}. \item The same baseline model supplemented with a task-specific semantic dictionary (produced by the baseline system creators). The two dictionaries are available at \url{mi.eng.cam.ac.uk/\~nm480/sem-dict.zip}. The DSTC2 dictionary contains only three rephrasings. Nonetheless, the use of these rephrasings translates to substantial gains in DST performance (see Sect.~6.1). We believe this result supports our claim that the vocabulary used by Mechanical Turkers in DSTC2 was constrained by the system's inability to cope with lexical variation and ASR noise. The WOZ dictionary includes 38 rephrasings, showing that the unconstrained language used by Mechanical Turkers in the Wizard-of-Oz setup requires more elaborate lexicons. \end{enumerate} Both baseline models map exact matches of ontology-defined intents (and their lexicon-specified rephrasings) to one-hot delexicalised $n$-gram features. This means that pre-trained vectors cannot be incorporated directly into these models. \section{Results} \subsection{Belief Tracking Performance} Table \ref{tab:dstc2_performance} shows the performance of NBT models trained and evaluated on DSTC2 and WOZ 2.0 datasets. The NBT models outperformed the baseline models in terms of both {joint goal} and request accuracies. For goals, the gains are \emph{always} statistically significant (paired $t$-test, $p<0.05$). Moreover, there was no statistically significant variation between the NBT and the lexicon-supplemented models, showing that the NBT can handle semantic relations which otherwise had to be explicitly encoded in semantic dictionaries. While the NBT performs well across the board, we can compare its performance on the two datasets to understand its strengths. The improvement over the baseline is greater on WOZ 2.0, which corroborates our intuition that the NBT's ability to learn linguistic variation is vital for this dataset containing longer sentences, richer vocabulary and no ASR errors. By comparison, the language of the subjects in the DSTC2 dataset is less rich, and compensating for ASR errors is the main hurdle: given access to the DSTC2 test set transcriptions, the NBT models' goal accuracy rises to 0.96. This indicates that future work should focus on better ASR compensation if the model is to be deployed in environments with challenging acoustics. 
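To make the ASR compensation discussed here concrete, the rule-based update mechanism described earlier can be sketched as follows; the probabilities are invented and $\lambda=0.55$ is the value reported for the DSTC2 development set:
\begin{verbatim}
LAMBDA = 0.55   # relative weight of the turn-level estimate (tuned on the DSTC2 dev set)

def turn_level(asr_posteriors, nbt_scores):
    # Combine per-hypothesis NBT estimates, weighted by the ASR posteriors p_i.
    return {sv: sum(p * scores[sv] for p, scores in zip(asr_posteriors, nbt_scores))
            for sv in nbt_scores[0]}

def update(prev_belief, turn_belief):
    # P(s,v | 1:t) = lambda * turn-level estimate + (1 - lambda) * previous belief.
    return {sv: LAMBDA * turn_belief[sv] + (1 - LAMBDA) * prev_belief.get(sv, 0.0)
            for sv in turn_belief}

asr_posteriors = [0.7, 0.3]                                  # two ASR hypotheses
nbt_scores = [{"food=Thai": 0.9, "food=Italian": 0.1},       # NBT output for hypothesis 1
              {"food=Thai": 0.2, "food=Italian": 0.6}]       # NBT output for hypothesis 2
belief = update({"food=Thai": 0.3}, turn_level(asr_posteriors, nbt_scores))
detected = {sv for sv, p in belief.items() if p >= 0.5}      # detected values V_s^t
goal = max(detected, key=belief.get) if detected else None   # highest-probability detected value
print(belief, goal)
\end{verbatim}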
\subsection{The Importance of Word Vector Spaces} The NBT models use the semantic relations embedded in the pre-trained word vectors to handle semantic variation and produce high-quality intermediate representations. Table \ref{tab:wv_comparison} shows the performance of \textsc{NBT-CNN}\footnote{The \textsc{NBT-DNN} model showed the same trends. For brevity, Table \ref{tab:wv_comparison} presents only the \textsc{NBT-CNN} figures. } models making use of three different word vector collections: \textbf{1)} `random' word vectors initialised using the \textsc{xavier} initialisation \cite{Glorot:2010aistats}; \textbf{2)} distributional GloVe vectors \cite{Pennington:14}, trained using co-occurrence information in large textual corpora; and \textbf{3)} \emph{semantically specialised} Paragram-SL999 vectors \cite{Wieting:15}, which are obtained by injecting \emph{semantic similarity constraints} from the Paraphrase Database \cite{ppdb:13} into the distributional GloVe vectors in order to improve their semantic content. The results in Table \ref{tab:wv_comparison} show that the use of semantically specialised word vectors leads to considerable performance gains: Paragram-SL999 vectors (significantly) outperformed GloVe and \textsc{xavier} vectors for goal tracking on both datasets. The gains are particularly robust for noisy DSTC2 data, where both collections of pre-trained vectors consistently outperformed random initialisation. The gains are weaker for the noise-free WOZ 2.0 dataset, which seems to be large (and clean) enough for the NBT model to learn task-specific rephrasings and compensate for the lack of semantic content in the word vectors. For this dataset, GloVe vectors do not improve over the randomly initialised ones. We believe this happens because distributional models keep related, yet antonymous words close together (e.g.~\emph{north} and \emph{south}, \emph{expensive} and \emph{inexpensive}), offsetting the useful semantic content embedded in these vector spaces. \section{Conclusion} In this paper, we have proposed a novel neural belief tracking (NBT) framework designed to overcome current obstacles to deploying dialogue systems in real-world dialogue domains. The NBT models offer the known advantages of coupling Spoken Language Understanding and Dialogue State Tracking, without relying on hand-crafted semantic lexicons to achieve state-of-the-art performance. Our evaluation demonstrated these benefits: the NBT models match the performance of models which make use of such lexicons and vastly outperform them when these are not available. Finally, we have shown that the performance of NBT models improves with the semantic quality of the underlying word vectors. To the best of our knowledge, we are the first to move past intrinsic evaluation and show that \emph{semantic specialisation} boosts performance in downstream tasks. In future work, we intend to explore applications of the NBT for multi-domain dialogue systems, as well as in languages other than English that require handling of complex morphological variation. \section*{Acknowledgements} The authors would like to thank Ivan Vuli\'{c}, Ulrich Paquet, the Cambridge Dialogue Systems Group and the anonymous ACL reviewers for their constructive feedback and helpful discussions. \clearpage \bibliographystyle{acl2017} \clearpage \end{document}
What do Neural Machine Translation Models Learn about Morphology?
1704.03471
Table 6: Impact of changing the target language on POS tagging accuracy. Self = German/Czech in rows 1/2 respectively.
[ "SourceTarget", "English", "Arabic", "Self" ]
[ [ "German", "93.5", "92.7", "89.3" ], [ "Czech", "75.7", "75.2", "71.8" ] ]
We report here results that were omitted from the paper due to the space limit. As noted in the paper, all the results consistently show that i) layer 1 performs better than layers 0 and 2; and ii) char-based representations are better than word-based for learning morphology.
\section{Motivation} \label{sec:motivation} Translating morphologically-rich languages is especially difficult due to a large vocabulary size and a high level of sparsity. Different solutions have been proposed to deal with this problem, for example factored models in phrase-based MT~\cite{koehn-hoang:2007:EMNLP-CoNLL2007} or softmax approximations in neural MT~\cite{ruder-softmax}. More recently, neural MT systems have shown significant gains by exploiting characters and other sub-word units~\cite{costajussa-fonollosa:2016:P16-2,sennrich-haddow-birch:2016:P16-12,wu2016google}. Presumably, such models are better than word-based models in representing the structure of rare and unseen words. Indeed, \newcite{sennrich-haddow-birch:2016:P16-12} have found that the unigram translation accuracy of words decreases for lower-frequency words. They also observed somewhat different behavior when translating into different languages. It is less clear, however, what and how neural translation models learn about word structure. In this work we are interested in answering questions such as the following: \begin{itemize} \item How do character-based models improve neural MT? \item What components of the NMT system encode word structure? \item How does the target language affect the learning of word structure? \item What is the role of the decoder in learning word representations? \end{itemize} In the next section, we describe our data-driven approach for addressing such questions. We aim to obtain quantitative answers that will lead to generalizable conclusions. \section{Methodology} Given a source sentence $s = \{w_1, w_2, ..., w_N\}$ and a target sentence $t=\{u_1, u_2, ..., u_M\}$, we first generate a vector representation for the source sentence using an encoder (Eqn.\ \ref{eq:enc}) and then map this vector to the target sentence using a decoder (Eqn.\ \ref{eq:dec}) \cite{sutskever2014sequence}: \begin{align} &\texttt{ENC}: s=\{w_1, w_2, ..., w_N\} \mapsto \sss \in \reals^k \label{eq:enc} \\ &\texttt{DEC} : \sss \in \reals^k \mapsto t=\{u_1, u_2, ..., u_M\} \label{eq:dec} \end{align} In this work, we use long short-term memory (LSTM) \cite{hochreiter1997long} encoder-decoders with attention \cite{bahdanau2014neural}, which we train on parallel data. After training the NMT system, we freeze the parameters of the encoder and use \texttt{ENC} as a feature extractor to generate vectors representing words in the sentence. Let $\texttt{ENC}_i(s)$ denote the encoded representation of word $w_i$. For example, this may be the output of the LSTM after word $w_i$. We feed $\texttt{ENC}_i(s)$ to a neural classifier that is trained to predict POS or morphological tags and evaluate the quality of the representation based on our ability to train a good classifier. By comparing the performance of classifiers trained with features from different instantiations of \texttt{ENC}, we can evaluate what MT encoders learn about word structure. Figure \ref{fig:approach} illustrates this process. We follow a similar procedure for analyzing representation learning in $\texttt{DEC}$. The classifier itself can be modeled in different ways. For example, it may be an LSTM over outputs of the encoder. However, as we are interested in assessing the quality of the representations learned by the MT system, we choose to model the classifier as a simple feed-forward network with one hidden layer and a ReLU non-linearity.
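To make this set-up concrete, the following is a minimal numpy sketch of the probing classifier (forward pass and accuracy only); the encoder features and labels are random placeholders standing in for $\texttt{ENC}_i(s)$ and the gold tags, and the hidden size of 500 is an assumption matching the encoder state size given in the supplementary material.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
d, n_tags, n_words = 500, 42, 1000        # feature size, tag set size, toy number of tokens

feats = rng.standard_normal((n_words, d)) # placeholders for the frozen encoder features ENC_i(s)
labels = rng.integers(0, n_tags, n_words) # placeholders for POS/morphological tags

# One hidden layer with a ReLU non-linearity, followed by an output layer over the tag set.
W1, b1 = 0.01 * rng.standard_normal((d, d)), np.zeros(d)
W2, b2 = 0.01 * rng.standard_normal((n_tags, d)), np.zeros(n_tags)

def predict(x):
    h = np.maximum(x @ W1.T + b1, 0.0)    # hidden ReLU layer
    return (h @ W2.T + b2).argmax(axis=-1)

# The representation is judged by how well such a classifier can be trained with
# cross-entropy; only the forward pass and the accuracy computation are shown here.
print((predict(feats) == labels).mean())
\end{verbatim}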
Arguably, if the learned representations are good, then a non-linear classifier should be able to extract useful information from them.\footnote{We also experimented with a linear classifier and observed similar trends to the non-linear case, but overall lower results; \newcite{qian-qiu-huang:2016:P16-11} reported similar findings.} We emphasize that our goal is not to beat the state-of-the-art on a given task, but rather to analyze what NMT models % learn about morphology. The classifier is trained with a cross-entropy loss; more details on its architecture are in the supplementary material. \documentclass[11pt,a4paper]{article} \usepackage[hyperref]{acl2017} \usepackage[normalem]{ulem} % http://ctan.org/pkg/pifont \newcommand{\cmark}{\ding{51}}% \newcommand{\xmark}{\ding{55}}% \newcommand\alert[1]{{\textcolor{red}{#1}}} \aclfinalcopy % Uncomment this line for the final submission \def\aclpaperid{496} % Enter the acl Paper ID here \newcommand\BibTeX{B{\sc ib}\TeX} \newcommand{\reals}{\mathbb{R}} \newcommand{\xx}{\mathbf{x}} \newcommand{\ii}{\mathbf{i}} \newcommand{\ff}{\mathbf{f}} \newcommand{\oo}{\mathbf{o}} \newcommand{\cc}{\mathbf{c}} \newcommand{\bb}{\mathbf{b}} \newcommand{\hh}{\mathbf{h}} \newcommand{\uu}{\mathbf{u}} \newcommand{\ww}{\mathbf{w}} % word representation \newcommand{\sss}{\mathbf{s}} % sentence representation \newcommand{\WW}{\mathbf{W}} \newcommand{\mm}{\mathbf{m}} % memory \newcommand{\aaa}{\mathbf{a}} % attention \newcommand{\rr}{\mathbf{r}} % attention \newcommand{\zz}{\mathbf{z}} % noise \title{What do Neural Machine Translation Models Learn about Morphology?} \author{Yonatan Belinkov$^1$ ~~ Nadir Durrani$^2$ ~~ Fahim Dalvi$^2$ ~~ Hassan Sajjad$^2$ ~~ James Glass$^1$ \\\\ $^1$MIT Computer Science and Artificial Intelligence Laboratory, Cambridge, MA 02139, USA \\ {\tt \{belinkov, glass\}@mit.edu} \\ $^2$Qatar Computing Research Institute, HBKU, Doha, Qatar \\ {\tt \{ndurrani, faimaduddin, hsajjad\}@qf.org.qa} } \date{} \begin{document} \maketitle \begin{framed} \noindent This is a modified version of a paper originally published at ACL 2017 with updated results and discussion in section 5. \end{framed} \begin{abstract} Neural machine translation (MT) models obtain state-of-the-art performance while maintaining a simple, end-to-end architecture. However, little is known about what these models learn about source and target languages during the training process. In this work, we analyze the representations learned by neural MT models at various levels of granularity and empirically evaluate the quality of the representations for learning morphology through extrinsic part-of-speech and morphological tagging tasks. We conduct a thorough investigation along several parameters: word-based vs.\ character-based representations, depth of the encoding layer, the identity of the target language, and encoder vs.\ decoder representations. Our data-driven, quantitative evaluation sheds light on important aspects in the neural MT system and its ability to capture word structure.\footnote{Our code is available at \url{https://github.com/boknilev/nmt-repr-analysis}.} \end{abstract} \input{introduction} \input{methodology} \input{data} \input{encoder-analysis} \input{decoder-analysis} \input{related-work} \input{conclusion} \section*{Acknowledgments} We would like to thank Helmut Schmid for providing the Tiger corpus, members of the MIT Spoken Language Systems group for helpful comments, and the three anonymous reviewers for their useful suggestions. 
This research was carried out in collaboration between the HBKU Qatar Computing Research Institute (QCRI) and the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). \bibliographystyle{acl_natbib} \newpage \appendix \input{supplement} \end{document} \section{Conclusion} Neural networks have become ubiquitous in machine translation due to their elegant architecture and good performance. The representations they use for linguistic units are crucial for obtaining high-quality translation. In this work, we investigated how neural MT models learn word structure. We evaluated their representation quality on POS and morphological tagging in a number of languages. Our results lead to the following conclusions: \begin{itemize}%[itemsep=1pt,topsep=5pt]%[leftmargin=*] \item Character-based representations are better than word-based ones for learning morphology, especially in rare and unseen words. \item Lower layers of the neural network are better at capturing morphology, while deeper networks improve translation performance. We hypothesize that lower layers are more focused on word structure, while higher ones are focused on word meaning. \item Translating into morphologically-poorer languages leads to better source-side representations. This is partly, but not completely, % correlated with BLEU scores. % \item There are only little differences between encoder and decoder representation quality. The attention mechanism does not seem to significantly affect the quality of the decoder representations, while it is important for the encoder representations. \end{itemize} These insights can guide further development of neural MT systems. For instance, jointly learning translation and morphology can possibly lead to better representations and improved translation. Our analysis indicates that this kind of approach should take into account factors such as the encoding layer and the type of word representation. Another area for future work is to extend the analysis to other word % representations (e.g.\ byte-pair encoding), deeper networks, and more semantically-oriented tasks such as semantic role-labeling or semantic parsing. \section{Supplementary Material} \label{sec:supplemental} \subsection{Training Details} \label{sec:sup-training} \paragraph{POS/Morphological classifier} The classifier used for all prediction tasks is a feed-forward network with one hidden layer, dropout ($\rho=0.5$), a ReLU non-linearity, and an output layer mapping to the tag set (followed by a Softmax). The size of the hidden layer is set to be identical to the size of the encoder's hidden state (typically 500 dimensions). We use Adam \cite{kingma2014adam} with default parameters to minimize the cross-entropy objective. Training is run with mini-batches of size 16 and stopped once the loss on the dev set stops improving; we allow a patience of 5 epochs. \paragraph{Neural MT system} We train a 2-layer LSTM encoder-decoder with attention. We use the \texttt{seq2seq-attn} implementation \cite{kim2016} with the following default settings: word vectors and LSTM states have 500 dimensions, SGD with initial learning rate of 1.0 and rate decay of 0.5, and dropout rate of 0.3. The character-based model is a CNN with a highway network over characters \cite{kim2015character} with 1000 feature maps and a kernel width of 6 characters. This model was found to be useful for translating morphologically-rich languages \cite{costajussa-fonollosa:2016:P16-2}. 
The MT system is trained for 20 epochs, and the model with the best dev loss is used for extracting features for the classifier. \subsection{Data and Taggers} \label{sec:sup-data} \paragraph{Datasets} All of the translation models are trained on the Ted talks corpus included in WIT$^3$ \cite{cettoloEtAl:EAMT2012,cettolol:SeMaT:2016}. Statistics about each language pair are available on the WIT$^3$ website: \url{https://wit3.fbk.eu}. For experiments using gold tags, we used the Arabic Treebank for Arabic (with the versions and splits described in the MADAMIRA manual \cite{PASHA14.593.L14-1479}) and the Tiger corpus for German.\footnote{\url{http://www.ims.uni-stuttgart.de/forschung/ressourcen/korpora/tiger.html}} \paragraph{POS and morphological taggers} We used the following tools to annotate the MT corpora: MADAMIRA \cite{PASHA14.593.L14-1479} for Arabic POS and morphological tags, Tree-Tagger \cite{schmid:2004:PAPERS} for Czech and French POS tags, LoPar \cite{schmid:00a} for German POS and morphological tags, and MXPOST \cite{ratnaparkhi98maximum} for English POS tags. These tools are recommended on the Moses website.\footnote{\url{http://www.statmt.org/moses/?n=Moses.ExternalTools}} As mentioned before, our goal is not to achieve state-of-the-art results, but rather to study what different components of the NMT architecture learn about word morphology. Please refer to \newcite{mueller-schmid-schutze:2013:EMNLP} for representative POS and morphological tagging accuracies. \subsection{Supplementary Results} \label{sec:sup-results} We report here results that were omitted from the paper due to the space limit. Table \ref{tab:different_layers} shows encoder results using different layers, languages, and representations (word/char-based). As noted in the paper, all the results consistently show that i) layer 1 performs better than layers 0 and 2; and ii) char-based representations are better than word-based for learning morphology. Table \ref{tab:different_language} shows that translating into a morphologically-poor language (English) leads to better source representations, and Table \ref{tab:decoder} provides additional decoder results. Table~\ref{tab:decoder-old} shows POS tagging accuracy using decoder representations, where the current word representation was used to predict the next word's tag. The idea is to evaluate whether the current word representation contains POS information about the output of the decoder. Clearly, the current word representation cannot be used to predict the next word's tag. This also holds when removing the attention (En-Ar, 85.54\%) or using character-based representations (En-Ar, 44.5\%). Since the decoder representation is in the pre-Softmax layer, this means that most of the work of predicting the next word is done in the Softmax layer, while the pre-Softmax representation contains much information about the current input word. \newpage \section{Introduction} Neural network models are quickly becoming the predominant approach to machine translation (MT). Training neural MT (NMT) models can be done in an end-to-end fashion, which is simpler and more elegant than traditional MT systems. Moreover, NMT systems have become competitive with, or better than, the previous state-of-the-art, especially since the introduction of sequence-to-sequence models and the attention mechanism \cite{bahdanau2014neural,sutskever2014sequence}.
The improved translation quality is often attributed to better handling of non-local dependencies and morphology generation \cite{luong-manning:iwslt15,bentivogli-EtAl:2016:EMNLP2016,toral-sanchezcartagena:2017:EACLlong}. \bigskip However, little is known about what and how much these models learn about each language and its features. Recent work has started exploring the role of the NMT encoder in learning source syntax \cite{shi-padhi-knight:2016:EMNLP2016}, but research studies are yet to answer important questions such as: \textit{(i)} what do NMT models learn about word morphology? \textit{(ii)} what is the effect on learning when translating into/from morphologically-rich languages? \mbox{\textit{(iii)} what} impact do different representations (character vs.\ word) have on learning? and \textit{(iv)} what do different modules learn about the syntactic and semantic structure of a language? Answering such questions is imperative for fully understanding the NMT architecture. In this paper, we strive towards exploring \textit{(i)}, \textit{(ii)}, and \textit{(iii)} by providing quantitative, data-driven answers to the following specific questions: \begin{itemize}%[itemsep=5pt,topsep=8pt] %[leftmargin=*] \item Which parts of the NMT architecture capture word structure? \item What is the division of labor between different components (e.g.\ different layers or %of encoder vs.\ decoder)? \item How do different word representations help learn better morphology and modeling of infrequent words? \item How does the target language affect the learning of word structure? \end{itemize} To achieve this, we follow a simple but effective procedure with three steps: \mbox{\textit{(i)} train} a neural MT system on a parallel corpus; \mbox{\textit{(ii)} use} the trained model to extract feature representations for words in a language of interest; and \mbox{\textit{(iii)} train} a classifier using extracted features to make predictions for another task. We then evaluate the quality of the trained classifier on the given task as a proxy to the quality of the extracted representations. In this way, we obtain a quantitative measure of how well the original MT system learns features that are relevant to the given task. We focus on the tasks of part-of-speech (POS) and full morphological tagging. We investigate how different neural MT systems capture POS and morphology through a series of experiments along several parameters. For instance, we contrast word-based and character-based representations, use different encoding layers, vary source and target languages, and compare extracting features from the encoder vs.\ the decoder. We experiment with several languages with varying degrees of morphological richness: French, German, Czech, Arabic, and Hebrew.\ Our analysis reveals interesting insights such as: \begin{itemize}%[itemsep=3pt,topsep=5pt]%[leftmargin=*] \item Character-based representations are much better for learning morphology, especially for low-frequency words. This improvement is correlated with better BLEU scores. On the other hand, word-based models are sufficient for learning the structure of common words. \item Lower layers of the encoder are better at capturing word structure, while deeper networks improve translation quality, suggesting that higher layers focus more on word meaning. \item The target language impacts the kind of information learned by the MT system. Translating into morphologically-poorer languages leads to better source-side word representations. 
This is partly, but not completely, correlated with BLEU scores. \item The NMT encoder and decoder learn representations of similar quality. The attention mechanism affects the quality of the encoder representations more than that of the decoder representations. \end{itemize} \section{Data} \paragraph{Language pairs} We experiment with several language pairs, including morphologically-rich languages, that have received relatively significant attention in the MT community. These include Arabic-, German-, French-, and Czech-English pairs. To broaden our analysis and study the effect of having morphologically-rich languages on both source and target sides, we also include Arabic-Hebrew, two languages with rich and similar morphological systems, and Arabic-German, two languages with rich but different morphologies. \paragraph{MT data} Our translation models are trained on the WIT$^3$ corpus of TED talks \cite{cettoloEtAl:EAMT2012,cettolol:SeMaT:2016} made available for IWSLT 2016. This allows for comparable and cross-linguistic analysis. Statistics about each language pair are given in Table \ref{tab:tagsets} (under Pred). We use official dev and test sets for tuning and testing. Reported figures are the averages over test sets. \paragraph{Annotated data} We use two kinds of datasets to train POS and morphological classifiers: gold-standard and predicted tags. For predicted tags, we simply used freely available taggers to annotate the MT data. For gold tags, we use gold-annotated datasets. Table \ref{tab:tagsets} gives statistics for datasets with gold and predicted tags; see supplementary material for details on taggers and gold data. We train and test our classifiers on predicted annotations, and similarly on gold annotations, when we have them. We report both results wherever available. \section{Decoder Analysis} \label{sec:dec-analysis} So far we only looked at the encoder. However, the decoder \texttt{DEC} is a crucial part in an MT system with access to both source and target sentences. In order to examine what the decoder learns about morphology, we first train an NMT system on the parallel corpus. Then, we use the trained model to encode a source sentence and extract features for words in the target sentence. These features are used to train a classifier on POS or morphological tagging on the target side.\footnote{In this section we only experiment with predicted tags as there are no parallel data with gold POS/morphological tags that we are aware of.} % Note that in this case the decoder is given the correct target words one-by-one, similar to the usual NMT training regime. Table \ref{tab:pos-dec-enc-attn-nogold} (1st row) shows the results of using representations extracted with \texttt{ENC} and \texttt{DEC} from the Arabic-English and English-Arabic models, respectively. There is a modest drop in representation quality with the decoder. This drop may be correlated with lower BLEU scores when translating English to Arabic vs.\ Arabic to English. We observed similar small drops with higher-quality translation directions (Table~\ref{tab:decoder}, Appendix~\ref{sec:sup-results}). The small gap between encoder and decoder representations may seem surprising when we consider the fundamental tasks of the two modules. The encoder's task is to create a generic, close to language-independent representation of the source sentence, as shown by recent evidence from multilingual NMT \cite{johnson2016google}.
The decoder's task is to use this representation to generate the target sentence in a specific language. One might conjecture that it would be sufficient for the decoder to learn a strong language model in order % to produce morphologically-correct output, without learning much about morphology, while the encoder needs to learn quite a lot about source language morphology in order to create a good generic representation. However, their performance seems more or less comparable. In the following section we investigate the role of the attention mechanism in the division of labor between encoder and decoder. \subsection{Effect of attention} Consider the role of the attention mechanism in learning useful representations: during decoding, the attention weights are combined with the decoder's hidden states to generate the current translation. These two sources of information need to jointly point to the most relevant source word(s) and predict the next most likely word. Thus, the decoder puts significant emphasis on mapping back to the source sentence, which may come at the expense of obtaining a meaningful representation of the current word. We hypothesize that the attention mechanism might hurt the quality of the target word representations learned by the decoder. To test this hypothesis, we train NMT models with and without attention and compare the quality of their learned representations. As Table~\ref{tab:pos-dec-enc-attn-nogold} shows (compare 1st and 2nd rows), removing the attention mechanism decreases the quality of the encoder representations significantly, but only mildly hurts the quality of the decoder representations. It seems that the decoder does not rely on the attention mechanism to obtain good target word representations, contrary to our hypothesis. \subsection{Effect of word representation} We also conducted experiments to verify our findings regarding word-based versus character-based representations on the decoder side. By character representation we mean a character CNN on the input words. The decoder predictions are still done at the word-level, which enables us to use its hidden states as word representations. Table~\ref{tab:pos-dec-enc-word-char-nogold} shows POS accuracy of word-based % vs.\ char-based representations in the encoder and decoder. In both cases, char-based representations perform better. BLEU scores behave differently: the char-based model leads to better translations in Arabic-to-English, but not in English-to-Arabic. A possible explanation for this phenomenon % is that the decoder's predictions are still done at word level even with the char-based model (which encodes the target input but not the output). In practice, this can lead to generating unknown words. Indeed, in Arabic-to-English the char-based model reduces the number of generated unknown words % in the MT % test set by 25\%, while in English-to-Arabic the number of unknown words % remains roughly the same between word-based % and char-based models. \section{Related Work} \label{sec:related-work} \paragraph{Analysis of neural models} The opacity of neural networks has motivated researchers to analyze such models in different ways. One line of work visualizes hidden unit activations in recurrent neural networks that are trained for a given task \cite{elman1991distributed,karpathy2015visualizing,kadar2016representation,qian-qiu-huang:2016:EMNLP2016}. While such visualizations illuminate the inner workings of the network, they are often qualitative in nature and somewhat anecdotal.
A different approach tries to provide a quantitative analysis by correlating parts of the neural % network with linguistic properties, for example by training a classifier to predict features of interest. Different units have been used, from word embeddings \cite{kohn:2015:EMNLP,qian-qiu-huang:2016:P16-11}, through LSTM gates or states \cite{qian-qiu-huang:2016:EMNLP2016}, to sentence embeddings \cite{adi2016fine}. Our work is most similar to \newcite{shi-padhi-knight:2016:EMNLP2016}, who use hidden vectors from a neural MT encoder to predict syntactic properties on the English source side. In contrast, we focus on representations in morphologically-rich languages and evaluate both source and target sides across several criteria. \newcite{vylomova2016word} also analyze different % representations for morphologically-rich languages in MT, but do not directly measure the quality of the learned representations. \paragraph{Word representations in MT} Machine translation systems that deal with morphologically-rich languages resort to various techniques for representing morphological knowledge, such as word segmentation \cite{C00-2162,E03-1076,Badr:2008:SES:1557690.1557732} and factored translation and reordering models \cite{koehn-hoang:2007:EMNLP-CoNLL2007,durrani-EtAl:2014:Coling}. Characters and other sub-word units have become increasingly popular in neural MT, although they had also been used in phrase-based MT for handling morphologically-rich \cite{Luong:D10-1015} or closely related language pairs \cite{durrani-EtAl:2010:ACL,Nakov:Tiedemann:2012}. In neural MT, such units are obtained in a pre-processing step---e.g.\ by byte-pair encoding \cite{sennrich-haddow-birch:2016:P16-12} or the word-piece model \cite{wu2016google}---or learned during training using a character-based convolutional/recurrent sub-network \cite{costajussa-fonollosa:2016:P16-2,Luong:P16-1100,vylomova2016word}. The latter approach has the advantage of keeping the original word boundaries without requiring pre- and post-processing. Here we focus on a character CNN which has been used in language modeling and machine translation \cite{kim2015character,belinkov-glass:2016:SeMaT, costajussa-fonollosa:2016:P16-2,jozefowicz2016exploring,sajjad:2017:ACL}. We evaluate the quality of different representations learned by an MT system augmented with a character CNN in terms of POS and morphological tagging, and contrast them with a purely word-based system. \section{Encoder Analysis} \label{sec:enc-analysis} Recall that after training the NMT system we freeze its parameters and use it only to generate features for the POS/morphology classifier. Given a trained encoder \texttt{ENC} and a sentence $s$ with POS/morphology annotation, we generate word features $\texttt{ENC}_i(s)$ for every word in the sentence. We then train a classifier that uses the features $\texttt{ENC}_i(s)$ to predict POS or morphological tags. \subsection{Effect of word representation} In this section, we compare different word representations extracted with different encoders. Our word-based model uses a word embedding matrix which is initialized randomly and learned with other NMT parameters. For a character-based model we adopt a convolutional neural network (CNN) over character embeddings that is also learned during training \cite{kim2015character,costajussa-fonollosa:2016:P16-2}; see appendix \ref{sec:sup-training} for specific settings. 
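As a rough illustration of the character-based alternative (a simplified analogue of the character CNN cited above, without the highway layer and with made-up sizes), a single word vector might be computed as:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)
char_dim, n_filters, width = 15, 100, 3      # toy sizes, far smaller than the real model

chars = "abcdefghijklmnopqrstuvwxyz"
char_emb = {ch: rng.standard_normal(char_dim) for ch in chars}
filters = 0.1 * rng.standard_normal((n_filters, width * char_dim))

def char_cnn_word_vector(word):
    # Embed characters, slide filters of the given width, apply ReLU, max-pool over positions.
    C = np.stack([char_emb[ch] for ch in word])
    windows = np.stack([np.concatenate(C[i:i + width])
                        for i in range(len(word) - width + 1)])
    return np.maximum(windows @ filters.T, 0.0).max(axis=0)

# Morphological variants share character n-grams, so their representations overlap heavily.
v1, v2 = char_cnn_word_vector("translate"), char_cnn_word_vector("translated")
print(float(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))))
\end{verbatim}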
In both cases we run the encoder over these representations and use its output $\texttt{ENC}_i(s)$ as features for the classifier. Table~\ref{tab:results-all-pairs} shows POS tagging accuracy using features from different NMT encoders. Char-based models always generate better representations for POS tagging, especially in the case of morphologically-richer languages like Arabic and Czech. We observed a similar pattern in the full morphological tagging task. For example, we obtain morphological tagging accuracy of 65.2/79.66 and 67.66/81.66 using word/char-based representations from the Arabic-Hebrew and Arabic-English encoders, respectively.\footnote{The results are not far below dedicated taggers (e.g.\ 95.1/84.1 on Arabic POS/morphology \cite{PASHA14.593.L14-1479}), indicating that NMT models learn quite good representations.} The superior morphological power of the char-based model also manifests in better translation quality (measured by BLEU), as shown in Table~\ref{tab:results-all-pairs}. \paragraph{Impact of word frequency} Let us look more closely at an example case: Arabic POS and morphological tagging. Figure~\ref{fig:repr} shows the effect of using word-based vs.\ char-based feature representations, obtained from the encoder of the Arabic-Hebrew system (other language pairs exhibit similar trends). Clearly, the char-based model is superior to the word-based one. This is true for the overall accuracy (+14.3\% in POS, +14.5\% in morphology), but more so on OOV words (+37.6\% in POS, +32.7\% in morphology). Figure~\ref{fig:repr-freqs} shows that the gap between word-based and char-based representations increases as the frequency of the word in the training data decreases. In other words, the more frequent the word, the less need there is for character information. These findings make intuitive sense: the char-based model is able to learn character n-gram patterns that are important for identifying word structure, but as the word becomes more frequent the word-based model has seen enough examples to make a decision. \paragraph{Analyzing specific tags} In Figure~\ref{fig:repr-pos-cm} we plot confusion matrices for POS tagging using word-based and char-based representations (from Arabic encoders). While the char-based representations are overall better, the two models still share similar misclassified tags. Much of the confusion comes from wrongly predicting nouns (NN, NNP). In the word-based case, relatively many tags with determiner (DT+NNP, DT+NNPS, DT+NNS, DT+VBG) are wrongly predicted as non-determined nouns (NN, NNP). In the char-based case, this hardly happens. This suggests that char-based representations are predictive of the presence of a determiner, which in Arabic is expressed as the prefix ``Al-'' (the definite article), a pattern easily captured by a char-based model. In Figure~\ref{fig:repr-pos-tag-freq} we plot the difference in POS accuracy when moving from word-based to char-based representations, per POS tag frequency in the training data. Tags closer to the upper-right corner occur more frequently in the training set and are better predicted by char-based compared to word-based representations. There are a few fairly frequent tags (in the middle-bottom part of the figure) whose accuracy does not improve much when moving from word- to char-based representations: mostly conjunctions, determiners, and certain particles (CC, DT, WP). But there are several very frequent tags (NN, DT+NN, DT+JJ, VBP, and even PUNC) whose accuracy improves quite a lot. 
Then there are plural nouns (NNS, DT+NNS) where the char-based model really shines, which makes sense linguistically as plurality in Arabic is usually expressed by certain suffixes (\mbox{``-wn/yn''} for masc. plural, \mbox{``-At''} for fem. plural). The char-based model is thus especially good with frequent tags and infrequent words, which is understandable given that infrequent words typically belong to frequent open categories like nouns and verbs. \subsection{Effect of encoder depth} Modern NMT systems use very deep architectures with up to 8 or 16 layers \cite{wu2016google,TACL863}. We would like to understand what kind of information different layers capture. Given a trained model with multiple layers, we extract representations from the different layers in the encoder. Let $\texttt{ENC}^l_i(s)$ denote the encoded representation of word $w_i$ after the $l$-th layer. We vary $l$ and train different classifiers to predict POS or morphological tags. Here we focus on the case of a 2-layer encoder-decoder for simplicity ($l \in \{1,2\}$). Figure~\ref{fig:layer-effect-all-langs} shows POS tagging results using representations from different encoding layers across five language pairs. The general trend is that passing word vectors through the encoder improves POS tagging, which can be explained by contextual information contained in the representations after one layer. However, it turns out that representations from the 1st layer are better than those from the 2nd layer, at least for the purpose of capturing word structure. Figure~\ref{fig:layer-effect-lines} shows that the same pattern holds for both word-based and char-based representations, on Arabic POS and morphological tagging. In all cases, layer 1 representations are better than layer 2 representations.\footnote{We found this result to be also true in French, German, and Czech experiments (see the supplementary material). } In contrast, BLEU scores actually increase when training \mbox{2-layer} vs.\ \mbox{1-layer} models (+1.11/+0.56 BLEU for Arabic-Hebrew word/char-based models). Thus translation quality improves when adding layers but morphology quality degrades. Intuitively, it seems that lower layers of the network learn to represent word structure while higher layers focus more on word meaning. A similar pattern was recently observed in a joint language-vision deep recurrent net~\cite{gelderloos-chrupala:2016:COLING}. \subsection{Effect of target language} While translating from morphologically-rich languages is challenging, translating into such languages is even harder. For instance, our basic system obtains BLEU of 24.69/23.2 on Arabic/Czech to English, but only 13.37/13.9 on English to Arabic/Czech. How does the target language affect the learned source language representations? Does translating into a morphologically-rich language require more knowledge about source language morphology? In order to investigate these questions, we fix the source language and train NMT models on different target languages. For example, given an Arabic source we train Arabic-to-English/Hebrew/German systems. These target languages represent a morphologically-poor language (English), a morphologically-rich language with similar morphology to the source language (Hebrew), and a morphologically-rich language with different morphology (German). To make a fair comparison, we train the models on the intersection of the training data based on the source language. 
In this way the experimental setup is completely identical: the models are trained on the same Arabic sentences with different translations. Figure~\ref{fig:target-lang} shows POS and morphology accuracy of word-based representations from the NMT encoders, as well as corresponding BLEU scores. As expected, translating to English is easier than translating to the morphologically-richer Hebrew and German, resulting in higher BLEU. Despite their similar morphologies, translating Arabic to Hebrew is worse than Arabic to German, which can be attributed to the richer Hebrew morphology compared to German. POS and morphology accuracies share an intriguing pattern: the representations that are learned when translating to English are better for predicting POS or morphology than those learned when translating to German, which are in turn better than those learned when translating to Hebrew. This is remarkable given that English is a morphologically-poor language that does not display many of the morphological properties that are found in the Arabic source. In contrast, German and Hebrew have richer morphologies, so one could expect that translating into them would make the model learn more about morphology. A possible explanation for this phenomenon is that the Arabic-English model is simply better than the Arabic-Hebrew and Arabic-German models, as hinted by the BLEU scores in Table \ref{tab:results-all-pairs}. The inherent difficulty in translating Arabic to Hebrew/German may affect the ability to learn good representations of word structure. To probe this more, we trained an Arabic-Arabic autoencoder on the same training data. We found that it learns to recreate the test sentences extremely well, with very high BLEU scores (Figure~\ref{fig:target-lang}). However, its word representations are actually inferior for the purpose of POS/morphological tagging. This implies that higher BLEU does not necessarily entail better morphological representations. In other words, a better translation model learns more informative representations, but only when it is actually learning to translate rather than merely memorizing the data as in the autoencoder case. We found this to be consistently true also for char-based experiments, and in other language pairs.
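As a final recap of the probing recipe used throughout these experiments, the sketch below contrasts classifiers built on features from different encoder layers $\texttt{ENC}^l_i(s)$; the features are random placeholders (so the printed numbers are meaningless) and a nearest-centroid rule stands in for the feed-forward classifier, purely to show the experimental scaffolding.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(5)
d, n_tags, n_train, n_test = 500, 20, 2000, 500

def layer_features(layer, n):
    # Placeholder for ENC^l_i(s); in the real experiments these come from the frozen NMT encoder.
    return rng.standard_normal((n, d)) + 0.1 * layer

y_train = rng.integers(0, n_tags, n_train)
y_test = rng.integers(0, n_tags, n_test)

for layer in (0, 1, 2):                      # word embeddings, first and second LSTM layer
    X_train, X_test = layer_features(layer, n_train), layer_features(layer, n_test)
    centroids = np.stack([X_train[y_train == t].mean(axis=0) for t in range(n_tags)])
    pred = ((X_test[:, None, :] - centroids[None]) ** 2).sum(-1).argmin(-1)
    print(f"layer {layer}: accuracy {(pred == y_test).mean():.3f}")
\end{verbatim}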
What do Neural Machine Translation Models Learn about Morphology?
1704.03471
Table 2: POS accuracy on gold and predicted tags using word-based and character-based representations, as well as corresponding BLEU scores.
[ "[EMPTY]", "Gold", "Pred", "BLEU" ]
[ [ "[EMPTY]", "Word/Char", "Word/Char", "Word/Char" ], [ "Ar-En", "80.31/93.66", "89.62/95.35", "24.7/28.4" ], [ "Ar-He", "78.20/92.48", "88.33/94.66", "9.9/10.7" ], [ "De-En", "87.68/94.57", "93.54/94.63", "29.6/30.4" ], [ "Fr-En", "–", "94.61/95.55", "37.8/38.8" ], [ "Cz-En", "–", "75.71/79.10", "23.2/25.4" ] ]
Char-based models always generate better representations for POS tagging, especially in the case of morphologically-richer languages like Arabic and Czech. We observed a similar pattern in the full morphological tagging task. For example, we obtain morphological tagging accuracy of 65.2/79.66 and 67.66/81.66 using word/char-based representations from the Arabic-Hebrew and Arabic-English encoders, respectively. The inherent difficulty in translating Arabic to Hebrew/German may affect the ability to learn good representations of word structure. To probe this more, we trained an Arabic-Arabic autoencoder on the same training data. The autoencoder learns to recreate the test sentences extremely well, with very high BLEU scores; however, its word representations are actually inferior for the purpose of POS/morphological tagging. This implies that higher BLEU does not necessarily entail better morphological representations. In other words, a better translation model learns more informative representations, but only when it is actually learning to translate rather than merely memorizing the data as in the autoencoder case. We found this to be consistently true also for char-based experiments, and in other language pairs.
\section{Motivation} \label{sec:motivation} Translating morphologically-rich languages is especially difficult due to a large vocabulary size and a high level of sparsity. Different solutions have been proposed to deal with this problem, for example factored models in phrase-based MT~\cite{koehn-hoang:2007:EMNLP-CoNLL2007} or softmax approximations in neural MT~\cite{ruder-softmax}. More recently, neural MT systems have shown significant gains by exploiting characters and other sub-word units~\cite{costajussa-fonollosa:2016:P16-2,sennrich-haddow-birch:2016:P16-12,wu2016google}. Presumably, such models are better than word-based models in representing the structure of rare and unseen words. Indeed, \newcite{sennrich-haddow-birch:2016:P16-12} have found that the unigram translation accuracy of words decreases for lower-frequency words. They also observed somewhat different behavior when translating into different languages. It is less clear, however, what and how neural translation models learn about word structure. In this work we are interested in answering questions such as the following: \begin{itemize} \item How do character-based models improve neural MT? \item Which components of the NMT system encode word structure? \item How does the target language affect the learning of word structure? \item What is the role of the decoder in learning word representations? \end{itemize} In the next section, we describe our data-driven approach for addressing such questions. We aim to obtain quantitative answers that will lead to generalizable conclusions. \section{Methodology} Given a source sentence $s = \{w_1, w_2, ..., w_N\}$ and a target sentence $t=\{u_1, u_2, ..., u_M\}$, we first generate a vector representation for the source sentence using an encoder (Eqn.\ \ref{eq:enc}) and then map this vector to the target sentence using a decoder (Eqn.\ \ref{eq:dec}) \cite{sutskever2014sequence}: \begin{align} &\texttt{ENC}: s=\{w_1, w_2, ..., w_N\} \mapsto \sss \in \reals^k \label{eq:enc} \\ &\texttt{DEC} : \sss \in \reals^k \mapsto t=\{u_1, u_2, ..., u_M\} \label{eq:dec} \end{align} In this work, we use long short-term memory (LSTM) \cite{hochreiter1997long} encoder-decoders with attention \cite{bahdanau2014neural}, which we train on parallel data. After training the NMT system, we freeze the parameters of the encoder and use \texttt{ENC} as a feature extractor to generate vectors representing words in the sentence. Let $\texttt{ENC}_i(s)$ denote the encoded representation of word $w_i$. For example, this may be the output of the LSTM after word $w_i$. We feed $\texttt{ENC}_i(s)$ to a neural classifier that is trained to predict POS or morphological tags and evaluate the quality of the representation based on our ability to train a good classifier. By comparing the performance of classifiers trained with features from different instantiations of \texttt{ENC}, we can evaluate what MT encoders learn about word structure. Figure \ref{fig:approach} illustrates this process. We follow a similar procedure for analyzing representation learning in $\texttt{DEC}$. The classifier itself can be modeled in different ways. For example, it may be an LSTM over outputs of the encoder. However, as we are interested in assessing the quality of the representations learned by the MT system, we choose to model the classifier as a simple feed-forward network with one hidden layer and a ReLU non-linearity.
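A minimal PyTorch sketch of such a probe is given below; the feature dimensionality, hidden size, dropout and tag-set size shown here are placeholders rather than the settings actually used (see the supplementary material for those):
\begin{verbatim}
import torch.nn as nn

class ProbingClassifier(nn.Module):
    """Feed-forward probe: one hidden layer with ReLU over a frozen
    encoder representation ENC_i(s). All sizes are placeholders."""
    def __init__(self, feat_dim=500, hidden_dim=500, num_tags=42, dropout=0.5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim),
            nn.ReLU(),
            nn.Dropout(dropout),
            nn.Linear(hidden_dim, num_tags),  # logits; CrossEntropyLoss adds the softmax
        )

    def forward(self, features):              # features: (batch, feat_dim)
        return self.net(features)
\end{verbatim}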
Arguably, if the learned representations are good, then a non-linear classifier should be able to extract useful information from them.\footnote{We also experimented with a linear classifier and observed similar trends to the non-linear case, but overall lower results; \newcite{qian-qiu-huang:2016:P16-11} reported similar findings.} We emphasize that our goal is not to beat the state-of-the-art on a given task, but rather to analyze what NMT models % learn about morphology. The classifier is trained with a cross-entropy loss; more details on its architecture are in the supplementary material. \documentclass[11pt,a4paper]{article} \usepackage[hyperref]{acl2017} \usepackage[normalem]{ulem} % http://ctan.org/pkg/pifont \newcommand{\cmark}{\ding{51}}% \newcommand{\xmark}{\ding{55}}% \newcommand\alert[1]{{\textcolor{red}{#1}}} \aclfinalcopy % Uncomment this line for the final submission \def\aclpaperid{496} % Enter the acl Paper ID here \newcommand\BibTeX{B{\sc ib}\TeX} \newcommand{\reals}{\mathbb{R}} \newcommand{\xx}{\mathbf{x}} \newcommand{\ii}{\mathbf{i}} \newcommand{\ff}{\mathbf{f}} \newcommand{\oo}{\mathbf{o}} \newcommand{\cc}{\mathbf{c}} \newcommand{\bb}{\mathbf{b}} \newcommand{\hh}{\mathbf{h}} \newcommand{\uu}{\mathbf{u}} \newcommand{\ww}{\mathbf{w}} % word representation \newcommand{\sss}{\mathbf{s}} % sentence representation \newcommand{\WW}{\mathbf{W}} \newcommand{\mm}{\mathbf{m}} % memory \newcommand{\aaa}{\mathbf{a}} % attention \newcommand{\rr}{\mathbf{r}} % attention \newcommand{\zz}{\mathbf{z}} % noise \title{What do Neural Machine Translation Models Learn about Morphology?} \author{Yonatan Belinkov$^1$ ~~ Nadir Durrani$^2$ ~~ Fahim Dalvi$^2$ ~~ Hassan Sajjad$^2$ ~~ James Glass$^1$ \\\\ $^1$MIT Computer Science and Artificial Intelligence Laboratory, Cambridge, MA 02139, USA \\ {\tt \{belinkov, glass\}@mit.edu} \\ $^2$Qatar Computing Research Institute, HBKU, Doha, Qatar \\ {\tt \{ndurrani, faimaduddin, hsajjad\}@qf.org.qa} } \date{} \begin{document} \maketitle \begin{framed} \noindent This is a modified version of a paper originally published at ACL 2017 with updated results and discussion in section 5. \end{framed} \begin{abstract} Neural machine translation (MT) models obtain state-of-the-art performance while maintaining a simple, end-to-end architecture. However, little is known about what these models learn about source and target languages during the training process. In this work, we analyze the representations learned by neural MT models at various levels of granularity and empirically evaluate the quality of the representations for learning morphology through extrinsic part-of-speech and morphological tagging tasks. We conduct a thorough investigation along several parameters: word-based vs.\ character-based representations, depth of the encoding layer, the identity of the target language, and encoder vs.\ decoder representations. Our data-driven, quantitative evaluation sheds light on important aspects in the neural MT system and its ability to capture word structure.\footnote{Our code is available at \url{https://github.com/boknilev/nmt-repr-analysis}.} \end{abstract} \input{introduction} \input{methodology} \input{data} \input{encoder-analysis} \input{decoder-analysis} \input{related-work} \input{conclusion} \section*{Acknowledgments} We would like to thank Helmut Schmid for providing the Tiger corpus, members of the MIT Spoken Language Systems group for helpful comments, and the three anonymous reviewers for their useful suggestions. 
This research was carried out in collaboration between the HBKU Qatar Computing Research Institute (QCRI) and the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). \bibliographystyle{acl_natbib} \newpage \appendix \input{supplement} \end{document} \section{Conclusion} Neural networks have become ubiquitous in machine translation due to their elegant architecture and good performance. The representations they use for linguistic units are crucial for obtaining high-quality translation. In this work, we investigated how neural MT models learn word structure. We evaluated their representation quality on POS and morphological tagging in a number of languages. Our results lead to the following conclusions: \begin{itemize}%[itemsep=1pt,topsep=5pt]%[leftmargin=*] \item Character-based representations are better than word-based ones for learning morphology, especially in rare and unseen words. \item Lower layers of the neural network are better at capturing morphology, while deeper networks improve translation performance. We hypothesize that lower layers are more focused on word structure, while higher ones are focused on word meaning. \item Translating into morphologically-poorer languages leads to better source-side representations. This is partly, but not completely, % correlated with BLEU scores. % \item There are only little differences between encoder and decoder representation quality. The attention mechanism does not seem to significantly affect the quality of the decoder representations, while it is important for the encoder representations. \end{itemize} These insights can guide further development of neural MT systems. For instance, jointly learning translation and morphology can possibly lead to better representations and improved translation. Our analysis indicates that this kind of approach should take into account factors such as the encoding layer and the type of word representation. Another area for future work is to extend the analysis to other word % representations (e.g.\ byte-pair encoding), deeper networks, and more semantically-oriented tasks such as semantic role-labeling or semantic parsing. \section{Supplementary Material} \label{sec:supplemental} \subsection{Training Details} \label{sec:sup-training} \paragraph{POS/Morphological classifier} The classifier used for all prediction tasks is a feed-forward network with one hidden layer, dropout ($\rho=0.5$), a ReLU non-linearity, and an output layer mapping to the tag set (followed by a Softmax). The size of the hidden layer is set to be identical to the size of the encoder's hidden state (typically 500 dimensions). We use Adam \cite{kingma2014adam} with default parameters to minimize the cross-entropy objective. Training is run with mini-batches of size 16 and stopped once the loss on the dev set stops improving; we allow a patience of 5 epochs. \paragraph{Neural MT system} We train a 2-layer LSTM encoder-decoder with attention. We use the \texttt{seq2seq-attn} implementation \cite{kim2016} with the following default settings: word vectors and LSTM states have 500 dimensions, SGD with initial learning rate of 1.0 and rate decay of 0.5, and dropout rate of 0.3. The character-based model is a CNN with a highway network over characters \cite{kim2015character} with 1000 feature maps and a kernel width of 6 characters. This model was found to be useful for translating morphologically-rich languages \cite{costajussa-fonollosa:2016:P16-2}. 
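A rough PyTorch sketch of this character-level word encoder is shown below; the 1000 feature maps and kernel width of 6 follow the settings above, while the character vocabulary size, the character embedding dimension, and the use of a single convolution width with a single highway layer are simplifying assumptions rather than the configuration actually used:
\begin{verbatim}
import torch
import torch.nn as nn

class CharCNNWordEncoder(nn.Module):
    """Char-CNN + highway word encoder (simplified sketch).
    Assumes each word is padded to at least `width` characters."""
    def __init__(self, num_chars=200, char_dim=25, num_maps=1000, width=6):
        super().__init__()
        self.char_emb = nn.Embedding(num_chars, char_dim, padding_idx=0)
        self.conv = nn.Conv1d(char_dim, num_maps, kernel_size=width)
        # One highway layer over the pooled feature maps.
        self.transform = nn.Linear(num_maps, num_maps)
        self.gate = nn.Linear(num_maps, num_maps)

    def forward(self, char_ids):                       # (batch, max_word_len)
        x = self.char_emb(char_ids).transpose(1, 2)    # (batch, char_dim, len)
        feats = torch.relu(self.conv(x)).max(dim=2).values  # max-over-time pooling
        t = torch.sigmoid(self.gate(feats))
        return t * torch.relu(self.transform(feats)) + (1 - t) * feats
\end{verbatim}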
The MT system is trained for 20 epochs, and the model with the best dev loss is used for extracting features for the classifier. \subsection{Data and Taggers} \label{sec:sup-data} \paragraph{Datasets} All of the translation models are trained on the Ted talks corpus included in WIT$^3$ \cite{cettoloEtAl:EAMT2012,cettolol:SeMaT:2016}. Statistics about each language pair are available on the WIT$^3$ website: \url{https://wit3.fbk.eu}. For experiments using gold tags, we used the Arabic Treebank for Arabic (with the versions and splits described in the MADAMIRA manual \cite{PASHA14.593.L14-1479}) and the Tiger corpus for German.\footnote{\url{http://www.ims.uni-stuttgart.de/forschung/ressourcen/korpora/tiger.html}} \paragraph{POS and morphological taggers} We used the following tools to annotate the MT corpora: MADAMIRA \cite{PASHA14.593.L14-1479} for Arabic POS and morphological tags, Tree-Tagger \cite{schmid:2004:PAPERS} for Czech and French POS tags, LoPar \cite{schmid:00a} for German POS and morphological tags, and MXPOST \cite{ratnaparkhi98maximum} for English POS tags. These tools are recommended on the Moses website.\footnote{\url{http://www.statmt.org/moses/?n=Moses.ExternalTools}} As mentioned before, our goal is not to achieve state-of-the-art results, but rather to study what different components of the NMT architecture learn about word morphology. Please refer to \newcite{mueller-schmid-schutze:2013:EMNLP} for representative POS and morphological tagging accuracies. \subsection{Supplementary Results} \label{sec:sup-results} We report here results that were omitted from the paper due to the space limit. Table \ref{tab:different_layers} shows encoder results using different layers, languages, and representations (word/char-based). As noted in the paper, all the results consistently show that i) layer 1 performs better than layers 0 and 2; and ii) char-based representations are better than word-based for learning morphology. Table \ref{tab:different_language} shows that translating into a morphologically-poor language (English) leads to better source representations, and Table \ref{tab:decoder} provides additional decoder results. Table~\ref{tab:decoder-old} shows POS tagging accuracy using decoder representations, where the current word representation was used to predict the next word's tag. The idea is to evaluate whether the current word representation contains POS information about the output of the decoder. Clearly, the current word representation cannot be used to predict the next word's tag. This also holds when removing the attention (En-Ar, 85.54\%) or using character-based representations (En-Ar, 44.5\%). Since the decoder representation is in the pre-Softmax layer, this means that most of the work of predicting the next work is done in the Softmax layer, while the pre-Softmax representation contains much information about the current input word. \newpage \section{Introduction} Neural network models are quickly becoming the predominant approach to machine translation (MT). Training neural MT (NMT) models can be done in an end-to-end fashion, which is simpler and more elegant than traditional MT systems. Moreover, NMT systems have become competitive with, or better than, the previous state-of-the-art, especially since the introduction of sequence-to-sequence models and the attention mechanism \cite{bahdanau2014neural,sutskever2014sequence}. 
The improved translation quality is often attributed to better handling of non-local dependencies and morphology generation \cite{luong-manning:iwslt15,bentivogli-EtAl:2016:EMNLP2016,toral-sanchezcartagena:2017:EACLlong}. \bigskip However, little is known about what and how much these models learn about each language and its features. Recent work has started exploring the role of the NMT encoder in learning source syntax \cite{shi-padhi-knight:2016:EMNLP2016}, but research studies are yet to answer important questions such as: \textit{(i)} what do NMT models learn about word morphology? \textit{(ii)} what is the effect on learning when translating into/from morphologically-rich languages? \mbox{\textit{(iii)} what} impact do different representations (character vs.\ word) have on learning? and \textit{(iv)} what do different modules learn about the syntactic and semantic structure of a language? Answering such questions is imperative for fully understanding the NMT architecture. In this paper, we strive towards exploring \textit{(i)}, \textit{(ii)}, and \textit{(iii)} by providing quantitative, data-driven answers to the following specific questions: \begin{itemize}%[itemsep=5pt,topsep=8pt] %[leftmargin=*] \item Which parts of the NMT architecture capture word structure? \item What is the division of labor between different components (e.g.\ different layers or %of encoder vs.\ decoder)? \item How do different word representations help learn better morphology and modeling of infrequent words? \item How does the target language affect the learning of word structure? \end{itemize} To achieve this, we follow a simple but effective procedure with three steps: \mbox{\textit{(i)} train} a neural MT system on a parallel corpus; \mbox{\textit{(ii)} use} the trained model to extract feature representations for words in a language of interest; and \mbox{\textit{(iii)} train} a classifier using extracted features to make predictions for another task. We then evaluate the quality of the trained classifier on the given task as a proxy to the quality of the extracted representations. In this way, we obtain a quantitative measure of how well the original MT system learns features that are relevant to the given task. We focus on the tasks of part-of-speech (POS) and full morphological tagging. We investigate how different neural MT systems capture POS and morphology through a series of experiments along several parameters. For instance, we contrast word-based and character-based representations, use different encoding layers, vary source and target languages, and compare extracting features from the encoder vs.\ the decoder. We experiment with several languages with varying degrees of morphological richness: French, German, Czech, Arabic, and Hebrew.\ Our analysis reveals interesting insights such as: \begin{itemize}%[itemsep=3pt,topsep=5pt]%[leftmargin=*] \item Character-based representations are much better for learning morphology, especially for low-frequency words. This improvement is correlated with better BLEU scores. On the other hand, word-based models are sufficient for learning the structure of common words. \item Lower layers of the encoder are better at capturing word structure, while deeper networks improve translation quality, suggesting that higher layers focus more on word meaning. \item The target language impacts the kind of information learned by the MT system. Translating into morphologically-poorer languages leads to better source-side word representations. 
This is partly, but not completely, correlated with BLEU scores. \item The NMT encoder and decoder learn representations of similar quality. The attention mechanism affects the quality of the encoder representations more than that of the decoder representations. \end{itemize} \section{Data} \paragraph{Language pairs } We experiment with several language pairs, including morphologically-rich languages, that have received relatively significant attention in the MT community. These include Arabic-, German-, French-, and Czech-English pairs. To broaden our analysis and study the effect of having morphologically-rich languages on both source and target sides, we also include Arabic-Hebrew, two languages with rich and similar morphological systems, and Arabic-German, two languages with rich but different morphologies. \paragraph{MT data} Our translation models are trained on the WIT$^3$ corpus of TED talks \cite{cettoloEtAl:EAMT2012,cettolol:SeMaT:2016} made available for IWSLT 2016. This allows for comparable and cross-linguistic analysis. Statistics about each language pair are given in Table \ref{tab:tagsets} (under Pred). We use official dev and test sets for tuning and testing. Reported figures are the averages over test sets. \paragraph{Annotated data} We use two kinds of datasets to train POS and morphological classifiers: gold-standard and predicted tags. For predicted tags, we simply used freely available taggers to annotate the MT data. For gold tags, we use gold-annotated datasets. Table \ref{tab:tagsets} gives statistics for datasets with gold and predicted tags; see supplementary material for details on taggers and gold data. We train and test our classifiers on predicted annotations, and similarly on gold annotations, when we have them. We report both results wherever available. \section{Decoder Analysis} \label{sec:dec-analysis} So far we only looked at the encoder. However, the decoder \texttt{DEC} is a crucial part in an MT system with access to both source and target sentences. In order to examine what the decoder learns about morphology, we first train an NMT system on the parallel corpus. Then, we use the trained model to encode a source sentence and extract features for words in the target sentence. These features are used to train a classifier on POS or morphological tagging on the target side.\footnote{In this section we only experiment with predicted tags as there are no parallel data with gold POS/morphological tags that we are aware of.} % Note that in this case the decoder is given the correct target words one-by-one, similar to the usual NMT training regime. Table \ref{tab:pos-dec-enc-attn-nogold} (1st row) shows the results of using representations extracted with \texttt{ENC} and \texttt{DEC} from the Arabic-English and English-Arabic models, respectively. There is a modest drop in representation quality with the decoder. This drop may be correlated with lower BLEU scores when translating English to Arabic vs.\ Arabic to English. We observed simmilar small drops with higher quality translation directions (Table~\ref{tab:decoder}, Appendix~\ref{sec:sup-results}). The little gap between encoder and decoder representations may sound surprising, when we consider the fundamental tasks of the two modules. The encoder's task is to create a generic, close to language-independent representation of the source sentence, as shown by recent evidence from multilingual NMT \cite{johnson2016google}. 
The decoder's task is to use this representation to generate the target sentence in a specific language. One might conjecture that it would be sufficient for the decoder to learn a strong language model in order to produce morphologically-correct output, without learning much about morphology, while the encoder needs to learn quite a lot about source language morphology in order to create a good generic representation. However, their performance seems more or less comparable. In the following section we investigate the role of the attention mechanism in the division of labor between encoder and decoder. \subsection{Effect of attention} Consider the role of the attention mechanism in learning useful representations: during decoding, the attention weights are combined with the decoder's hidden states to generate the current translation. These two sources of information need to jointly point to the most relevant source word(s) and predict the next most likely word. Thus, the decoder puts significant emphasis on mapping back to the source sentence, which may come at the expense of obtaining a meaningful representation of the current word. We hypothesize that the attention mechanism might hurt the quality of the target word representations learned by the decoder. To test this hypothesis, we train NMT models with and without attention and compare the quality of their learned representations. As Table~\ref{tab:pos-dec-enc-attn-nogold} shows (compare 1st and 2nd rows), removing the attention mechanism decreases the quality of the encoder representations significantly, but only mildly hurts the quality of the decoder representations. It seems that the decoder does not rely on the attention mechanism to obtain good target word representations, contrary to our hypothesis. \subsection{Effect of word representation} We also conducted experiments to verify our findings regarding word-based versus character-based representations on the decoder side. By character representation we mean a character CNN on the input words. The decoder predictions are still done at the word-level, which enables us to use its hidden states as word representations. Table~\ref{tab:pos-dec-enc-word-char-nogold} shows POS accuracy of word-based vs.\ char-based representations in the encoder and decoder. In both cases, char-based representations perform better. BLEU scores behave differently: the char-based model leads to better translations in Arabic-to-English, but not in English-to-Arabic. A possible explanation for this phenomenon is that the decoder's predictions are still done at word level even with the char-based model (which encodes the target input but not the output). In practice, this can lead to generating unknown words. Indeed, in Arabic-to-English the char-based model reduces the number of generated unknown words in the MT test set by 25\%, while in English-to-Arabic the number of unknown words remains roughly the same between word-based and char-based models. \section{Related Work} \label{sec:related-work} \paragraph{Analysis of neural models} The opacity of neural networks has motivated researchers to analyze such models in different ways. One line of work visualizes hidden unit activations in recurrent neural networks that are trained for a given task \cite{elman1991distributed,karpathy2015visualizing,kadar2016representation,qian-qiu-huang:2016:EMNLP2016}. While such visualizations illuminate the inner workings of the network, they are often qualitative in nature and somewhat anecdotal.
A different approach tries to provide a quantitative analysis by correlating parts of the neural % network with linguistic properties, for example by training a classifier to predict features of interest. Different units have been used, from word embeddings \cite{kohn:2015:EMNLP,qian-qiu-huang:2016:P16-11}, through LSTM gates or states \cite{qian-qiu-huang:2016:EMNLP2016}, to sentence embeddings \cite{adi2016fine}. Our work is most similar to \newcite{shi-padhi-knight:2016:EMNLP2016}, who use hidden vectors from a neural MT encoder to predict syntactic properties on the English source side. In contrast, we focus on representations in morphologically-rich languages and evaluate both source and target sides across several criteria. \newcite{vylomova2016word} also analyze different % representations for morphologically-rich languages in MT, but do not directly measure the quality of the learned representations. \paragraph{Word representations in MT} Machine translation systems that deal with morphologically-rich languages resort to various techniques for representing morphological knowledge, such as word segmentation \cite{C00-2162,E03-1076,Badr:2008:SES:1557690.1557732} and factored translation and reordering models \cite{koehn-hoang:2007:EMNLP-CoNLL2007,durrani-EtAl:2014:Coling}. Characters and other sub-word units have become increasingly popular in neural MT, although they had also been used in phrase-based MT for handling morphologically-rich \cite{Luong:D10-1015} or closely related language pairs \cite{durrani-EtAl:2010:ACL,Nakov:Tiedemann:2012}. In neural MT, such units are obtained in a pre-processing step---e.g.\ by byte-pair encoding \cite{sennrich-haddow-birch:2016:P16-12} or the word-piece model \cite{wu2016google}---or learned during training using a character-based convolutional/recurrent sub-network \cite{costajussa-fonollosa:2016:P16-2,Luong:P16-1100,vylomova2016word}. The latter approach has the advantage of keeping the original word boundaries without requiring pre- and post-processing. Here we focus on a character CNN which has been used in language modeling and machine translation \cite{kim2015character,belinkov-glass:2016:SeMaT, costajussa-fonollosa:2016:P16-2,jozefowicz2016exploring,sajjad:2017:ACL}. We evaluate the quality of different representations learned by an MT system augmented with a character CNN in terms of POS and morphological tagging, and contrast them with a purely word-based system. \section{Encoder Analysis} \label{sec:enc-analysis} Recall that after training the NMT system we freeze its parameters and use it only to generate features for the POS/morphology classifier. Given a trained encoder \texttt{ENC} and a sentence $s$ with POS/morphology annotation, we generate word features $\texttt{ENC}_i(s)$ for every word in the sentence. We then train a classifier that uses the features $\texttt{ENC}_i(s)$ to predict POS or morphological tags. \subsection{Effect of word representation} In this section, we compare different word representations extracted with different encoders. Our word-based model uses a word embedding matrix which is initialized randomly and learned with other NMT parameters. For a character-based model we adopt a convolutional neural network (CNN) over character embeddings that is also learned during training \cite{kim2015character,costajussa-fonollosa:2016:P16-2}; see appendix \ref{sec:sup-training} for specific settings. 
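As an illustration of the feature-extraction step itself, the following sketch assumes a generic frozen PyTorch LSTM encoder (it is not the \texttt{seq2seq-attn} code used in the experiments) and simply reads $\texttt{ENC}_i(s)$ off the encoder's per-word hidden states:
\begin{verbatim}
import torch

@torch.no_grad()
def extract_features(encoder, embed, word_ids):
    """Return ENC_i(s) for every word of one sentence.

    encoder : a frozen torch.nn.LSTM (batch_first=True) -- illustrative
    embed   : the frozen input representation layer (word or char-CNN)
    word_ids: LongTensor of shape (1, sentence_length)
    """
    encoder.eval()
    outputs, _ = encoder(embed(word_ids))   # (1, len, hidden_size)
    return outputs.squeeze(0)               # row i is the feature for word w_i

# features = extract_features(trained_lstm, trained_embeddings, ids)
# Each row is then fed to the POS/morphology classifier.
\end{verbatim}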
In both cases we run the encoder over these representations and use its output $\texttt{ENC}_i(s)$ as features for the classifier. Table~\ref{tab:results-all-pairs} shows POS tagging accuracy using features from different NMT encoders. Char-based models always generate better representations for POS tagging, especially in the case of morphologically-richer languages like Arabic and Czech. We observed a similar pattern in the full morphological tagging task. For example, we obtain morphological tagging accuracy of 65.2/79.66 and 67.66/81.66 using word/char-based representations from the Arabic-Hebrew and Arabic-English encoders, respectively.\footnote{The results are not far below dedicated taggers (e.g.\ 95.1/84.1 on Arabic POS/morphology \cite{PASHA14.593.L14-1479}), indicating that NMT models learn quite good representations.} The superior morphological power of the char-based model also manifests in better translation quality (measured by BLEU), as shown in Table~\ref{tab:results-all-pairs}. \paragraph{Impact of word frequency} Let us look more closely at an example case: Arabic POS and morphological tagging. Figure~\ref{fig:repr} shows the effect of using word-based vs.\ char-based feature representations, obtained from the encoder of the Arabic-Hebrew system (other language pairs exhibit similar trends). Clearly, the char-based model is superior to the word-based one. This is true for the overall accuracy (+14.3\% in POS, +14.5\% in morphology), but more so on OOV words (+37.6\% in POS, +32.7\% in morphology). Figure~\ref{fig:repr-freqs} shows that the gap between word-based and char-based representations increases as the frequency of the word in the training data decreases. In other words, the more frequent the word, the less need there is for character information. These findings make intuitive sense: the char-based model is able to learn character n-gram patterns that are important for identifying word structure, but as the word becomes more frequent the word-based model has seen enough examples to make a decision. \paragraph{Analyzing specific tags} In Figure~\ref{fig:repr-pos-cm} we plot confusion matrices for POS tagging using word-based and char-based representations (from Arabic encoders). While the char-based representations are overall better, the two models still share similar misclassified tags. Much of the confusion comes from wrongly predicting nouns (NN, NNP). In the word-based case, relatively many tags with determiner (DT+NNP, DT+NNPS, DT+NNS, DT+VBG) are wrongly predicted as non-determined nouns (NN, NNP). In the char-based case, this hardly happens. This suggests that char-based representations are predictive of the presence of a determiner, which in Arabic is expressed as the prefix ``Al-'' (the definite article), a pattern easily captured by a char-based model. In Figure~\ref{fig:repr-pos-tag-freq} we plot the difference in POS accuracy when moving from word-based to char-based representations, per POS tag frequency in the training data. Tags closer to the upper-right corner occur more frequently in the training set and are better predicted by char-based compared to word-based representations. There are a few fairly frequent tags (in the middle-bottom part of the figure) whose accuracy does not improve much when moving from word- to char-based representations: mostly conjunctions, determiners, and certain particles (CC, DT, WP). But there are several very frequent tags (NN, DT+NN, DT+JJ, VBP, and even PUNC) whose accuracy improves quite a lot. 
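The per-tag bookkeeping behind this comparison is straightforward; the sketch below (with toy prediction lists in place of real test-set output) computes, for every gold tag, its training-set frequency and the accuracy gain of the char-based over the word-based probe, which is what the figure plots:
\begin{verbatim}
from collections import Counter

def per_tag_gain(gold, word_pred, char_pred, train_tags):
    """For each gold tag: (training frequency, char accuracy - word accuracy)."""
    freq = Counter(train_tags)
    totals, word_hits, char_hits = Counter(), Counter(), Counter()
    for g, w, c in zip(gold, word_pred, char_pred):
        totals[g] += 1
        word_hits[g] += (w == g)
        char_hits[g] += (c == g)
    return {t: (freq[t], (char_hits[t] - word_hits[t]) / totals[t])
            for t in totals}

# Toy example (real inputs would be full test-set tag sequences):
gains = per_tag_gain(gold=["NN", "DT+NN", "CC"],
                     word_pred=["NNP", "DT+NN", "CC"],
                     char_pred=["NN", "DT+NN", "CC"],
                     train_tags=["NN", "NN", "CC", "DT+NN"])
\end{verbatim}
Sorting the resulting gains by tag frequency reproduces the qualitative ranking discussed here and next.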
Then there are plural nouns (NNS, DT+NNS) where the char-based model really shines, which makes sense linguistically as plurality in Arabic is usually expressed by certain suffixes (\mbox{``-wn/yn''} for masc. plural, \mbox{``-At''} for fem. plural). The char-based model is thus especially good with frequent tags and infrequent words, which is understandable given that infrequent words typically belong to frequent open categories like nouns and verbs. \subsection{Effect of encoder depth} Modern NMT systems use very deep architectures with up to 8 or 16 layers \cite{wu2016google,TACL863}. We would like to understand what kind of information different layers capture. Given a trained model with multiple layers, we extract representations from the different layers in the encoder. Let $\texttt{ENC}^l_i(s)$ denote the encoded representation of word $w_i$ after the $l$-th layer. We vary $l$ and train different classifiers to predict POS or morphological tags. Here we focus on the case of a 2-layer encoder-decoder for simplicity ($l \in \{1,2\}$). Figure~\ref{fig:layer-effect-all-langs} shows POS tagging results using representations from different encoding layers across five language pairs. The general trend is that passing word vectors through the encoder improves POS tagging, which can be explained by contextual information contained in the representations after one layer. However, it turns out that representations from the 1st layer are better than those from the 2nd layer, at least for the purpose of capturing word structure. Figure~\ref{fig:layer-effect-lines} shows that the same pattern holds for both word-based and char-based representations, on Arabic POS and morphological tagging. In all cases, layer 1 representations are better than layer 2 representations.\footnote{We found this result to be also true in French, German, and Czech experiments (see the supplementary material). } In contrast, BLEU scores actually increase when training \mbox{2-layer} vs.\ \mbox{1-layer} models (+1.11/+0.56 BLEU for Arabic-Hebrew word/char-based models). Thus translation quality improves when adding layers but morphology quality degrades. Intuitively, it seems that lower layers of the network learn to represent word structure while higher layers focus more on word meaning. A similar pattern was recently observed in a joint language-vision deep recurrent net~\cite{gelderloos-chrupala:2016:COLING}. \subsection{Effect of target language} While translating from morphologically-rich languages is challenging, translating into such languages is even harder. For instance, our basic system obtains BLEU of 24.69/23.2 on Arabic/Czech to English, but only 13.37/13.9 on English to Arabic/Czech. How does the target language affect the learned source language representations? Does translating into a morphologically-rich language require more knowledge about source language morphology? In order to investigate these questions, we fix the source language and train NMT models on different target languages. For example, given an Arabic source we train Arabic-to-English/Hebrew/German systems. These target languages represent a morphologically-poor language (English), a morphologically-rich language with similar morphology to the source language (Hebrew), and a morphologically-rich language with different morphology (German). To make a fair comparison, we train the models on the intersection of the training data based on the source language. 
In this way the experimental setup is completely identical: the models are trained on the same Arabic sentences with different translations. Figure~\ref{fig:target-lang} shows POS and morphology accuracy of word-based representations from the NMT encoders, as well as corresponding BLEU scores. As expected, translating to English is easier than translating to the morphologically-richer Hebrew and German, resulting in higher BLEU. Despite their similar morphologies, translating Arabic to Hebrew is worse than Arabic to German, which can be attributed to the richer Hebrew morphology compared to German. POS and morphology accuracies share an intriguing pattern: the representations that are learned when translating to English are better for predicting POS or morphology than those learned when translating to German, which are in turn better than those learned when translating to Hebrew. This is remarkable given that English is a morphologically-poor language that does not display many of the morphological properties that are found in the Arabic source. In contrast, German and Hebrew have richer morphologies, so one could expect that translating into them would make the model learn more about morphology. A possible explanation for this phenomenon is that the Arabic-English model is simply better than the Arabic-Hebrew and Arabic-German models, as hinted by the BLEU scores in Table \ref{tab:results-all-pairs}. The inherent difficulty in translating Arabic to Hebrew/German may affect the ability to learn good representations of word structure. To probe this more, we trained an Arabic-Arabic autoencoder on the same training data. We found that it learns to recreate the test sentences extremely well, with very high BLEU scores (Figure~\ref{fig:target-lang}). However, its word representations are actually inferior for the purpose of POS/morphological tagging. This implies that higher BLEU does not necessarily entail better morphological representations. In other words, a better translation model learns more informative representations, but only when it is actually learning to translate rather than merely memorizing the data as in the autoencoder case. We found this to be consistently true also for char-based experiments, and in other language pairs.
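For completeness, the BLEU side of this comparison can be obtained with standard tooling; the snippet below scores hypothetical autoencoder outputs against the original Arabic test sentences with \texttt{sacrebleu} and is purely illustrative:
\begin{verbatim}
import sacrebleu

# Hypothetical files: autoencoder outputs vs. the original Arabic test sentences.
with open("autoencoder.hyp.ar", encoding="utf-8") as f:
    hyps = [line.strip() for line in f]
with open("test.ar", encoding="utf-8") as f:
    refs = [line.strip() for line in f]

bleu = sacrebleu.corpus_bleu(hyps, [refs])
print(f"Autoencoder reconstruction BLEU: {bleu.score:.2f}")
\end{verbatim}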
What do Neural Machine Translation Models Learn about Morphology?
1704.03471
Table 3: POS tagging accuracy using encoder and decoder representations with/without attention.
[ "Attn", "POS Accuracy ENC", "POS Accuracy DEC", "BLEU Ar-En", "BLEU En-Ar" ]
[ [ "✓", "89.62", "86.71", "24.69", "13.37" ], [ "✗", "74.10", "85.54", "11.88", "5.04" ] ]
There is a modest drop in representation quality with the decoder. This drop may be correlated with lower BLEU scores when translating English to Arabic vs. Arabic to English. We hypothesize that the attention mechanism might hurt the quality of the target word representations learned by the decoder. To test this hypothesis, we train NMT models with and without attention and compare the quality of their learned representations. Removing the attention mechanism decreases the quality of the encoder representations significantly, but only mildly hurts the quality of the decoder representations; it seems that the decoder does not rely on the attention mechanism to obtain good target word representations, contrary to our hypothesis.
\section{Motivation} \label{sec:motivation} Translating morphologically-rich languages is especially difficult due to a large vocabulary size and a high level of sparsity. Different solutions have been proposed to deal with this problem, for example factored models in phrase-based MT~\cite{koehn-hoang:2007:EMNLP-CoNLL2007} or softmax approximations in neural MT~\cite{ruder-softmax}. More recently, neural MT systems have shown significant gains by exploiting characters and other sub-word units~\cite{costajussa-fonollosa:2016:P16-2,sennrich-haddow-birch:2016:P16-12,wu2016google}. Presumably, such models are better than word-based models in representing the structure of rare and unseen words. Indeed, \newcite{sennrich-haddow-birch:2016:P16-12} have found that the unigram translation accuracy of words decreases for lower-frequency words. They also observed somewhat different behavior when translating into different languages. It is less clear, however, what and how neural translation models learn about word structure. In this work we are interested in answering questions such as the following: \begin{itemize} \item How do character-based models improve neural MT? \item What components of the NMT system encoder word structure? \item How does the target language affect the learning of word structure? \item What is the role of the decoder in learning word representations? \end{itemize} In the next section, we describe our data-driven approach for addressing such questions. We aim to obtain quantitative answers that will lead to generalizable conclusions. \section{Methodology} Given a source sentence $s = \{w_1, w_2, ..., w_N\}$ and a target sentence $t=\{u_1, u_2, ..., u_M\}$, we first generate a vector representation for the source sentence using an encoder (Eqn.\ \ref{eq:enc}) and then map this vector to the target sentence using a decoder (Eqn.\ \ref{eq:dec}) \cite{sutskever2014sequence}: \begin{align} &\texttt{ENC}: s=\{w_1, w_2, ..., w_N\} \mapsto \sss \in \reals^k \label{eq:enc} \\ &\texttt{DEC} : \sss \in \reals^k \mapsto t=\{u_1, u_2, ..., u_M\} \label{eq:dec} \end{align} In this work, we use long short-term memory (LSTM) \cite{hochreiter1997long} encoder-decoders with attention \cite{bahdanau2014neural}, which we train on parallel data. After training the NMT system, we freeze the parameters of the encoder and use \texttt{ENC} as a feature extractor to generate vectors representing words in the sentence. Let $\texttt{ENC}_i(s)$ denote the encoded representation of word $w_i$. For example, this may be the output of the LSTM after word $w_i$. We feed $\texttt{ENC}_i(s)$ to a neural classifier that is trained to predict POS or morphological tags and evaluate the quality of the representation based on our ability to train a good classifier. By comparing the performance of classifiers trained with features from different instantiations of \texttt{ENC}, we can evaluate what MT encoders learn about word structure. Figure \ref{fig:approach} illustrates this process. We follow a similar procedure for analyzing representation learning in $\texttt{DEC}$. The classifier itself can be modeled in different ways. For example, it may be an LSTM over outputs of the encoder. However, as we are interested in assessing the quality of the representations learned by the MT system, we choose to model the classifier as a simple feed-forward network with one hidden layer and a ReLU non-linearity. 
Arguably, if the learned representations are good, then a non-linear classifier should be able to extract useful information from them.\footnote{We also experimented with a linear classifier and observed similar trends to the non-linear case, but overall lower results; \newcite{qian-qiu-huang:2016:P16-11} reported similar findings.} We emphasize that our goal is not to beat the state-of-the-art on a given task, but rather to analyze what NMT models % learn about morphology. The classifier is trained with a cross-entropy loss; more details on its architecture are in the supplementary material. \documentclass[11pt,a4paper]{article} \usepackage[hyperref]{acl2017} \usepackage[normalem]{ulem} % http://ctan.org/pkg/pifont \newcommand{\cmark}{\ding{51}}% \newcommand{\xmark}{\ding{55}}% \newcommand\alert[1]{{\textcolor{red}{#1}}} \aclfinalcopy % Uncomment this line for the final submission \def\aclpaperid{496} % Enter the acl Paper ID here \newcommand\BibTeX{B{\sc ib}\TeX} \newcommand{\reals}{\mathbb{R}} \newcommand{\xx}{\mathbf{x}} \newcommand{\ii}{\mathbf{i}} \newcommand{\ff}{\mathbf{f}} \newcommand{\oo}{\mathbf{o}} \newcommand{\cc}{\mathbf{c}} \newcommand{\bb}{\mathbf{b}} \newcommand{\hh}{\mathbf{h}} \newcommand{\uu}{\mathbf{u}} \newcommand{\ww}{\mathbf{w}} % word representation \newcommand{\sss}{\mathbf{s}} % sentence representation \newcommand{\WW}{\mathbf{W}} \newcommand{\mm}{\mathbf{m}} % memory \newcommand{\aaa}{\mathbf{a}} % attention \newcommand{\rr}{\mathbf{r}} % attention \newcommand{\zz}{\mathbf{z}} % noise \title{What do Neural Machine Translation Models Learn about Morphology?} \author{Yonatan Belinkov$^1$ ~~ Nadir Durrani$^2$ ~~ Fahim Dalvi$^2$ ~~ Hassan Sajjad$^2$ ~~ James Glass$^1$ \\\\ $^1$MIT Computer Science and Artificial Intelligence Laboratory, Cambridge, MA 02139, USA \\ {\tt \{belinkov, glass\}@mit.edu} \\ $^2$Qatar Computing Research Institute, HBKU, Doha, Qatar \\ {\tt \{ndurrani, faimaduddin, hsajjad\}@qf.org.qa} } \date{} \begin{document} \maketitle \begin{framed} \noindent This is a modified version of a paper originally published at ACL 2017 with updated results and discussion in section 5. \end{framed} \begin{abstract} Neural machine translation (MT) models obtain state-of-the-art performance while maintaining a simple, end-to-end architecture. However, little is known about what these models learn about source and target languages during the training process. In this work, we analyze the representations learned by neural MT models at various levels of granularity and empirically evaluate the quality of the representations for learning morphology through extrinsic part-of-speech and morphological tagging tasks. We conduct a thorough investigation along several parameters: word-based vs.\ character-based representations, depth of the encoding layer, the identity of the target language, and encoder vs.\ decoder representations. Our data-driven, quantitative evaluation sheds light on important aspects in the neural MT system and its ability to capture word structure.\footnote{Our code is available at \url{https://github.com/boknilev/nmt-repr-analysis}.} \end{abstract} \input{introduction} \input{methodology} \input{data} \input{encoder-analysis} \input{decoder-analysis} \input{related-work} \input{conclusion} \section*{Acknowledgments} We would like to thank Helmut Schmid for providing the Tiger corpus, members of the MIT Spoken Language Systems group for helpful comments, and the three anonymous reviewers for their useful suggestions. 
This research was carried out in collaboration between the HBKU Qatar Computing Research Institute (QCRI) and the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). \bibliographystyle{acl_natbib} \newpage \appendix \input{supplement} \end{document} \section{Conclusion} Neural networks have become ubiquitous in machine translation due to their elegant architecture and good performance. The representations they use for linguistic units are crucial for obtaining high-quality translation. In this work, we investigated how neural MT models learn word structure. We evaluated their representation quality on POS and morphological tagging in a number of languages. Our results lead to the following conclusions: \begin{itemize}%[itemsep=1pt,topsep=5pt]%[leftmargin=*] \item Character-based representations are better than word-based ones for learning morphology, especially in rare and unseen words. \item Lower layers of the neural network are better at capturing morphology, while deeper networks improve translation performance. We hypothesize that lower layers are more focused on word structure, while higher ones are focused on word meaning. \item Translating into morphologically-poorer languages leads to better source-side representations. This is partly, but not completely, % correlated with BLEU scores. % \item There are only little differences between encoder and decoder representation quality. The attention mechanism does not seem to significantly affect the quality of the decoder representations, while it is important for the encoder representations. \end{itemize} These insights can guide further development of neural MT systems. For instance, jointly learning translation and morphology can possibly lead to better representations and improved translation. Our analysis indicates that this kind of approach should take into account factors such as the encoding layer and the type of word representation. Another area for future work is to extend the analysis to other word % representations (e.g.\ byte-pair encoding), deeper networks, and more semantically-oriented tasks such as semantic role-labeling or semantic parsing. \section{Supplementary Material} \label{sec:supplemental} \subsection{Training Details} \label{sec:sup-training} \paragraph{POS/Morphological classifier} The classifier used for all prediction tasks is a feed-forward network with one hidden layer, dropout ($\rho=0.5$), a ReLU non-linearity, and an output layer mapping to the tag set (followed by a Softmax). The size of the hidden layer is set to be identical to the size of the encoder's hidden state (typically 500 dimensions). We use Adam \cite{kingma2014adam} with default parameters to minimize the cross-entropy objective. Training is run with mini-batches of size 16 and stopped once the loss on the dev set stops improving; we allow a patience of 5 epochs. \paragraph{Neural MT system} We train a 2-layer LSTM encoder-decoder with attention. We use the \texttt{seq2seq-attn} implementation \cite{kim2016} with the following default settings: word vectors and LSTM states have 500 dimensions, SGD with initial learning rate of 1.0 and rate decay of 0.5, and dropout rate of 0.3. The character-based model is a CNN with a highway network over characters \cite{kim2015character} with 1000 feature maps and a kernel width of 6 characters. This model was found to be useful for translating morphologically-rich languages \cite{costajussa-fonollosa:2016:P16-2}. 
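The classifier training regime described above (Adam with default parameters, mini-batches of 16, early stopping with a patience of 5 epochs) can be sketched as follows; this is an illustrative PyTorch loop, not the code used for the reported results:
\begin{verbatim}
import copy
import torch
import torch.nn as nn

def train_probe(model, train_loader, dev_loader, patience=5, max_epochs=100):
    """Cross-entropy training with Adam and early stopping on dev loss
    (a sketch of the classifier training regime described above)."""
    opt = torch.optim.Adam(model.parameters())
    loss_fn = nn.CrossEntropyLoss()
    best_loss, best_state, bad_epochs = float("inf"), None, 0
    for _ in range(max_epochs):
        model.train()
        for feats, tags in train_loader:          # mini-batches of size 16
            opt.zero_grad()
            loss_fn(model(feats), tags).backward()
            opt.step()
        model.eval()
        with torch.no_grad():
            dev_loss = sum(loss_fn(model(f), t).item() for f, t in dev_loader)
        if dev_loss < best_loss:
            best_loss, best_state = dev_loss, copy.deepcopy(model.state_dict())
            bad_epochs = 0
        else:
            bad_epochs += 1
            if bad_epochs >= patience:
                break
    model.load_state_dict(best_state)
    return model
\end{verbatim}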
The MT system is trained for 20 epochs, and the model with the best dev loss is used for extracting features for the classifier. \subsection{Data and Taggers} \label{sec:sup-data} \paragraph{Datasets} All of the translation models are trained on the Ted talks corpus included in WIT$^3$ \cite{cettoloEtAl:EAMT2012,cettolol:SeMaT:2016}. Statistics about each language pair are available on the WIT$^3$ website: \url{https://wit3.fbk.eu}. For experiments using gold tags, we used the Arabic Treebank for Arabic (with the versions and splits described in the MADAMIRA manual \cite{PASHA14.593.L14-1479}) and the Tiger corpus for German.\footnote{\url{http://www.ims.uni-stuttgart.de/forschung/ressourcen/korpora/tiger.html}} \paragraph{POS and morphological taggers} We used the following tools to annotate the MT corpora: MADAMIRA \cite{PASHA14.593.L14-1479} for Arabic POS and morphological tags, Tree-Tagger \cite{schmid:2004:PAPERS} for Czech and French POS tags, LoPar \cite{schmid:00a} for German POS and morphological tags, and MXPOST \cite{ratnaparkhi98maximum} for English POS tags. These tools are recommended on the Moses website.\footnote{\url{http://www.statmt.org/moses/?n=Moses.ExternalTools}} As mentioned before, our goal is not to achieve state-of-the-art results, but rather to study what different components of the NMT architecture learn about word morphology. Please refer to \newcite{mueller-schmid-schutze:2013:EMNLP} for representative POS and morphological tagging accuracies. \subsection{Supplementary Results} \label{sec:sup-results} We report here results that were omitted from the paper due to the space limit. Table \ref{tab:different_layers} shows encoder results using different layers, languages, and representations (word/char-based). As noted in the paper, all the results consistently show that i) layer 1 performs better than layers 0 and 2; and ii) char-based representations are better than word-based for learning morphology. Table \ref{tab:different_language} shows that translating into a morphologically-poor language (English) leads to better source representations, and Table \ref{tab:decoder} provides additional decoder results. Table~\ref{tab:decoder-old} shows POS tagging accuracy using decoder representations, where the current word representation was used to predict the next word's tag. The idea is to evaluate whether the current word representation contains POS information about the output of the decoder. Clearly, the current word representation cannot be used to predict the next word's tag. This also holds when removing the attention (En-Ar, 85.54\%) or using character-based representations (En-Ar, 44.5\%). Since the decoder representation is in the pre-Softmax layer, this means that most of the work of predicting the next work is done in the Softmax layer, while the pre-Softmax representation contains much information about the current input word. \newpage \section{Introduction} Neural network models are quickly becoming the predominant approach to machine translation (MT). Training neural MT (NMT) models can be done in an end-to-end fashion, which is simpler and more elegant than traditional MT systems. Moreover, NMT systems have become competitive with, or better than, the previous state-of-the-art, especially since the introduction of sequence-to-sequence models and the attention mechanism \cite{bahdanau2014neural,sutskever2014sequence}. 
The improved translation quality is often attributed to better handling of non-local dependencies and morphology generation \cite{luong-manning:iwslt15,bentivogli-EtAl:2016:EMNLP2016,toral-sanchezcartagena:2017:EACLlong}. \bigskip However, little is known about what and how much these models learn about each language and its features. Recent work has started exploring the role of the NMT encoder in learning source syntax \cite{shi-padhi-knight:2016:EMNLP2016}, but research studies are yet to answer important questions such as: \textit{(i)} what do NMT models learn about word morphology? \textit{(ii)} what is the effect on learning when translating into/from morphologically-rich languages? \mbox{\textit{(iii)} what} impact do different representations (character vs.\ word) have on learning? and \textit{(iv)} what do different modules learn about the syntactic and semantic structure of a language? Answering such questions is imperative for fully understanding the NMT architecture. In this paper, we strive towards exploring \textit{(i)}, \textit{(ii)}, and \textit{(iii)} by providing quantitative, data-driven answers to the following specific questions: \begin{itemize}%[itemsep=5pt,topsep=8pt] %[leftmargin=*] \item Which parts of the NMT architecture capture word structure? \item What is the division of labor between different components (e.g.\ different layers or %of encoder vs.\ decoder)? \item How do different word representations help learn better morphology and modeling of infrequent words? \item How does the target language affect the learning of word structure? \end{itemize} To achieve this, we follow a simple but effective procedure with three steps: \mbox{\textit{(i)} train} a neural MT system on a parallel corpus; \mbox{\textit{(ii)} use} the trained model to extract feature representations for words in a language of interest; and \mbox{\textit{(iii)} train} a classifier using extracted features to make predictions for another task. We then evaluate the quality of the trained classifier on the given task as a proxy to the quality of the extracted representations. In this way, we obtain a quantitative measure of how well the original MT system learns features that are relevant to the given task. We focus on the tasks of part-of-speech (POS) and full morphological tagging. We investigate how different neural MT systems capture POS and morphology through a series of experiments along several parameters. For instance, we contrast word-based and character-based representations, use different encoding layers, vary source and target languages, and compare extracting features from the encoder vs.\ the decoder. We experiment with several languages with varying degrees of morphological richness: French, German, Czech, Arabic, and Hebrew.\ Our analysis reveals interesting insights such as: \begin{itemize}%[itemsep=3pt,topsep=5pt]%[leftmargin=*] \item Character-based representations are much better for learning morphology, especially for low-frequency words. This improvement is correlated with better BLEU scores. On the other hand, word-based models are sufficient for learning the structure of common words. \item Lower layers of the encoder are better at capturing word structure, while deeper networks improve translation quality, suggesting that higher layers focus more on word meaning. \item The target language impacts the kind of information learned by the MT system. Translating into morphologically-poorer languages leads to better source-side word representations. 
This is partly, but not completely, correlated with BLEU scores. \item The NMT encoder and decoder learn representations of similar quality. The attention mechanism affects the quality of the encoder representations more than that of the decoder representations. \end{itemize} \section{Data} \paragraph{Language pairs } We experiment with several language pairs, including morphologically-rich languages, that have received relatively significant attention in the MT community. These include Arabic-, German-, French-, and Czech-English pairs. To broaden our analysis and study the effect of having morphologically-rich languages on both source and target sides, we also include Arabic-Hebrew, two languages with rich and similar morphological systems, and Arabic-German, two languages with rich but different morphologies. \paragraph{MT data} Our translation models are trained on the WIT$^3$ corpus of TED talks \cite{cettoloEtAl:EAMT2012,cettolol:SeMaT:2016} made available for IWSLT 2016. This allows for comparable and cross-linguistic analysis. Statistics about each language pair are given in Table \ref{tab:tagsets} (under Pred). We use official dev and test sets for tuning and testing. Reported figures are the averages over test sets. \paragraph{Annotated data} We use two kinds of datasets to train POS and morphological classifiers: gold-standard and predicted tags. For predicted tags, we simply used freely available taggers to annotate the MT data. For gold tags, we use gold-annotated datasets. Table \ref{tab:tagsets} gives statistics for datasets with gold and predicted tags; see supplementary material for details on taggers and gold data. We train and test our classifiers on predicted annotations, and similarly on gold annotations, when we have them. We report both results wherever available. \section{Decoder Analysis} \label{sec:dec-analysis} So far we have only looked at the encoder. However, the decoder \texttt{DEC} is a crucial part of an MT system with access to both source and target sentences. In order to examine what the decoder learns about morphology, we first train an NMT system on the parallel corpus. Then, we use the trained model to encode a source sentence and extract features for words in the target sentence. These features are used to train a classifier on POS or morphological tagging on the target side.\footnote{In this section we only experiment with predicted tags as there are no parallel data with gold POS/morphological tags that we are aware of.} % Note that in this case the decoder is given the correct target words one-by-one, similar to the usual NMT training regime. Table \ref{tab:pos-dec-enc-attn-nogold} (1st row) shows the results of using representations extracted with \texttt{ENC} and \texttt{DEC} from the Arabic-English and English-Arabic models, respectively. There is a modest drop in representation quality with the decoder. This drop may be correlated with lower BLEU scores when translating English to Arabic vs.\ Arabic to English. We observed similar small drops with higher-quality translation directions (Table~\ref{tab:decoder}, Appendix~\ref{sec:sup-results}). The small gap between encoder and decoder representations may sound surprising when we consider the fundamental tasks of the two modules. The encoder's task is to create a generic, close to language-independent representation of the source sentence, as shown by recent evidence from multilingual NMT \cite{johnson2016google}.
The decoder's task is to use this representation to generate the target sentence in a specific language. One might conjecture that it would be sufficient for the decoder to learn a strong language model in order % to produce morphologically-correct output, without learning much about morphology, while the encoder needs to learn quite a lot about source language morphology in order to create a good generic representation. However, their performance seems more or less comparable. In the following section we investigate the role of the attention mechanism in the division of labor between encoder and decoder. \subsection{Effect of attention} Consider the role of the attention mechanism in learning useful representations: during decoding, the attention weights are combined with the decoder's hidden states to generate the current translation. These two sources of information need to jointly point to the most relevant source word(s) and predict the next most likely word. Thus, the decoder puts significant emphasis on mapping back to the source sentence, which may come at the expense of obtaining a meaningful representation of the current word. We hypothesize that the attention mechanism might hurt the quality of the target word representations learned by the decoder. To test this hypothesis, we train NMT models with and without attention and compare the quality of their learned representations. As Table~\ref{tab:pos-dec-enc-attn-nogold} shows (compare 1st and 2nd rows), removing the attention mechanism decreases the quality of the encoder representations significantly, but only mildly hurts the quality of the decoder representations. It seems that the decoder does not rely on the attention mechanism to obtain good target word representations, contrary to our hypothesis. \subsection{Effect of word representation} We also conducted experiments to verify our findings regarding word-based versus character-based representations on the decoder side. By character representation we mean a character CNN on the input words. The decoder predictions are still done at the word level, which enables us to use its hidden states as word representations. Table~\ref{tab:pos-dec-enc-word-char-nogold} shows POS accuracy of word-based % vs.\ char-based representations in the encoder and decoder. In both cases, char-based representations perform better. BLEU scores behave differently: the char-based model leads to better translations in Arabic-to-English, but not in English-to-Arabic. A possible explanation for this phenomenon % is that the decoder's predictions are still done at word level even with the char-based model (which encodes the target input but not the output). In practice, this can lead to generating unknown words. Indeed, in Arabic-to-English the char-based model reduces the number of generated unknown words % in the MT % test set by 25\%, while in English-to-Arabic the number of unknown words % remains roughly the same between word-based % and char-based models. \section{Related Work} \label{sec:related-work} \paragraph{Analysis of neural models} The opacity of neural networks has motivated researchers to analyze such models in different ways. One line of work visualizes hidden unit activations in recurrent neural networks that are trained for a given task \cite{elman1991distributed,karpathy2015visualizing,kadar2016representation,qian-qiu-huang:2016:EMNLP2016}. While such visualizations illuminate the inner workings of the network, they are often qualitative in nature and somewhat anecdotal.
A different approach tries to provide a quantitative analysis by correlating parts of the neural % network with linguistic properties, for example by training a classifier to predict features of interest. Different units have been used, from word embeddings \cite{kohn:2015:EMNLP,qian-qiu-huang:2016:P16-11}, through LSTM gates or states \cite{qian-qiu-huang:2016:EMNLP2016}, to sentence embeddings \cite{adi2016fine}. Our work is most similar to \newcite{shi-padhi-knight:2016:EMNLP2016}, who use hidden vectors from a neural MT encoder to predict syntactic properties on the English source side. In contrast, we focus on representations in morphologically-rich languages and evaluate both source and target sides across several criteria. \newcite{vylomova2016word} also analyze different % representations for morphologically-rich languages in MT, but do not directly measure the quality of the learned representations. \paragraph{Word representations in MT} Machine translation systems that deal with morphologically-rich languages resort to various techniques for representing morphological knowledge, such as word segmentation \cite{C00-2162,E03-1076,Badr:2008:SES:1557690.1557732} and factored translation and reordering models \cite{koehn-hoang:2007:EMNLP-CoNLL2007,durrani-EtAl:2014:Coling}. Characters and other sub-word units have become increasingly popular in neural MT, although they had also been used in phrase-based MT for handling morphologically-rich \cite{Luong:D10-1015} or closely related language pairs \cite{durrani-EtAl:2010:ACL,Nakov:Tiedemann:2012}. In neural MT, such units are obtained in a pre-processing step---e.g.\ by byte-pair encoding \cite{sennrich-haddow-birch:2016:P16-12} or the word-piece model \cite{wu2016google}---or learned during training using a character-based convolutional/recurrent sub-network \cite{costajussa-fonollosa:2016:P16-2,Luong:P16-1100,vylomova2016word}. The latter approach has the advantage of keeping the original word boundaries without requiring pre- and post-processing. Here we focus on a character CNN which has been used in language modeling and machine translation \cite{kim2015character,belinkov-glass:2016:SeMaT, costajussa-fonollosa:2016:P16-2,jozefowicz2016exploring,sajjad:2017:ACL}. We evaluate the quality of different representations learned by an MT system augmented with a character CNN in terms of POS and morphological tagging, and contrast them with a purely word-based system. \section{Encoder Analysis} \label{sec:enc-analysis} Recall that after training the NMT system we freeze its parameters and use it only to generate features for the POS/morphology classifier. Given a trained encoder \texttt{ENC} and a sentence $s$ with POS/morphology annotation, we generate word features $\texttt{ENC}_i(s)$ for every word in the sentence. We then train a classifier that uses the features $\texttt{ENC}_i(s)$ to predict POS or morphological tags. \subsection{Effect of word representation} In this section, we compare different word representations extracted with different encoders. Our word-based model uses a word embedding matrix which is initialized randomly and learned with other NMT parameters. For a character-based model we adopt a convolutional neural network (CNN) over character embeddings that is also learned during training \cite{kim2015character,costajussa-fonollosa:2016:P16-2}; see appendix \ref{sec:sup-training} for specific settings. 
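For illustration, the character-based input representation just described can be sketched in a few lines. The following is a minimal, hypothetical PyTorch sketch of a convolutional encoder over character embeddings in the spirit of \cite{kim2015character}; the class name, character vocabulary size, and character dimension are assumptions, the highway layers used in the actual system are omitted, and this is not the \texttt{seq2seq-attn} implementation used in the paper.
\begin{verbatim}
# Hypothetical sketch of a character-CNN word encoder (not the authors' code).
import torch
import torch.nn as nn

class CharCNNWordEncoder(nn.Module):
    """Builds a word vector from its characters via convolution + max-pooling."""
    def __init__(self, n_chars, char_dim=25, n_filters=1000, kernel_width=6):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim, padding_idx=0)
        # Convolve over character positions; pad so that short words still work.
        self.conv = nn.Conv1d(char_dim, n_filters, kernel_size=kernel_width,
                              padding=kernel_width - 1)

    def forward(self, char_ids):
        # char_ids: (batch, max_word_len) integer character indices
        emb = self.char_emb(char_ids)        # (batch, len, char_dim)
        emb = emb.transpose(1, 2)            # (batch, char_dim, len)
        feats = torch.relu(self.conv(emb))   # (batch, n_filters, len')
        word_vec, _ = feats.max(dim=2)       # max-pool over character positions
        return word_vec                      # (batch, n_filters)

# Example: encode a batch of two words, each padded to 8 characters.
encoder = CharCNNWordEncoder(n_chars=100)
dummy = torch.randint(1, 100, (2, 8))
print(encoder(dummy).shape)                  # torch.Size([2, 1000])
\end{verbatim}
The resulting word vector would simply replace the word embedding at the encoder input; the LSTM layers and the probing classifier downstream stay unchanged.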
In both cases we run the encoder over these representations and use its output $\texttt{ENC}_i(s)$ as features for the classifier. Table~\ref{tab:results-all-pairs} shows POS tagging accuracy using features from different NMT encoders. Char-based models always generate better representations for POS tagging, especially in the case of morphologically-richer languages like Arabic and Czech. We observed a similar pattern in the full morphological tagging task. For example, we obtain morphological tagging accuracy of 65.2/79.66 and 67.66/81.66 using word/char-based representations from the Arabic-Hebrew and Arabic-English encoders, respectively.\footnote{The results are not far below dedicated taggers (e.g.\ 95.1/84.1 on Arabic POS/morphology \cite{PASHA14.593.L14-1479}), indicating that NMT models learn quite good representations.} The superior morphological power of the char-based model also manifests in better translation quality (measured by BLEU), as shown in Table~\ref{tab:results-all-pairs}. \paragraph{Impact of word frequency} Let us look more closely at an example case: Arabic POS and morphological tagging. Figure~\ref{fig:repr} shows the effect of using word-based vs.\ char-based feature representations, obtained from the encoder of the Arabic-Hebrew system (other language pairs exhibit similar trends). Clearly, the char-based model is superior to the word-based one. This is true for the overall accuracy (+14.3\% in POS, +14.5\% in morphology), but more so on OOV words (+37.6\% in POS, +32.7\% in morphology). Figure~\ref{fig:repr-freqs} shows that the gap between word-based and char-based representations increases as the frequency of the word in the training data decreases. In other words, the more frequent the word, the less need there is for character information. These findings make intuitive sense: the char-based model is able to learn character n-gram patterns that are important for identifying word structure, but as the word becomes more frequent the word-based model has seen enough examples to make a decision. \paragraph{Analyzing specific tags} In Figure~\ref{fig:repr-pos-cm} we plot confusion matrices for POS tagging using word-based and char-based representations (from Arabic encoders). While the char-based representations are overall better, the two models still share similar misclassified tags. Much of the confusion comes from wrongly predicting nouns (NN, NNP). In the word-based case, relatively many tags with determiner (DT+NNP, DT+NNPS, DT+NNS, DT+VBG) are wrongly predicted as non-determined nouns (NN, NNP). In the char-based case, this hardly happens. This suggests that char-based representations are predictive of the presence of a determiner, which in Arabic is expressed as the prefix ``Al-'' (the definite article), a pattern easily captured by a char-based model. In Figure~\ref{fig:repr-pos-tag-freq} we plot the difference in POS accuracy when moving from word-based to char-based representations, per POS tag frequency in the training data. Tags closer to the upper-right corner occur more frequently in the training set and are better predicted by char-based compared to word-based representations. There are a few fairly frequent tags (in the middle-bottom part of the figure) whose accuracy does not improve much when moving from word- to char-based representations: mostly conjunctions, determiners, and certain particles (CC, DT, WP). But there are several very frequent tags (NN, DT+NN, DT+JJ, VBP, and even PUNC) whose accuracy improves quite a lot. 
Then there are plural nouns (NNS, DT+NNS) where the char-based model really shines, which makes sense linguistically as plurality in Arabic is usually expressed by certain suffixes (\mbox{``-wn/yn''} for masc. plural, \mbox{``-At''} for fem. plural). The char-based model is thus especially good with frequent tags and infrequent words, which is understandable given that infrequent words typically belong to frequent open categories like nouns and verbs. \subsection{Effect of encoder depth} Modern NMT systems use very deep architectures with up to 8 or 16 layers \cite{wu2016google,TACL863}. We would like to understand what kind of information different layers capture. Given a trained model with multiple layers, we extract representations from the different layers in the encoder. Let $\texttt{ENC}^l_i(s)$ denote the encoded representation of word $w_i$ after the $l$-th layer. We vary $l$ and train different classifiers to predict POS or morphological tags. Here we focus on the case of a 2-layer encoder-decoder for simplicity ($l \in \{1,2\}$). Figure~\ref{fig:layer-effect-all-langs} shows POS tagging results using representations from different encoding layers across five language pairs. The general trend is that passing word vectors through the encoder improves POS tagging, which can be explained by contextual information contained in the representations after one layer. However, it turns out that representations from the 1st layer are better than those from the 2nd layer, at least for the purpose of capturing word structure. Figure~\ref{fig:layer-effect-lines} shows that the same pattern holds for both word-based and char-based representations, on Arabic POS and morphological tagging. In all cases, layer 1 representations are better than layer 2 representations.\footnote{We found this result to be also true in French, German, and Czech experiments (see the supplementary material). } In contrast, BLEU scores actually increase when training \mbox{2-layer} vs.\ \mbox{1-layer} models (+1.11/+0.56 BLEU for Arabic-Hebrew word/char-based models). Thus translation quality improves when adding layers but morphology quality degrades. Intuitively, it seems that lower layers of the network learn to represent word structure while higher layers focus more on word meaning. A similar pattern was recently observed in a joint language-vision deep recurrent net~\cite{gelderloos-chrupala:2016:COLING}. \subsection{Effect of target language} While translating from morphologically-rich languages is challenging, translating into such languages is even harder. For instance, our basic system obtains BLEU of 24.69/23.2 on Arabic/Czech to English, but only 13.37/13.9 on English to Arabic/Czech. How does the target language affect the learned source language representations? Does translating into a morphologically-rich language require more knowledge about source language morphology? In order to investigate these questions, we fix the source language and train NMT models on different target languages. For example, given an Arabic source we train Arabic-to-English/Hebrew/German systems. These target languages represent a morphologically-poor language (English), a morphologically-rich language with similar morphology to the source language (Hebrew), and a morphologically-rich language with different morphology (German). To make a fair comparison, we train the models on the intersection of the training data based on the source language. 
In this way the experimental setup is completely identical: the models are trained on the same Arabic sentences with different translations. Figure~\ref{fig:target-lang} shows POS and morphology accuracy of word-based representations from the NMT encoders, as well as corresponding BLEU scores. As expected, translating to English is easier than translating to the morphologically-richer Hebrew and German, resulting in higher BLEU. Despite their similar morphologies, translating Arabic to Hebrew is worse than Arabic to German, which can be attributed to the richer Hebrew morphology compared to German. POS and morphology accuracies share an intriguing pattern: the representations that are learned when translating to English are better for predicting POS or morphology than those learned when translating to German, which are in turn better than those learned when translating to Hebrew. This is remarkable given that English is a morphologically-poor language that does not display many of the morphological properties that are found in the Arabic source. In contrast, German and Hebrew have richer morphologies, so one could expect that translating into them would make the model learn more about morphology. A possible explanation for this phenomenon is that the Arabic-English model is simply better than the Arabic-Hebrew and Arabic-German models, as hinted by the BLEU scores in Table \ref{tab:results-all-pairs}. The inherent difficulty in translating Arabic to Hebrew/German may affect the ability to learn good representations of word structure. To probe this more, we trained an Arabic-Arabic autoencoder on the same training data. We found that it learns to recreate the test sentences extremely well, with very high BLEU scores (Figure~\ref{fig:target-lang}). However, its word representations are actually inferior for the purpose of POS/morphological tagging. This implies that higher BLEU does not necessarily entail better morphological representations. In other words, a better translation model learns more informative representations, but only when it is actually learning to translate rather than merely memorizing the data as in the autoencoder case. We found this to be consistently true also for char-based experiments, and in other language pairs.
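To make the intersection-based setup above concrete, here is a small, hypothetical sketch of how one might keep only the parallel sentences whose source side is shared by all corpora, so that the Arabic-to-English/Hebrew/German systems are trained on identical source sentences. The file names and the matching criterion (exact string equality of the source line) are illustrative assumptions, not the paper's preprocessing scripts.
\begin{verbatim}
# Hypothetical sketch: keep only sentence pairs whose Arabic source appears
# in *all* corpora, so every system sees exactly the same source sentences.
def read_parallel(src_path, tgt_path):
    with open(src_path, encoding="utf-8") as fs, \
         open(tgt_path, encoding="utf-8") as ft:
        return [(s.strip(), t.strip()) for s, t in zip(fs, ft)]

def intersect_on_source(corpora):
    """corpora: dict mapping a pair name to a list of (src, tgt) tuples."""
    shared = set.intersection(*(set(src for src, _ in pairs)
                                for pairs in corpora.values()))
    return {name: [(s, t) for s, t in pairs if s in shared]
            for name, pairs in corpora.items()}

corpora = {
    "ar-en": read_parallel("train.ar-en.ar", "train.ar-en.en"),
    "ar-he": read_parallel("train.ar-he.ar", "train.ar-he.he"),
    "ar-de": read_parallel("train.ar-de.ar", "train.ar-de.de"),
}
filtered = intersect_on_source(corpora)
\end{verbatim}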
What do Neural Machine Translation Models Learn about Morphology?
1704.03471
Table 4: POS tagging accuracy using word-based and char-based encoder/decoder representations.
[ "[EMPTY]", "POS Accuracy ENC", "POS Accuracy DEC", "BLEU Ar-En", "BLEU En-Ar" ]
[ [ "Word", "89.62", "86.71", "24.69", "13.37" ], [ "Char", "95.35", "91.11", "28.42", "13.00" ] ]
In both cases, char-based representations perform better. BLEU scores behave differently: the char-based model leads to better translations in Arabic-to-English, but not in English-to-Arabic. A possible explanation for this phenomenon is that the decoder’s predictions are still done at word level even with the char-based model (which encodes the target input but not the output). In practice, this can lead to generating unknown words. Indeed, in Arabic-to-English the char-based model reduces the number of generated unknown words in the MT test set by 25%, while in English-to-Arabic the number of unknown words remains roughly the same between word-based and char-based models.
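The unknown-word comparison above amounts to a simple count over the two systems' outputs. A hedged sketch follows; the unknown-word token and the output file names are assumptions made for illustration.
\begin{verbatim}
# Hypothetical sketch: compare how many unknown-word tokens two systems emit.
def count_unk(path, unk_token="<unk>"):
    with open(path, encoding="utf-8") as f:
        return sum(line.split().count(unk_token) for line in f)

word_unks = count_unk("output.word-based.txt")
char_unks = count_unk("output.char-based.txt")
if word_unks:
    reduction = 100 * (word_unks - char_unks) / word_unks
    print(f"char-based model reduces unknown words by {reduction:.1f}%")
\end{verbatim}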
\section{Motivation} \label{sec:motivation} Translating morphologically-rich languages is especially difficult due to a large vocabulary size and a high level of sparsity. Different solutions have been proposed to deal with this problem, for example factored models in phrase-based MT~\cite{koehn-hoang:2007:EMNLP-CoNLL2007} or softmax approximations in neural MT~\cite{ruder-softmax}. More recently, neural MT systems have shown significant gains by exploiting characters and other sub-word units~\cite{costajussa-fonollosa:2016:P16-2,sennrich-haddow-birch:2016:P16-12,wu2016google}. Presumably, such models are better than word-based models in representing the structure of rare and unseen words. Indeed, \newcite{sennrich-haddow-birch:2016:P16-12} have found that the unigram translation accuracy of words decreases for lower-frequency words. They also observed somewhat different behavior when translating into different languages. It is less clear, however, what and how neural translation models learn about word structure. In this work we are interested in answering questions such as the following: \begin{itemize} \item How do character-based models improve neural MT? \item What components of the NMT system encode word structure? \item How does the target language affect the learning of word structure? \item What is the role of the decoder in learning word representations? \end{itemize} In the next section, we describe our data-driven approach for addressing such questions. We aim to obtain quantitative answers that will lead to generalizable conclusions. \section{Methodology} Given a source sentence $s = \{w_1, w_2, ..., w_N\}$ and a target sentence $t=\{u_1, u_2, ..., u_M\}$, we first generate a vector representation for the source sentence using an encoder (Eqn.\ \ref{eq:enc}) and then map this vector to the target sentence using a decoder (Eqn.\ \ref{eq:dec}) \cite{sutskever2014sequence}: \begin{align} &\texttt{ENC}: s=\{w_1, w_2, ..., w_N\} \mapsto \sss \in \reals^k \label{eq:enc} \\ &\texttt{DEC} : \sss \in \reals^k \mapsto t=\{u_1, u_2, ..., u_M\} \label{eq:dec} \end{align} In this work, we use long short-term memory (LSTM) \cite{hochreiter1997long} encoder-decoders with attention \cite{bahdanau2014neural}, which we train on parallel data. After training the NMT system, we freeze the parameters of the encoder and use \texttt{ENC} as a feature extractor to generate vectors representing words in the sentence. Let $\texttt{ENC}_i(s)$ denote the encoded representation of word $w_i$. For example, this may be the output of the LSTM after word $w_i$. We feed $\texttt{ENC}_i(s)$ to a neural classifier that is trained to predict POS or morphological tags and evaluate the quality of the representation based on our ability to train a good classifier. By comparing the performance of classifiers trained with features from different instantiations of \texttt{ENC}, we can evaluate what MT encoders learn about word structure. Figure \ref{fig:approach} illustrates this process. We follow a similar procedure for analyzing representation learning in $\texttt{DEC}$. The classifier itself can be modeled in different ways. For example, it may be an LSTM over outputs of the encoder. However, as we are interested in assessing the quality of the representations learned by the MT system, we choose to model the classifier as a simple feed-forward network with one hidden layer and a ReLU non-linearity.
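Concretely, the feature-extraction step can be sketched as follows; the encoder interface is a hypothetical stand-in (a frozen module mapping a sentence of word ids to one hidden state per word) rather than the toolkit API used in the paper.
\begin{verbatim}
# Hypothetical sketch of the probing pipeline: freeze the NMT encoder and
# pair each word's hidden state ENC_i(s) with its POS or morphological tag.
import torch

def extract_examples(encoder, sentences, tag_sequences, word2id):
    """encoder: frozen module mapping a 1 x N id tensor to 1 x N x d states
    (an assumed interface). Returns a list of (feature, tag) pairs."""
    examples = []
    encoder.eval()
    with torch.no_grad():
        for words, tags in zip(sentences, tag_sequences):
            ids = torch.tensor([[word2id.get(w, word2id["<unk>"])
                                 for w in words]])
            states = encoder(ids)            # (1, N, d)
            for i, tag in enumerate(tags):
                examples.append((states[0, i], tag))
    return examples
\end{verbatim}
The resulting (feature, tag) pairs are then fed to the classifier, whose choice is discussed next.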
Arguably, if the learned representations are good, then a non-linear classifier should be able to extract useful information from them.\footnote{We also experimented with a linear classifier and observed similar trends to the non-linear case, but overall lower results; \newcite{qian-qiu-huang:2016:P16-11} reported similar findings.} We emphasize that our goal is not to beat the state-of-the-art on a given task, but rather to analyze what NMT models % learn about morphology. The classifier is trained with a cross-entropy loss; more details on its architecture are in the supplementary material. \documentclass[11pt,a4paper]{article} \usepackage[hyperref]{acl2017} \usepackage[normalem]{ulem} % http://ctan.org/pkg/pifont \newcommand{\cmark}{\ding{51}}% \newcommand{\xmark}{\ding{55}}% \newcommand\alert[1]{{\textcolor{red}{#1}}} \aclfinalcopy % Uncomment this line for the final submission \def\aclpaperid{496} % Enter the acl Paper ID here \newcommand\BibTeX{B{\sc ib}\TeX} \newcommand{\reals}{\mathbb{R}} \newcommand{\xx}{\mathbf{x}} \newcommand{\ii}{\mathbf{i}} \newcommand{\ff}{\mathbf{f}} \newcommand{\oo}{\mathbf{o}} \newcommand{\cc}{\mathbf{c}} \newcommand{\bb}{\mathbf{b}} \newcommand{\hh}{\mathbf{h}} \newcommand{\uu}{\mathbf{u}} \newcommand{\ww}{\mathbf{w}} % word representation \newcommand{\sss}{\mathbf{s}} % sentence representation \newcommand{\WW}{\mathbf{W}} \newcommand{\mm}{\mathbf{m}} % memory \newcommand{\aaa}{\mathbf{a}} % attention \newcommand{\rr}{\mathbf{r}} % attention \newcommand{\zz}{\mathbf{z}} % noise \title{What do Neural Machine Translation Models Learn about Morphology?} \author{Yonatan Belinkov$^1$ ~~ Nadir Durrani$^2$ ~~ Fahim Dalvi$^2$ ~~ Hassan Sajjad$^2$ ~~ James Glass$^1$ \\\\ $^1$MIT Computer Science and Artificial Intelligence Laboratory, Cambridge, MA 02139, USA \\ {\tt \{belinkov, glass\}@mit.edu} \\ $^2$Qatar Computing Research Institute, HBKU, Doha, Qatar \\ {\tt \{ndurrani, faimaduddin, hsajjad\}@qf.org.qa} } \date{} \begin{document} \maketitle \begin{framed} \noindent This is a modified version of a paper originally published at ACL 2017 with updated results and discussion in section 5. \end{framed} \begin{abstract} Neural machine translation (MT) models obtain state-of-the-art performance while maintaining a simple, end-to-end architecture. However, little is known about what these models learn about source and target languages during the training process. In this work, we analyze the representations learned by neural MT models at various levels of granularity and empirically evaluate the quality of the representations for learning morphology through extrinsic part-of-speech and morphological tagging tasks. We conduct a thorough investigation along several parameters: word-based vs.\ character-based representations, depth of the encoding layer, the identity of the target language, and encoder vs.\ decoder representations. Our data-driven, quantitative evaluation sheds light on important aspects in the neural MT system and its ability to capture word structure.\footnote{Our code is available at \url{https://github.com/boknilev/nmt-repr-analysis}.} \end{abstract} \input{introduction} \input{methodology} \input{data} \input{encoder-analysis} \input{decoder-analysis} \input{related-work} \input{conclusion} \section*{Acknowledgments} We would like to thank Helmut Schmid for providing the Tiger corpus, members of the MIT Spoken Language Systems group for helpful comments, and the three anonymous reviewers for their useful suggestions. 
This research was carried out in collaboration between the HBKU Qatar Computing Research Institute (QCRI) and the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). \bibliographystyle{acl_natbib} \newpage \appendix \input{supplement} \end{document} \section{Conclusion} Neural networks have become ubiquitous in machine translation due to their elegant architecture and good performance. The representations they use for linguistic units are crucial for obtaining high-quality translation. In this work, we investigated how neural MT models learn word structure. We evaluated their representation quality on POS and morphological tagging in a number of languages. Our results lead to the following conclusions: \begin{itemize}%[itemsep=1pt,topsep=5pt]%[leftmargin=*] \item Character-based representations are better than word-based ones for learning morphology, especially for rare and unseen words. \item Lower layers of the neural network are better at capturing morphology, while deeper networks improve translation performance. We hypothesize that lower layers are more focused on word structure, while higher ones are focused on word meaning. \item Translating into morphologically-poorer languages leads to better source-side representations. This is partly, but not completely, % correlated with BLEU scores. % \item There are only small differences between encoder and decoder representation quality. The attention mechanism does not seem to significantly affect the quality of the decoder representations, while it is important for the encoder representations. \end{itemize} These insights can guide further development of neural MT systems. For instance, jointly learning translation and morphology can possibly lead to better representations and improved translation. Our analysis indicates that this kind of approach should take into account factors such as the encoding layer and the type of word representation. Another area for future work is to extend the analysis to other word % representations (e.g.\ byte-pair encoding), deeper networks, and more semantically-oriented tasks such as semantic role-labeling or semantic parsing. \section{Supplementary Material} \label{sec:supplemental} \subsection{Training Details} \label{sec:sup-training} \paragraph{POS/Morphological classifier} The classifier used for all prediction tasks is a feed-forward network with one hidden layer, dropout ($\rho=0.5$), a ReLU non-linearity, and an output layer mapping to the tag set (followed by a Softmax). The size of the hidden layer is set to be identical to the size of the encoder's hidden state (typically 500 dimensions). We use Adam \cite{kingma2014adam} with default parameters to minimize the cross-entropy objective. Training is run with mini-batches of size 16 and stopped once the loss on the dev set stops improving; we allow a patience of 5 epochs. \paragraph{Neural MT system} We train a 2-layer LSTM encoder-decoder with attention. We use the \texttt{seq2seq-attn} implementation \cite{kim2016} with the following default settings: word vectors and LSTM states have 500 dimensions, SGD with initial learning rate of 1.0 and rate decay of 0.5, and dropout rate of 0.3. The character-based model is a CNN with a highway network over characters \cite{kim2015character} with 1000 feature maps and a kernel width of 6 characters. This model was found to be useful for translating morphologically-rich languages \cite{costajussa-fonollosa:2016:P16-2}.
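For concreteness, a minimal sketch of the probing classifier described above is given below, assuming PyTorch (the paper's own implementation is built around the \texttt{seq2seq-attn} toolkit). The hidden layer sized like the encoder state, the ReLU non-linearity, dropout of 0.5, Adam with default parameters, and mini-batches of 16 follow the description in this subsection; the number of tags is an illustrative placeholder, and the early-stopping loop (patience of 5 epochs on dev loss) is only indicated in a comment.
\begin{verbatim}
# Minimal sketch of the POS/morphology probing classifier described above.
import torch
import torch.nn as nn

class TagClassifier(nn.Module):
    def __init__(self, feat_dim=500, n_tags=42, dropout=0.5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, feat_dim),  # hidden layer sized like encoder state
            nn.ReLU(),
            nn.Dropout(dropout),
            nn.Linear(feat_dim, n_tags),    # logits; Softmax lives in the loss
        )

    def forward(self, x):
        return self.net(x)

model = TagClassifier()
optimizer = torch.optim.Adam(model.parameters())  # default Adam settings
loss_fn = nn.CrossEntropyLoss()

def train_step(features, tags):
    """features: (16, 500) batch of encoder states; tags: (16,) gold tag ids."""
    optimizer.zero_grad()
    loss = loss_fn(model(features), tags)
    loss.backward()
    optimizer.step()
    return loss.item()

# An outer loop would iterate over mini-batches of 16 and stop once the dev
# loss has not improved for 5 consecutive epochs (early stopping).
\end{verbatim}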
What do Neural Machine Translation Models Learn about Morphology?
1704.03471
Table 5: POS and morphology accuracy on predicted tags using word- and char-based representations from different layers of *-to-En systems.
[ "[EMPTY]", "Layer 0", "Layer 1", "Layer 2" ]
[ [ "[EMPTY]", "Word/Char (POS)", "Word/Char (POS)", "Word/Char (POS)" ], [ "De", "91.1/92.0", "93.6/95.2", "93.5/94.6" ], [ "Fr", "92.1/92.9", "95.1/95.9", "94.6/95.6" ], [ "Cz", "76.3/78.3", "77.0/79.1", "75.7/80.6" ], [ "[EMPTY]", "Word/Char (Morphology)", "Word/Char (Morphology)", "Word/Char (Morphology)" ], [ "De", "87.6/88.8", "89.5/91.2", "88.7/90.5" ] ]
We report here results that were omitted from the paper due to the space limit. As noted in the paper, all the results consistently show that i) layer 1 performs better than layers 0 and 2; and ii) char-based representations are better than word-based for learning morphology.
\section{Motivation} \label{sec:motivation} Translating morphologically-rich languages is especially difficult due to a large vocabulary size and a high level of sparsity. Different solutions have been proposed to deal with this problem, for example factored models in phrase-based MT~\cite{koehn-hoang:2007:EMNLP-CoNLL2007} or softmax approximations in neural MT~\cite{ruder-softmax}. More recently, neural MT systems have shown significant gains by exploiting characters and other sub-word units~\cite{costajussa-fonollosa:2016:P16-2,sennrich-haddow-birch:2016:P16-12,wu2016google}. Presumably, such models are better than word-based models in representing the structure of rare and unseen words. Indeed, \newcite{sennrich-haddow-birch:2016:P16-12} have found that the unigram translation accuracy of words decreases for lower-frequency words. They also observed somewhat different behavior when translating into different languages. It is less clear, however, what and how neural translation models learn about word structure. In this work we are interested in answering questions such as the following: \begin{itemize} \item How do character-based models improve neural MT? \item What components of the NMT system encode word structure? \item How does the target language affect the learning of word structure? \item What is the role of the decoder in learning word representations? \end{itemize} In the next section, we describe our data-driven approach for addressing such questions. We aim to obtain quantitative answers that will lead to generalizable conclusions. \section{Methodology} Given a source sentence $s = \{w_1, w_2, ..., w_N\}$ and a target sentence $t=\{u_1, u_2, ..., u_M\}$, we first generate a vector representation for the source sentence using an encoder (Eqn.\ \ref{eq:enc}) and then map this vector to the target sentence using a decoder (Eqn.\ \ref{eq:dec}) \cite{sutskever2014sequence}: \begin{align} &\texttt{ENC}: s=\{w_1, w_2, ..., w_N\} \mapsto \sss \in \reals^k \label{eq:enc} \\ &\texttt{DEC} : \sss \in \reals^k \mapsto t=\{u_1, u_2, ..., u_M\} \label{eq:dec} \end{align} In this work, we use long short-term memory (LSTM) \cite{hochreiter1997long} encoder-decoders with attention \cite{bahdanau2014neural}, which we train on parallel data. After training the NMT system, we freeze the parameters of the encoder and use \texttt{ENC} as a feature extractor to generate vectors representing words in the sentence. Let $\texttt{ENC}_i(s)$ denote the encoded representation of word $w_i$. For example, this may be the output of the LSTM after word $w_i$. We feed $\texttt{ENC}_i(s)$ to a neural classifier that is trained to predict POS or morphological tags and evaluate the quality of the representation based on our ability to train a good classifier. By comparing the performance of classifiers trained with features from different instantiations of \texttt{ENC}, we can evaluate what MT encoders learn about word structure. Figure \ref{fig:approach} illustrates this process. We follow a similar procedure for analyzing representation learning in $\texttt{DEC}$. The classifier itself can be modeled in different ways. For example, it may be an LSTM over outputs of the encoder. However, as we are interested in assessing the quality of the representations learned by the MT system, we choose to model the classifier as a simple feed-forward network with one hidden layer and a ReLU non-linearity. 
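For concreteness, here is a minimal PyTorch sketch of this kind of feed-forward probe. It is not the authors' implementation: the hidden size (500), ReLU, and dropout of 0.5 follow the supplementary material, while the class name, tag-set size, and the batch below are purely illustrative; the softmax is folded into the cross-entropy loss.
\begin{verbatim}
# Illustrative probe over frozen encoder features, not the authors' code.
import torch
import torch.nn as nn

class TagProbe(nn.Module):
    # One hidden layer, ReLU, dropout, linear output; hidden size and
    # dropout follow the supplementary material, all names are placeholders.
    def __init__(self, feat_dim=500, n_tags=42, hidden_dim=500, dropout=0.5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim),
            nn.ReLU(),
            nn.Dropout(dropout),
            nn.Linear(hidden_dim, n_tags),   # softmax folded into the loss
        )

    def forward(self, features):
        # features: (batch, feat_dim) word vectors ENC_i(s) from the frozen encoder
        return self.net(features)

probe = TagProbe()
logits = probe(torch.randn(16, 500))                       # mini-batch of 16 words
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 42, (16,)))
\end{verbatim}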
Arguably, if the learned representations are good, then a non-linear classifier should be able to extract useful information from them.\footnote{We also experimented with a linear classifier and observed similar trends to the non-linear case, but overall lower results; \newcite{qian-qiu-huang:2016:P16-11} reported similar findings.} We emphasize that our goal is not to beat the state-of-the-art on a given task, but rather to analyze what NMT models % learn about morphology. The classifier is trained with a cross-entropy loss; more details on its architecture are in the supplementary material. \documentclass[11pt,a4paper]{article} \usepackage[hyperref]{acl2017} \usepackage[normalem]{ulem} % http://ctan.org/pkg/pifont \newcommand{\cmark}{\ding{51}}% \newcommand{\xmark}{\ding{55}}% \newcommand\alert[1]{{\textcolor{red}{#1}}} \aclfinalcopy % Uncomment this line for the final submission \def\aclpaperid{496} % Enter the acl Paper ID here \newcommand\BibTeX{B{\sc ib}\TeX} \newcommand{\reals}{\mathbb{R}} \newcommand{\xx}{\mathbf{x}} \newcommand{\ii}{\mathbf{i}} \newcommand{\ff}{\mathbf{f}} \newcommand{\oo}{\mathbf{o}} \newcommand{\cc}{\mathbf{c}} \newcommand{\bb}{\mathbf{b}} \newcommand{\hh}{\mathbf{h}} \newcommand{\uu}{\mathbf{u}} \newcommand{\ww}{\mathbf{w}} % word representation \newcommand{\sss}{\mathbf{s}} % sentence representation \newcommand{\WW}{\mathbf{W}} \newcommand{\mm}{\mathbf{m}} % memory \newcommand{\aaa}{\mathbf{a}} % attention \newcommand{\rr}{\mathbf{r}} % attention \newcommand{\zz}{\mathbf{z}} % noise \title{What do Neural Machine Translation Models Learn about Morphology?} \author{Yonatan Belinkov$^1$ ~~ Nadir Durrani$^2$ ~~ Fahim Dalvi$^2$ ~~ Hassan Sajjad$^2$ ~~ James Glass$^1$ \\\\ $^1$MIT Computer Science and Artificial Intelligence Laboratory, Cambridge, MA 02139, USA \\ {\tt \{belinkov, glass\}@mit.edu} \\ $^2$Qatar Computing Research Institute, HBKU, Doha, Qatar \\ {\tt \{ndurrani, faimaduddin, hsajjad\}@qf.org.qa} } \date{} \begin{document} \maketitle \begin{framed} \noindent This is a modified version of a paper originally published at ACL 2017 with updated results and discussion in section 5. \end{framed} \begin{abstract} Neural machine translation (MT) models obtain state-of-the-art performance while maintaining a simple, end-to-end architecture. However, little is known about what these models learn about source and target languages during the training process. In this work, we analyze the representations learned by neural MT models at various levels of granularity and empirically evaluate the quality of the representations for learning morphology through extrinsic part-of-speech and morphological tagging tasks. We conduct a thorough investigation along several parameters: word-based vs.\ character-based representations, depth of the encoding layer, the identity of the target language, and encoder vs.\ decoder representations. Our data-driven, quantitative evaluation sheds light on important aspects in the neural MT system and its ability to capture word structure.\footnote{Our code is available at \url{https://github.com/boknilev/nmt-repr-analysis}.} \end{abstract} \input{introduction} \input{methodology} \input{data} \input{encoder-analysis} \input{decoder-analysis} \input{related-work} \input{conclusion} \section*{Acknowledgments} We would like to thank Helmut Schmid for providing the Tiger corpus, members of the MIT Spoken Language Systems group for helpful comments, and the three anonymous reviewers for their useful suggestions. 
This research was carried out in collaboration between the HBKU Qatar Computing Research Institute (QCRI) and the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). \bibliographystyle{acl_natbib} \newpage \appendix \input{supplement} \end{document} \section{Conclusion} Neural networks have become ubiquitous in machine translation due to their elegant architecture and good performance. The representations they use for linguistic units are crucial for obtaining high-quality translation. In this work, we investigated how neural MT models learn word structure. We evaluated their representation quality on POS and morphological tagging in a number of languages. Our results lead to the following conclusions: \begin{itemize}%[itemsep=1pt,topsep=5pt]%[leftmargin=*] \item Character-based representations are better than word-based ones for learning morphology, especially in rare and unseen words. \item Lower layers of the neural network are better at capturing morphology, while deeper networks improve translation performance. We hypothesize that lower layers are more focused on word structure, while higher ones are focused on word meaning. \item Translating into morphologically-poorer languages leads to better source-side representations. This is partly, but not completely, % correlated with BLEU scores. % \item There are only little differences between encoder and decoder representation quality. The attention mechanism does not seem to significantly affect the quality of the decoder representations, while it is important for the encoder representations. \end{itemize} These insights can guide further development of neural MT systems. For instance, jointly learning translation and morphology can possibly lead to better representations and improved translation. Our analysis indicates that this kind of approach should take into account factors such as the encoding layer and the type of word representation. Another area for future work is to extend the analysis to other word % representations (e.g.\ byte-pair encoding), deeper networks, and more semantically-oriented tasks such as semantic role-labeling or semantic parsing. \section{Supplementary Material} \label{sec:supplemental} \subsection{Training Details} \label{sec:sup-training} \paragraph{POS/Morphological classifier} The classifier used for all prediction tasks is a feed-forward network with one hidden layer, dropout ($\rho=0.5$), a ReLU non-linearity, and an output layer mapping to the tag set (followed by a Softmax). The size of the hidden layer is set to be identical to the size of the encoder's hidden state (typically 500 dimensions). We use Adam \cite{kingma2014adam} with default parameters to minimize the cross-entropy objective. Training is run with mini-batches of size 16 and stopped once the loss on the dev set stops improving; we allow a patience of 5 epochs. \paragraph{Neural MT system} We train a 2-layer LSTM encoder-decoder with attention. We use the \texttt{seq2seq-attn} implementation \cite{kim2016} with the following default settings: word vectors and LSTM states have 500 dimensions, SGD with initial learning rate of 1.0 and rate decay of 0.5, and dropout rate of 0.3. The character-based model is a CNN with a highway network over characters \cite{kim2015character} with 1000 feature maps and a kernel width of 6 characters. This model was found to be useful for translating morphologically-rich languages \cite{costajussa-fonollosa:2016:P16-2}. 
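Returning to the classifier for a moment: a hedged sketch of the training regime described above, Adam with default parameters, cross-entropy loss, mini-batches of 16, and early stopping with a patience of 5 epochs, might look as follows. Here `probe`, `train_loader`, and `dev_loader` are assumed inputs (a probe like the TagProbe sketched earlier and loaders yielding (features, tags) batches), not artifacts of the paper.
\begin{verbatim}
# Hedged sketch of the probe training loop: Adam defaults, cross-entropy,
# batches of size 16, early stopping on dev loss with patience 5.
import copy
import torch
import torch.nn as nn

def train_probe(probe, train_loader, dev_loader, patience=5, max_epochs=100):
    opt = torch.optim.Adam(probe.parameters())
    loss_fn = nn.CrossEntropyLoss()
    best_loss, best_state, bad_epochs = float("inf"), None, 0
    for _ in range(max_epochs):
        probe.train()
        for feats, tags in train_loader:
            opt.zero_grad()
            loss_fn(probe(feats), tags).backward()
            opt.step()
        probe.eval()
        with torch.no_grad():
            dev_loss = sum(loss_fn(probe(f), t).item() for f, t in dev_loader)
        if dev_loss < best_loss:
            best_loss, bad_epochs = dev_loss, 0
            best_state = copy.deepcopy(probe.state_dict())
        else:
            bad_epochs += 1
            if bad_epochs >= patience:
                break
    probe.load_state_dict(best_state)   # keep the model with the best dev loss
    return probe
\end{verbatim}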
The MT system is trained for 20 epochs, and the model with the best dev loss is used for extracting features for the classifier. \subsection{Data and Taggers} \label{sec:sup-data} \paragraph{Datasets} All of the translation models are trained on the TED talks corpus included in WIT$^3$ \cite{cettoloEtAl:EAMT2012,cettolol:SeMaT:2016}. Statistics about each language pair are available on the WIT$^3$ website: \url{https://wit3.fbk.eu}. For experiments using gold tags, we used the Arabic Treebank for Arabic (with the versions and splits described in the MADAMIRA manual \cite{PASHA14.593.L14-1479}) and the Tiger corpus for German.\footnote{\url{http://www.ims.uni-stuttgart.de/forschung/ressourcen/korpora/tiger.html}} \paragraph{POS and morphological taggers} We used the following tools to annotate the MT corpora: MADAMIRA \cite{PASHA14.593.L14-1479} for Arabic POS and morphological tags, Tree-Tagger \cite{schmid:2004:PAPERS} for Czech and French POS tags, LoPar \cite{schmid:00a} for German POS and morphological tags, and MXPOST \cite{ratnaparkhi98maximum} for English POS tags. These tools are recommended on the Moses website.\footnote{\url{http://www.statmt.org/moses/?n=Moses.ExternalTools}} As mentioned before, our goal is not to achieve state-of-the-art results, but rather to study what different components of the NMT architecture learn about word morphology. Please refer to \newcite{mueller-schmid-schutze:2013:EMNLP} for representative POS and morphological tagging accuracies. \subsection{Supplementary Results} \label{sec:sup-results} We report here results that were omitted from the paper due to the space limit. Table \ref{tab:different_layers} shows encoder results using different layers, languages, and representations (word/char-based). As noted in the paper, all the results consistently show that i) layer 1 performs better than layers 0 and 2; and ii) char-based representations are better than word-based for learning morphology. Table \ref{tab:different_language} shows that translating into a morphologically-poor language (English) leads to better source representations, and Table \ref{tab:decoder} provides additional decoder results. Table~\ref{tab:decoder-old} shows POS tagging accuracy using decoder representations, where the current word representation was used to predict the next word's tag. The idea is to evaluate whether the current word representation contains POS information about the output of the decoder. Clearly, the current word representation cannot be used to predict the next word's tag. This also holds when removing the attention (En-Ar, 85.54\%) or using character-based representations (En-Ar, 44.5\%). Since the decoder representation is in the pre-Softmax layer, this means that most of the work of predicting the next word is done in the Softmax layer, while the pre-Softmax representation contains much information about the current input word. \newpage \section{Introduction} Neural network models are quickly becoming the predominant approach to machine translation (MT). Training neural MT (NMT) models can be done in an end-to-end fashion, which is simpler and more elegant than traditional MT systems. Moreover, NMT systems have become competitive with, or better than, the previous state-of-the-art, especially since the introduction of sequence-to-sequence models and the attention mechanism \cite{bahdanau2014neural,sutskever2014sequence}. 
The improved translation quality is often attributed to better handling of non-local dependencies and morphology generation \cite{luong-manning:iwslt15,bentivogli-EtAl:2016:EMNLP2016,toral-sanchezcartagena:2017:EACLlong}. \bigskip However, little is known about what and how much these models learn about each language and its features. Recent work has started exploring the role of the NMT encoder in learning source syntax \cite{shi-padhi-knight:2016:EMNLP2016}, but research studies are yet to answer important questions such as: \textit{(i)} what do NMT models learn about word morphology? \textit{(ii)} what is the effect on learning when translating into/from morphologically-rich languages? \mbox{\textit{(iii)} what} impact do different representations (character vs.\ word) have on learning? and \textit{(iv)} what do different modules learn about the syntactic and semantic structure of a language? Answering such questions is imperative for fully understanding the NMT architecture. In this paper, we strive towards exploring \textit{(i)}, \textit{(ii)}, and \textit{(iii)} by providing quantitative, data-driven answers to the following specific questions: \begin{itemize}%[itemsep=5pt,topsep=8pt] %[leftmargin=*] \item Which parts of the NMT architecture capture word structure? \item What is the division of labor between different components (e.g.\ different layers or %of encoder vs.\ decoder)? \item How do different word representations help learn better morphology and modeling of infrequent words? \item How does the target language affect the learning of word structure? \end{itemize} To achieve this, we follow a simple but effective procedure with three steps: \mbox{\textit{(i)} train} a neural MT system on a parallel corpus; \mbox{\textit{(ii)} use} the trained model to extract feature representations for words in a language of interest; and \mbox{\textit{(iii)} train} a classifier using extracted features to make predictions for another task. We then evaluate the quality of the trained classifier on the given task as a proxy to the quality of the extracted representations. In this way, we obtain a quantitative measure of how well the original MT system learns features that are relevant to the given task. We focus on the tasks of part-of-speech (POS) and full morphological tagging. We investigate how different neural MT systems capture POS and morphology through a series of experiments along several parameters. For instance, we contrast word-based and character-based representations, use different encoding layers, vary source and target languages, and compare extracting features from the encoder vs.\ the decoder. We experiment with several languages with varying degrees of morphological richness: French, German, Czech, Arabic, and Hebrew.\ Our analysis reveals interesting insights such as: \begin{itemize}%[itemsep=3pt,topsep=5pt]%[leftmargin=*] \item Character-based representations are much better for learning morphology, especially for low-frequency words. This improvement is correlated with better BLEU scores. On the other hand, word-based models are sufficient for learning the structure of common words. \item Lower layers of the encoder are better at capturing word structure, while deeper networks improve translation quality, suggesting that higher layers focus more on word meaning. \item The target language impacts the kind of information learned by the MT system. Translating into morphologically-poorer languages leads to better source-side word representations. 
This is partly, but not completely, correlated with BLEU scores. \item The NMT encoder and decoder learn representations of similar quality. The attention mechanism affects the quality of the encoder representations more than that of the decoder representations. \end{itemize} \section{Data} \paragraph{Language pairs } We experiment with several language pairs, including morphologically-rich languages, that have received relatively significant attention in the MT community. These include Arabic-, German-, French-, and Czech-English pairs. To broaden our analysis and study the effect of having morphologically-rich languages on both source and target sides, we also include Arabic-Hebrew, two languages with rich and similar morphological systems, and Arabic-German, two languages with rich but different morphologies. \paragraph{MT data} Our translation models are trained on the WIT$^3$ corpus of TED talks \cite{cettoloEtAl:EAMT2012,cettolol:SeMaT:2016} made available for IWSLT 2016. This allows for comparable and cross-linguistic analysis. Statistics about each language pair are given in Table \ref{tab:tagsets} (under Pred). We use official dev and test sets for tuning and testing. Reported figures are the averages over test sets. \paragraph{Annotated data} We use two kinds of datasets to train POS and morphological classifiers: gold-standard and predicted tags. For predicted tags, we simply used freely available taggers to annotate the MT data. For gold tags, we use gold-annotated datasets. Table \ref{tab:tagsets} gives statistics for datasets with gold and predicted tags; see supplementary material for details on taggers and gold data. We train and test our classifiers on predicted annotations, and similarly on gold annotations, when we have them. We report both results wherever available. \section{Decoder Analysis} \label{sec:dec-analysis} So far we only looked at the encoder. However, the decoder \texttt{DEC} is a crucial part in an MT system with access to both source and target sentences. In order to examine what the decoder learns about morphology, we first train an NMT system on the parallel corpus. Then, we use the trained model to encode a source sentence and extract features for words in the target sentence. These features are used to train a classifier on POS or morphological tagging on the target side.\footnote{In this section we only experiment with predicted tags as there are no parallel data with gold POS/morphological tags that we are aware of.} % Note that in this case the decoder is given the correct target words one-by-one, similar to the usual NMT training regime. Table \ref{tab:pos-dec-enc-attn-nogold} (1st row) shows the results of using representations extracted with \texttt{ENC} and \texttt{DEC} from the Arabic-English and English-Arabic models, respectively. There is a modest drop in representation quality with the decoder. This drop may be correlated with lower BLEU scores when translating English to Arabic vs.\ Arabic to English. We observed similar small drops with higher quality translation directions (Table~\ref{tab:decoder}, Appendix~\ref{sec:sup-results}). The small gap between encoder and decoder representations may seem surprising when we consider the fundamental tasks of the two modules. The encoder's task is to create a generic, close to language-independent representation of the source sentence, as shown by recent evidence from multilingual NMT \cite{johnson2016google}. 
The decoder's task is to use this representation to generate the target sentence in a specific language. One might conjecture that it would be sufficient for the decoder to learn a strong language model in order % to produce morphologically-correct output, without learning much about morphology, while the encoder needs to learn quite a lot about source language morphology in order to create a good generic representation. However, their performance seems more or less comparable. In the following section we investigate the role of the attention mechanism in the division of labor between encoder and decoder. \subsection{Effect of attention} Consider the role of the attention mechanism in learning useful representations: during decoding, the attention weights are combined with the decoder's hidden states to generate the current translation. These two sources of information need to jointly point to the most relevant source word(s) and predict the next most likely word. Thus, the decoder puts significant emphasis on mapping back to the source sentence, which may come at the expense of obtaining a meaningful representation of the current word. We hypothesize that the attention mechanism might hurt the quality of the target word representations learned by the decoder. To test this hypothesis, we train NMT models with and without attention and compare the quality of their learned representations. As Table~\ref{tab:pos-dec-enc-attn-nogold} shows (compare 1st and 2nd rows), removing the attention mechanism decreases the quality of the encoder representations significantly, but only mildly hurts the quality of the decoder representations. It seems that the decoder does not rely on the attention mechanism to obtain good target word representations, contrary to our hypothesis. \subsection{Effect of word representation} We also conducted experiments to verify our findings regarding word-based versus character-based representations on the decoder side. By character representation we mean a character CNN on the input words. The decoder predictions are still done at the word-level, which enables us to use its hidden states as word representations. Table~\ref{tab:pos-dec-enc-word-char-nogold} shows POS accuracy of word-based % vs.\ char-based representations in the encoder and decoder. In both cases, char-based representations perform better. BLEU scores behave differently: the char-based model leads to better translations in Arabic-to-English, but not in English-to-Arabic. A possible explanation for this phenomenon % is that the decoder's predictions are still done at word level even with the char-based model (which encodes the target input but not the output). In practice, this can lead to generating unknown words. Indeed, in Arabic-to-English the char-based model reduces the number of generated unknown words % in the MT % test set by 25\%, while in English-to-Arabic the number of unknown words % remains roughly the same between word-based % and char-based models. \section{Related Work} \label{sec:related-work} \paragraph{Analysis of neural models} The opacity of neural networks has motivated researchers to analyze such models in different ways. One line of work visualizes hidden unit activations in recurrent neural networks that are trained for a given task \cite{elman1991distributed,karpathy2015visualizing,kadar2016representation,qian-qiu-huang:2016:EMNLP2016}. While such visualizations illuminate the inner workings of the network, they are often qualitative in nature and somewhat anecdotal. 
A different approach tries to provide a quantitative analysis by correlating parts of the neural % network with linguistic properties, for example by training a classifier to predict features of interest. Different units have been used, from word embeddings \cite{kohn:2015:EMNLP,qian-qiu-huang:2016:P16-11}, through LSTM gates or states \cite{qian-qiu-huang:2016:EMNLP2016}, to sentence embeddings \cite{adi2016fine}. Our work is most similar to \newcite{shi-padhi-knight:2016:EMNLP2016}, who use hidden vectors from a neural MT encoder to predict syntactic properties on the English source side. In contrast, we focus on representations in morphologically-rich languages and evaluate both source and target sides across several criteria. \newcite{vylomova2016word} also analyze different % representations for morphologically-rich languages in MT, but do not directly measure the quality of the learned representations. \paragraph{Word representations in MT} Machine translation systems that deal with morphologically-rich languages resort to various techniques for representing morphological knowledge, such as word segmentation \cite{C00-2162,E03-1076,Badr:2008:SES:1557690.1557732} and factored translation and reordering models \cite{koehn-hoang:2007:EMNLP-CoNLL2007,durrani-EtAl:2014:Coling}. Characters and other sub-word units have become increasingly popular in neural MT, although they had also been used in phrase-based MT for handling morphologically-rich \cite{Luong:D10-1015} or closely related language pairs \cite{durrani-EtAl:2010:ACL,Nakov:Tiedemann:2012}. In neural MT, such units are obtained in a pre-processing step---e.g.\ by byte-pair encoding \cite{sennrich-haddow-birch:2016:P16-12} or the word-piece model \cite{wu2016google}---or learned during training using a character-based convolutional/recurrent sub-network \cite{costajussa-fonollosa:2016:P16-2,Luong:P16-1100,vylomova2016word}. The latter approach has the advantage of keeping the original word boundaries without requiring pre- and post-processing. Here we focus on a character CNN which has been used in language modeling and machine translation \cite{kim2015character,belinkov-glass:2016:SeMaT, costajussa-fonollosa:2016:P16-2,jozefowicz2016exploring,sajjad:2017:ACL}. We evaluate the quality of different representations learned by an MT system augmented with a character CNN in terms of POS and morphological tagging, and contrast them with a purely word-based system. \section{Encoder Analysis} \label{sec:enc-analysis} Recall that after training the NMT system we freeze its parameters and use it only to generate features for the POS/morphology classifier. Given a trained encoder \texttt{ENC} and a sentence $s$ with POS/morphology annotation, we generate word features $\texttt{ENC}_i(s)$ for every word in the sentence. We then train a classifier that uses the features $\texttt{ENC}_i(s)$ to predict POS or morphological tags. \subsection{Effect of word representation} In this section, we compare different word representations extracted with different encoders. Our word-based model uses a word embedding matrix which is initialized randomly and learned with other NMT parameters. For a character-based model we adopt a convolutional neural network (CNN) over character embeddings that is also learned during training \cite{kim2015character,costajussa-fonollosa:2016:P16-2}; see appendix \ref{sec:sup-training} for specific settings. 
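As an illustration of the character-based alternative (again not the authors' model), a stripped-down character CNN word encoder, which embeds the characters of a word, convolves over them, and max-pools over time, can be sketched as follows; the highway network is omitted and the sizes are far smaller than the 1000 feature maps and width-6 kernels used in the paper.
\begin{verbatim}
# Simplified sketch of a character-CNN word representation: embed the
# characters of a word, apply a 1-D convolution, and max-pool over time.
# The highway network is omitted and all sizes here are illustrative.
import torch
import torch.nn as nn

class CharCNNWordEncoder(nn.Module):
    def __init__(self, n_chars=100, char_dim=25, n_filters=100, width=6):
        super().__init__()
        self.embed = nn.Embedding(n_chars, char_dim, padding_idx=0)
        self.conv = nn.Conv1d(char_dim, n_filters, kernel_size=width,
                              padding=width - 1)

    def forward(self, char_ids):
        # char_ids: (batch, max_word_len) character indices, 0 = padding
        x = self.embed(char_ids).transpose(1, 2)   # (batch, char_dim, len)
        x = torch.relu(self.conv(x))               # (batch, n_filters, len')
        return x.max(dim=2).values                 # max over time

enc = CharCNNWordEncoder()
word_vecs = enc(torch.randint(1, 100, (32, 12)))   # 32 words, 12 chars each
print(word_vecs.shape)                             # torch.Size([32, 100])
\end{verbatim}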
In both cases we run the encoder over these representations and use its output $\texttt{ENC}_i(s)$ as features for the classifier. Table~\ref{tab:results-all-pairs} shows POS tagging accuracy using features from different NMT encoders. Char-based models always generate better representations for POS tagging, especially in the case of morphologically-richer languages like Arabic and Czech. We observed a similar pattern in the full morphological tagging task. For example, we obtain morphological tagging accuracy of 65.2/79.66 and 67.66/81.66 using word/char-based representations from the Arabic-Hebrew and Arabic-English encoders, respectively.\footnote{The results are not far below dedicated taggers (e.g.\ 95.1/84.1 on Arabic POS/morphology \cite{PASHA14.593.L14-1479}), indicating that NMT models learn quite good representations.} The superior morphological power of the char-based model also manifests in better translation quality (measured by BLEU), as shown in Table~\ref{tab:results-all-pairs}. \paragraph{Impact of word frequency} Let us look more closely at an example case: Arabic POS and morphological tagging. Figure~\ref{fig:repr} shows the effect of using word-based vs.\ char-based feature representations, obtained from the encoder of the Arabic-Hebrew system (other language pairs exhibit similar trends). Clearly, the char-based model is superior to the word-based one. This is true for the overall accuracy (+14.3\% in POS, +14.5\% in morphology), but more so on OOV words (+37.6\% in POS, +32.7\% in morphology). Figure~\ref{fig:repr-freqs} shows that the gap between word-based and char-based representations increases as the frequency of the word in the training data decreases. In other words, the more frequent the word, the less need there is for character information. These findings make intuitive sense: the char-based model is able to learn character n-gram patterns that are important for identifying word structure, but as the word becomes more frequent the word-based model has seen enough examples to make a decision. \paragraph{Analyzing specific tags} In Figure~\ref{fig:repr-pos-cm} we plot confusion matrices for POS tagging using word-based and char-based representations (from Arabic encoders). While the char-based representations are overall better, the two models still share similar misclassified tags. Much of the confusion comes from wrongly predicting nouns (NN, NNP). In the word-based case, relatively many tags with determiner (DT+NNP, DT+NNPS, DT+NNS, DT+VBG) are wrongly predicted as non-determined nouns (NN, NNP). In the char-based case, this hardly happens. This suggests that char-based representations are predictive of the presence of a determiner, which in Arabic is expressed as the prefix ``Al-'' (the definite article), a pattern easily captured by a char-based model. In Figure~\ref{fig:repr-pos-tag-freq} we plot the difference in POS accuracy when moving from word-based to char-based representations, per POS tag frequency in the training data. Tags closer to the upper-right corner occur more frequently in the training set and are better predicted by char-based compared to word-based representations. There are a few fairly frequent tags (in the middle-bottom part of the figure) whose accuracy does not improve much when moving from word- to char-based representations: mostly conjunctions, determiners, and certain particles (CC, DT, WP). But there are several very frequent tags (NN, DT+NN, DT+JJ, VBP, and even PUNC) whose accuracy improves quite a lot. 
Then there are plural nouns (NNS, DT+NNS) where the char-based model really shines, which makes sense linguistically as plurality in Arabic is usually expressed by certain suffixes (\mbox{``-wn/yn''} for masc. plural, \mbox{``-At''} for fem. plural). The char-based model is thus especially good with frequent tags and infrequent words, which is understandable given that infrequent words typically belong to frequent open categories like nouns and verbs. \subsection{Effect of encoder depth} Modern NMT systems use very deep architectures with up to 8 or 16 layers \cite{wu2016google,TACL863}. We would like to understand what kind of information different layers capture. Given a trained model with multiple layers, we extract representations from the different layers in the encoder. Let $\texttt{ENC}^l_i(s)$ denote the encoded representation of word $w_i$ after the $l$-th layer. We vary $l$ and train different classifiers to predict POS or morphological tags. Here we focus on the case of a 2-layer encoder-decoder for simplicity ($l \in \{1,2\}$). Figure~\ref{fig:layer-effect-all-langs} shows POS tagging results using representations from different encoding layers across five language pairs. The general trend is that passing word vectors through the encoder improves POS tagging, which can be explained by contextual information contained in the representations after one layer. However, it turns out that representations from the 1st layer are better than those from the 2nd layer, at least for the purpose of capturing word structure. Figure~\ref{fig:layer-effect-lines} shows that the same pattern holds for both word-based and char-based representations, on Arabic POS and morphological tagging. In all cases, layer 1 representations are better than layer 2 representations.\footnote{We found this result to be also true in French, German, and Czech experiments (see the supplementary material). } In contrast, BLEU scores actually increase when training \mbox{2-layer} vs.\ \mbox{1-layer} models (+1.11/+0.56 BLEU for Arabic-Hebrew word/char-based models). Thus translation quality improves when adding layers but morphology quality degrades. Intuitively, it seems that lower layers of the network learn to represent word structure while higher layers focus more on word meaning. A similar pattern was recently observed in a joint language-vision deep recurrent net~\cite{gelderloos-chrupala:2016:COLING}. \subsection{Effect of target language} While translating from morphologically-rich languages is challenging, translating into such languages is even harder. For instance, our basic system obtains BLEU of 24.69/23.2 on Arabic/Czech to English, but only 13.37/13.9 on English to Arabic/Czech. How does the target language affect the learned source language representations? Does translating into a morphologically-rich language require more knowledge about source language morphology? In order to investigate these questions, we fix the source language and train NMT models on different target languages. For example, given an Arabic source we train Arabic-to-English/Hebrew/German systems. These target languages represent a morphologically-poor language (English), a morphologically-rich language with similar morphology to the source language (Hebrew), and a morphologically-rich language with different morphology (German). To make a fair comparison, we train the models on the intersection of the training data based on the source language. 
In this way the experimental setup is completely identical: the models are trained on the same Arabic sentences with different translations. Figure~\ref{fig:target-lang} shows POS and morphology accuracy of word-based representations from the NMT encoders, as well as corresponding BLEU scores. As expected, translating to English is easier than translating to the morphologically-richer Hebrew and German, resulting in higher BLEU. Despite their similar morphologies, translating Arabic to Hebrew is worse than Arabic to German, which can be attributed to the richer Hebrew morphology compared to German. POS and morphology accuracies share an intriguing pattern: the representations that are learned when translating to English are better for predicting POS or morphology than those learned when translating to German, which are in turn better than those learned when translating to Hebrew. This is remarkable given that English is a morphologically-poor language that does not display many of the morphological properties that are found in the Arabic source. In contrast, German and Hebrew have richer morphologies, so one could expect that translating into them would make the model learn more about morphology. A possible explanation for this phenomenon is that the Arabic-English model is simply better than the Arabic-Hebrew and Arabic-German models, as hinted by the BLEU scores in Table \ref{tab:results-all-pairs}. The inherent difficulty in translating Arabic to Hebrew/German may affect the ability to learn good representations of word structure. To probe this more, we trained an Arabic-Arabic autoencoder on the same training data. We found that it learns to recreate the test sentences extremely well, with very high BLEU scores (Figure~\ref{fig:target-lang}). However, its word representations are actually inferior for the purpose of POS/morphological tagging. This implies that higher BLEU does not necessarily entail better morphological representations. In other words, a better translation model learns more informative representations, but only when it is actually learning to translate rather than merely memorizing the data as in the autoencoder case. We found this to be consistently true also for char-based experiments, and in other language pairs.
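The frequency breakdown used in the encoder analysis above (word- vs char-based accuracy per training-frequency bin) reduces to grouping test words by how often they occur in the MT training data. A small sketch follows, assuming per-word correctness flags for the two probes as inputs; none of these names or example words come from the paper.
\begin{verbatim}
# Hedged sketch of the frequency breakdown: bucket test words by their
# frequency in the MT training data and compare word-based vs char-based
# probe accuracy per bucket. All inputs below are assumed, not from the paper.
from collections import Counter

def accuracy_by_frequency(train_counts, test_words, correct_word, correct_char,
                          bins=(1, 5, 10, 100, float("inf"))):
    buckets = {hi: [0, 0, 0] for hi in bins}   # upper bound -> [n, word ok, char ok]
    for w, ok_w, ok_c in zip(test_words, correct_word, correct_char):
        freq = train_counts.get(w, 0)
        for hi in bins:
            if freq <= hi:
                bucket = buckets[hi]
                bucket[0] += 1
                bucket[1] += ok_w
                bucket[2] += ok_c
                break
    for hi, (n, w_ok, c_ok) in buckets.items():
        if n:
            print(f"freq <= {hi}: word {w_ok / n:.2%}  char {c_ok / n:.2%}  ({n} words)")

# Tiny made-up example:
counts = Counter({"ktb": 50, "Al-kitAb": 3})
accuracy_by_frequency(counts, ["ktb", "Al-kitAb", "unseen"],
                      correct_word=[1, 0, 0], correct_char=[1, 1, 1])
\end{verbatim}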
What do Neural Machine Translation Models Learn about Morphology?
1704.03471
Table 7: POS accuracy and BLEU using decoder representations from different language pairs.
[ "[EMPTY]", "En-De", "En-Cz", "De-En", "Fr-En" ]
[ [ "POS", "94.3", "71.9", "93.3", "94.4" ], [ "BLEU", "23.4", "13.9", "29.6", "37.8" ] ]
There is a modest drop in representation quality with the decoder. This drop may be correlated with lower BLEU scores when translating English to Arabic vs. Arabic to English. We report here results that were omitted from the paper due to the space limit. As noted in the paper, all the results consistently show that i) layer 1 performs better than layers 0 and 2; and ii) char-based representations are better than word-based for learning morphology.
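Purely as an illustration of the correlation remark above, and not a computation from the paper, one can check how the decoder POS accuracies and BLEU scores in Table 7 move together; with only four language pairs the result is suggestive at best.
\begin{verbatim}
# Illustrative check (not from the paper) of how decoder POS accuracy and
# BLEU in Table 7 move together across the four language pairs.
from scipy.stats import pearsonr

pairs = ["En-De", "En-Cz", "De-En", "Fr-En"]
pos   = [94.3, 71.9, 93.3, 94.4]   # decoder POS accuracy, Table 7
bleu  = [23.4, 13.9, 29.6, 37.8]   # BLEU, Table 7

r, p = pearsonr(pos, bleu)
print(f"Pearson r = {r:.2f} (p = {p:.2f}) over {len(pairs)} language pairs")
\end{verbatim}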
\section{Motivation} \label{sec:motivation} Translating morphologically-rich languages is especially difficult due to a large vocabulary size and a high level of sparsity. Different solutions have been proposed to deal with this problem, for example factored models in phrase-based MT~\cite{koehn-hoang:2007:EMNLP-CoNLL2007} or softmax approximations in neural MT~\cite{ruder-softmax}. More recently, neural MT systems have shown significant gains by exploiting characters and other sub-word units~\cite{costajussa-fonollosa:2016:P16-2,sennrich-haddow-birch:2016:P16-12,wu2016google}. Presumably, such models are better than word-based models in representing the structure of rare and unseen words. Indeed, \newcite{sennrich-haddow-birch:2016:P16-12} have found that the unigram translation accuracy of words decreases for lower-frequency words. They also observed somewhat different behavior when translating into different languages. It is less clear, however, what and how neural translation models learn about word structure. In this work we are interested in answering questions such as the following: \begin{itemize} \item How do character-based models improve neural MT? \item What components of the NMT system encode word structure? \item How does the target language affect the learning of word structure? \item What is the role of the decoder in learning word representations? \end{itemize} In the next section, we describe our data-driven approach for addressing such questions. We aim to obtain quantitative answers that will lead to generalizable conclusions. \section{Methodology} Given a source sentence $s = \{w_1, w_2, ..., w_N\}$ and a target sentence $t=\{u_1, u_2, ..., u_M\}$, we first generate a vector representation for the source sentence using an encoder (Eqn.\ \ref{eq:enc}) and then map this vector to the target sentence using a decoder (Eqn.\ \ref{eq:dec}) \cite{sutskever2014sequence}: \begin{align} &\texttt{ENC}: s=\{w_1, w_2, ..., w_N\} \mapsto \sss \in \reals^k \label{eq:enc} \\ &\texttt{DEC} : \sss \in \reals^k \mapsto t=\{u_1, u_2, ..., u_M\} \label{eq:dec} \end{align} In this work, we use long short-term memory (LSTM) \cite{hochreiter1997long} encoder-decoders with attention \cite{bahdanau2014neural}, which we train on parallel data. After training the NMT system, we freeze the parameters of the encoder and use \texttt{ENC} as a feature extractor to generate vectors representing words in the sentence. Let $\texttt{ENC}_i(s)$ denote the encoded representation of word $w_i$. For example, this may be the output of the LSTM after word $w_i$. We feed $\texttt{ENC}_i(s)$ to a neural classifier that is trained to predict POS or morphological tags and evaluate the quality of the representation based on our ability to train a good classifier. By comparing the performance of classifiers trained with features from different instantiations of \texttt{ENC}, we can evaluate what MT encoders learn about word structure. Figure \ref{fig:approach} illustrates this process. We follow a similar procedure for analyzing representation learning in $\texttt{DEC}$. The classifier itself can be modeled in different ways. For example, it may be an LSTM over outputs of the encoder. However, as we are interested in assessing the quality of the representations learned by the MT system, we choose to model the classifier as a simple feed-forward network with one hidden layer and a ReLU non-linearity. 
Arguably, if the learned representations are good, then a non-linear classifier should be able to extract useful information from them.\footnote{We also experimented with a linear classifier and observed similar trends to the non-linear case, but overall lower results; \newcite{qian-qiu-huang:2016:P16-11} reported similar findings.} We emphasize that our goal is not to beat the state-of-the-art on a given task, but rather to analyze what NMT models % learn about morphology. The classifier is trained with a cross-entropy loss; more details on its architecture are in the supplementary material. \documentclass[11pt,a4paper]{article} \usepackage[hyperref]{acl2017} \usepackage[normalem]{ulem} % http://ctan.org/pkg/pifont \newcommand{\cmark}{\ding{51}}% \newcommand{\xmark}{\ding{55}}% \newcommand\alert[1]{{\textcolor{red}{#1}}} \aclfinalcopy % Uncomment this line for the final submission \def\aclpaperid{496} % Enter the acl Paper ID here \newcommand\BibTeX{B{\sc ib}\TeX} \newcommand{\reals}{\mathbb{R}} \newcommand{\xx}{\mathbf{x}} \newcommand{\ii}{\mathbf{i}} \newcommand{\ff}{\mathbf{f}} \newcommand{\oo}{\mathbf{o}} \newcommand{\cc}{\mathbf{c}} \newcommand{\bb}{\mathbf{b}} \newcommand{\hh}{\mathbf{h}} \newcommand{\uu}{\mathbf{u}} \newcommand{\ww}{\mathbf{w}} % word representation \newcommand{\sss}{\mathbf{s}} % sentence representation \newcommand{\WW}{\mathbf{W}} \newcommand{\mm}{\mathbf{m}} % memory \newcommand{\aaa}{\mathbf{a}} % attention \newcommand{\rr}{\mathbf{r}} % attention \newcommand{\zz}{\mathbf{z}} % noise \title{What do Neural Machine Translation Models Learn about Morphology?} \author{Yonatan Belinkov$^1$ ~~ Nadir Durrani$^2$ ~~ Fahim Dalvi$^2$ ~~ Hassan Sajjad$^2$ ~~ James Glass$^1$ \\\\ $^1$MIT Computer Science and Artificial Intelligence Laboratory, Cambridge, MA 02139, USA \\ {\tt \{belinkov, glass\}@mit.edu} \\ $^2$Qatar Computing Research Institute, HBKU, Doha, Qatar \\ {\tt \{ndurrani, faimaduddin, hsajjad\}@qf.org.qa} } \date{} \begin{document} \maketitle \begin{framed} \noindent This is a modified version of a paper originally published at ACL 2017 with updated results and discussion in section 5. \end{framed} \begin{abstract} Neural machine translation (MT) models obtain state-of-the-art performance while maintaining a simple, end-to-end architecture. However, little is known about what these models learn about source and target languages during the training process. In this work, we analyze the representations learned by neural MT models at various levels of granularity and empirically evaluate the quality of the representations for learning morphology through extrinsic part-of-speech and morphological tagging tasks. We conduct a thorough investigation along several parameters: word-based vs.\ character-based representations, depth of the encoding layer, the identity of the target language, and encoder vs.\ decoder representations. Our data-driven, quantitative evaluation sheds light on important aspects in the neural MT system and its ability to capture word structure.\footnote{Our code is available at \url{https://github.com/boknilev/nmt-repr-analysis}.} \end{abstract} \input{introduction} \input{methodology} \input{data} \input{encoder-analysis} \input{decoder-analysis} \input{related-work} \input{conclusion} \section*{Acknowledgments} We would like to thank Helmut Schmid for providing the Tiger corpus, members of the MIT Spoken Language Systems group for helpful comments, and the three anonymous reviewers for their useful suggestions. 
This research was carried out in collaboration between the HBKU Qatar Computing Research Institute (QCRI) and the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). \bibliographystyle{acl_natbib} \newpage \appendix \input{supplement} \end{document} \section{Conclusion} Neural networks have become ubiquitous in machine translation due to their elegant architecture and good performance. The representations they use for linguistic units are crucial for obtaining high-quality translation. In this work, we investigated how neural MT models learn word structure. We evaluated their representation quality on POS and morphological tagging in a number of languages. Our results lead to the following conclusions: \begin{itemize}%[itemsep=1pt,topsep=5pt]%[leftmargin=*] \item Character-based representations are better than word-based ones for learning morphology, especially in rare and unseen words. \item Lower layers of the neural network are better at capturing morphology, while deeper networks improve translation performance. We hypothesize that lower layers are more focused on word structure, while higher ones are focused on word meaning. \item Translating into morphologically-poorer languages leads to better source-side representations. This is partly, but not completely, % correlated with BLEU scores. % \item There are only little differences between encoder and decoder representation quality. The attention mechanism does not seem to significantly affect the quality of the decoder representations, while it is important for the encoder representations. \end{itemize} These insights can guide further development of neural MT systems. For instance, jointly learning translation and morphology can possibly lead to better representations and improved translation. Our analysis indicates that this kind of approach should take into account factors such as the encoding layer and the type of word representation. Another area for future work is to extend the analysis to other word % representations (e.g.\ byte-pair encoding), deeper networks, and more semantically-oriented tasks such as semantic role-labeling or semantic parsing. \section{Supplementary Material} \label{sec:supplemental} \subsection{Training Details} \label{sec:sup-training} \paragraph{POS/Morphological classifier} The classifier used for all prediction tasks is a feed-forward network with one hidden layer, dropout ($\rho=0.5$), a ReLU non-linearity, and an output layer mapping to the tag set (followed by a Softmax). The size of the hidden layer is set to be identical to the size of the encoder's hidden state (typically 500 dimensions). We use Adam \cite{kingma2014adam} with default parameters to minimize the cross-entropy objective. Training is run with mini-batches of size 16 and stopped once the loss on the dev set stops improving; we allow a patience of 5 epochs. \paragraph{Neural MT system} We train a 2-layer LSTM encoder-decoder with attention. We use the \texttt{seq2seq-attn} implementation \cite{kim2016} with the following default settings: word vectors and LSTM states have 500 dimensions, SGD with initial learning rate of 1.0 and rate decay of 0.5, and dropout rate of 0.3. The character-based model is a CNN with a highway network over characters \cite{kim2015character} with 1000 feature maps and a kernel width of 6 characters. This model was found to be useful for translating morphologically-rich languages \cite{costajussa-fonollosa:2016:P16-2}. 
The MT system is trained for 20 epochs, and the model with the best dev loss is used for extracting features for the classifier. \subsection{Data and Taggers} \label{sec:sup-data} \paragraph{Datasets} All of the translation models are trained on the TED talks corpus included in WIT$^3$ \cite{cettoloEtAl:EAMT2012,cettolol:SeMaT:2016}. Statistics about each language pair are available on the WIT$^3$ website: \url{https://wit3.fbk.eu}. For experiments using gold tags, we used the Arabic Treebank for Arabic (with the versions and splits described in the MADAMIRA manual \cite{PASHA14.593.L14-1479}) and the Tiger corpus for German.\footnote{\url{http://www.ims.uni-stuttgart.de/forschung/ressourcen/korpora/tiger.html}} \paragraph{POS and morphological taggers} We used the following tools to annotate the MT corpora: MADAMIRA \cite{PASHA14.593.L14-1479} for Arabic POS and morphological tags, Tree-Tagger \cite{schmid:2004:PAPERS} for Czech and French POS tags, LoPar \cite{schmid:00a} for German POS and morphological tags, and MXPOST \cite{ratnaparkhi98maximum} for English POS tags. These tools are recommended on the Moses website.\footnote{\url{http://www.statmt.org/moses/?n=Moses.ExternalTools}} As mentioned before, our goal is not to achieve state-of-the-art results, but rather to study what different components of the NMT architecture learn about word morphology. Please refer to \newcite{mueller-schmid-schutze:2013:EMNLP} for representative POS and morphological tagging accuracies. \subsection{Supplementary Results} \label{sec:sup-results} We report here results that were omitted from the paper due to the space limit. Table \ref{tab:different_layers} shows encoder results using different layers, languages, and representations (word/char-based). As noted in the paper, all the results consistently show that i) layer 1 performs better than layers 0 and 2; and ii) char-based representations are better than word-based for learning morphology. Table \ref{tab:different_language} shows that translating into a morphologically-poor language (English) leads to better source representations, and Table \ref{tab:decoder} provides additional decoder results. Table~\ref{tab:decoder-old} shows POS tagging accuracy using decoder representations, where the current word representation was used to predict the next word's tag. The idea is to evaluate whether the current word representation contains POS information about the output of the decoder. Clearly, the current word representation cannot be used to predict the next word's tag. This also holds when removing the attention (En-Ar, 85.54\%) or using character-based representations (En-Ar, 44.5\%). Since the decoder representation is in the pre-Softmax layer, this means that most of the work of predicting the next word is done in the Softmax layer, while the pre-Softmax representation contains much information about the current input word. \newpage \section{Introduction} Neural network models are quickly becoming the predominant approach to machine translation (MT). Training neural MT (NMT) models can be done in an end-to-end fashion, which is simpler and more elegant than traditional MT systems. Moreover, NMT systems have become competitive with, or better than, the previous state-of-the-art, especially since the introduction of sequence-to-sequence models and the attention mechanism \cite{bahdanau2014neural,sutskever2014sequence}. 
The improved translation quality is often attributed to better handling of non-local dependencies and morphology generation \cite{luong-manning:iwslt15,bentivogli-EtAl:2016:EMNLP2016,toral-sanchezcartagena:2017:EACLlong}. \bigskip However, little is known about what and how much these models learn about each language and its features. Recent work has started exploring the role of the NMT encoder in learning source syntax \cite{shi-padhi-knight:2016:EMNLP2016}, but research studies are yet to answer important questions such as: \textit{(i)} what do NMT models learn about word morphology? \textit{(ii)} what is the effect on learning when translating into/from morphologically-rich languages? \mbox{\textit{(iii)} what} impact do different representations (character vs.\ word) have on learning? and \textit{(iv)} what do different modules learn about the syntactic and semantic structure of a language? Answering such questions is imperative for fully understanding the NMT architecture. In this paper, we strive towards exploring \textit{(i)}, \textit{(ii)}, and \textit{(iii)} by providing quantitative, data-driven answers to the following specific questions: \begin{itemize}%[itemsep=5pt,topsep=8pt] %[leftmargin=*] \item Which parts of the NMT architecture capture word structure? \item What is the division of labor between different components (e.g.\ different layers or %of encoder vs.\ decoder)? \item How do different word representations help learn better morphology and modeling of infrequent words? \item How does the target language affect the learning of word structure? \end{itemize} To achieve this, we follow a simple but effective procedure with three steps: \mbox{\textit{(i)} train} a neural MT system on a parallel corpus; \mbox{\textit{(ii)} use} the trained model to extract feature representations for words in a language of interest; and \mbox{\textit{(iii)} train} a classifier using extracted features to make predictions for another task. We then evaluate the quality of the trained classifier on the given task as a proxy to the quality of the extracted representations. In this way, we obtain a quantitative measure of how well the original MT system learns features that are relevant to the given task. We focus on the tasks of part-of-speech (POS) and full morphological tagging. We investigate how different neural MT systems capture POS and morphology through a series of experiments along several parameters. For instance, we contrast word-based and character-based representations, use different encoding layers, vary source and target languages, and compare extracting features from the encoder vs.\ the decoder. We experiment with several languages with varying degrees of morphological richness: French, German, Czech, Arabic, and Hebrew.\ Our analysis reveals interesting insights such as: \begin{itemize}%[itemsep=3pt,topsep=5pt]%[leftmargin=*] \item Character-based representations are much better for learning morphology, especially for low-frequency words. This improvement is correlated with better BLEU scores. On the other hand, word-based models are sufficient for learning the structure of common words. \item Lower layers of the encoder are better at capturing word structure, while deeper networks improve translation quality, suggesting that higher layers focus more on word meaning. \item The target language impacts the kind of information learned by the MT system. Translating into morphologically-poorer languages leads to better source-side word representations. 
This is partly, but not completely, correlated with BLEU scores. \item The NMT encoder and decoder learn representations of similar quality. The attention mechanism affects the quality of the encoder representations more than that of the decoder representations. \end{itemize} \section{Data} \paragraph{Language pairs } We experiment with several language pairs, including morphologically-rich languages, that have received relatively significant attention in the MT community. These include Arabic-, German-, French-, and Czech-English pairs. To broaden our analysis and study the effect of having morphologically-rich languages on both source and target sides, we also include Arabic-Hebrew, two languages with rich and similar morphological systems, and Arabic-German, two languages with rich but different morphologies. \paragraph{MT data} Our translation models are trained on the WIT$^3$ corpus of TED talks \cite{cettoloEtAl:EAMT2012,cettolol:SeMaT:2016} made available for IWSLT 2016. This allows for comparable and cross-linguistic analysis. Statistics about each language pair are given in Table \ref{tab:tagsets} (under Pred). We use official dev and test sets for tuning and testing. Reported figures are the averages over test sets. \paragraph{Annotated data} We use two kinds of datasets to train POS and morphological classifiers: gold-standard and predicted tags. For predicted tags, we simply used freely available taggers to annotate the MT data. For gold tags, we use gold-annotated datasets. Table \ref{tab:tagsets} gives statistics for datasets with gold and predicted tags; see supplementary material for details on taggers and gold data. We train and test our classifiers on predicted annotations, and similarly on gold annotations, when we have them. We report both results wherever available. \section{Decoder Analysis} \label{sec:dec-analysis} So far we only looked at the encoder. However, the decoder \texttt{DEC} is a crucial part in an MT system with access to both source and target sentences. In order to examine what the decoder learns about morphology, we first train an NMT system on the parallel corpus. Then, we use the trained model to encode a source sentence and extract features for words in the target sentence. These features are used to train a classifier on POS or morphological tagging on the target side.\footnote{In this section we only experiment with predicted tags as there are no parallel data with gold POS/morphological tags that we are aware of.} % Note that in this case the decoder is given the correct target words one-by-one, similar to the usual NMT training regime. Table \ref{tab:pos-dec-enc-attn-nogold} (1st row) shows the results of using representations extracted with \texttt{ENC} and \texttt{DEC} from the Arabic-English and English-Arabic models, respectively. There is a modest drop in representation quality with the decoder. This drop may be correlated with lower BLEU scores when translating English to Arabic vs.\ Arabic to English. We observed similar small drops with higher quality translation directions (Table~\ref{tab:decoder}, Appendix~\ref{sec:sup-results}). The small gap between encoder and decoder representations may seem surprising when we consider the fundamental tasks of the two modules. The encoder's task is to create a generic, close to language-independent representation of the source sentence, as shown by recent evidence from multilingual NMT \cite{johnson2016google}. 
The decoder's task is to use this representation to generate the target sentence in a specific language. One might conjecture that it would be sufficient for the decoder to learn a strong language model in order % to produce morphologically-correct output, without learning much about morphology, while the encoder needs to learn quite a lot about source language morphology in order to create a good generic representation. However, their performance seems more or less comparable. In the following section we investigate the role of the attention mechanism in this division of labor between encoder and decoder. \subsection{Effect of attention} Consider the role of the attention mechanism in learning useful representations: during decoding, the attention weights are combined with the decoder's hidden states to generate the current translation. These two sources of information need to jointly point to the most relevant source word(s) and predict the next most likely word. Thus, the decoder puts significant emphasis on mapping back to the source sentence, which may come at the expense of obtaining a meaningful representation of the current word. We hypothesize that the attention mechanism might hurt the quality of the target word representations learned by the decoder. To test this hypothesis, we train NMT models with and without attention and compare the quality of their learned representations. As Table~\ref{tab:pos-dec-enc-attn-nogold} shows (compare 1st and 2nd rows), removing the attention mechanism decreases the quality of the encoder representations significantly, but only mildly hurts the quality of the decoder representations. It seems that the decoder does not rely on the attention mechanism to obtain good target word representations, contrary to our hypothesis. \subsection{Effect of word representation} We also conducted experiments to verify our findings regarding word-based versus character-based representations on the decoder side. By character representation we mean a character CNN on the input words. The decoder predictions are still done at the word-level, which enables us to use its hidden states as word representations. Table~\ref{tab:pos-dec-enc-word-char-nogold} shows POS accuracy of word-based % vs.\ char-based representations in the encoder and decoder. In both cases, char-based representations perform better. BLEU scores behave differently: the char-based model leads to better translations in Arabic-to-English, but not in English-to-Arabic. A possible explanation for this phenomenon % is that the decoder's predictions are still done at word level even with the char-based model (which encodes the target input but not the output). In practice, this can lead to generating unknown words. Indeed, in Arabic-to-English the char-based model reduces the number of generated unknown words % in the MT % test set by 25\%, while in English-to-Arabic the number of unknown words % remains roughly the same between word-based % and char-based models. \section{Related Work} \label{sec:related-work} \paragraph{Analysis of neural models} The opacity of neural networks has motivated researchers to analyze such models in different ways. One line of work visualizes hidden unit activations in recurrent neural networks that are trained for a given task \cite{elman1991distributed,karpathy2015visualizing,kadar2016representation,qian-qiu-huang:2016:EMNLP2016}. While such visualizations illuminate the inner workings of the network, they are often qualitative in nature and somewhat anecdotal.
A different approach tries to provide a quantitative analysis by correlating parts of the neural % network with linguistic properties, for example by training a classifier to predict features of interest. Different units have been used, from word embeddings \cite{kohn:2015:EMNLP,qian-qiu-huang:2016:P16-11}, through LSTM gates or states \cite{qian-qiu-huang:2016:EMNLP2016}, to sentence embeddings \cite{adi2016fine}. Our work is most similar to \newcite{shi-padhi-knight:2016:EMNLP2016}, who use hidden vectors from a neural MT encoder to predict syntactic properties on the English source side. In contrast, we focus on representations in morphologically-rich languages and evaluate both source and target sides across several criteria. \newcite{vylomova2016word} also analyze different % representations for morphologically-rich languages in MT, but do not directly measure the quality of the learned representations. \paragraph{Word representations in MT} Machine translation systems that deal with morphologically-rich languages resort to various techniques for representing morphological knowledge, such as word segmentation \cite{C00-2162,E03-1076,Badr:2008:SES:1557690.1557732} and factored translation and reordering models \cite{koehn-hoang:2007:EMNLP-CoNLL2007,durrani-EtAl:2014:Coling}. Characters and other sub-word units have become increasingly popular in neural MT, although they had also been used in phrase-based MT for handling morphologically-rich \cite{Luong:D10-1015} or closely related language pairs \cite{durrani-EtAl:2010:ACL,Nakov:Tiedemann:2012}. In neural MT, such units are obtained in a pre-processing step---e.g.\ by byte-pair encoding \cite{sennrich-haddow-birch:2016:P16-12} or the word-piece model \cite{wu2016google}---or learned during training using a character-based convolutional/recurrent sub-network \cite{costajussa-fonollosa:2016:P16-2,Luong:P16-1100,vylomova2016word}. The latter approach has the advantage of keeping the original word boundaries without requiring pre- and post-processing. Here we focus on a character CNN which has been used in language modeling and machine translation \cite{kim2015character,belinkov-glass:2016:SeMaT, costajussa-fonollosa:2016:P16-2,jozefowicz2016exploring,sajjad:2017:ACL}. We evaluate the quality of different representations learned by an MT system augmented with a character CNN in terms of POS and morphological tagging, and contrast them with a purely word-based system. \section{Encoder Analysis} \label{sec:enc-analysis} Recall that after training the NMT system we freeze its parameters and use it only to generate features for the POS/morphology classifier. Given a trained encoder \texttt{ENC} and a sentence $s$ with POS/morphology annotation, we generate word features $\texttt{ENC}_i(s)$ for every word in the sentence. We then train a classifier that uses the features $\texttt{ENC}_i(s)$ to predict POS or morphological tags. \subsection{Effect of word representation} In this section, we compare different word representations extracted with different encoders. Our word-based model uses a word embedding matrix which is initialized randomly and learned with other NMT parameters. For a character-based model we adopt a convolutional neural network (CNN) over character embeddings that is also learned during training \cite{kim2015character,costajussa-fonollosa:2016:P16-2}; see appendix \ref{sec:sup-training} for specific settings. 
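For concreteness, the classifier step itself can be sketched in a few lines of Python. This is an illustrative sketch only: it assumes the frozen encoder's per-word feature vectors and the corresponding tags have already been exported as arrays (the file names are hypothetical), and it uses a scikit-learn logistic regression as a stand-in for the classifier we actually train.
\begin{verbatim}
# Probing sketch: the NMT encoder is frozen and only used to produce per-word
# feature vectors; a simple classifier is then trained on top of them.
# Feature/tag files are assumed to have been exported beforehand
# (the file names below are hypothetical).
import numpy as np
from sklearn.linear_model import LogisticRegression

X_train = np.load("enc_features_train.npy")   # (num_words, hidden_dim)
y_train = np.load("pos_tags_train.npy")       # (num_words,)
X_test = np.load("enc_features_test.npy")
y_test = np.load("pos_tags_test.npy")

clf = LogisticRegression(max_iter=1000)       # stand-in classifier
clf.fit(X_train, y_train)
print("POS accuracy: %.3f" % clf.score(X_test, y_test))
\end{verbatim}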
In both cases we run the encoder over these representations and use its output $\texttt{ENC}_i(s)$ as features for the classifier. Table~\ref{tab:results-all-pairs} shows POS tagging accuracy using features from different NMT encoders. Char-based models always generate better representations for POS tagging, especially in the case of morphologically-richer languages like Arabic and Czech. We observed a similar pattern in the full morphological tagging task. For example, we obtain morphological tagging accuracy of 65.2/79.66 and 67.66/81.66 using word/char-based representations from the Arabic-Hebrew and Arabic-English encoders, respectively.\footnote{The results are not far below dedicated taggers (e.g.\ 95.1/84.1 on Arabic POS/morphology \cite{PASHA14.593.L14-1479}), indicating that NMT models learn quite good representations.} The superior morphological power of the char-based model also manifests in better translation quality (measured by BLEU), as shown in Table~\ref{tab:results-all-pairs}. \paragraph{Impact of word frequency} Let us look more closely at an example case: Arabic POS and morphological tagging. Figure~\ref{fig:repr} shows the effect of using word-based vs.\ char-based feature representations, obtained from the encoder of the Arabic-Hebrew system (other language pairs exhibit similar trends). Clearly, the char-based model is superior to the word-based one. This is true for the overall accuracy (+14.3\% in POS, +14.5\% in morphology), but more so on OOV words (+37.6\% in POS, +32.7\% in morphology). Figure~\ref{fig:repr-freqs} shows that the gap between word-based and char-based representations increases as the frequency of the word in the training data decreases. In other words, the more frequent the word, the less need there is for character information. These findings make intuitive sense: the char-based model is able to learn character n-gram patterns that are important for identifying word structure, but as the word becomes more frequent the word-based model has seen enough examples to make a decision. \paragraph{Analyzing specific tags} In Figure~\ref{fig:repr-pos-cm} we plot confusion matrices for POS tagging using word-based and char-based representations (from Arabic encoders). While the char-based representations are overall better, the two models still share similar misclassified tags. Much of the confusion comes from wrongly predicting nouns (NN, NNP). In the word-based case, relatively many tags with determiner (DT+NNP, DT+NNPS, DT+NNS, DT+VBG) are wrongly predicted as non-determined nouns (NN, NNP). In the char-based case, this hardly happens. This suggests that char-based representations are predictive of the presence of a determiner, which in Arabic is expressed as the prefix ``Al-'' (the definite article), a pattern easily captured by a char-based model. In Figure~\ref{fig:repr-pos-tag-freq} we plot the difference in POS accuracy when moving from word-based to char-based representations, per POS tag frequency in the training data. Tags closer to the upper-right corner occur more frequently in the training set and are better predicted by char-based compared to word-based representations. There are a few fairly frequent tags (in the middle-bottom part of the figure) whose accuracy does not improve much when moving from word- to char-based representations: mostly conjunctions, determiners, and certain particles (CC, DT, WP). But there are several very frequent tags (NN, DT+NN, DT+JJ, VBP, and even PUNC) whose accuracy improves quite a lot. 
Then there are plural nouns (NNS, DT+NNS) where the char-based model really shines, which makes sense linguistically as plurality in Arabic is usually expressed by certain suffixes (\mbox{``-wn/yn''} for masc. plural, \mbox{``-At''} for fem. plural). The char-based model is thus especially good with frequent tags and infrequent words, which is understandable given that infrequent words typically belong to frequent open categories like nouns and verbs. \subsection{Effect of encoder depth} Modern NMT systems use very deep architectures with up to 8 or 16 layers \cite{wu2016google,TACL863}. We would like to understand what kind of information different layers capture. Given a trained model with multiple layers, we extract representations from the different layers in the encoder. Let $\texttt{ENC}^l_i(s)$ denote the encoded representation of word $w_i$ after the $l$-th layer. We vary $l$ and train different classifiers to predict POS or morphological tags. Here we focus on the case of a 2-layer encoder-decoder for simplicity ($l \in \{1,2\}$). Figure~\ref{fig:layer-effect-all-langs} shows POS tagging results using representations from different encoding layers across five language pairs. The general trend is that passing word vectors through the encoder improves POS tagging, which can be explained by contextual information contained in the representations after one layer. However, it turns out that representations from the 1st layer are better than those from the 2nd layer, at least for the purpose of capturing word structure. Figure~\ref{fig:layer-effect-lines} shows that the same pattern holds for both word-based and char-based representations, on Arabic POS and morphological tagging. In all cases, layer 1 representations are better than layer 2 representations.\footnote{We found this result to be also true in French, German, and Czech experiments (see the supplementary material). } In contrast, BLEU scores actually increase when training \mbox{2-layer} vs.\ \mbox{1-layer} models (+1.11/+0.56 BLEU for Arabic-Hebrew word/char-based models). Thus translation quality improves when adding layers but morphology quality degrades. Intuitively, it seems that lower layers of the network learn to represent word structure while higher layers focus more on word meaning. A similar pattern was recently observed in a joint language-vision deep recurrent net~\cite{gelderloos-chrupala:2016:COLING}. \subsection{Effect of target language} While translating from morphologically-rich languages is challenging, translating into such languages is even harder. For instance, our basic system obtains BLEU of 24.69/23.2 on Arabic/Czech to English, but only 13.37/13.9 on English to Arabic/Czech. How does the target language affect the learned source language representations? Does translating into a morphologically-rich language require more knowledge about source language morphology? In order to investigate these questions, we fix the source language and train NMT models on different target languages. For example, given an Arabic source we train Arabic-to-English/Hebrew/German systems. These target languages represent a morphologically-poor language (English), a morphologically-rich language with similar morphology to the source language (Hebrew), and a morphologically-rich language with different morphology (German). To make a fair comparison, we train the models on the intersection of the training data based on the source language. 
In this way the experimental setup is completely identical: the models are trained on the same Arabic sentences with different translations. Figure~\ref{fig:target-lang} shows POS and morphology accuracy of word-based representations from the NMT encoders, as well as corresponding BLEU scores. As expected, translating to English is easier than translating to the morphologically-richer Hebrew and German, resulting in higher BLEU. Despite their similar morphologies, translating Arabic to Hebrew is worse than Arabic to German, which can be attributed to the richer Hebrew morphology compared to German. POS and morphology accuracies share an intriguing pattern: the representations that are learned when translating to English are better for predicting POS or morphology than those learned when translating to German, which are in turn better than those learned when translating to Hebrew. This is remarkable given that English is a morphologically-poor language that does not display many of the morphological properties that are found in the Arabic source. In contrast, German and Hebrew have richer morphologies, so one could expect that translating into them would make the model learn more about morphology. A possible explanation for this phenomenon is that the Arabic-English model is simply better than the Arabic-Hebrew and Arabic-German models, as hinted by the BLEU scores in Table \ref{tab:results-all-pairs}. The inherent difficulty in translating Arabic to Hebrew/German may affect the ability to learn good representations of word structure. To probe this more, we trained an Arabic-Arabic autoencoder on the same training data. We found that it learns to recreate the test sentences extremely well, with very high BLEU scores (Figure~\ref{fig:target-lang}). However, its word representations are actually inferior for the purpose of POS/morphological tagging. This implies that higher BLEU does not necessarily entail better morphological representations. In other words, a better translation model learns more informative representations, but only when it is actually learning to translate rather than merely memorizing the data as in the autoencoder case. We found this to be consistently true also for char-based experiments, and in other language pairs.
Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer
1701.06538
Table 8: Model comparison on 100 Billion Word Google News Dataset
[ "Model", "Test Perplexity", "Test Perplexity", "ops/timestep (millions)", "#Params excluding embed. & softmax", "Total #Params", "TFLOPS per GPU" ]
[ [ "[EMPTY]", ".1 epochs", "1 epoch", "[EMPTY]", "(millions)", "(billions)", "(observed)" ], [ "Kneser-Ney 5-gram", "67.1", "45.3", "0.00001", "[EMPTY]", "76.0", "[EMPTY]" ], [ "4xLSTM-512", "54.5", "47.0", "8.4", "8.4", "0.1", "[BOLD] 1.23" ], [ "MoE-32", "48.5", "40.4", "8.4", "37.8", "0.1", "0.83" ], [ "MoE-256-h", "42.8", "35.3", "8.4", "272.9", "0.4", "1.11" ], [ "MoE-1024-h", "40.3", "32.7", "8.5", "1079.0", "1.2", "1.14" ], [ "MoE-4096-h", "38.9", "30.9", "8.6", "4303.4", "4.4", "1.07" ], [ "MoE-16384-h", "[BOLD] 38.2", "29.7", "8.8", "17201.0", "17.3", "0.96" ], [ "MoE-65536-h", "[BOLD] 38.2", "[BOLD] 28.9", "9.2", "68791.0", "68.9", "0.72" ], [ "MoE-131072-h", "39.8", "29.2", "9.7", "137577.6", "137.7", "0.30" ] ]
We evaluate our model using perplexity on a holdout dataset. Perplexity after 100 billion training words is 39% lower for the 68-billion-parameter MoE model than for the baseline model. It is notable that the measured computational efficiency of the largest model (0.30 TFLOPS/GPU) is very low compared to the other models. This is likely a result of the fact that, for purposes of comparison to the other models, we did not increase the training batch size proportionally to the number of GPUs.
\documentclass{article} % For LaTeX2e \pdfoutput=1 \usepackage[T1]{fontenc} \usepackage[utf8]{inputenc} \newcommand{\gate}{BalancedSparseThreshold} \setlength\abovecaptionskip{0pt} \setlength{\textfloatsep}{8pt plus 1.0pt minus 2.0pt} \newcommand\revisions[1]{\textcolor{red}{#1}} \title{\Large Outrageously Large Neural Networks: \\ The Sparsely-Gated Mixture-of-Experts Layer} \author[1]{Noam Shazeer} \author[1]{Azalia Mirhoseini\thanks{Equally major contributors} \thanks{Work done as a member of the Google Brain Residency program (g.co/brainresidency)}~} \author[2]{Krzysztof Maziarz$^*$} \author[1]{Andy Davis} \author[1]{Quoc Le} \author[1]{Geoffrey Hinton} \author[1]{Jeff Dean} \affil[1]{Google Brain, \{noam,azalia,andydavis,qvl,geoffhinton,jeff\}@google.com} \affil[2]{Jagiellonian University, Cracow, krzysztof.maziarz@student.uj.edu.pl} \renewcommand\Authands{ and } \newcommand{\fix}{\marginpar{FIX}} \newcommand{\new}{\marginpar{NEW}} \begin{document} \maketitle \begin{abstract} The capacity of a neural network to absorb information is limited by its number of parameters. Conditional computation, where parts of the network are active on a per-example basis, has been proposed in theory as a way of dramatically increasing model capacity without a proportional increase in computation. In practice, however, there are significant algorithmic and performance challenges. In this work, we address these challenges and finally realize the promise of conditional computation, achieving greater than 1000x improvements in model capacity with only minor losses in computational efficiency on modern GPU clusters. We introduce a Sparsely-Gated Mixture-of-Experts layer (MoE), consisting of up to thousands of feed-forward sub-networks. A trainable gating network determines a sparse combination of these experts to use for each example. We apply the MoE to the tasks of language modeling and machine translation, where model capacity is critical for absorbing the vast quantities of knowledge available in the training corpora. We present model architectures in which a MoE with up to 137 billion parameters is applied convolutionally between stacked LSTM layers. On large language modeling and machine translation benchmarks, these models achieve significantly better results than state-of-the-art at lower computational cost. \end{abstract} \section{Introduction and Related Work} \subsection{Conditional Computation} Exploiting scale in both training data and model size has been central to the success of deep learning. When datasets are sufficiently large, increasing the capacity (number of parameters) of neural networks can give much better prediction accuracy. This has been shown in domains such as text \citep{sutskever2014sequence,bahdanau2014neural,RafalNoam16,GNMT}, images \citep{Imagenet,qvl2012building}, and audio \citep{hinton2012deep,DeepSpeech2}. For typical deep learning models, where the entire model is activated for every example, this leads to a roughly quadratic blow-up in training costs, as both the model size and the number of training examples increase. Unfortunately, the advances in computing power and distributed computation fall short of meeting such demand. Various forms of conditional computation have been proposed as a way to increase model capacity without a proportional increase in computational costs \citep{Davis13:CondComp, Bengio13:CondComp, eigen2013learning, Denoyer14:CondComp, Cho14, Bengio15:CondComp, Almahairi15}. 
In these schemes, large parts of a network are active or inactive on a per-example basis. The gating decisions may be binary or sparse and continuous, stochastic or deterministic. Various forms of reinforcement learning and back-propagation are proposed for training the gating decisions. While these ideas are promising in theory, no work to date has yet demonstrated massive improvements in model capacity, training time, or model quality. We blame this on a combination of the following challenges: \begin{itemize} \item Modern computing devices, especially GPUs, are much faster at arithmetic than at branching. Most of the works above recognize this and propose turning on/off large chunks of the network with each gating decision. \item Large batch sizes are critical for performance, as they amortize the costs of parameter transfers and updates. Conditional computation reduces the batch sizes for the conditionally active chunks of the network. \item Network bandwidth can be a bottleneck. A cluster of GPUs may have computational power thousands of times greater than the aggregate inter-device network bandwidth. To be computationally efficient, the relative computational versus network demands of an algorithm must exceed this ratio. Embedding layers, which can be seen as a form of conditional computation, are handicapped by this very problem. Since the embeddings generally need to be sent across the network, the number of (example, parameter) interactions is limited by network bandwidth instead of computational capacity. \item Depending on the scheme, loss terms may be necessary to achieve the desired level of sparsity per-chunk and/or per example. \cite{Bengio15:CondComp} use three such terms. These issues can affect both model quality and load-balancing. \item Model capacity is most critical for very large data sets. The existing literature on conditional computation deals with relatively small image recognition data sets consisting of up to 600,000 images. It is hard to imagine that the labels of these images provide a sufficient signal to adequately train a model with millions, let alone billions, of parameters. \end{itemize} In this work, we for the first time address all of the above challenges and finally realize the promise of conditional computation. We obtain greater than 1000x improvements in model capacity with only minor losses in computational efficiency and significantly advance the state-of-the-art results on public language modeling and translation data sets. \subsection{Our Approach: The Sparsely-Gated Mixture-of-Experts Layer} Our approach to conditional computation is to introduce a new type of general purpose neural network component: a Sparsely-Gated Mixture-of-Experts Layer (MoE). The MoE consists of a number of experts, each a simple feed-forward neural network, and a trainable gating network which selects a sparse combination of the experts to process each input (see Figure \ref{fig:moe}). All parts of the network are trained jointly by back-propagation. While the introduced technique is generic, in this paper we focus on language modeling and machine translation tasks, which are known to benefit from very large models. In particular, we apply a MoE convolutionally between stacked LSTM layers \citep{Hochreiter:1997:LSM}, as in Figure \ref{fig:moe}. The MoE is called once for each position in the text, selecting a potentially different combination of experts at each position.
The different experts tend to become highly specialized based on syntax and semantics (see Appendix \ref{sec:appendixmt} Table \ref{tab:experts}). On both language modeling and machine translation benchmarks, we improve on the best published results at a fraction of the computational cost. \subsection{Related work on Mixtures of Experts} Since its introduction more than two decades ago \citep{Jacobs91Adaptive,Jordan1994HME}, the mixture-of-experts approach has been the subject of much research. Different types of expert architectures have been proposed, such as SVMs \citep{Collobert02PMS}, Gaussian Processes \citep{Tresp2001Mixture,Theis2015Generative,Deisenroth15Distributed}, Dirichlet Processes \citep{Shahbaba09NMU}, and deep networks. Other work has focused on different expert configurations such as a hierarchical structure \citep{Yao09Hierarchical}, infinite numbers of experts \citep{Rasmussen02Infinite}, and adding experts sequentially \citep{Aljundi16}. \cite{Garmash2016ensemble} suggest an ensemble model in the format of mixture of experts for machine translation. The gating network is trained on a pre-trained ensemble NMT model. The works above concern top-level mixtures of experts, in which the mixture of experts is the whole model. \cite{eigen2013learning} introduce the idea of using multiple MoEs with their own gating networks as parts of a deep model. It is intuitive that the latter approach is more powerful, since complex problems may contain many sub-problems each requiring different experts. They also allude in their conclusion to the potential to introduce sparsity, turning MoEs into a vehicle for conditional computation. Our work builds on this use of MoEs as a general purpose neural network component. While \cite{eigen2013learning} uses two stacked MoEs allowing for two sets of gating decisions, our convolutional application of the MoE allows for different gating decisions at each position in the text. We also realize sparse gating and demonstrate its use as a practical way to massively increase model capacity. \section{The Structure of the Mixture-of-Experts layer}\label{sec:gating} The Mixture-of-Experts (MoE) layer consists of a set of $n$ ``expert networks" $E_1, \cdots, E_n$, and a ``gating network" $G$ whose output is a sparse $n$-dimensional vector. Figure \ref{fig:moe} shows an overview of the MoE module. The experts are themselves neural networks, each with their own parameters. Although in principle we only require that the experts accept the same-sized inputs and produce the same-sized outputs, in our initial investigations in this paper, we restrict ourselves to the case where the models are feed-forward networks with identical architectures, but with separate parameters. Let us denote by $G(x)$ and $E_i(x)$ the output of the gating network and the output of the $i$-th expert network for a given input $x$. The output $y$ of the MoE module can be written as follows: \begin{equation} y = \sum_{i=1}^{n}G(x)_iE_i(x) \end{equation} We save computation based on the sparsity of the output of $G(x)$. Wherever $G(x)_i=0$, we need not compute $E_i(x)$. In our experiments, we have up to thousands of experts, but only need to evaluate a handful of them for every example. If the number of experts is very large, we can reduce the branching factor by using a two-level hierarchical MoE. In a hierarchical MoE, a primary gating network chooses a sparse weighted combination of ``experts", each of which is itself a secondary mixture-of-experts with its own gating network.
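As a minimal illustration of the sparse combination $y=\sum_{i}G(x)_iE_i(x)$ above, the following NumPy sketch (toy one-hidden-layer experts, a single input vector, hypothetical sizes; not our TensorFlow implementation) shows how experts with zero gate values are simply never evaluated. A gating function that produces such sparse weight vectors is described in the next subsection.
\begin{verbatim}
# Illustrative sketch of the sparse MoE combination y = sum_i G(x)_i * E_i(x)
# for a single input vector x. Experts with zero gate values are skipped,
# which is where the computational savings come from.
import numpy as np

def expert_forward(w1, w2, x):
    # toy expert: one ReLU hidden layer (sizes are illustrative)
    return np.maximum(x @ w1, 0.0) @ w2

def moe_forward(x, gates, experts):
    y = np.zeros(experts[0][1].shape[1])
    for g, (w1, w2) in zip(gates, experts):
        if g != 0.0:
            y += g * expert_forward(w1, w2, x)
    return y

rng = np.random.default_rng(0)
experts = [(0.01 * rng.standard_normal((512, 1024)),
            0.01 * rng.standard_normal((1024, 512))) for _ in range(8)]
gates = np.zeros(8)
gates[[2, 5]] = [0.6, 0.4]     # only k=2 experts are active for this input
y = moe_forward(rng.standard_normal(512), gates, experts)
\end{verbatim}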
In the following we focus on ordinary MoEs. We provide more details on hierarchical MoEs in Appendix \ref{sec:hierarchical}. Our implementation is related to other models of conditional computation. A MoE whose experts are simple weight matrices is similar to the parameterized weight matrix proposed in \citep{Cho14}. A MoE whose experts have one hidden layer is similar to the block-wise dropout described in \citep{Bengio15:CondComp}, where the dropped-out layer is sandwiched between fully-activated layers. \subsection{Gating Network} \paragraph{Softmax Gating:} A simple choice of non-sparse gating function \citep{Jordan1994HME} is to multiply the input by a trainable weight matrix $W_g$ and then apply the $Softmax$ function. \begin{equation}\label{eq:softmax} G_\sigma(x) = Softmax(x \cdot W_g) \end{equation} \paragraph{Noisy Top-K Gating:}\label{sec:noisytopk} We add two components to the Softmax gating network: sparsity and noise. Before taking the softmax function, we add tunable Gaussian noise, then keep only the top k values, setting the rest to $-\infty$ (which causes the corresponding gate values to equal $0$). The sparsity serves to save computation, as described above. While this form of sparsity creates some theoretically scary discontinuities in the output of gating function, we have not yet observed this to be a problem in practice. The noise term helps with load balancing, as will be discussed in Appendix \ref{sec:load}. The amount of noise per component is controlled by a second trainable weight matrix $W_{noise}$. \begin{equation}\label{eq:g} G(x) = Softmax(KeepTopK(H(x), k)) \end{equation} \begin{equation}\label{eq:noise} H(x)_i = (x \cdot W_g)_i + StandardNormal() \cdot Softplus((x \cdot W_{noise})_i) \end{equation} \begin{equation}\label{eq:keeptopk} KeepTopK(v, k)_i = \begin{cases} v_i & \text{if $v_i$ is in the top $k$ elements of $v$.} \\ -\infty & \text{otherwise.} \end{cases} \end{equation} \paragraph{Training the Gating Network} We train the gating network by simple back-propagation, along with the rest of the model. If we choose $k>1$, the gate values for the top k experts have nonzero derivatives with respect to the weights of the gating network. This type of occasionally-sensitive behavior is described in \citep{Bengio13:CondComp} with respect to noisy rectifiers. Gradients also back-propagate through the gating network to its inputs. Our method differs here from \citep{Bengio15:CondComp} who use boolean gates and a REINFORCE-style approach to train the gating network. \section{Addressing Performance Challenges} \label{sec:performance} \subsection{The Shrinking Batch Problem} On modern CPUs and GPUs, large batch sizes are necessary for computational efficiency, so as to amortize the overhead of parameter loads and updates. If the gating network chooses $k$ out of $n$ experts for each example, then for a batch of $b$ examples, each expert receives a much smaller batch of approximately $\frac{kb}{n}\ll b$ examples. This causes a naive MoE implementation to become very inefficient as the number of experts increases. The solution to this shrinking batch problem is to make the original batch size as large as possible. However, batch size tends to be limited by the memory necessary to store activations between the forwards and backwards passes. 
We propose the following techniques for increasing the batch size: \paragraph{Mixing Data Parallelism and Model Parallelism:} In a conventional distributed training setting, multiple copies of the model on different devices asynchronously process distinct batches of data, and parameters are synchronized through a set of parameter servers. In our technique, these different batches run synchronously so that they can be combined for the MoE layer. We distribute the standard layers of the model and the gating network according to conventional data-parallel schemes, but keep only one shared copy of each expert. Each expert in the MoE layer receives a combined batch consisting of the relevant examples from all of the data-parallel input batches. The same set of devices function as data-parallel replicas (for the standard layers and the gating networks) and as model-parallel shards (each hosting a subset of the experts). If the model is distributed over $d$ devices, and each device processes a batch of size $b$, each expert receives a batch of approximately $\frac{kbd}{n}$ examples. Thus, we achieve a factor of $d$ improvement in expert batch size. In the case of a hierarchical MoE (Section \ref{sec:hierarchical}), the primary gating network employs data parallelism, and the secondary MoEs employ model parallelism. Each secondary MoE resides on one device. This technique allows us to increase the number of experts (and hence the number of parameters) by proportionally increasing the number of devices in the training cluster. The total batch size increases, keeping the batch size per expert constant. The memory and bandwidth requirements per device also remain constant, as do the step times, as does the amount of time necessary to process a number of training examples equal to the number of parameters in the model. It is our goal to train a trillion-parameter model on a trillion-word corpus. We have not scaled our systems this far as of the writing of this paper, but it should be possible by adding more hardware. \paragraph{Taking Advantage of Convolutionality:} In our language models, we apply the same MoE to each time step of the previous layer. If we wait for the previous layer to finish, we can apply the MoE to all the time steps together as one big batch. Doing so increases the size of the input batch to the MoE layer by a factor of the number of unrolled time steps. \paragraph{Increasing Batch Size for a Recurrent MoE:} We suspect that even more powerful models may involve applying a MoE recurrently. For example, the weight matrices of a LSTM or other RNN could be replaced by a MoE. Sadly, such models break the convolutional trick from the last paragraph, since the input to the MoE at one timestep depends on the output of the MoE at the previous timestep. \cite{Gruslys16} describe a technique for drastically reducing the number of stored activations in an unrolled RNN, at the cost of recomputing forward activations. This would allow for a large increase in batch size. \subsection{Network Bandwidth} Another major performance concern in distributed computing is network bandwidth. Since the experts are stationary (see above) and the number of gating parameters is small, most of the communication involves sending the inputs and outputs of the experts across the network. To maintain computational efficiency, the ratio of an expert's computation to the size of its input and output must exceed the ratio of computational to network capacity of the computing device. 
For GPUs, this may be thousands to one. In our experiments, we use experts with one hidden layer containing thousands of RELU-activated units. Since the weight matrices in the expert have sizes $input$\_${size} \times hidden$\_${size}$ and $hidden$\_${size} \times output$\_${size}$, the ratio of computation to input and output is equal to the size of the hidden layer. Conveniently, we can increase computational efficiency simply by using a larger hidden layer, or more hidden layers. \section{Balancing Expert Utilization} \label{sec:losses} We have observed that the gating network tends to converge to a state where it always produces large weights for the same few experts. This imbalance is self-reinforcing, as the favored experts are trained more rapidly and thus are selected even more by the gating network. \cite{eigen2013learning} describe the same phenomenon, and use a hard constraint at the beginning of training to avoid this local minimum. \cite{Bengio15:CondComp} include a soft constraint on the batch-wise average of each gate.\footnote{\cite{Bengio15:CondComp} also include two additional losses. One controls per-example sparsity, which we do not need since it is enforced by the fixed value of $k$. A third loss encourages diversity of gate values. In our experiments, we find that the gate values naturally diversify as the experts specialize (in a virtuous cycle), and we do not need to enforce diversity of gate values.} We take a soft constraint approach. We define the importance of an expert relative to a batch of training examples to be the batchwise sum of the gate values for that expert. We define an additional loss $L_{importance}$, which is added to the overall loss function for the model. This loss is equal to the square of the coefficient of variation of the set of importance values, multiplied by a hand-tuned scaling factor $w_{importance}$. This additional loss encourages all experts to have equal importance. \begin{equation}\label{eq:gateloss} Importance(X) = \sum_{x \in X}G(x) \end{equation} \begin{equation}\label{eq:importanceloss} L_{importance}(X) = w_{importance} \cdot CV(Importance(X))^2 \end{equation} While this loss function can ensure equal importance, experts may still receive very different numbers of examples. For example, one expert may receive a few examples with large weights, and another may receive many examples with small weights. This can cause memory and performance problems on distributed hardware. To solve this problem, we introduce a second loss function, $L_{load}$ , which ensures balanced loads. Appendix \ref{sec:load} contains the definition of this function, along with experimental results. \section{Experiments} \subsection{1 Billion Word Language Modeling Benchmark}\label{sec:lm} \paragraph{Dataset:} This dataset, introduced by \citep{chelba2013one} consists of shuffled unique sentences from news articles, totaling approximately 829 million words, with a vocabulary of 793,471 words. \paragraph{Previous State-of-the-Art:} The best previously published results \citep{RafalNoam16} use models consisting of one or more stacked Long Short-Term Memory (LSTM) layers \citep{Hochreiter:1997:LSM,Gers:2000:LFC}. The number of parameters in the LSTM layers of these models vary from 2 million to 151 million. Quality increases greatly with parameter count, as do computational costs. Results for these models form the top line of Figure \ref{fig:lm1b}-right. 
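(As a brief aside to Section~\ref{sec:losses}: the importance loss is compact enough to sketch directly. The following is an illustrative NumPy sketch, not our TensorFlow implementation; \texttt{gates} holds the gate vectors $G(x)$ for one training batch, and the default scaling factor is only a placeholder.)
\begin{verbatim}
# Illustrative sketch of the importance-based balancing loss.
# gates has shape (batch_size, num_experts).
import numpy as np

def importance_loss(gates, w_importance=0.1):
    importance = gates.sum(axis=0)                 # batchwise sum of gate values
    cv = np.std(importance) / np.mean(importance)  # coefficient of variation
    return w_importance * cv ** 2
\end{verbatim}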
\paragraph{MoE Models:} Our models consist of two stacked LSTM layers with a MoE layer between them (see Figure \ref{fig:moe}). We vary the sizes of the layers and the number of experts. For full details on model architecture, training regimen, additional baselines and results, see Appendix \ref{sec:appendixlm1b}. \paragraph{Low Computation, Varied Capacity:} To investigate the effects of adding capacity, we trained a series of MoE models all with roughly equal computational costs: about 8 million multiply-and-adds per training example per timestep in the forwards pass, excluding the softmax layer. We call this metric (ops/timestep). We trained models with flat MoEs containing 4, 32, and 256 experts, and models with hierarchical MoEs containing 256, 1024, and 4096 experts. Each expert had about 1 million parameters. For all the MoE layers, 4 experts were active per input. The results of these models are shown in Figure \ref{fig:lm1b}-left. The model with 4 always-active experts performed (unsurprisingly) similarly to the computationally-matched baseline models, while the largest of the models (4096 experts) achieved an impressive 24\% lower perplexity on the test set. \paragraph{Varied Computation, High Capacity:} In addition to the largest model from the previous section, we trained two more MoE models with similarly high capacity (4 billion parameters), but higher computation budgets. These models had larger LSTMs, and fewer but larger experts. Details can be found in Appendix \ref{sec:expensive}. Results of these three models form the bottom line of Figure \ref{fig:lm1b}-right. Table \ref{tab:lm1bshort} compares the results of these models to the best previously-published result on this dataset. Even the fastest of these models beats the best published result (when controlling for the number of training epochs), despite requiring only 6\% of the computation. \paragraph{Computational Efficiency:} We trained our models using TensorFlow \citep{Abadi16} on clusters containing 16-32 Tesla K40 GPUs. For each of our models, we determine computational efficiency in TFLOPS/GPU by dividing the number of floating point operations required to process one training batch by the observed step time and the number of GPUs in the cluster. The operation counts used here are higher than the ones we report in our ops/timestep numbers in that we include the backwards pass, we include the importance-sampling-based training of the softmax layer, and we count a multiply-and-add as two separate operations. For all of our MoE models, the floating point operations involved in the experts represent between 37\% and 46\% of the total. For our baseline models with no MoE, observed computational efficiency ranged from 1.07-1.29 TFLOPS/GPU. For our low-computation MoE models, computation efficiency ranged from 0.74-0.90 TFLOPS/GPU, except for the 4-expert model which did not make full use of the available parallelism. Our highest-computation MoE model was more efficient at 1.56 TFLOPS/GPU, likely due to the larger matrices. These numbers represent a significant fraction of the theoretical maximum of 4.29 TFLOPS/GPU claimed by NVIDIA. Detailed results are in Appendix \ref{sec:appendixlm1b}, Table \ref{tab:lm1bresults}. \subsection{100 Billion Word Google News Corpus} On the 1-billion-word corpus, adding additional capacity seems to produce diminishing returns as the number of parameters in the MoE layer exceeds 1 billion, as can be seen in Figure \ref{fig:lm1b}-left.
We hypothesized that for a larger training set, even higher capacities would produce significant quality improvements. We constructed a similar training set consisting of shuffled unique sentences from Google's internal news corpus, totalling roughly 100 billion words. Similarly to the previous section, we tested a series of models with similar computational costs of about 8 million ops/timestep. In addition to a baseline LSTM model, we trained models augmented with MoE layers containing 32, 256, 1024, 4096, 16384, 65536, and 131072 experts. This corresponds to up to 137 billion parameters in the MoE layer. Details on architecture, training, and results are given in Appendix \ref{sec:appendixgn11}. \paragraph{Results:} Figure \ref{fig:gn11} shows test perplexity as a function of capacity after training on 10 billion words (top line) and 100 billion words (bottom line). When training over the full 100 billion words, test perplexity improves significantly up to 65536 experts (68 billion parameters), dropping 39\% lower than the computationally matched baseline, but degrades at 131072 experts, possibly a result of too much sparsity. The widening gap between the two lines demonstrates (unsurprisingly) that increased model capacity helps more on larger training sets. Even at 65536 experts (99.994\% layer sparsity), computational efficiency for the model stays at a respectable 0.72 TFLOPS/GPU. \subsection{Machine Translation (Single Language Pair)} \label{sec:mt} \paragraph{Model Architecture:} Our model was a modified version of the GNMT model described in~\citep{GNMT}. To reduce computation, we decreased the number of LSTM layers in the encoder and decoder from 9 and 8 to 3 and 2 respectively. We inserted MoE layers in both the encoder (between layers 2 and 3) and the decoder (between layers 1 and 2). Each MoE layer contained up to 2048 experts each with about two million parameters, adding a total of about 8 billion parameters to the models. Further details on model architecture, testing procedure and results can be found in Appendix \ref{sec:appendixmt}. \paragraph{Datasets:} We benchmarked our method on the WMT'14 En$\rightarrow$Fr and En$\rightarrow$De corpora, whose training sets have 36M sentence pairs and 5M sentence pairs, respectively. The experimental protocols were also similar to those in~\citep{GNMT}: newstest2014 was used as the test set to compare against previous work \citep{LuongPM:2015:EAANMT,Zhou:2016:DeppAtt,GNMT}, while the combination of newstest2012 and newstest2013 was used as the development set. We also tested the same model on Google's Production English to French data. \paragraph{Results:} Tables \ref{tab:wmtenfr}, \ref{tab:wmtende}, and~\ref{tab:prodmt} show the results of our largest models, compared with published results. Our approach achieved BLEU scores of 40.56 and 26.03 on the WMT'14 En$\rightarrow$Fr and En$\rightarrow$De benchmarks. As our models did not use RL refinement, these results constitute significant gains of 1.34 and 1.12 BLEU score on top of the strong baselines in \citep{GNMT}. The perplexity scores are also better.\footnote{Reported perplexities are relative to the tokenization used by both our models and GNMT.} On the Google Production dataset, our model achieved 1.01 higher test BLEU score even after training for only one sixth of the time. \subsection{Multilingual Machine Translation} \label{sec:mlmt} \paragraph{Dataset:} \citep{Johnson16} train a single GNMT \citep{GNMT} model on a very large combined dataset of twelve language pairs.
Results are somewhat worse than those for 12 separately trained single-pair GNMT models. This is not surprising, given that the twelve models have 12 times the capacity and twelve times the aggregate training of the one model. We repeat this experiment with a single MoE-augmented model. See Appendix \ref{sec:appendixmt} for details on model architecture. We train our model on the same dataset as \citep{Johnson16} and process the same number of training examples (about 3 billion sentence pairs). Our training time was shorter due to the lower computational budget of our model. \paragraph{Results:} Results for the single-pair GNMT models, the multilingual GNMT model and the multilingual MoE model are given in Table \ref{tab:ml}. The MoE model achieves 19\% lower perplexity on the dev set than the multilingual GNMT model. On BLEU score, the MoE model significantly beats the multilingual GNMT model on 11 of the 12 language pairs (by as much as 5.84 points), and even beats the monolingual GNMT models on 8 of 12 language pairs. The poor performance on English $\rightarrow$ Korean seems to be a result of severe overtraining, as for the rarer language pairs a small number of real examples were highly oversampled in the training corpus. \vspace{-8pt}\section{Conclusion}\label{sec:conc} This work is the first to demonstrate major wins from conditional computation in deep networks. We carefully identified the design considerations and challenges of conditional computing and addressed them with a combination of algorithmic and engineering solutions. While we focused on text, conditional computation may help in other domains as well, provided sufficiently large training sets. We look forward to seeing many novel implementations and applications of conditional computation in the years to come. \subsubsection*{Acknowledgments} We would like to thank all of the members of the Google Brain and Google Translate teams who helped us with this project, in particular Zhifeng Chen, Yonghui Wu, and Melvin Johnson. Thanks also to our anonymous ICLR reviewers for the helpful suggestions on making this paper better. \bibliographystyle{iclr2017_conference} \newpage \appendix \section*{Appendices} \addcontentsline{toc}{section}{Appendices} \renewcommand{\thesubsection}{\Alph{subsection}} \subsection{Load-Balancing Loss} \label{sec:load} As discussed in section \ref{sec:losses}, for load-balancing purposes, we want to define an additional loss function to encourage experts to receive roughly equal numbers of training examples. Unfortunately, the number of examples received by an expert is a discrete quantity, so it can not be used in back-propagation. Instead, we define a smooth estimator $Load(X)$ of the number of examples assigned to each expert for a batch $X$ of inputs. The smoothness allows us to back-propagate gradients through the estimator. This is the purpose of the noise term in the gating function. We define $P(x, i)$ as the probability that $G(x)_i$ is nonzero, given a new random choice of noise on element $i$, but keeping the already-sampled choices of noise on the other elements. To compute $P(x, i)$, we note that the $G(x)_i$ is nonzero if and only if $H(x)_i$ is greater than the $k^{th}$-greatest element of $H(x)$ excluding itself. 
The probability works out to be: \begin{equation} \begin{aligned} P(x, i) = Pr\Big( (x \cdot W_g)_i + StandardNormal() \cdot Softplus((x \cdot W_{noise})_i) \\ > kth\_excluding(H(x), k, i)\Big) \end{aligned} \end{equation} Where $kth\_excluding(v, k, i)$ means the kth highest component of $v$, excluding component $i$. Simplifying, we get: \begin{equation} P(x, i) = \Phi\Big(\frac{(x \cdot W_g)_i - kth\_excluding(H(x), k, i)}{Softplus((x \cdot W_{noise})_i)}\Big) \end{equation} Where $\Phi$ is the CDF of the standard normal distribution. \begin{equation} Load(X)_i = \sum_{x \in X}P(x, i) \end{equation} We can now define the load loss to be the square of the coefficient of variation of the load vector, multiplied by a hand-tuned scaling factor $w_{load}$. \begin{equation}\label{eq:loadloss} L_{load}(X) = w_{load} \cdot CV(Load(X))^2 \end{equation} \paragraph{Initial Load Imbalance:} To avoid out-of-memory errors, we need to initialize the network in a state of approximately equal expert load (since the soft constraints need some time to work). To accomplish this, we initialize the matrices $W_g$ and $W_{noise}$ to all zeros, which yields no signal and some noise. \paragraph{Experiments:} We trained a set of models with identical architecture (the MoE-256 model described in Appendix \ref{sec:appendixlm1b}), using different values of $w_{importance}$ and $w_{load}$. We trained each model for 10 epochs, then measured perplexity on the test set. We also measured the coefficients of variation in $Importance$ and $Load$, as well as the ratio of the load on the most overloaded expert to the average load. This last value is significant for load balancing purposes on distributed hardware. All of these metrics were averaged over several training batches. \paragraph{Results:} Results are reported in Table \ref{tab:losses}. All the combinations containing at least one of the two losses led to very similar model quality, while having no loss was much worse. Models with higher values of $w_{load}$ had lower loads on the most overloaded expert. \subsection{Hierarchical Mixture of Experts} \label{sec:hierarchical} If the number of experts is very large, we can reduce the branching factor by using a two-level hierarchical MoE. In a hierarchical MoE, a primary gating network chooses a sparse weighted combination of ``experts", each of which is itself a secondary mixture-of-experts with its own gating network.\footnote{ We have not found the need for deeper hierarchies.} If the hierarchical MoE consists of $a$ groups of $b$ experts each, we denote the primary gating network by $G_{primary}$, the secondary gating networks by $(G_1, G_2 .. G_a)$, and the expert networks by $(E_{0,0}, E_{0,1} .. E_{a,b})$. The output of the MoE is given by: \begin{equation}\label{eq:gate_expert} y_H = \sum_{i=1}^{a}\sum_{j=1}^{b}G_{primary}(x)_i \cdot G_i(x)_j \cdot E_{i,j}(x) \end{equation} Our metrics of expert utilization change to the following: \begin{equation} Importance_H(X)_{i,j} = \sum_{x \in X}G_{primary}(x)_i \cdot G_i(x)_j \end{equation} \begin{equation} Load_H(X)_{i,j} = \frac{Load_{primary}(X)_i \cdot Load_i(X^{(i)})_j}{|X^{(i)}|} \end{equation} $Load_{primary}$ and $Load_i$ denote the $Load$ functions for the primary gating network and $i^{th}$ secondary gating network respectively. $X^{(i)}$ denotes the subset of $X$ for which $G_{primary}(x)_i > 0$.
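(Returning to the single-level case for a moment, the smooth load estimator and load loss defined in Appendix~\ref{sec:load} can be sketched as follows. This is an illustrative NumPy/SciPy sketch for clarity, not our TensorFlow implementation; \texttt{H} holds the already-sampled noisy logits $H(x)$ for the batch, and the default scaling factor is a placeholder.)
\begin{verbatim}
# Illustrative sketch of the smooth load estimator Load(X) and the load loss.
# X: (batch, d) inputs; H: (batch, n) noisy logits H(x) sampled in the forward pass;
# W_g, W_noise: (d, n) gating parameters; k: number of active experts.
import numpy as np
from scipy.stats import norm

def softplus(z):
    return np.log1p(np.exp(z))

def kth_excluding(v, k, i):
    # k-th highest component of v, excluding component i
    return np.sort(np.delete(v, i))[-k]

def load_loss(X, H, W_g, W_noise, k, w_load=0.1):
    clean = X @ W_g                       # (x . W_g)_i for every example
    scale = softplus(X @ W_noise)         # per-component noise standard deviations
    P = np.empty_like(clean)
    for b in range(clean.shape[0]):
        for i in range(clean.shape[1]):
            thr = kth_excluding(H[b], k, i)
            P[b, i] = norm.cdf((clean[b, i] - thr) / scale[b, i])
    load = P.sum(axis=0)                  # smooth estimate of examples per expert
    cv = np.std(load) / np.mean(load)
    return w_load * cv ** 2
\end{verbatim}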
It would seem simpler to let $Load_H(X)_{i,j} = Load_i(X_i)_j$, but this would not have a gradient with respect to the primary gating network, so we use the $Load_H$ formulation above. \subsection{1 Billion Word Language Modeling Benchmark - Experimental Details}\label{sec:appendixlm1b} \subsubsection{8-Million-Operations-per-Timestep Models} \paragraph{Model Architecture:} Our model consists of five layers: a word embedding layer, a recurrent Long Short-Term Memory (LSTM) layer \citep{Hochreiter:1997:LSM,Gers:2000:LFC}, a MoE layer, a second LSTM layer, and a softmax layer. The dimensionality of the embedding layer, the number of units in each LSTM layer, and the input and output dimensionality of the MoE layer are all equal to 512. For every layer other than the softmax, we apply dropout \citep{ZarembaSV14} to the layer output, dropping each activation with probability $DropProb$, otherwise dividing by $(1-DropProb)$. After dropout, the output of the previous layer is added to the layer output. This residual connection encourages gradient flow \citep{HeZRS:2015:DRL}. \paragraph{MoE Layer Architecture:} Each expert in the MoE layer is a feed forward network with one ReLU-activated hidden layer of size 1024 and an output layer of size 512. Thus, each expert contains $[512 * 1024] + [1024 * 512] = 1M$ parameters. The output of the MoE layer is passed through a sigmoid function before dropout. We varied the number of experts between models, using ordinary MoE layers with 4, 32 and 256 experts and hierarchical MoE layers with 256, 1024 and 4096 experts. We call the resulting models MoE-4, MoE-32, MoE-256, MoE-256-h, MoE-1024-h and MoE-4096-h. For the hierarchical MoE layers, the first level branching factor was 16, corresponding to the number of GPUs in our cluster. We use Noisy-Top-K Gating (see Section \ref{sec:noisytopk}) with $k=4$ for the ordinary MoE layers and $k=2$ at each level of the hierarchical MoE layers. Thus, each example is processed by exactly 4 experts for a total of 4M ops/timestep. The two LSTM layers contribute 2M ops/timestep each for the desired total of 8M. \paragraph{Computationally-Matched Baselines:} The MoE-4 model does not employ sparsity, since all 4 experts are always used. In addition, we trained four more computationally-matched baseline models with no sparsity: \begin{itemize} \item MoE-1-Wide: The MoE layer consists of a single "expert" containing one ReLU-activated hidden layer of size 4096. \item MoE-1-Deep: The MoE layer consists of a single "expert" containing four ReLU-activated hidden layers, each with size $1024$. \item 4xLSTM-512: We replace the MoE layer with two additional 512-unit LSTM layers. \item LSTM-2048-512: The model contains one 2048-unit LSTM layer (and no MoE). The output of the LSTM is projected down to 512 dimensions \citep{sak2014long}. The next timestep of the LSTM receives the projected output. This is identical to one of the models published in \citep{RafalNoam16}. We re-ran it to account for differences in training regimen, and obtained results very similar to the published ones. \end{itemize} \paragraph{Training:} The models were trained on a cluster of 16 K40 GPUs using the synchronous method described in Section~\ref{sec:performance}. Each batch consisted of a set of sentences totaling roughly 300,000 words. In the interest of time, we limited training to 10 epochs (27,000 steps). Training took 12-16 hours for all models, except for MoE-4, which took 18 hours (since all the expert computation was performed on only 4 of 16 GPUs).
We used the Adam optimizer~\citep{kingma2014adam}. The base learning rate was increased linearly for the first 1000 training steps, and decreased after that so as to be proportional to the inverse square root of the step number. The Softmax output layer was trained efficiently using importance sampling similarly to the models in \citep{RafalNoam16}. For each model, we performed a hyper-parameter search to find the best dropout probability, in increments of 0.1. To ensure balanced expert utilization we set $w_{importance}=0.1$ and $w_{load}=0.1$, as described in Section \ref{sec:losses} and Appendix \ref{sec:load}. \paragraph{Results:} We evaluate our model using perplexity on the holdout dataset, used by~\citep{chelba2013one,RafalNoam16}. We follow the standard procedure and sum over all the words including the end of sentence symbol. Results are reported in Table~\ref{tab:lm1bresults}. For each model, we report the test perplexity, the computational budget, the parameter counts, the value of $DropProb$, and the computational efficiency. \subsubsection{More Expensive Models}\label{sec:expensive} We ran two additional models (MoE-34M and MoE-143M) to investigate the effects of adding more computation in the presence of a large MoE layer. These models have computation budgets of 34M and 143M ops/timestep. Similar to the models above, these models use a MoE layer between two LSTM layers. The dimensionality of the embedding layer, and the input and output dimensionality of the MoE layer are set to 1024 instead of 512. For MoE-34M, the LSTM layers have 1024 units. For MoE-143M, the LSTM layers have 4096 units and an output projection of size 1024 \citep{sak2014long}. MoE-34M uses a hierarchical MoE layer with 1024 experts, each with a hidden layer of size 2048. MoE-143M uses a hierarchical MoE layer with 256 experts, each with a hidden layer of size 8192. Both models have 4B parameters in the MoE layers. We searched for the best $DropProb$ for each model, and trained each model for 10 epochs. The two models achieved test perplexity of $31.3$ and $28.0$ respectively, showing that even in the presence of a large MoE, more computation is still useful. Results are reported at the bottom of Table~\ref{tab:lm1bresults}. The larger of the two models has a similar computational budget to the best published model from the literature, and training times are similar. Comparing after 10 epochs, our model has a lower test perplexity by $18\%$. \subsection{100 Billion Word Google News Corpus - Experimental Details}\label{sec:appendixgn11} \paragraph{Model Architecture:} The models are similar in structure to the 8-million-operations-per-timestep models described in the previous section. We vary the number of experts between models, using an ordinary MoE layer with 32 experts and hierarchical MoE layers with 256, 1024, 4096, 16384, 65536 and 131072 experts. For the hierarchical MoE layers, the first level branching factors are 32, 32, 64, 128, 256 and 256, respectively. \paragraph{Training:} Models are trained on a cluster of 32 Tesla K40 GPUs, except for the last two models, which are trained on clusters of 64 and 128 GPUs so as to have enough memory for all the parameters. For all models, training batch sizes are approximately 2.5 million words. Models are trained once-through over about 100 billion words. We implement several memory optimizations in order to fit up to 1 billion parameters per GPU.
First, we do not store the activations of the hidden layers of the experts, but instead recompute them on the backwards pass. Secondly, we modify the optimizer on the expert parameters to require less auxiliary storage: The Adam optimizer \citep{kingma2014adam} keeps first and second moment estimates of the per-parameter gradients. This triples the required memory. To avoid keeping a first-moment estimator, we set $\beta_1=0$. To reduce the size of the second moment estimator, we replace it with a factored approximation. For a matrix of parameters, instead of maintaining a full matrix of second-moment estimators, we maintain vectors of row-wise and column-wise averages of that matrix. At each step, the matrix of estimators is taken to be the outer product of those two vectors divided by the mean of either one. This technique could similarly be applied to Adagrad \citep{duchi10}. \paragraph{Results:} We evaluate our model using perplexity on a holdout dataset. Results are reported in Table~\ref{tab:gn11results}. Perplexity after 100 billion training words is 39\% lower for the 68-billion-parameter MoE model than for the baseline model. It is notable that the measured computational efficiency of the largest model (0.30 TFLOPS/GPU) is very low compared to the other models. This is likely a result of the fact that, for purposes of comparison to the other models, we did not increase the training batch size proportionally to the number of GPUs. For comparison, we include results for a computationally matched baseline model consisting of 4 LSTMs, and for an unpruned 5-gram model with Kneser-Ney smoothing \citep{KneserNey95}.\footnote{While the original size of the corpus was 130 billion words, the neural models were trained for a maximum of 100 billion words. The reported Kneser-Ney 5-gram models were trained over 13 billion and 130 billion words respectively, giving them a slight advantage over the other reported results.} \subsection{Machine Translation - Experimental Details} \label{sec:appendixmt} \paragraph{Model Architecture for Single Language Pair MoE Models:} Our model is a modified version of the GNMT model described in~\citep{GNMT}. To reduce computation, we decrease the number of LSTM layers in the encoder and decoder from 9 and 8 to 3 and 2 respectively. We insert MoE layers in both the encoder (between layers 2 and 3) and the decoder (between layers 1 and 2). We use an attention mechanism between the encoder and decoder, with the first decoder LSTM receiving output from and providing input for the attention \footnote{For performance reasons, we use a slightly different attention function from the one described in~\citep{GNMT} - See Appendix \ref{sec:attention}}. All of the layers in our model have input and output dimensionality of 512. Our LSTM layers have 2048 hidden units, with a 512-dimensional output projection. We add residual connections around all LSTM and MoE layers to encourage gradient flow \citep{HeZRS:2015:DRL}. Similar to GNMT, to effectively deal with rare words, we used sub-word units (also known as ``wordpieces") \citep{Schuster:2012:JKVS} for inputs and outputs in our system. We use a shared source and target vocabulary of 32K wordpieces. We also used the same beam search technique as proposed in~\citep{GNMT}. We train models with different numbers of experts in the MoE layers. In addition to a baseline model with no MoE layers, we train models with flat MoE layers containing 32 experts, and models with hierarchical MoE layers containing 512 and 2048 experts. 
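The factored second-moment trick described above can be sketched in a few lines; this is only an illustration of the row/column-average idea, not the optimizer code used for these models, and the decay constant beta2 is a placeholder.
\begin{verbatim}
import numpy as np

def factored_second_moment(row_avg, col_avg, grad, beta2=0.999):
    """Keep row- and column-wise running averages of squared gradients for a
    [rows, cols] parameter matrix instead of a full second-moment matrix."""
    sq = grad ** 2
    row_avg = beta2 * row_avg + (1 - beta2) * sq.mean(axis=1)  # one value per row
    col_avg = beta2 * col_avg + (1 - beta2) * sq.mean(axis=0)  # one value per column
    # Reconstruct the full-size estimator as outer(row, col) / mean, as described.
    v_hat = np.outer(row_avg, col_avg) / (row_avg.mean() + 1e-30)
    return row_avg, col_avg, v_hat

rng = np.random.default_rng(0)
rows, cols = 4, 3
row_avg, col_avg = np.zeros(rows), np.zeros(cols)
for _ in range(10):
    g = rng.standard_normal((rows, cols))
    row_avg, col_avg, v_hat = factored_second_moment(row_avg, col_avg, g)
print(v_hat.shape)   # (4, 3), reconstructed from rows + cols numbers of state
\end{verbatim}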
The flat MoE layers use $k=4$ and the hierarchical MoE models use $k=2$ at each level of the gating network. Thus, each input is processed by exactly 4 experts in each MoE layer. Each expert in the MoE layer is a feed forward network with one hidden layer of size 2048 and ReLU activation. Thus, each expert contains $[512 * 2048] + [2048 * 512] = 2M$ parameters. The output of the MoE layer is passed through a sigmoid function. We use the strictly-balanced gating function described in Appendix \ref{sec:batchwisemask}. \paragraph{Model Architecture for Multilingual MoE Model:} We used the same model architecture as for the single-language-pair models, with the following exceptions: We used noisy-top-k gating as described in Section \ref{sec:noisytopk}, not the scheme from Appendix \ref{sec:batchwisemask}. The MoE layers in the encoder and decoder are non-hierarchical MoEs with $n=512$ experts, and $k=2$. Each expert has a larger hidden layer of size $8192$. This doubles the amount of computation in the MoE layers, raising the computational budget of the entire model from 85M to 102M ops/timestep. \paragraph{Training:} We trained our networks using the Adam optimizer~\citep{kingma2014adam}. The base learning rate was increased linearly for the first 2000 training steps, held constant for an additional 8000 steps, and decreased after that so as to be proportional to the inverse square root of the step number. For the single-language-pair models, similarly to \citep{GNMT}, we applied dropout \citep{ZarembaSV14} to the output of all embedding, LSTM and MoE layers, using $DropProb=0.4$. Training was done synchronously on a cluster of up to 64 GPUs as described in section \ref{sec:performance}. Each training batch consisted of a set of sentence pairs containing roughly 16000 words per GPU. To ensure balanced expert utilization we set $w_{importance}=0.01$ and $w_{load}=0.01$, as described in Section \ref{sec:losses} and Appendix \ref{sec:load}. \paragraph{Metrics:} We evaluated our models using the perplexity and the standard BLEU score metric. We reported tokenized BLEU score as computed by the multi-bleu.pl script, downloaded from the public implementation of Moses (on Github), which was also used in \citep{LuongPM:2015:EAANMT}. \paragraph{Results:} Tables \ref{tab:wmtenfr}, \ref{tab:wmtende} and \ref{tab:prodmt} in Section \ref{sec:mt} show comparisons of our results to other published methods. Figure~\ref{fig:mt} shows test perplexity as a function of number of words in the (training data's) source sentences processed for models with different numbers of experts. As can be seen from the Figure, as we increased the number of experts to approach 2048, the test perplexity of our model continued to improve. We found that the experts indeed become highly specialized by syntax and/or semantics, as can be seen in Table~\ref{tab:experts}. For example, one expert is used when the indefinite article ``a" introduces the direct object in a verb phrase indicating importance or leadership. \subsection{Strictly Balanced Gating}\label{sec:batchwisemask} Due to some peculiarities in our infrastructure which have since been fixed, at the time we ran some of the machine translation experiments, our models ran faster if every expert received exactly the same batch size. To accommodate this, we used a different gating function which we describe below. 
Recall that we define the softmax gating function to be: \begin{equation}\label{eq:softmax} G_\sigma(x) = Softmax(x \cdot W_g) \end{equation} \paragraph{Sparse Gating (alternate formulation):} To obtain a sparse gating vector, we multiply $G_\sigma(x)$ component-wise with a sparse mask $M(G_\sigma(x))$ and normalize the output. The mask itself is a function of $G_\sigma(x)$ and specifies which experts are assigned to each input example: \begin{equation}\label{eq:g_top_k} G(x)_i = \frac{G_\sigma(x)_i M(G_\sigma(x))_i}{\sum_{j=1}^{n} G_\sigma(x)_j M(G_\sigma(x))_j } \end{equation} \paragraph{Top-K Mask:} To implement top-k gating in this formulation, we would let $M(v) = TopK(v, k)$, where: \begin{equation}\label{eq:top_k} TopK(v, k)_i = \begin{cases} 1 & \text{if $v_i$ is in the top $k$ elements of $v$.} \\ 0 & \text{otherwise.} \end{cases} \end{equation} \paragraph{Batchwise Mask:} To force each expert to receive the exact same number of examples, we introduce an alternative mask function, $M_{batchwise}(X, m)$, which operates over batches of input vectors. Instead of keeping the top $k$ values per example, we keep the top $m$ values per expert across the training batch, where $m=\frac{k|X|}{n}$, so that each example is sent to an average of $k$ experts. \begin{equation}\label{eq:batchwisetop_k} M_{batchwise}(X, m)_{j,i} = \begin{cases} 1 & \text{if $X_{j,i}$ is in the top $m$ values for to expert $i$} \\ 0 & \text{otherwise} \end{cases} \end{equation} As our experiments suggest and also observed in ~\citep{DBLP:journals/corr/IoffeS15}, using a batchwise function during training (such as $M_{batchwise}$) requires modifications to the inference when we may not have a large batch of examples. Our solution to this is to train a vector $T$ of per-expert threshold values to approximate the effects of the batchwise mask. We use the following mask at inference time: \begin{equation}\label{eq:threshold} M_{threshold}(x, T)_i = \begin{cases} 1 & \text{if $x_i > T_i$} \\ 0 & \text{otherwise} \end{cases} \end{equation} To learn the threshold values, we apply an additional loss at training time which is minimized when the batchwise mask and the threshold mask are identical. \begin{equation}\label{eq:thresholdloss} L_{batchwise}(X, T, m) = \sum_{j = 1}^{|X|} \sum_{i=1}^n (M_{threshold}(x, T)_i - M_{batchwise}(X, m)_{j,i}) (X_{j, i} - T_i) \end{equation} \subsection{Attention Function}\label{sec:attention} The attention mechanism described in GNMT ~\citep{GNMT} involves a learned ``Attention Function" $A(x_i,y_j)$ which takes a ``source vector" $x_i$ and a ``target vector" $y_j$, and must be computed for every source time step $i$ and target time step $j$. In GNMT, the attention function is implemented as a feed forward neural network with a hidden layer of size $n$. It can be expressed as: \begin{equation}\label{eq:gnmtattention} A_{GNMT}(x_i, y_j) = \sum_{d=1}^{n}V_d tanh((x_iU)_d + (y_jW)_d) \end{equation} Where $U$ and $W$ are trainable weight matrices and $V$ is a trainable weight vector. For performance reasons, in our models, we used a slightly different attention function: \begin{equation}\label{eq:ourattention} A(x_i, y_j) = \sum_{d=1}^{n}V_d tanh((x_iU)_d) tanh((y_jW)_d) \end{equation} With our attention function, we can simultaneously compute the attention function on multiple source time steps and multiple target time steps using optimized matrix multiplications. We found little difference in quality between the two functions. \end{document}
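A minimal NumPy sketch of the batchwise mask defined above: for a batch of gate scores $X$ of shape [batch, n], keep the top $m$ entries per expert (per column) rather than the top $k$ per example. This is illustrative only and assumes $k\,|X|$ is divisible by $n$.
\begin{verbatim}
import numpy as np

def batchwise_mask(X, k):
    """X: [batch, n_experts] gate scores. Keep the top m = k*batch/n scores in
    each column, so every expert receives exactly m examples."""
    batch, n = X.shape
    m = (k * batch) // n                      # assumes k*batch divisible by n
    mask = np.zeros_like(X)
    top_rows = np.argsort(-X, axis=0)[:m, :]  # row indices of the m largest per column
    mask[top_rows, np.arange(n)] = 1.0
    return mask

rng = np.random.default_rng(0)
X = rng.standard_normal((8, 4))               # 8 examples, 4 experts, k=2 -> m=4
M = batchwise_mask(X, k=2)
print(M.sum(axis=0))                          # each expert gets exactly 4 examples
print(M.sum(axis=1))                          # examples get k=2 experts only on average
\end{verbatim}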
Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer
1701.06538
Table 6: Experiments with different combinations of losses.
[ "[ITALIC] wimportance", "[ITALIC] wload", "Test Perplexity", "[ITALIC] CV( [ITALIC] Importance( [ITALIC] X))", "[ITALIC] CV( [ITALIC] Load( [ITALIC] X))", "[ITALIC] max( [ITALIC] Load( [ITALIC] X)) [ITALIC] mean( [ITALIC] Load( [ITALIC] X))" ]
[ [ "0.0", "0.0", "39.8", "3.04", "3.01", "17.80" ], [ "0.2", "0.0", "[BOLD] 35.6", "0.06", "0.17", "1.47" ], [ "0.0", "0.2", "35.7", "0.22", "0.04", "1.15" ], [ "0.1", "0.1", "[BOLD] 35.6", "0.06", "0.05", "1.14" ], [ "0.01", "0.01", "35.7", "0.48", "0.11", "1.37" ], [ "1.0", "1.0", "35.7", "0.03", "0.02", "[BOLD] 1.07" ] ]
All the combinations containing at least one of the two losses led to very similar model quality, whereas having no loss was much worse. Models with higher values of wload had lower loads on the most overloaded expert.
\documentclass{article} % For LaTeX2e \pdfoutput=1 \usepackage[T1]{fontenc} \usepackage[utf8]{inputenc} \newcommand{\gate}{BalancedSparseThreshold} \setlength\abovecaptionskip{0pt} \setlength{\textfloatsep}{8pt plus 1.0pt minus 2.0pt} \newcommand\revisions[1]{\textcolor{red}{#1}} \title{\Large Outrageously Large Neural Networks: \\ The Sparsely-Gated Mixture-of-Experts Layer} \author[1]{Noam Shazeer} \author[1]{Azalia Mirhoseini\thanks{Equally major contributors} \thanks{Work done as a member of the Google Brain Residency program (g.co/brainresidency)}~} \author[2]{Krzysztof Maziarz$^*$} \author[1]{Andy Davis} \author[1]{Quoc Le} \author[1]{Geoffrey Hinton} \author[1]{Jeff Dean} \affil[1]{Google Brain, \{noam,azalia,andydavis,qvl,geoffhinton,jeff\}@google.com} \affil[2]{Jagiellonian University, Cracow, krzysztof.maziarz@student.uj.edu.pl} \renewcommand\Authands{ and } \newcommand{\fix}{\marginpar{FIX}} \newcommand{\new}{\marginpar{NEW}} \begin{document} \maketitle \begin{abstract} The capacity of a neural network to absorb information is limited by its number of parameters. Conditional computation, where parts of the network are active on a per-example basis, has been proposed in theory as a way of dramatically increasing model capacity without a proportional increase in computation. In practice, however, there are significant algorithmic and performance challenges. In this work, we address these challenges and finally realize the promise of conditional computation, achieving greater than 1000x improvements in model capacity with only minor losses in computational efficiency on modern GPU clusters. We introduce a Sparsely-Gated Mixture-of-Experts layer (MoE), consisting of up to thousands of feed-forward sub-networks. A trainable gating network determines a sparse combination of these experts to use for each example. We apply the MoE to the tasks of language modeling and machine translation, where model capacity is critical for absorbing the vast quantities of knowledge available in the training corpora. We present model architectures in which a MoE with up to 137 billion parameters is applied convolutionally between stacked LSTM layers. On large language modeling and machine translation benchmarks, these models achieve significantly better results than state-of-the-art at lower computational cost. \end{abstract} \section{Introduction and Related Work} \subsection{Conditional Computation} Exploiting scale in both training data and model size has been central to the success of deep learning. When datasets are sufficiently large, increasing the capacity (number of parameters) of neural networks can give much better prediction accuracy. This has been shown in domains such as text \citep{sutskever2014sequence,bahdanau2014neural,RafalNoam16,GNMT}, images \citep{Imagenet,qvl2012building}, and audio \citep{hinton2012deep,DeepSpeech2}. For typical deep learning models, where the entire model is activated for every example, this leads to a roughly quadratic blow-up in training costs, as both the model size and the number of training examples increase. Unfortunately, the advances in computing power and distributed computation fall short of meeting such demand. Various forms of conditional computation have been proposed as a way to increase model capacity without a proportional increase in computational costs \citep{Davis13:CondComp, Bengio13:CondComp, eigen2013learning, Denoyer14:CondComp, Cho14, Bengio15:CondComp, Almahairi15}. 
In these schemes, large parts of a network are active or inactive on a per-example basis. The gating decisions may be binary or sparse and continuous, stochastic or deterministic. Various forms of reinforcement learning and back-propagation are proposed for trarining the gating decisions. While these ideas are promising in theory, no work to date has yet demonstrated massive improvements in model capacity, training time, or model quality. We blame this on a combination of the following challenges: \begin{itemize} \item Modern computing devices, especially GPUs, are much faster at arithmetic than at branching. Most of the works above recognize this and propose turning on/off large chunks of the network with each gating decision. \item Large batch sizes are critical for performance, as they amortize the costs of parameter transfers and updates. Conditional computation reduces the batch sizes for the conditionally active chunks of the network. \item Network bandwidth can be a bottleneck. A cluster of GPUs may have computational power thousands of times greater than the aggregate inter-device network bandwidth. To be computationally efficient, the relative computational versus network demands of an algorithm must exceed this ratio. Embedding layers, which can be seen as a form of conditional computation, are handicapped by this very problem. Since the embeddings generally need to be sent across the network, the number of (example, parameter) interactions is limited by network bandwidth instead of computational capacity. \item Depending on the scheme, loss terms may be necessary to achieve the desired level of sparsity per-chunk and/or per example. \cite{Bengio15:CondComp} use three such terms. These issues can affect both model quality and load-balancing. \item Model capacity is most critical for very large data sets. The existing literature on conditional computation deals with relatively small image recognition data sets consisting of up to 600,000 images. It is hard to imagine that the labels of these images provide a sufficient signal to adequately train a model with millions, let alone billions of parameters. \end{itemize} In this work, we for the first time address all of the above challenges and finally realize the promise of conditional computation. We obtain greater than 1000x improvements in model capacity with only minor losses in computational efficiency and significantly advance the state-of-the-art results on public language modeling and translation data sets. \subsection{Our Approach: The Sparsely-Gated Mixture-of-Experts Layer} Our approach to conditional computation is to introduce a new type of general purpose neural network component: a Sparsely-Gated Mixture-of-Experts Layer (MoE). The MoE consists of a number of experts, each a simple feed-forward neural network, and a trainable gating network which selects a sparse combination of the experts to process each input (see Figure \ref{fig:moe}). All parts of the network are trained jointly by back-propagation. While the introduced technique is generic, in this paper we focus on language modeling and machine translation tasks, which are known to benefit from very large models. In particular, we apply a MoE convolutionally between stacked LSTM layers \citep{Hochreiter:1997:LSM}, as in Figure \ref{fig:moe}. The MoE is called once for each position in the text, selecting a potentially different combination of experts at each position. 
The different experts tend to become highly specialized based on syntax and semantics (see Appendix \ref{sec:appendixmt} Table \ref{tab:experts}). On both language modeling and machine translation benchmarks, we improve on best published results at a fraction of the computational cost. \subsection{Related work on Mixtures of Experts} Since its introduction more than two decades ago \citep{Jacobs91Adaptive,Jordan1994HME}, the mixture-of-experts approach has been the subject of much research. Different types of expert architectures hae been proposed such as SVMs \citep{Collobert02PMS}, Gaussian Processes \citep{Tresp2001Mixture,Theis2015Generative,Deisenroth15Distributed}, Dirichlet Processes \citep{Shahbaba09NMU}, and deep networks. Other work has focused on different expert configurations such as a hierarchical structure \citep{Yao09Hierarchical}, infinite numbers of experts \citep{Rasmussen02Infinite}, and adding experts sequentially \citep{Aljundi16}. \cite{Garmash2016ensemble} suggest an ensemble model in the format of mixture of experts for machine translation. The gating network is trained on a pre-trained ensemble NMT model. The works above concern top-level mixtures of experts. The mixture of experts is the whole model. \cite{eigen2013learning} introduce the idea of using multiple MoEs with their own gating networks as parts of a deep model. It is intuitive that the latter approach is more powerful, since complex problems may contain many sub-problems each requiring different experts. They also allude in their conclusion to the potential to introduce sparsity, turning MoEs into a vehicle for computational computation. Our work builds on this use of MoEs as a general purpose neural network component. While \cite{eigen2013learning} uses two stacked MoEs allowing for two sets of gating decisions, our convolutional application of the MoE allows for different gating decisions at each position in the text. We also realize sparse gating and demonstrate its use as a practical way to massively increase model capacity. \section{The Structure of the Mixture-of-Experts layer}\label{sec:gating} The Mixture-of-Experts (MoE) layer consists of a set of $n$ ``expert networks" $E_1, \cdots, E_n$, and a ``gating network" $G$ whose output is a sparse $n$-dimensional vector. Figure \ref{fig:moe} shows an overview of the MoE module. The experts are themselves neural networks, each with their own parameters. Although in principle we only require that the experts accept the same sized inputs and produce the same-sized outputs, in our initial investigations in this paper, we restrict ourselves to the case where the models are feed-forward networks with identical architectures, but with separate parameters. Let us denote by $G(x)$ and $E_i(x)$ the output of the gating network and the output of the $i$-th expert network for a given input $x$. The output $y$ of the MoE module can be written as follows: \begin{equation} y = \sum_{i=1}^{n}G(x)_iE_i(x) \end{equation} We save computation based on the sparsity of the output of $G(x)$. Wherever $G(x)_i=0$, we need not compute $E_i(x)$. In our experiments, we have up to thousands of experts, but only need to evaluate a handful of them for every example. If the number of experts is very large, we can reduce the branching factor by using a two-level hierarchical MoE. In a hierarchical MoE, a primary gating network chooses a sparse weighted combination of ``experts", each of which is itself a secondary mixture-of-experts with its own gating network. 
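To illustrate the sparse combination $y=\sum_i G(x)_i E_i(x)$ above, here is a schematic NumPy forward pass that evaluates only the experts with nonzero gate values; the toy linear experts and hand-set gate vector are stand-ins, not the networks used in the paper.
\begin{verbatim}
import numpy as np

def moe_forward(x, gates, experts):
    """x: [d]; gates: [n] sparse gate values; experts: list of callables.
    Experts with a zero gate are skipped entirely, which is where the
    computational savings come from."""
    y = 0.0
    for i, g in enumerate(gates):
        if g != 0.0:
            y = y + g * experts[i](x)
    return y

rng = np.random.default_rng(0)
d, n = 4, 8
weights = [rng.standard_normal((d, d)) for _ in range(n)]
experts = [lambda x, W=W: x @ W for W in weights]   # toy experts: linear maps

gates = np.zeros(n)
gates[[2, 5]] = [0.7, 0.3]     # a sparse gate vector with two active experts
x = rng.standard_normal(d)
print(moe_forward(x, gates, experts))
\end{verbatim}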
In the following we focus on ordinary MoEs. We provide more details on hierarchical MoEs in Appendix \ref{sec:hierarchical}. Our implementation is related to other models of conditional computation. A MoE whose experts are simple weight matrices is similar to the parameterized weight matrix proposed in \citep{Cho14}. A MoE whose experts have one hidden layer is similar to the block-wise dropout described in \citep{Bengio15:CondComp}, where the dropped-out layer is sandwiched between fully-activated layers. \subsection{Gating Network} \paragraph{Softmax Gating:} A simple choice of non-sparse gating function \citep{Jordan1994HME} is to multiply the input by a trainable weight matrix $W_g$ and then apply the $Softmax$ function. \begin{equation}\label{eq:softmax} G_\sigma(x) = Softmax(x \cdot W_g) \end{equation} \paragraph{Noisy Top-K Gating:}\label{sec:noisytopk} We add two components to the Softmax gating network: sparsity and noise. Before taking the softmax function, we add tunable Gaussian noise, then keep only the top k values, setting the rest to $-\infty$ (which causes the corresponding gate values to equal $0$). The sparsity serves to save computation, as described above. While this form of sparsity creates some theoretically scary discontinuities in the output of gating function, we have not yet observed this to be a problem in practice. The noise term helps with load balancing, as will be discussed in Appendix \ref{sec:load}. The amount of noise per component is controlled by a second trainable weight matrix $W_{noise}$. \begin{equation}\label{eq:g} G(x) = Softmax(KeepTopK(H(x), k)) \end{equation} \begin{equation}\label{eq:noise} H(x)_i = (x \cdot W_g)_i + StandardNormal() \cdot Softplus((x \cdot W_{noise})_i) \end{equation} \begin{equation}\label{eq:keeptopk} KeepTopK(v, k)_i = \begin{cases} v_i & \text{if $v_i$ is in the top $k$ elements of $v$.} \\ -\infty & \text{otherwise.} \end{cases} \end{equation} \paragraph{Training the Gating Network} We train the gating network by simple back-propagation, along with the rest of the model. If we choose $k>1$, the gate values for the top k experts have nonzero derivatives with respect to the weights of the gating network. This type of occasionally-sensitive behavior is described in \citep{Bengio13:CondComp} with respect to noisy rectifiers. Gradients also back-propagate through the gating network to its inputs. Our method differs here from \citep{Bengio15:CondComp} who use boolean gates and a REINFORCE-style approach to train the gating network. \section{Addressing Performance Challenges} \label{sec:performance} \subsection{The Shrinking Batch Problem} On modern CPUs and GPUs, large batch sizes are necessary for computational efficiency, so as to amortize the overhead of parameter loads and updates. If the gating network chooses $k$ out of $n$ experts for each example, then for a batch of $b$ examples, each expert receives a much smaller batch of approximately $\frac{kb}{n}\ll b$ examples. This causes a naive MoE implementation to become very inefficient as the number of experts increases. The solution to this shrinking batch problem is to make the original batch size as large as possible. However, batch size tends to be limited by the memory necessary to store activations between the forwards and backwards passes. 
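Before the batch-size techniques that follow, the noisy top-k gating equations above can be sketched directly; the randomly initialised $W_g$ and $W_{noise}$ below are placeholders for the trained gating parameters.
\begin{verbatim}
import numpy as np

def softplus(z):
    return np.log1p(np.exp(-np.abs(z))) + np.maximum(z, 0.0)   # numerically stable

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def noisy_top_k_gate(x, W_g, W_noise, k, rng):
    """H(x), KeepTopK and the final softmax from the equations above."""
    h = x @ W_g + rng.standard_normal(W_g.shape[1]) * softplus(x @ W_noise)
    kth = np.sort(h)[-k]
    h = np.where(h >= kth, h, -np.inf)       # KeepTopK: mask everything else
    return softmax(h)

rng = np.random.default_rng(0)
d, n, k = 16, 8, 2
W_g = rng.standard_normal((d, n)) * 0.1
W_noise = rng.standard_normal((d, n)) * 0.1
g = noisy_top_k_gate(rng.standard_normal(d), W_g, W_noise, k, rng)
print(np.count_nonzero(g), g.sum())          # k nonzero gates that sum to 1
\end{verbatim}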
We propose the following techniques for increasing the batch size: \paragraph{Mixing Data Parallelism and Model Parallelism:} In a conventional distributed training setting, multiple copies of the model on different devices asynchronously process distinct batches of data, and parameters are synchronized through a set of parameter servers. In our technique, these different batches run synchronously so that they can be combined for the MoE layer. We distribute the standard layers of the model and the gating network according to conventional data-parallel schemes, but keep only one shared copy of each expert. Each expert in the MoE layer receives a combined batch consisting of the relevant examples from all of the data-parallel input batches. The same set of devices function as data-parallel replicas (for the standard layers and the gating networks) and as model-parallel shards (each hosting a subset of the experts). If the model is distributed over $d$ devices, and each device processes a batch of size $b$, each expert receives a batch of approximately $\frac{kbd}{n}$ examples. Thus, we achieve a factor of $d$ improvement in expert batch size. In the case of a hierarchical MoE (Section \ref{sec:hierarchical}), the primary gating network employs data parallelism, and the secondary MoEs employ model parallelism. Each secondary MoE resides on one device. This technique allows us to increase the number of experts (and hence the number of parameters) by proportionally increasing the number of devices in the training cluster. The total batch size increases, keeping the batch size per expert constant. The memory and bandwidth requirements per device also remain constant, as do the step times, as does the amount of time necessary to process a number of training examples equal to the number of parameters in the model. It is our goal to train a trillion-parameter model on a trillion-word corpus. We have not scaled our systems this far as of the writing of this paper, but it should be possible by adding more hardware. \paragraph{Taking Advantage of Convolutionality:} In our language models, we apply the same MoE to each time step of the previous layer. If we wait for the previous layer to finish, we can apply the MoE to all the time steps together as one big batch. Doing so increases the size of the input batch to the MoE layer by a factor of the number of unrolled time steps. \paragraph{Increasing Batch Size for a Recurrent MoE:} We suspect that even more powerful models may involve applying a MoE recurrently. For example, the weight matrices of a LSTM or other RNN could be replaced by a MoE. Sadly, such models break the convolutional trick from the last paragraph, since the input to the MoE at one timestep depends on the output of the MoE at the previous timestep. \cite{Gruslys16} describe a technique for drastically reducing the number of stored activations in an unrolled RNN, at the cost of recomputing forward activations. This would allow for a large increase in batch size. \subsection{Network Bandwidth} Another major performance concern in distributed computing is network bandwidth. Since the experts are stationary (see above) and the number of gating parameters is small, most of the communication involves sending the inputs and outputs of the experts across the network. To maintain computational efficiency, the ratio of an expert's computation to the size of its input and output must exceed the ratio of computational to network capacity of the computing device. 
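Before the GPU-specific numbers that follow, the expert-batch arithmetic from the parallelism scheme above reduces to the short function below; the values plugged in are illustrative, not a configuration from the paper.
\begin{verbatim}
def expert_batch_size(b, d, k, n):
    """Approximate examples per expert when d data-parallel devices each process
    a batch of b examples and the gate sends each example to k of n experts."""
    return k * b * d / n

# Illustrative values: 1024 examples/device, 16 devices, k=4, 256 experts.
print(expert_batch_size(b=1024, d=16, k=4, n=256))   # 256.0 examples per expert
\end{verbatim}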
For GPUs, this may be thousands to one. In our experiments, we use experts with one hidden layer containing thousands of RELU-activated units. Since the weight matrices in the expert have sizes $input$\_${size} \times hidden$\_${size}$ and $hidden$\_${size} \times output$\_${size}$, the ratio of computation to input and output is equal to the size of the hidden layer. Conveniently, we can increase computational efficiency simply by using a larger hidden layer, or more hidden layers. \section{Balancing Expert Utilization} \label{sec:losses} We have observed that the gating network tends to converge to a state where it always produces large weights for the same few experts. This imbalance is self-reinforcing, as the favored experts are trained more rapidly and thus are selected even more by the gating network. \cite{eigen2013learning} describe the same phenomenon, and use a hard constraint at the beginning of training to avoid this local minimum. \cite{Bengio15:CondComp} include a soft constraint on the batch-wise average of each gate.\footnote{\cite{Bengio15:CondComp} also include two additional losses. One controls per-example sparsity, which we do not need since it is enforced by the fixed value of $k$. A third loss encourages diversity of gate values. In our experiments, we find that the gate values naturally diversify as the experts specialize (in a virtuous cycle), and we do not need to enforce diversity of gate values.} We take a soft constraint approach. We define the importance of an expert relative to a batch of training examples to be the batchwise sum of the gate values for that expert. We define an additional loss $L_{importance}$, which is added to the overall loss function for the model. This loss is equal to the square of the coefficient of variation of the set of importance values, multiplied by a hand-tuned scaling factor $w_{importance}$. This additional loss encourages all experts to have equal importance. \begin{equation}\label{eq:gateloss} Importance(X) = \sum_{x \in X}G(x) \end{equation} \begin{equation}\label{eq:importanceloss} L_{importance}(X) = w_{importance} \cdot CV(Importance(X))^2 \end{equation} While this loss function can ensure equal importance, experts may still receive very different numbers of examples. For example, one expert may receive a few examples with large weights, and another may receive many examples with small weights. This can cause memory and performance problems on distributed hardware. To solve this problem, we introduce a second loss function, $L_{load}$ , which ensures balanced loads. Appendix \ref{sec:load} contains the definition of this function, along with experimental results. \section{Experiments} \subsection{1 Billion Word Language Modeling Benchmark}\label{sec:lm} \paragraph{Dataset:} This dataset, introduced by \citep{chelba2013one} consists of shuffled unique sentences from news articles, totaling approximately 829 million words, with a vocabulary of 793,471 words. \paragraph{Previous State-of-the-Art:} The best previously published results \citep{RafalNoam16} use models consisting of one or more stacked Long Short-Term Memory (LSTM) layers \citep{Hochreiter:1997:LSM,Gers:2000:LFC}. The number of parameters in the LSTM layers of these models vary from 2 million to 151 million. Quality increases greatly with parameter count, as do computational costs. Results for these models form the top line of Figure \ref{fig:lm1b}-right. 
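For reference, the importance loss defined above (the squared coefficient of variation of the per-expert batchwise gate sums) is short enough to write out; the gate matrices below are synthetic and only illustrate the balanced versus unbalanced extremes.
\begin{verbatim}
import numpy as np

def cv_squared(v, eps=1e-10):
    """Squared coefficient of variation: Var(v) / Mean(v)^2."""
    return v.var() / (v.mean() ** 2 + eps)

def importance_loss(gates, w_importance=0.1):
    """gates: [batch, n_experts] gate values for one training batch."""
    importance = gates.sum(axis=0)           # Importance(X): per-expert sum of gates
    return w_importance * cv_squared(importance)

balanced = np.full((32, 4), 0.25)            # every expert equally important
unbalanced = np.zeros((32, 4))
unbalanced[:, 0] = 1.0                       # one expert receives all the weight
print(importance_loss(balanced))             # ~0: no penalty
print(importance_loss(unbalanced))           # > 0: strongly penalised
\end{verbatim}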
\paragraph{MoE Models:} Our models consist of two stacked LSTM layers with a MoE layer between them (see Figure \ref{fig:moe}). We vary the sizes of the layers and the number of experts. For full details on model architecture, training regimen, additional baselines and results, see Appendix \ref{sec:appendixlm1b}. \paragraph{Low Computation, Varied Capacity:} To investigate the effects of adding capacity, we trained a series of MoE models all with roughly equal computational costs: about 8 million multiply-and-adds per training example per timestep in the forwards pass, excluding the softmax layer. We call this metric (ops/timestep). We trained models with flat MoEs containing 4, 32, and 256 experts, and models with hierarchical MoEs containing 256, 1024, and 4096 experts. Each expert had about 1 million parameters. For all the MoE layers, 4 experts were active per input. The results of these models are shown in Figure \ref{fig:lm1b}-left. The model with 4 always-active experts performed (unsurprisingly) similarly to the computationally-matched baseline models, while the largest of the models (4096 experts) achieved an impressive 24\% lower perplexity on the test set. \paragraph{Varied Computation, High Capacity:} In addition to the largest model from the previous section, we trained two more MoE models with similarly high capacity (4 billion parameters), but higher computation budgets. These models had larger LSTMs, and fewer but larger and experts. Details can be found in Appendix \ref{sec:expensive}. Results of these three models form the bottom line of Figure \ref{fig:lm1b}-right. Table \ref{tab:lm1bshort} compares the results of these models to the best previously-published result on this dataset . Even the fastest of these models beats the best published result (when controlling for the number of training epochs), despite requiring only 6\% of the computation. \paragraph{Computational Efficiency:} We trained our models using TensorFlow \citep{Abadi16} on clusters containing 16-32 Tesla K40 GPUs. For each of our models, we determine computational efficiency in TFLOPS/GPU by dividing the number of floating point operations required to process one training batch by the observed step time and the number of GPUs in the cluster. The operation counts used here are higher than the ones we report in our ops/timestep numbers in that we include the backwards pass, we include the importance-sampling-based training of the softmax layer, and we count a multiply-and-add as two separate operations. For all of our MoE models, the floating point operations involved in the experts represent between 37\% and 46\% of the total. For our baseline models wtih no MoE, observed computational efficiency ranged from 1.07-1.29 TFLOPS/GPU. For our low-computation MoE models, computation efficiency ranged from 0.74-0.90 TFLOPS/GPU, except for the 4-expert model which did not make full use of the available parallelism. Our highest-computation MoE model was more efficient at 1.56 TFLOPS/GPU, likely due to the larger matrices. These numbers represent a significant fraction of the theoretical maximum of 4.29 TFLOPS/GPU claimed by NVIDIA. Detailed results are in Appendix \ref{sec:appendixlm1b}, Table \ref{tab:lm1bresults}. \subsection{100 Billion Word Google News Corpus} On the 1-billion-word corpus, adding additional capacity seems to produce diminishing returns as the number of parameters in the MoE layer exceeds 1 billion, as can be seen in Figure \ref{fig:lm1b}-left. 
We hypothesized that for a larger training set, even higher capacities would produce significant quality improvements. We constructed a similar training set consisting of shuffled unique sentences from Google's internal news corpus, totalling roughly 100 billion words. Similarly to the previous section, we tested a series of models with similar computational costs of about 8 million ops/timestep. In addition to a baseline LSTM model, we trained models augmented with MoE layers containing 32, 256, 1024, 4096, 16384, 65536, and 131072 experts. This corresponds to up to 137 billion parameters in the MoE layer. Details on architecture, training, and results are given in Appendix \ref{sec:appendixgn11}. \paragraph{Results:} Figure \ref{fig:gn11} shows test perplexity as a function of capacity after training on 10 billion words (top line) and 100 billion words (bottom line). When training over the full 100 billion words, test perplexity improves significantly up to 65536 experts (68 billion parameters), dropping 39\% lower than the computationally matched baseline, but degrades at 131072 experts, possibly a result of too much sparsity. The widening gap between the two lines demonstrates (unsurprisingly) that increased model capacity helps more on larger training sets. Even at 65536 experts (99.994\% layer sparsity), computational efficiency for the model stays at a respectable 0.72 TFLOPS/GPU. \subsection{Machine Translation (Single Language Pair)} \label{sec:mt} \paragraph{Model Architecture:} Our model was a modified version of the GNMT model described in~\citep{GNMT}. To reduce computation, we decreased the number of LSTM layers in the encoder and decoder from 9 and 8 to 3 and 2 respectively. We inserted MoE layers in both the encoder (between layers 2 and 3) and the decoder (between layers 1 and 2). Each MoE layer contained up to 2048 experts each with about two million parameters, adding a total of about 8 billion parameters to the models. Further details on model architecture, testing procedure and results can be found in Appendix \ref{sec:appendixmt}. \paragraph{Datasets:} We benchmarked our method on the WMT'14 En$\rightarrow$Fr and En$\rightarrow$De corpora, whose training sets have 36M sentence pairs and 5M sentence pairs, respectively. The experimental protocols were also similar to those in~\citep{GNMT}: newstest2014 was used as the test set to compare against previous work \citep{LuongPM:2015:EAANMT,Zhou:2016:DeppAtt,GNMT}, while the combination of newstest2012 and newstest2013 was used as the development set. We also tested the same model on a Google's Production English to French data. \paragraph{Results:} Tables \ref{tab:wmtenfr}, \ref{tab:wmtende}, and ~\ref{tab:prodmt} show the results of our largest models, compared with published results. Our approach achieved BLEU scores of 40.56 and 26.03 on the WMT'14 En$\rightarrow$Fr and En$\rightarrow$De benchmarks. As our models did not use RL refinement, these results constitute significant gains of 1.34 and 1.12 BLEU score on top of the strong baselines in \citep{GNMT}. The perplexity scores are also better.\footnote{Reported perplexities relative to the tokenization used by both our models and GNMT.} On the Google Production dataset, our model achieved 1.01 higher test BLEU score even after training for only one sixth of the time. \subsection{Multilingual Machine Translation} \label{sec:mlmt} \paragraph{Dataset:} \citep{Johnson16} train a single GNMT \citep{GNMT} model on a very large combined dataset of twelve language pairs. 
Results are somewhat worse than those for 12 separately trained single-pair GNMT models. This is not surprising, given that the twelve models have 12 times the capacity and twelve times the aggregate training of the one model. We repeat this experiment with a single MoE-augmented model. See Appendix \ref{sec:appendixmt} for details on model architecture. We train our model on the same dataset as \citep{Johnson16} and process the same number of training examples (about 3 billion sentence pairs). Our training time was shorter due to the lower computational budget of our model. \paragraph{Results:} Results for the single-pair GNMT models, the multilingual GNMT model and the multilingual MoE model are given in Table \ref{tab:ml}. The MoE model achieves 19\% lower perplexity on the dev set than the multilingual GNMT model. On BLEU score, the MoE model significantly beats the multilingual GNMT model on 11 of the 12 language pairs (by as much as 5.84 points), and even beats the monolingual GNMT models on 8 of 12 language pairs. The poor performance on English $\rightarrow$ Korean seems to be a result of severe overtraining, as for the rarer language pairs a small number of real examples were highly oversampled in the training corpus. \vspace{-8pt}\section{Conclusion}\label{sec:conc} This work is the first to demonstrate major wins from conditional computation in deep networks. We carefully identified the design considerations and challenges of conditional computing and addressed them with a combination of algorithmic and engineering solutions. While we focused on text, conditional computation may help in other domains as well, provided sufficiently large training sets. We look forward to seeing many novel implementations and applications of conditional computation in the years to come. \subsubsection*{Acknowledgments} We would like to thank all of the members of the Google Brain and Google Translate teams who helped us with this project, in particular Zhifeng Chen, Yonghui Wu, and Melvin Johnson. Thanks also to our anonymous ICLR reviewers for the helpful suggestions on making this paper better. \bibliographystyle{iclr2017_conference} \newpage \appendix \section*{Appendices} \addcontentsline{toc}{section}{Appendices} \renewcommand{\thesubsection}{\Alph{subsection}} \subsection{Load-Balancing Loss} \label{sec:load} As discussed in section \ref{sec:losses}, for load-balancing purposes, we want to define an additional loss function to encourage experts to receive roughly equal numbers of training examples. Unfortunately, the number of examples received by an expert is a discrete quantity, so it can not be used in back-propagation. Instead, we define a smooth estimator $Load(X)$ of the number of examples assigned to each expert for a batch $X$ of inputs. The smoothness allows us to back-propagate gradients through the estimator. This is the purpose of the noise term in the gating function. We define $P(x, i)$ as the probability that $G(x)_i$ is nonzero, given a new random choice of noise on element $i$, but keeping the already-sampled choices of noise on the other elements. To compute $P(x, i)$, we note that the $G(x)_i$ is nonzero if and only if $H(x)_i$ is greater than the $k^{th}$-greatest element of $H(x)$ excluding itself. 
The probability works out to be: \begin{equation} \begin{aligned} P(x, i) = Pr\Big( (x \cdot W_g)_i + StandardNormal() \cdot Softplus((x \cdot W_{noise})_i) \\ > kth\_excluding(H(x), k, i)\Big) \end{aligned} \end{equation} Where $kth\_excluding(v, k, i)$ means the kth highest component of $v$, excluding component $i$. Simplifying, we get: \begin{equation} P(x, i) = \Phi\Big(\frac{(x \cdot W_g)_i - kth\_excluding(H(x), k, i)}{Softplus((x \cdot W_{noise})_i)}\Big) \end{equation} Where $\Phi$ is the CDF of the standard normal distribution. \begin{equation} Load(X)_i = \sum_{x \in X}P(x, i) \end{equation} We can now define the load loss to be the square of the coefficient of variation of the load vector, multiplied by a hand-tuned scaling factor $w_{load}$. \begin{equation}\label{eq:loadloss} L_{load}(X) = w_{load} \cdot CV(Load(X))^2 \end{equation} \paragraph{Initial Load Imbalance:} To avoid out-of-memory errors, we need to initialize the network in a state of approximately equal expert load (since the soft constraints need some time to work). To accomplish this, we initialize the matrices $W_g$ and $W_{noise}$ to all zeros, which yields no signal and some noise. \paragraph{Experiments:} We trained a set of models with identical architecture (the MoE-256 model described in Appendix \ref{sec:appendixlm1b}), using different values of $w_{importance}$ and $w_{load}$. We trained each model for 10 epochs, then measured perplexity on the test set. We also measured the coefficients of variation in $Importance$ and $Load$, as well as ratio of the load on the most overloaded expert to the average load. This last value is significant for load balancing purposes on distributed hardware. All of these metrics were averaged over several training batches. \paragraph{Results:} Results are reported in Table \ref{tab:losses}. All the combinations containing at least one the two losses led to very similar model quality, where having no loss was much worse. Models with higher values of $w_{load}$ had lower loads on the most overloaded expert. \subsection{Hierachical Mixture of Experts} \label{sec:hierarchical} If the number of experts is very large, we can reduce the branching factor by using a two-level hierarchical MoE. In a hierarchical MoE, a primary gating network chooses a sparse weighted combination of ``experts", each of which is itself a secondary mixture-of-experts with its own gating network.\footnote{ We have not found the need for deeper hierarchies.} If the hierarchical MoE consists of $a$ groups of $b$ experts each, we denote the primary gating network by $G_{primary}$, the secondary gating networks by $(G_1, G_2 .. G_a)$, and the expert networks by $(E_{0,0}, E_{0,1} .. E_{a,b})$. The output of the MoE is given by: \begin{equation}\label{eq:gate_expert} y_H = \sum_{i=1}^{a}\sum_{j=1}^{b}G_{primary}(x)_i \cdot G_i(x)_j \cdot E_{i,j}(x) \end{equation} Our metrics of expert utilization change to the following: \begin{equation} Importance_H(X)_{i,j} = \sum_{x \in X}G_{primary}(x)_i \cdot G_i(x)_j \end{equation} \begin{equation} Load_H(X)_{i,j} = \frac{Load_{primary}(X)_i \cdot Load_i(X^{(i)})_j}{|X^{(i)}|} \end{equation} $Load_{primary}$ and $Load_i$ deonte the $Load$ functions for the primary gating network and $i^{th}$ secondary gating network respectively. $X^{(i)}$ denotes the subset of $X$ for which $G_{primary}(x)_i > 0$. 
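Returning to the (non-hierarchical) load estimator defined above, here is a sketch with $\Phi$ computed from the error function; the randomly initialised gating parameters are placeholders, and the double loop is written for clarity rather than speed.
\begin{verbatim}
import math
import numpy as np

def normal_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def load(X, W_g, W_noise, H, k):
    """X: [batch, d] inputs; H: [batch, n] noisy logits already sampled for this
    batch. Returns the smooth per-expert load, summing P(x, i) over the batch."""
    batch, n = H.shape
    clean = X @ W_g                                # (x . W_g)_i
    noise_std = np.log1p(np.exp(X @ W_noise))      # Softplus((x . W_noise)_i)
    P = np.zeros((batch, n))
    for b in range(batch):
        for i in range(n):
            kth_excluding = np.sort(np.delete(H[b], i))[-k]
            P[b, i] = normal_cdf((clean[b, i] - kth_excluding) / noise_std[b, i])
    return P.sum(axis=0)

def load_loss(load_vec, w_load=0.1):
    return w_load * load_vec.var() / (load_vec.mean() ** 2 + 1e-10)

rng = np.random.default_rng(0)
d, n, k, batch = 16, 8, 2, 4
W_g = rng.standard_normal((d, n)) * 0.1
W_noise = rng.standard_normal((d, n)) * 0.1
X = rng.standard_normal((batch, d))
H = X @ W_g + rng.standard_normal((batch, n)) * np.log1p(np.exp(X @ W_noise))
print(load_loss(load(X, W_g, W_noise, H, k)))
\end{verbatim}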
It would seem simpler to let $Load_H(X)_{i,j} = Load_i(X_i)_j$ , but this would not have a gradient with respect to the primary gating network, so we use the formulation above. \subsection{1 Billion Word Language Modeling Benchmark - Experimental Details}\label{sec:appendixlm1b} \subsubsection{8-Million-Operations-per-Timestep Models} \paragraph{Model Architecture:} Our model consists of five layers: a word embedding layer, a recurrent Long Short-Term Memory (LSTM) layer \citep{Hochreiter:1997:LSM,Gers:2000:LFC}, a MoE layer, a second LSTM layer, and a softmax layer. The dimensionality of the embedding layer, the number of units in each LSTM layer, and the input and output dimensionality of the MoE layer are all equal to 512. For every layer other than the softmax, we apply drouput \citep{ZarembaSV14} to the layer output, dropping each activation with probability $DropProb$, otherwise dividing by $(1-DropProb)$. After dropout, the output of the previous layer is added to the layer output. This residual connection encourages gradient flow \citep{HeZRS:2015:DRL}. \paragraph{MoE Layer Architecture:} Each expert in the MoE layer is a feed forward network with one ReLU-activated hidden layer of size 1024 and an output layer of size 512. Thus, each expert contains $[512 * 1024] + [1024 * 512] = 1M$ parameters. The output of the MoE layer is passed through a sigmoid function before dropout. We varied the number of experts between models, using ordinary MoE layers with 4, 32 and 256 experts and hierarchical MoE layers with 256, 1024 and 4096 experts. We call the resulting models MoE-4, MoE-32, MoE-256, MoE-256-h, MoE-1024-h and MoE-4096-h. For the hierarchical MoE layers, the first level branching factor was 16, corresponding to the number of GPUs in our cluster. We use Noisy-Top-K Gating (see Section \ref{sec:noisytopk}) with $k=4$ for the ordinary MoE layers and $k=2$ at each level of the hierarchical MoE layers. Thus, each example is processed by exactly 4 experts for a total of 4M ops/timestep. The two LSTM layers contribute 2M ops/timestep each for the desired total of 8M. \paragraph{Computationally-Matched Baselines:} The MoE-4 model does not employ sparsity, since all 4 experts are always used. In addition, we trained four more computationally-matched baseline models with no sparsity: \begin{itemize} \item MoE-1-Wide: The MoE layer consists of a single "expert" containing one ReLU-activated hidden layer of size 4096. \item MoE-1-Deep: The MoE layer consists of a single "expert" containing four ReLU-activated hidden layers, each with size $1024$. \item 4xLSTM-512: We replace the MoE layer with two additional 512-unit LSTM layers. \item LSTM-2048-512: The model contains one 2048-unit LSTM layer (and no MoE). The output of the LSTM is projected down to 512 dimensions \citep{sak2014long}. The next timestep of the LSTM receives the projected output. This is identical to one of the models published in \citep{RafalNoam16}. We re-ran it to account for differences in training regimen, and obtained results very similar to the published ones. \end{itemize} \paragraph{Training:} The models were trained on a cluster of 16 K40 GPUs using the synchronous method described in Section~\ref{sec:performance}. Each batch consisted of a set of sentences totaling roughly 300,000 words. In the interest of time, we limited training to 10 epochs, (27,000 steps). Training took 12-16 hours for all models, except for MoE-4, which took 18 hours (since all the expert computation was performed on only 4 of 16 GPUs). 
We used the Adam optimizer~\citep{kingma2014adam}. The base learning rate was increased linearly for the first 1000 training steps, and decreased after that so as to be proportional to the inverse square root of the step number. The Softmax output layer was trained efficiently using importance sampling similarly to the models in \citep{RafalNoam16}. For each model, we performed a hyper-parmeter search to find the best dropout probability, in increments of 0.1. To ensure balanced expert utilization we set $w_{importance}=0.1$ and $w_{load}=0.1$, as described in Section \ref{sec:losses} and Appendix \ref{sec:load}. \paragraph{Results:} We evaluate our model using perplexity on the holdout dataset, used by~\citep{chelba2013one,RafalNoam16}. We follow the standard procedure and sum over all the words including the end of sentence symbol. Results are reported in Table~\ref{tab:lm1bresults}. For each model, we report the test perplexity, the computational budget, the parameter counts, the value of $DropProb$, and the computational efficiency. \subsubsection{More Expensive Models}\label{sec:expensive} We ran two additional models (MoE-34M and MoE-143M) to investigate the effects of adding more computation in the presence of a large MoE layer. These models have computation budgets of 34M and 143M ops/timestep. Similar to the models above, these models use a MoE layer between two LSTM layers. The dimensionality of the embedding layer, and the input and output dimensionality of the MoE layer are set to 1024 instead of 512. For MoE-34M, the LSTM layers have 1024 units. For MoE-143M, the LSTM layers have 4096 units and an output projection of size 1024 \citep{sak2014long}. MoE-34M uses a hierarchical MoE layer with 1024 experts, each with a hidden layer of size 2048. MoE-143M uses a hierarchical MoE layer with 256 experts, each with a hidden layer of size 8192. Both models have 4B parameters in the MoE layers. We searched for the best $DropProb$ for each model, and trained each model for 10 epochs. The two models achieved test perplexity of $31.3$ and $28.0$ respectively, showing that even in the presence of a large MoE, more computation is still useful. Results are reported at the bottom of Table~\ref{tab:lm1bresults}. The larger of the two models has a similar computational budget to the best published model from the literature, and training times are similar. Comparing after 10 epochs, our model has a lower test perplexity by $18\%$. \subsection{100 Billion Word Google News Corpus - Experimental Details}\label{sec:appendixgn11} \paragraph{Model Architecture:} The models are similar in structure to the 8-million-operations-per-timestep models described in the previous section. We vary the number of experts between models, using an ordinary MoE layer with 32 experts and hierarchical MoE layers with 256, 1024, 4096, 16384, 65536 and 131072 experts. For the hierarchical MoE layers, the first level branching factors are 32, 32, 64, 128, 256 and 256, respectively. \paragraph{Training:} Models are trained on a cluster of 32 Tesla K40 GPUs, except for the last two models, which are trained on clusters of 64 and 128 GPUs so as to have enough memory for all the parameters. For all models, training batch sizes are approximately 2.5 million words. Models are trained once-through over about 100 billion words. We implement several memory optimizations in order to fit up to 1 billion parameters per GPU. 
First, we do not store the activations of the hidden layers of the experts, but instead recompute them on the backwards pass. Secondly, we modify the optimizer on the expert parameters to require less auxiliary storage: The Adam optimizer \citep{kingma2014adam} keeps first and second moment estimates of the per-parameter gradients. This triples the required memory. To avoid keeping a first-moment estimator, we set $\beta_1=0$. To reduce the size of the second moment estimator, we replace it with a factored approximation. For a matrix of parameters, instead of maintaining a full matrix of second-moment estimators, we maintain vectors of row-wise and column-wise averages of that matrix. At each step, the matrix of estimators is taken to be the outer product of those two vectors divided by the mean of either one. This technique could similarly be applied to Adagrad \citep{duchi10}. \paragraph{Results:} We evaluate our model using perplexity on a holdout dataset. Results are reported in Table~\ref{tab:gn11results}. Perplexity after 100 billion training words is 39\% lower for the 68-billion-parameter MoE model than for the baseline model. It is notable that the measured computational efficiency of the largest model (0.30 TFLOPS/GPU) is very low compared to the other models. This is likely a result of the fact that, for purposes of comparison to the other models, we did not increase the training batch size proportionally to the number of GPUs. For comparison, we include results for a computationally matched baseline model consisting of 4 LSTMs, and for an unpruned 5-gram model with Kneser-Ney smoothing \citep{KneserNey95}.\footnote{While the original size of the corpus was 130 billion words, the neural models were trained for a maximum of 100 billion words. The reported Kneser-Ney 5-gram models were trained over 13 billion and 130 billion words respectively, giving them a slight advantage over the other reported results.} \subsection{Machine Translation - Experimental Details} \label{sec:appendixmt} \paragraph{Model Architecture for Single Language Pair MoE Models:} Our model is a modified version of the GNMT model described in~\citep{GNMT}. To reduce computation, we decrease the number of LSTM layers in the encoder and decoder from 9 and 8 to 3 and 2 respectively. We insert MoE layers in both the encoder (between layers 2 and 3) and the decoder (between layers 1 and 2). We use an attention mechanism between the encoder and decoder, with the first decoder LSTM receiving output from and providing input for the attention \footnote{For performance reasons, we use a slightly different attention function from the one described in~\citep{GNMT} - See Appendix \ref{sec:attention}}. All of the layers in our model have input and output dimensionality of 512. Our LSTM layers have 2048 hidden units, with a 512-dimensional output projection. We add residual connections around all LSTM and MoE layers to encourage gradient flow \citep{HeZRS:2015:DRL}. Similar to GNMT, to effectively deal with rare words, we used sub-word units (also known as ``wordpieces") \citep{Schuster:2012:JKVS} for inputs and outputs in our system. We use a shared source and target vocabulary of 32K wordpieces. We also used the same beam search technique as proposed in~\citep{GNMT}. We train models with different numbers of experts in the MoE layers. In addition to a baseline model with no MoE layers, we train models with flat MoE layers containing 32 experts, and models with hierarchical MoE layers containing 512 and 2048 experts. 
The flat MoE layers use $k=4$ and the hierarchical MoE models use $k=2$ at each level of the gating network. Thus, each input is processed by exactly 4 experts in each MoE layer. Each expert in the MoE layer is a feed forward network with one hidden layer of size 2048 and ReLU activation. Thus, each expert contains $[512 * 2048] + [2048 * 512] = 2M$ parameters. The output of the MoE layer is passed through a sigmoid function. We use the strictly-balanced gating function described in Appendix \ref{sec:batchwisemask}. \paragraph{Model Architecture for Multilingual MoE Model:} We used the same model architecture as for the single-language-pair models, with the following exceptions: We used noisy-top-k gating as described in Section \ref{sec:noisytopk}, not the scheme from Appendix \ref{sec:batchwisemask}. The MoE layers in the encoder and decoder are non-hierarchical MoEs with $n=512$ experts, and $k=2$. Each expert has a larger hidden layer of size $8192$. This doubles the amount of computation in the MoE layers, raising the computational budget of the entire model from 85M to 102M ops/timestep. \paragraph{Training:} We trained our networks using the Adam optimizer~\citep{kingma2014adam}. The base learning rate was increased linearly for the first 2000 training steps, held constant for an additional 8000 steps, and decreased after that so as to be proportional to the inverse square root of the step number. For the single-language-pair models, similarly to \citep{GNMT}, we applied dropout \citep{ZarembaSV14} to the output of all embedding, LSTM and MoE layers, using $DropProb=0.4$. Training was done synchronously on a cluster of up to 64 GPUs as described in section \ref{sec:performance}. Each training batch consisted of a set of sentence pairs containing roughly 16000 words per GPU. To ensure balanced expert utilization we set $w_{importance}=0.01$ and $w_{load}=0.01$, as described in Section \ref{sec:losses} and Appendix \ref{sec:load}. \paragraph{Metrics:} We evaluated our models using the perplexity and the standard BLEU score metric. We reported tokenized BLEU score as computed by the multi-bleu.pl script, downloaded from the public implementation of Moses (on Github), which was also used in \citep{LuongPM:2015:EAANMT}. \paragraph{Results:} Tables \ref{tab:wmtenfr}, \ref{tab:wmtende} and \ref{tab:prodmt} in Section \ref{sec:mt} show comparisons of our results to other published methods. Figure~\ref{fig:mt} shows test perplexity as a function of number of words in the (training data's) source sentences processed for models with different numbers of experts. As can be seen from the Figure, as we increased the number of experts to approach 2048, the test perplexity of our model continued to improve. We found that the experts indeed become highly specialized by syntax and/or semantics, as can be seen in Table~\ref{tab:experts}. For example, one expert is used when the indefinite article ``a" introduces the direct object in a verb phrase indicating importance or leadership. \subsection{Strictly Balanced Gating}\label{sec:batchwisemask} Due to some peculiarities in our infrastructure which have since been fixed, at the time we ran some of the machine translation experiments, our models ran faster if every expert received exactly the same batch size. To accommodate this, we used a different gating function which we describe below. 
Recall that we define the softmax gating function to be: \begin{equation}\label{eq:softmax} G_\sigma(x) = Softmax(x \cdot W_g) \end{equation} \paragraph{Sparse Gating (alternate formulation):} To obtain a sparse gating vector, we multiply $G_\sigma(x)$ component-wise with a sparse mask $M(G_\sigma(x))$ and normalize the output. The mask itself is a function of $G_\sigma(x)$ and specifies which experts are assigned to each input example: \begin{equation}\label{eq:g_top_k} G(x)_i = \frac{G_\sigma(x)_i M(G_\sigma(x))_i}{\sum_{j=1}^{n} G_\sigma(x)_j M(G_\sigma(x))_j } \end{equation} \paragraph{Top-K Mask:} To implement top-k gating in this formulation, we would let $M(v) = TopK(v, k)$, where: \begin{equation}\label{eq:top_k} TopK(v, k)_i = \begin{cases} 1 & \text{if $v_i$ is in the top $k$ elements of $v$.} \\ 0 & \text{otherwise.} \end{cases} \end{equation} \paragraph{Batchwise Mask:} To force each expert to receive the exact same number of examples, we introduce an alternative mask function, $M_{batchwise}(X, m)$, which operates over batches of input vectors. Instead of keeping the top $k$ values per example, we keep the top $m$ values per expert across the training batch, where $m=\frac{k|X|}{n}$, so that each example is sent to an average of $k$ experts. \begin{equation}\label{eq:batchwisetop_k} M_{batchwise}(X, m)_{j,i} = \begin{cases} 1 & \text{if $X_{j,i}$ is in the top $m$ values for to expert $i$} \\ 0 & \text{otherwise} \end{cases} \end{equation} As our experiments suggest and also observed in ~\citep{DBLP:journals/corr/IoffeS15}, using a batchwise function during training (such as $M_{batchwise}$) requires modifications to the inference when we may not have a large batch of examples. Our solution to this is to train a vector $T$ of per-expert threshold values to approximate the effects of the batchwise mask. We use the following mask at inference time: \begin{equation}\label{eq:threshold} M_{threshold}(x, T)_i = \begin{cases} 1 & \text{if $x_i > T_i$} \\ 0 & \text{otherwise} \end{cases} \end{equation} To learn the threshold values, we apply an additional loss at training time which is minimized when the batchwise mask and the threshold mask are identical. \begin{equation}\label{eq:thresholdloss} L_{batchwise}(X, T, m) = \sum_{j = 1}^{|X|} \sum_{i=1}^n (M_{threshold}(x, T)_i - M_{batchwise}(X, m)_{j,i}) (X_{j, i} - T_i) \end{equation} \subsection{Attention Function}\label{sec:attention} The attention mechanism described in GNMT ~\citep{GNMT} involves a learned ``Attention Function" $A(x_i,y_j)$ which takes a ``source vector" $x_i$ and a ``target vector" $y_j$, and must be computed for every source time step $i$ and target time step $j$. In GNMT, the attention function is implemented as a feed forward neural network with a hidden layer of size $n$. It can be expressed as: \begin{equation}\label{eq:gnmtattention} A_{GNMT}(x_i, y_j) = \sum_{d=1}^{n}V_d tanh((x_iU)_d + (y_jW)_d) \end{equation} Where $U$ and $W$ are trainable weight matrices and $V$ is a trainable weight vector. For performance reasons, in our models, we used a slightly different attention function: \begin{equation}\label{eq:ourattention} A(x_i, y_j) = \sum_{d=1}^{n}V_d tanh((x_iU)_d) tanh((y_jW)_d) \end{equation} With our attention function, we can simultaneously compute the attention function on multiple source time steps and multiple target time steps using optimized matrix multiplications. We found little difference in quality between the two functions. \end{document}
Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer
1701.06538
Table 7: Model comparison on 1 Billion Word Language Modeling Benchmark. Models marked with * are from (Jozefowicz et al., 2016).
[ "Model", "Test Perplexity", "Test Perplexity", "ops/timestep (millions)", "#Params excluding embed. & softmax", "Total #Params", "[ITALIC] Drop- [ITALIC] Prob", "TFLOPS per GPU" ]
[ [ "[EMPTY]", "10 epochs", "(final)", "[EMPTY]", "(millions)", "(billions)", "[EMPTY]", "(observed)" ], [ "Kneser-Ney 5-gram*", "[EMPTY]", "67.6", "0.00001", "[EMPTY]", "1.8", "[EMPTY]", "[EMPTY]" ], [ "LSTM-512-512*", "[EMPTY]", "54.1", "2.4", "2.4", "0.8", "0.1", "[EMPTY]" ], [ "LSTM-1024-512*", "[EMPTY]", "48.2", "4.7", "4.7", "0.8", "0.1", "[EMPTY]" ], [ "LSTM-2048-512*", "45.0", "43.7", "9.4", "9.4", "0.8", "0.1", "0.61" ], [ "LSTM-2048-512", "44.7", "[EMPTY]", "9.4", "9.4", "0.8", "0.1", "1.21" ], [ "4xLSTM-512", "46.0", "[EMPTY]", "8.4", "8.4", "0.8", "0.1", "1.07" ], [ "MoE-1-Wide", "46.1", "[EMPTY]", "8.4", "8.4", "0.8", "0.1", "1.29" ], [ "MoE-1-Deep", "45.7", "[EMPTY]", "8.4", "8.4", "0.8", "0.1", "1.29" ], [ "MoE-4", "45.0", "[EMPTY]", "8.4", "8.4", "0.8", "0.1", "0.52" ], [ "MoE-32", "39.7", "[EMPTY]", "8.4", "37.8", "0.9", "0.1", "0.87" ], [ "MoE-256", "35.7", "[EMPTY]", "8.6", "272.9", "1.1", "0.1", "0.81" ], [ "MoE-256-h", "36.0", "[EMPTY]", "8.4", "272.9", "1.1", "0.1", "0.89" ], [ "MoE-1024-h", "34.6", "[EMPTY]", "8.5", "1079.0", "1.9", "0.2", "0.90" ], [ "MoE-4096-h", "34.1", "[EMPTY]", "8.9", "4303.4", "5.1", "0.2", "0.74" ], [ "2xLSTM-8192-1024*", "34.7", "30.6", "151.0", "151.0", "1.8", "0.25", "1.09" ], [ "MoE-34M", "31.3", "[EMPTY]", "33.8", "4313.9", "6.0", "0.3", "1.22" ], [ "MoE-143M", "[BOLD] 28.0", "[EMPTY]", "142.7", "4371.1", "6.0", "0.4", "[BOLD] 1.56" ] ]
For our baseline models wtih no MoE, observed computational efficiency ranged from 1.07-1.29 TFLOPS/GPU. For our low-computation MoE models, computation efficiency ranged from 0.74-0.90 TFLOPS/GPU, except for the 4-expert model which did not make full use of the available parallelism. Our highest-computation MoE model was more efficient at 1.56 TFLOPS/GPU, likely due to the larger matrices. These numbers represent a significant fraction of the theoretical maximum of 4.29 TFLOPS/GPU claimed by NVIDIA.
\documentclass{article} % For LaTeX2e \pdfoutput=1 \usepackage[T1]{fontenc} \usepackage[utf8]{inputenc} \newcommand{\gate}{BalancedSparseThreshold} \setlength\abovecaptionskip{0pt} \setlength{\textfloatsep}{8pt plus 1.0pt minus 2.0pt} \newcommand\revisions[1]{\textcolor{red}{#1}} \title{\Large Outrageously Large Neural Networks: \\ The Sparsely-Gated Mixture-of-Experts Layer} \author[1]{Noam Shazeer} \author[1]{Azalia Mirhoseini\thanks{Equally major contributors} \thanks{Work done as a member of the Google Brain Residency program (g.co/brainresidency)}~} \author[2]{Krzysztof Maziarz$^*$} \author[1]{Andy Davis} \author[1]{Quoc Le} \author[1]{Geoffrey Hinton} \author[1]{Jeff Dean} \affil[1]{Google Brain, \{noam,azalia,andydavis,qvl,geoffhinton,jeff\}@google.com} \affil[2]{Jagiellonian University, Cracow, krzysztof.maziarz@student.uj.edu.pl} \renewcommand\Authands{ and } \newcommand{\fix}{\marginpar{FIX}} \newcommand{\new}{\marginpar{NEW}} \begin{document} \maketitle \begin{abstract} The capacity of a neural network to absorb information is limited by its number of parameters. Conditional computation, where parts of the network are active on a per-example basis, has been proposed in theory as a way of dramatically increasing model capacity without a proportional increase in computation. In practice, however, there are significant algorithmic and performance challenges. In this work, we address these challenges and finally realize the promise of conditional computation, achieving greater than 1000x improvements in model capacity with only minor losses in computational efficiency on modern GPU clusters. We introduce a Sparsely-Gated Mixture-of-Experts layer (MoE), consisting of up to thousands of feed-forward sub-networks. A trainable gating network determines a sparse combination of these experts to use for each example. We apply the MoE to the tasks of language modeling and machine translation, where model capacity is critical for absorbing the vast quantities of knowledge available in the training corpora. We present model architectures in which a MoE with up to 137 billion parameters is applied convolutionally between stacked LSTM layers. On large language modeling and machine translation benchmarks, these models achieve significantly better results than state-of-the-art at lower computational cost. \end{abstract} \section{Introduction and Related Work} \subsection{Conditional Computation} Exploiting scale in both training data and model size has been central to the success of deep learning. When datasets are sufficiently large, increasing the capacity (number of parameters) of neural networks can give much better prediction accuracy. This has been shown in domains such as text \citep{sutskever2014sequence,bahdanau2014neural,RafalNoam16,GNMT}, images \citep{Imagenet,qvl2012building}, and audio \citep{hinton2012deep,DeepSpeech2}. For typical deep learning models, where the entire model is activated for every example, this leads to a roughly quadratic blow-up in training costs, as both the model size and the number of training examples increase. Unfortunately, the advances in computing power and distributed computation fall short of meeting such demand. Various forms of conditional computation have been proposed as a way to increase model capacity without a proportional increase in computational costs \citep{Davis13:CondComp, Bengio13:CondComp, eigen2013learning, Denoyer14:CondComp, Cho14, Bengio15:CondComp, Almahairi15}. 
In these schemes, large parts of a network are active or inactive on a per-example basis. The gating decisions may be binary or sparse and continuous, stochastic or deterministic. Various forms of reinforcement learning and back-propagation are proposed for trarining the gating decisions. While these ideas are promising in theory, no work to date has yet demonstrated massive improvements in model capacity, training time, or model quality. We blame this on a combination of the following challenges: \begin{itemize} \item Modern computing devices, especially GPUs, are much faster at arithmetic than at branching. Most of the works above recognize this and propose turning on/off large chunks of the network with each gating decision. \item Large batch sizes are critical for performance, as they amortize the costs of parameter transfers and updates. Conditional computation reduces the batch sizes for the conditionally active chunks of the network. \item Network bandwidth can be a bottleneck. A cluster of GPUs may have computational power thousands of times greater than the aggregate inter-device network bandwidth. To be computationally efficient, the relative computational versus network demands of an algorithm must exceed this ratio. Embedding layers, which can be seen as a form of conditional computation, are handicapped by this very problem. Since the embeddings generally need to be sent across the network, the number of (example, parameter) interactions is limited by network bandwidth instead of computational capacity. \item Depending on the scheme, loss terms may be necessary to achieve the desired level of sparsity per-chunk and/or per example. \cite{Bengio15:CondComp} use three such terms. These issues can affect both model quality and load-balancing. \item Model capacity is most critical for very large data sets. The existing literature on conditional computation deals with relatively small image recognition data sets consisting of up to 600,000 images. It is hard to imagine that the labels of these images provide a sufficient signal to adequately train a model with millions, let alone billions of parameters. \end{itemize} In this work, we for the first time address all of the above challenges and finally realize the promise of conditional computation. We obtain greater than 1000x improvements in model capacity with only minor losses in computational efficiency and significantly advance the state-of-the-art results on public language modeling and translation data sets. \subsection{Our Approach: The Sparsely-Gated Mixture-of-Experts Layer} Our approach to conditional computation is to introduce a new type of general purpose neural network component: a Sparsely-Gated Mixture-of-Experts Layer (MoE). The MoE consists of a number of experts, each a simple feed-forward neural network, and a trainable gating network which selects a sparse combination of the experts to process each input (see Figure \ref{fig:moe}). All parts of the network are trained jointly by back-propagation. While the introduced technique is generic, in this paper we focus on language modeling and machine translation tasks, which are known to benefit from very large models. In particular, we apply a MoE convolutionally between stacked LSTM layers \citep{Hochreiter:1997:LSM}, as in Figure \ref{fig:moe}. The MoE is called once for each position in the text, selecting a potentially different combination of experts at each position. 
The different experts tend to become highly specialized based on syntax and semantics (see Appendix \ref{sec:appendixmt} Table \ref{tab:experts}). On both language modeling and machine translation benchmarks, we improve on best published results at a fraction of the computational cost. \subsection{Related work on Mixtures of Experts} Since its introduction more than two decades ago \citep{Jacobs91Adaptive,Jordan1994HME}, the mixture-of-experts approach has been the subject of much research. Different types of expert architectures hae been proposed such as SVMs \citep{Collobert02PMS}, Gaussian Processes \citep{Tresp2001Mixture,Theis2015Generative,Deisenroth15Distributed}, Dirichlet Processes \citep{Shahbaba09NMU}, and deep networks. Other work has focused on different expert configurations such as a hierarchical structure \citep{Yao09Hierarchical}, infinite numbers of experts \citep{Rasmussen02Infinite}, and adding experts sequentially \citep{Aljundi16}. \cite{Garmash2016ensemble} suggest an ensemble model in the format of mixture of experts for machine translation. The gating network is trained on a pre-trained ensemble NMT model. The works above concern top-level mixtures of experts. The mixture of experts is the whole model. \cite{eigen2013learning} introduce the idea of using multiple MoEs with their own gating networks as parts of a deep model. It is intuitive that the latter approach is more powerful, since complex problems may contain many sub-problems each requiring different experts. They also allude in their conclusion to the potential to introduce sparsity, turning MoEs into a vehicle for computational computation. Our work builds on this use of MoEs as a general purpose neural network component. While \cite{eigen2013learning} uses two stacked MoEs allowing for two sets of gating decisions, our convolutional application of the MoE allows for different gating decisions at each position in the text. We also realize sparse gating and demonstrate its use as a practical way to massively increase model capacity. \section{The Structure of the Mixture-of-Experts layer}\label{sec:gating} The Mixture-of-Experts (MoE) layer consists of a set of $n$ ``expert networks" $E_1, \cdots, E_n$, and a ``gating network" $G$ whose output is a sparse $n$-dimensional vector. Figure \ref{fig:moe} shows an overview of the MoE module. The experts are themselves neural networks, each with their own parameters. Although in principle we only require that the experts accept the same sized inputs and produce the same-sized outputs, in our initial investigations in this paper, we restrict ourselves to the case where the models are feed-forward networks with identical architectures, but with separate parameters. Let us denote by $G(x)$ and $E_i(x)$ the output of the gating network and the output of the $i$-th expert network for a given input $x$. The output $y$ of the MoE module can be written as follows: \begin{equation} y = \sum_{i=1}^{n}G(x)_iE_i(x) \end{equation} We save computation based on the sparsity of the output of $G(x)$. Wherever $G(x)_i=0$, we need not compute $E_i(x)$. In our experiments, we have up to thousands of experts, but only need to evaluate a handful of them for every example. If the number of experts is very large, we can reduce the branching factor by using a two-level hierarchical MoE. In a hierarchical MoE, a primary gating network chooses a sparse weighted combination of ``experts", each of which is itself a secondary mixture-of-experts with its own gating network. 
In the following we focus on ordinary MoEs. We provide more details on hierarchical MoEs in Appendix \ref{sec:hierarchical}. Our implementation is related to other models of conditional computation. A MoE whose experts are simple weight matrices is similar to the parameterized weight matrix proposed in \citep{Cho14}. A MoE whose experts have one hidden layer is similar to the block-wise dropout described in \citep{Bengio15:CondComp}, where the dropped-out layer is sandwiched between fully-activated layers. \subsection{Gating Network} \paragraph{Softmax Gating:} A simple choice of non-sparse gating function \citep{Jordan1994HME} is to multiply the input by a trainable weight matrix $W_g$ and then apply the $Softmax$ function. \begin{equation}\label{eq:softmax} G_\sigma(x) = Softmax(x \cdot W_g) \end{equation} \paragraph{Noisy Top-K Gating:}\label{sec:noisytopk} We add two components to the Softmax gating network: sparsity and noise. Before taking the softmax function, we add tunable Gaussian noise, then keep only the top k values, setting the rest to $-\infty$ (which causes the corresponding gate values to equal $0$). The sparsity serves to save computation, as described above. While this form of sparsity creates some theoretically scary discontinuities in the output of gating function, we have not yet observed this to be a problem in practice. The noise term helps with load balancing, as will be discussed in Appendix \ref{sec:load}. The amount of noise per component is controlled by a second trainable weight matrix $W_{noise}$. \begin{equation}\label{eq:g} G(x) = Softmax(KeepTopK(H(x), k)) \end{equation} \begin{equation}\label{eq:noise} H(x)_i = (x \cdot W_g)_i + StandardNormal() \cdot Softplus((x \cdot W_{noise})_i) \end{equation} \begin{equation}\label{eq:keeptopk} KeepTopK(v, k)_i = \begin{cases} v_i & \text{if $v_i$ is in the top $k$ elements of $v$.} \\ -\infty & \text{otherwise.} \end{cases} \end{equation} \paragraph{Training the Gating Network} We train the gating network by simple back-propagation, along with the rest of the model. If we choose $k>1$, the gate values for the top k experts have nonzero derivatives with respect to the weights of the gating network. This type of occasionally-sensitive behavior is described in \citep{Bengio13:CondComp} with respect to noisy rectifiers. Gradients also back-propagate through the gating network to its inputs. Our method differs here from \citep{Bengio15:CondComp} who use boolean gates and a REINFORCE-style approach to train the gating network. \section{Addressing Performance Challenges} \label{sec:performance} \subsection{The Shrinking Batch Problem} On modern CPUs and GPUs, large batch sizes are necessary for computational efficiency, so as to amortize the overhead of parameter loads and updates. If the gating network chooses $k$ out of $n$ experts for each example, then for a batch of $b$ examples, each expert receives a much smaller batch of approximately $\frac{kb}{n}\ll b$ examples. This causes a naive MoE implementation to become very inefficient as the number of experts increases. The solution to this shrinking batch problem is to make the original batch size as large as possible. However, batch size tends to be limited by the memory necessary to store activations between the forwards and backwards passes. 
We propose the following techniques for increasing the batch size: \paragraph{Mixing Data Parallelism and Model Parallelism:} In a conventional distributed training setting, multiple copies of the model on different devices asynchronously process distinct batches of data, and parameters are synchronized through a set of parameter servers. In our technique, these different batches run synchronously so that they can be combined for the MoE layer. We distribute the standard layers of the model and the gating network according to conventional data-parallel schemes, but keep only one shared copy of each expert. Each expert in the MoE layer receives a combined batch consisting of the relevant examples from all of the data-parallel input batches. The same set of devices function as data-parallel replicas (for the standard layers and the gating networks) and as model-parallel shards (each hosting a subset of the experts). If the model is distributed over $d$ devices, and each device processes a batch of size $b$, each expert receives a batch of approximately $\frac{kbd}{n}$ examples. Thus, we achieve a factor of $d$ improvement in expert batch size. In the case of a hierarchical MoE (Section \ref{sec:hierarchical}), the primary gating network employs data parallelism, and the secondary MoEs employ model parallelism. Each secondary MoE resides on one device. This technique allows us to increase the number of experts (and hence the number of parameters) by proportionally increasing the number of devices in the training cluster. The total batch size increases, keeping the batch size per expert constant. The memory and bandwidth requirements per device also remain constant, as do the step times, as does the amount of time necessary to process a number of training examples equal to the number of parameters in the model. It is our goal to train a trillion-parameter model on a trillion-word corpus. We have not scaled our systems this far as of the writing of this paper, but it should be possible by adding more hardware. \paragraph{Taking Advantage of Convolutionality:} In our language models, we apply the same MoE to each time step of the previous layer. If we wait for the previous layer to finish, we can apply the MoE to all the time steps together as one big batch. Doing so increases the size of the input batch to the MoE layer by a factor of the number of unrolled time steps. \paragraph{Increasing Batch Size for a Recurrent MoE:} We suspect that even more powerful models may involve applying a MoE recurrently. For example, the weight matrices of a LSTM or other RNN could be replaced by a MoE. Sadly, such models break the convolutional trick from the last paragraph, since the input to the MoE at one timestep depends on the output of the MoE at the previous timestep. \cite{Gruslys16} describe a technique for drastically reducing the number of stored activations in an unrolled RNN, at the cost of recomputing forward activations. This would allow for a large increase in batch size. \subsection{Network Bandwidth} Another major performance concern in distributed computing is network bandwidth. Since the experts are stationary (see above) and the number of gating parameters is small, most of the communication involves sending the inputs and outputs of the experts across the network. To maintain computational efficiency, the ratio of an expert's computation to the size of its input and output must exceed the ratio of computational to network capacity of the computing device. 
For GPUs, this may be thousands to one. In our experiments, we use experts with one hidden layer containing thousands of RELU-activated units. Since the weight matrices in the expert have sizes $input$\_${size} \times hidden$\_${size}$ and $hidden$\_${size} \times output$\_${size}$, the ratio of computation to input and output is equal to the size of the hidden layer. Conveniently, we can increase computational efficiency simply by using a larger hidden layer, or more hidden layers. \section{Balancing Expert Utilization} \label{sec:losses} We have observed that the gating network tends to converge to a state where it always produces large weights for the same few experts. This imbalance is self-reinforcing, as the favored experts are trained more rapidly and thus are selected even more by the gating network. \cite{eigen2013learning} describe the same phenomenon, and use a hard constraint at the beginning of training to avoid this local minimum. \cite{Bengio15:CondComp} include a soft constraint on the batch-wise average of each gate.\footnote{\cite{Bengio15:CondComp} also include two additional losses. One controls per-example sparsity, which we do not need since it is enforced by the fixed value of $k$. A third loss encourages diversity of gate values. In our experiments, we find that the gate values naturally diversify as the experts specialize (in a virtuous cycle), and we do not need to enforce diversity of gate values.} We take a soft constraint approach. We define the importance of an expert relative to a batch of training examples to be the batchwise sum of the gate values for that expert. We define an additional loss $L_{importance}$, which is added to the overall loss function for the model. This loss is equal to the square of the coefficient of variation of the set of importance values, multiplied by a hand-tuned scaling factor $w_{importance}$. This additional loss encourages all experts to have equal importance. \begin{equation}\label{eq:gateloss} Importance(X) = \sum_{x \in X}G(x) \end{equation} \begin{equation}\label{eq:importanceloss} L_{importance}(X) = w_{importance} \cdot CV(Importance(X))^2 \end{equation} While this loss function can ensure equal importance, experts may still receive very different numbers of examples. For example, one expert may receive a few examples with large weights, and another may receive many examples with small weights. This can cause memory and performance problems on distributed hardware. To solve this problem, we introduce a second loss function, $L_{load}$ , which ensures balanced loads. Appendix \ref{sec:load} contains the definition of this function, along with experimental results. \section{Experiments} \subsection{1 Billion Word Language Modeling Benchmark}\label{sec:lm} \paragraph{Dataset:} This dataset, introduced by \citep{chelba2013one} consists of shuffled unique sentences from news articles, totaling approximately 829 million words, with a vocabulary of 793,471 words. \paragraph{Previous State-of-the-Art:} The best previously published results \citep{RafalNoam16} use models consisting of one or more stacked Long Short-Term Memory (LSTM) layers \citep{Hochreiter:1997:LSM,Gers:2000:LFC}. The number of parameters in the LSTM layers of these models vary from 2 million to 151 million. Quality increases greatly with parameter count, as do computational costs. Results for these models form the top line of Figure \ref{fig:lm1b}-right. 
\paragraph{MoE Models:} Our models consist of two stacked LSTM layers with a MoE layer between them (see Figure \ref{fig:moe}). We vary the sizes of the layers and the number of experts. For full details on model architecture, training regimen, additional baselines and results, see Appendix \ref{sec:appendixlm1b}. \paragraph{Low Computation, Varied Capacity:} To investigate the effects of adding capacity, we trained a series of MoE models all with roughly equal computational costs: about 8 million multiply-and-adds per training example per timestep in the forwards pass, excluding the softmax layer. We call this metric (ops/timestep). We trained models with flat MoEs containing 4, 32, and 256 experts, and models with hierarchical MoEs containing 256, 1024, and 4096 experts. Each expert had about 1 million parameters. For all the MoE layers, 4 experts were active per input. The results of these models are shown in Figure \ref{fig:lm1b}-left. The model with 4 always-active experts performed (unsurprisingly) similarly to the computationally-matched baseline models, while the largest of the models (4096 experts) achieved an impressive 24\% lower perplexity on the test set. \paragraph{Varied Computation, High Capacity:} In addition to the largest model from the previous section, we trained two more MoE models with similarly high capacity (4 billion parameters), but higher computation budgets. These models had larger LSTMs, and fewer but larger and experts. Details can be found in Appendix \ref{sec:expensive}. Results of these three models form the bottom line of Figure \ref{fig:lm1b}-right. Table \ref{tab:lm1bshort} compares the results of these models to the best previously-published result on this dataset . Even the fastest of these models beats the best published result (when controlling for the number of training epochs), despite requiring only 6\% of the computation. \paragraph{Computational Efficiency:} We trained our models using TensorFlow \citep{Abadi16} on clusters containing 16-32 Tesla K40 GPUs. For each of our models, we determine computational efficiency in TFLOPS/GPU by dividing the number of floating point operations required to process one training batch by the observed step time and the number of GPUs in the cluster. The operation counts used here are higher than the ones we report in our ops/timestep numbers in that we include the backwards pass, we include the importance-sampling-based training of the softmax layer, and we count a multiply-and-add as two separate operations. For all of our MoE models, the floating point operations involved in the experts represent between 37\% and 46\% of the total. For our baseline models wtih no MoE, observed computational efficiency ranged from 1.07-1.29 TFLOPS/GPU. For our low-computation MoE models, computation efficiency ranged from 0.74-0.90 TFLOPS/GPU, except for the 4-expert model which did not make full use of the available parallelism. Our highest-computation MoE model was more efficient at 1.56 TFLOPS/GPU, likely due to the larger matrices. These numbers represent a significant fraction of the theoretical maximum of 4.29 TFLOPS/GPU claimed by NVIDIA. Detailed results are in Appendix \ref{sec:appendixlm1b}, Table \ref{tab:lm1bresults}. \subsection{100 Billion Word Google News Corpus} On the 1-billion-word corpus, adding additional capacity seems to produce diminishing returns as the number of parameters in the MoE layer exceeds 1 billion, as can be seen in Figure \ref{fig:lm1b}-left. 
We hypothesized that for a larger training set, even higher capacities would produce significant quality improvements. We constructed a similar training set consisting of shuffled unique sentences from Google's internal news corpus, totalling roughly 100 billion words. Similarly to the previous section, we tested a series of models with similar computational costs of about 8 million ops/timestep. In addition to a baseline LSTM model, we trained models augmented with MoE layers containing 32, 256, 1024, 4096, 16384, 65536, and 131072 experts. This corresponds to up to 137 billion parameters in the MoE layer. Details on architecture, training, and results are given in Appendix \ref{sec:appendixgn11}. \paragraph{Results:} Figure \ref{fig:gn11} shows test perplexity as a function of capacity after training on 10 billion words (top line) and 100 billion words (bottom line). When training over the full 100 billion words, test perplexity improves significantly up to 65536 experts (68 billion parameters), dropping 39\% lower than the computationally matched baseline, but degrades at 131072 experts, possibly a result of too much sparsity. The widening gap between the two lines demonstrates (unsurprisingly) that increased model capacity helps more on larger training sets. Even at 65536 experts (99.994\% layer sparsity), computational efficiency for the model stays at a respectable 0.72 TFLOPS/GPU. \subsection{Machine Translation (Single Language Pair)} \label{sec:mt} \paragraph{Model Architecture:} Our model was a modified version of the GNMT model described in~\citep{GNMT}. To reduce computation, we decreased the number of LSTM layers in the encoder and decoder from 9 and 8 to 3 and 2 respectively. We inserted MoE layers in both the encoder (between layers 2 and 3) and the decoder (between layers 1 and 2). Each MoE layer contained up to 2048 experts each with about two million parameters, adding a total of about 8 billion parameters to the models. Further details on model architecture, testing procedure and results can be found in Appendix \ref{sec:appendixmt}. \paragraph{Datasets:} We benchmarked our method on the WMT'14 En$\rightarrow$Fr and En$\rightarrow$De corpora, whose training sets have 36M sentence pairs and 5M sentence pairs, respectively. The experimental protocols were also similar to those in~\citep{GNMT}: newstest2014 was used as the test set to compare against previous work \citep{LuongPM:2015:EAANMT,Zhou:2016:DeppAtt,GNMT}, while the combination of newstest2012 and newstest2013 was used as the development set. We also tested the same model on a Google's Production English to French data. \paragraph{Results:} Tables \ref{tab:wmtenfr}, \ref{tab:wmtende}, and ~\ref{tab:prodmt} show the results of our largest models, compared with published results. Our approach achieved BLEU scores of 40.56 and 26.03 on the WMT'14 En$\rightarrow$Fr and En$\rightarrow$De benchmarks. As our models did not use RL refinement, these results constitute significant gains of 1.34 and 1.12 BLEU score on top of the strong baselines in \citep{GNMT}. The perplexity scores are also better.\footnote{Reported perplexities relative to the tokenization used by both our models and GNMT.} On the Google Production dataset, our model achieved 1.01 higher test BLEU score even after training for only one sixth of the time. \subsection{Multilingual Machine Translation} \label{sec:mlmt} \paragraph{Dataset:} \citep{Johnson16} train a single GNMT \citep{GNMT} model on a very large combined dataset of twelve language pairs. 
Results are somewhat worse than those for 12 separately trained single-pair GNMT models. This is not surprising, given that the twelve models have 12 times the capacity and twelve times the aggregate training of the one model. We repeat this experiment with a single MoE-augmented model. See Appendix \ref{sec:appendixmt} for details on model architecture. We train our model on the same dataset as \citep{Johnson16} and process the same number of training examples (about 3 billion sentence pairs). Our training time was shorter due to the lower computational budget of our model. \paragraph{Results:} Results for the single-pair GNMT models, the multilingual GNMT model and the multilingual MoE model are given in Table \ref{tab:ml}. The MoE model achieves 19\% lower perplexity on the dev set than the multilingual GNMT model. On BLEU score, the MoE model significantly beats the multilingual GNMT model on 11 of the 12 language pairs (by as much as 5.84 points), and even beats the monolingual GNMT models on 8 of 12 language pairs. The poor performance on English $\rightarrow$ Korean seems to be a result of severe overtraining, as for the rarer language pairs a small number of real examples were highly oversampled in the training corpus. \vspace{-8pt}\section{Conclusion}\label{sec:conc} This work is the first to demonstrate major wins from conditional computation in deep networks. We carefully identified the design considerations and challenges of conditional computing and addressed them with a combination of algorithmic and engineering solutions. While we focused on text, conditional computation may help in other domains as well, provided sufficiently large training sets. We look forward to seeing many novel implementations and applications of conditional computation in the years to come. \subsubsection*{Acknowledgments} We would like to thank all of the members of the Google Brain and Google Translate teams who helped us with this project, in particular Zhifeng Chen, Yonghui Wu, and Melvin Johnson. Thanks also to our anonymous ICLR reviewers for the helpful suggestions on making this paper better. \bibliographystyle{iclr2017_conference} \newpage \appendix \section*{Appendices} \addcontentsline{toc}{section}{Appendices} \renewcommand{\thesubsection}{\Alph{subsection}} \subsection{Load-Balancing Loss} \label{sec:load} As discussed in section \ref{sec:losses}, for load-balancing purposes, we want to define an additional loss function to encourage experts to receive roughly equal numbers of training examples. Unfortunately, the number of examples received by an expert is a discrete quantity, so it can not be used in back-propagation. Instead, we define a smooth estimator $Load(X)$ of the number of examples assigned to each expert for a batch $X$ of inputs. The smoothness allows us to back-propagate gradients through the estimator. This is the purpose of the noise term in the gating function. We define $P(x, i)$ as the probability that $G(x)_i$ is nonzero, given a new random choice of noise on element $i$, but keeping the already-sampled choices of noise on the other elements. To compute $P(x, i)$, we note that the $G(x)_i$ is nonzero if and only if $H(x)_i$ is greater than the $k^{th}$-greatest element of $H(x)$ excluding itself. 
The probability works out to be: \begin{equation} \begin{aligned} P(x, i) = Pr\Big( (x \cdot W_g)_i + StandardNormal() \cdot Softplus((x \cdot W_{noise})_i) \\ > kth\_excluding(H(x), k, i)\Big) \end{aligned} \end{equation} Where $kth\_excluding(v, k, i)$ means the kth highest component of $v$, excluding component $i$. Simplifying, we get: \begin{equation} P(x, i) = \Phi\Big(\frac{(x \cdot W_g)_i - kth\_excluding(H(x), k, i)}{Softplus((x \cdot W_{noise})_i)}\Big) \end{equation} Where $\Phi$ is the CDF of the standard normal distribution. \begin{equation} Load(X)_i = \sum_{x \in X}P(x, i) \end{equation} We can now define the load loss to be the square of the coefficient of variation of the load vector, multiplied by a hand-tuned scaling factor $w_{load}$. \begin{equation}\label{eq:loadloss} L_{load}(X) = w_{load} \cdot CV(Load(X))^2 \end{equation} \paragraph{Initial Load Imbalance:} To avoid out-of-memory errors, we need to initialize the network in a state of approximately equal expert load (since the soft constraints need some time to work). To accomplish this, we initialize the matrices $W_g$ and $W_{noise}$ to all zeros, which yields no signal and some noise. \paragraph{Experiments:} We trained a set of models with identical architecture (the MoE-256 model described in Appendix \ref{sec:appendixlm1b}), using different values of $w_{importance}$ and $w_{load}$. We trained each model for 10 epochs, then measured perplexity on the test set. We also measured the coefficients of variation in $Importance$ and $Load$, as well as ratio of the load on the most overloaded expert to the average load. This last value is significant for load balancing purposes on distributed hardware. All of these metrics were averaged over several training batches. \paragraph{Results:} Results are reported in Table \ref{tab:losses}. All the combinations containing at least one the two losses led to very similar model quality, where having no loss was much worse. Models with higher values of $w_{load}$ had lower loads on the most overloaded expert. \subsection{Hierachical Mixture of Experts} \label{sec:hierarchical} If the number of experts is very large, we can reduce the branching factor by using a two-level hierarchical MoE. In a hierarchical MoE, a primary gating network chooses a sparse weighted combination of ``experts", each of which is itself a secondary mixture-of-experts with its own gating network.\footnote{ We have not found the need for deeper hierarchies.} If the hierarchical MoE consists of $a$ groups of $b$ experts each, we denote the primary gating network by $G_{primary}$, the secondary gating networks by $(G_1, G_2 .. G_a)$, and the expert networks by $(E_{0,0}, E_{0,1} .. E_{a,b})$. The output of the MoE is given by: \begin{equation}\label{eq:gate_expert} y_H = \sum_{i=1}^{a}\sum_{j=1}^{b}G_{primary}(x)_i \cdot G_i(x)_j \cdot E_{i,j}(x) \end{equation} Our metrics of expert utilization change to the following: \begin{equation} Importance_H(X)_{i,j} = \sum_{x \in X}G_{primary}(x)_i \cdot G_i(x)_j \end{equation} \begin{equation} Load_H(X)_{i,j} = \frac{Load_{primary}(X)_i \cdot Load_i(X^{(i)})_j}{|X^{(i)}|} \end{equation} $Load_{primary}$ and $Load_i$ deonte the $Load$ functions for the primary gating network and $i^{th}$ secondary gating network respectively. $X^{(i)}$ denotes the subset of $X$ for which $G_{primary}(x)_i > 0$. 
It would seem simpler to let $Load_H(X)_{i,j} = Load_i(X_i)_j$ , but this would not have a gradient with respect to the primary gating network, so we use the formulation above. \subsection{1 Billion Word Language Modeling Benchmark - Experimental Details}\label{sec:appendixlm1b} \subsubsection{8-Million-Operations-per-Timestep Models} \paragraph{Model Architecture:} Our model consists of five layers: a word embedding layer, a recurrent Long Short-Term Memory (LSTM) layer \citep{Hochreiter:1997:LSM,Gers:2000:LFC}, a MoE layer, a second LSTM layer, and a softmax layer. The dimensionality of the embedding layer, the number of units in each LSTM layer, and the input and output dimensionality of the MoE layer are all equal to 512. For every layer other than the softmax, we apply drouput \citep{ZarembaSV14} to the layer output, dropping each activation with probability $DropProb$, otherwise dividing by $(1-DropProb)$. After dropout, the output of the previous layer is added to the layer output. This residual connection encourages gradient flow \citep{HeZRS:2015:DRL}. \paragraph{MoE Layer Architecture:} Each expert in the MoE layer is a feed forward network with one ReLU-activated hidden layer of size 1024 and an output layer of size 512. Thus, each expert contains $[512 * 1024] + [1024 * 512] = 1M$ parameters. The output of the MoE layer is passed through a sigmoid function before dropout. We varied the number of experts between models, using ordinary MoE layers with 4, 32 and 256 experts and hierarchical MoE layers with 256, 1024 and 4096 experts. We call the resulting models MoE-4, MoE-32, MoE-256, MoE-256-h, MoE-1024-h and MoE-4096-h. For the hierarchical MoE layers, the first level branching factor was 16, corresponding to the number of GPUs in our cluster. We use Noisy-Top-K Gating (see Section \ref{sec:noisytopk}) with $k=4$ for the ordinary MoE layers and $k=2$ at each level of the hierarchical MoE layers. Thus, each example is processed by exactly 4 experts for a total of 4M ops/timestep. The two LSTM layers contribute 2M ops/timestep each for the desired total of 8M. \paragraph{Computationally-Matched Baselines:} The MoE-4 model does not employ sparsity, since all 4 experts are always used. In addition, we trained four more computationally-matched baseline models with no sparsity: \begin{itemize} \item MoE-1-Wide: The MoE layer consists of a single "expert" containing one ReLU-activated hidden layer of size 4096. \item MoE-1-Deep: The MoE layer consists of a single "expert" containing four ReLU-activated hidden layers, each with size $1024$. \item 4xLSTM-512: We replace the MoE layer with two additional 512-unit LSTM layers. \item LSTM-2048-512: The model contains one 2048-unit LSTM layer (and no MoE). The output of the LSTM is projected down to 512 dimensions \citep{sak2014long}. The next timestep of the LSTM receives the projected output. This is identical to one of the models published in \citep{RafalNoam16}. We re-ran it to account for differences in training regimen, and obtained results very similar to the published ones. \end{itemize} \paragraph{Training:} The models were trained on a cluster of 16 K40 GPUs using the synchronous method described in Section~\ref{sec:performance}. Each batch consisted of a set of sentences totaling roughly 300,000 words. In the interest of time, we limited training to 10 epochs, (27,000 steps). Training took 12-16 hours for all models, except for MoE-4, which took 18 hours (since all the expert computation was performed on only 4 of 16 GPUs). 
We used the Adam optimizer~\citep{kingma2014adam}. The base learning rate was increased linearly for the first 1000 training steps, and decreased after that so as to be proportional to the inverse square root of the step number. The Softmax output layer was trained efficiently using importance sampling similarly to the models in \citep{RafalNoam16}. For each model, we performed a hyper-parmeter search to find the best dropout probability, in increments of 0.1. To ensure balanced expert utilization we set $w_{importance}=0.1$ and $w_{load}=0.1$, as described in Section \ref{sec:losses} and Appendix \ref{sec:load}. \paragraph{Results:} We evaluate our model using perplexity on the holdout dataset, used by~\citep{chelba2013one,RafalNoam16}. We follow the standard procedure and sum over all the words including the end of sentence symbol. Results are reported in Table~\ref{tab:lm1bresults}. For each model, we report the test perplexity, the computational budget, the parameter counts, the value of $DropProb$, and the computational efficiency. \subsubsection{More Expensive Models}\label{sec:expensive} We ran two additional models (MoE-34M and MoE-143M) to investigate the effects of adding more computation in the presence of a large MoE layer. These models have computation budgets of 34M and 143M ops/timestep. Similar to the models above, these models use a MoE layer between two LSTM layers. The dimensionality of the embedding layer, and the input and output dimensionality of the MoE layer are set to 1024 instead of 512. For MoE-34M, the LSTM layers have 1024 units. For MoE-143M, the LSTM layers have 4096 units and an output projection of size 1024 \citep{sak2014long}. MoE-34M uses a hierarchical MoE layer with 1024 experts, each with a hidden layer of size 2048. MoE-143M uses a hierarchical MoE layer with 256 experts, each with a hidden layer of size 8192. Both models have 4B parameters in the MoE layers. We searched for the best $DropProb$ for each model, and trained each model for 10 epochs. The two models achieved test perplexity of $31.3$ and $28.0$ respectively, showing that even in the presence of a large MoE, more computation is still useful. Results are reported at the bottom of Table~\ref{tab:lm1bresults}. The larger of the two models has a similar computational budget to the best published model from the literature, and training times are similar. Comparing after 10 epochs, our model has a lower test perplexity by $18\%$. \subsection{100 Billion Word Google News Corpus - Experimental Details}\label{sec:appendixgn11} \paragraph{Model Architecture:} The models are similar in structure to the 8-million-operations-per-timestep models described in the previous section. We vary the number of experts between models, using an ordinary MoE layer with 32 experts and hierarchical MoE layers with 256, 1024, 4096, 16384, 65536 and 131072 experts. For the hierarchical MoE layers, the first level branching factors are 32, 32, 64, 128, 256 and 256, respectively. \paragraph{Training:} Models are trained on a cluster of 32 Tesla K40 GPUs, except for the last two models, which are trained on clusters of 64 and 128 GPUs so as to have enough memory for all the parameters. For all models, training batch sizes are approximately 2.5 million words. Models are trained once-through over about 100 billion words. We implement several memory optimizations in order to fit up to 1 billion parameters per GPU. 
First, we do not store the activations of the hidden layers of the experts, but instead recompute them on the backwards pass. Secondly, we modify the optimizer on the expert parameters to require less auxiliary storage: The Adam optimizer \citep{kingma2014adam} keeps first and second moment estimates of the per-parameter gradients. This triples the required memory. To avoid keeping a first-moment estimator, we set $\beta_1=0$. To reduce the size of the second moment estimator, we replace it with a factored approximation. For a matrix of parameters, instead of maintaining a full matrix of second-moment estimators, we maintain vectors of row-wise and column-wise averages of that matrix. At each step, the matrix of estimators is taken to be the outer product of those two vectors divided by the mean of either one. This technique could similarly be applied to Adagrad \citep{duchi10}. \paragraph{Results:} We evaluate our model using perplexity on a holdout dataset. Results are reported in Table~\ref{tab:gn11results}. Perplexity after 100 billion training words is 39\% lower for the 68-billion-parameter MoE model than for the baseline model. It is notable that the measured computational efficiency of the largest model (0.30 TFLOPS/GPU) is very low compared to the other models. This is likely a result of the fact that, for purposes of comparison to the other models, we did not increase the training batch size proportionally to the number of GPUs. For comparison, we include results for a computationally matched baseline model consisting of 4 LSTMs, and for an unpruned 5-gram model with Kneser-Ney smoothing \citep{KneserNey95}.\footnote{While the original size of the corpus was 130 billion words, the neural models were trained for a maximum of 100 billion words. The reported Kneser-Ney 5-gram models were trained over 13 billion and 130 billion words respectively, giving them a slight advantage over the other reported results.} \subsection{Machine Translation - Experimental Details} \label{sec:appendixmt} \paragraph{Model Architecture for Single Language Pair MoE Models:} Our model is a modified version of the GNMT model described in~\citep{GNMT}. To reduce computation, we decrease the number of LSTM layers in the encoder and decoder from 9 and 8 to 3 and 2 respectively. We insert MoE layers in both the encoder (between layers 2 and 3) and the decoder (between layers 1 and 2). We use an attention mechanism between the encoder and decoder, with the first decoder LSTM receiving output from and providing input for the attention \footnote{For performance reasons, we use a slightly different attention function from the one described in~\citep{GNMT} - See Appendix \ref{sec:attention}}. All of the layers in our model have input and output dimensionality of 512. Our LSTM layers have 2048 hidden units, with a 512-dimensional output projection. We add residual connections around all LSTM and MoE layers to encourage gradient flow \citep{HeZRS:2015:DRL}. Similar to GNMT, to effectively deal with rare words, we used sub-word units (also known as ``wordpieces") \citep{Schuster:2012:JKVS} for inputs and outputs in our system. We use a shared source and target vocabulary of 32K wordpieces. We also used the same beam search technique as proposed in~\citep{GNMT}. We train models with different numbers of experts in the MoE layers. In addition to a baseline model with no MoE layers, we train models with flat MoE layers containing 32 experts, and models with hierarchical MoE layers containing 512 and 2048 experts. 
The flat MoE layers use $k=4$ and the hierarchical MoE models use $k=2$ at each level of the gating network. Thus, each input is processed by exactly 4 experts in each MoE layer. Each expert in the MoE layer is a feed forward network with one hidden layer of size 2048 and ReLU activation. Thus, each expert contains $[512 * 2048] + [2048 * 512] = 2M$ parameters. The output of the MoE layer is passed through a sigmoid function. We use the strictly-balanced gating function described in Appendix \ref{sec:batchwisemask}. \paragraph{Model Architecture for Multilingual MoE Model:} We used the same model architecture as for the single-language-pair models, with the following exceptions: We used noisy-top-k gating as described in Section \ref{sec:noisytopk}, not the scheme from Appendix \ref{sec:batchwisemask}. The MoE layers in the encoder and decoder are non-hierarchical MoEs with $n=512$ experts, and $k=2$. Each expert has a larger hidden layer of size $8192$. This doubles the amount of computation in the MoE layers, raising the computational budget of the entire model from 85M to 102M ops/timestep. \paragraph{Training:} We trained our networks using the Adam optimizer~\citep{kingma2014adam}. The base learning rate was increased linearly for the first 2000 training steps, held constant for an additional 8000 steps, and decreased after that so as to be proportional to the inverse square root of the step number. For the single-language-pair models, similarly to \citep{GNMT}, we applied dropout \citep{ZarembaSV14} to the output of all embedding, LSTM and MoE layers, using $DropProb=0.4$. Training was done synchronously on a cluster of up to 64 GPUs as described in section \ref{sec:performance}. Each training batch consisted of a set of sentence pairs containing roughly 16000 words per GPU. To ensure balanced expert utilization we set $w_{importance}=0.01$ and $w_{load}=0.01$, as described in Section \ref{sec:losses} and Appendix \ref{sec:load}. \paragraph{Metrics:} We evaluated our models using the perplexity and the standard BLEU score metric. We reported tokenized BLEU score as computed by the multi-bleu.pl script, downloaded from the public implementation of Moses (on Github), which was also used in \citep{LuongPM:2015:EAANMT}. \paragraph{Results:} Tables \ref{tab:wmtenfr}, \ref{tab:wmtende} and \ref{tab:prodmt} in Section \ref{sec:mt} show comparisons of our results to other published methods. Figure~\ref{fig:mt} shows test perplexity as a function of number of words in the (training data's) source sentences processed for models with different numbers of experts. As can be seen from the Figure, as we increased the number of experts to approach 2048, the test perplexity of our model continued to improve. We found that the experts indeed become highly specialized by syntax and/or semantics, as can be seen in Table~\ref{tab:experts}. For example, one expert is used when the indefinite article ``a" introduces the direct object in a verb phrase indicating importance or leadership. \subsection{Strictly Balanced Gating}\label{sec:batchwisemask} Due to some peculiarities in our infrastructure which have since been fixed, at the time we ran some of the machine translation experiments, our models ran faster if every expert received exactly the same batch size. To accommodate this, we used a different gating function which we describe below. 
Recall that we define the softmax gating function to be: \begin{equation}\label{eq:softmax} G_\sigma(x) = Softmax(x \cdot W_g) \end{equation} \paragraph{Sparse Gating (alternate formulation):} To obtain a sparse gating vector, we multiply $G_\sigma(x)$ component-wise with a sparse mask $M(G_\sigma(x))$ and normalize the output. The mask itself is a function of $G_\sigma(x)$ and specifies which experts are assigned to each input example: \begin{equation}\label{eq:g_top_k} G(x)_i = \frac{G_\sigma(x)_i M(G_\sigma(x))_i}{\sum_{j=1}^{n} G_\sigma(x)_j M(G_\sigma(x))_j } \end{equation} \paragraph{Top-K Mask:} To implement top-k gating in this formulation, we would let $M(v) = TopK(v, k)$, where: \begin{equation}\label{eq:top_k} TopK(v, k)_i = \begin{cases} 1 & \text{if $v_i$ is in the top $k$ elements of $v$.} \\ 0 & \text{otherwise.} \end{cases} \end{equation} \paragraph{Batchwise Mask:} To force each expert to receive the exact same number of examples, we introduce an alternative mask function, $M_{batchwise}(X, m)$, which operates over batches of input vectors. Instead of keeping the top $k$ values per example, we keep the top $m$ values per expert across the training batch, where $m=\frac{k|X|}{n}$, so that each example is sent to an average of $k$ experts. \begin{equation}\label{eq:batchwisetop_k} M_{batchwise}(X, m)_{j,i} = \begin{cases} 1 & \text{if $X_{j,i}$ is in the top $m$ values for to expert $i$} \\ 0 & \text{otherwise} \end{cases} \end{equation} As our experiments suggest and also observed in ~\citep{DBLP:journals/corr/IoffeS15}, using a batchwise function during training (such as $M_{batchwise}$) requires modifications to the inference when we may not have a large batch of examples. Our solution to this is to train a vector $T$ of per-expert threshold values to approximate the effects of the batchwise mask. We use the following mask at inference time: \begin{equation}\label{eq:threshold} M_{threshold}(x, T)_i = \begin{cases} 1 & \text{if $x_i > T_i$} \\ 0 & \text{otherwise} \end{cases} \end{equation} To learn the threshold values, we apply an additional loss at training time which is minimized when the batchwise mask and the threshold mask are identical. \begin{equation}\label{eq:thresholdloss} L_{batchwise}(X, T, m) = \sum_{j = 1}^{|X|} \sum_{i=1}^n (M_{threshold}(x, T)_i - M_{batchwise}(X, m)_{j,i}) (X_{j, i} - T_i) \end{equation} \subsection{Attention Function}\label{sec:attention} The attention mechanism described in GNMT ~\citep{GNMT} involves a learned ``Attention Function" $A(x_i,y_j)$ which takes a ``source vector" $x_i$ and a ``target vector" $y_j$, and must be computed for every source time step $i$ and target time step $j$. In GNMT, the attention function is implemented as a feed forward neural network with a hidden layer of size $n$. It can be expressed as: \begin{equation}\label{eq:gnmtattention} A_{GNMT}(x_i, y_j) = \sum_{d=1}^{n}V_d tanh((x_iU)_d + (y_jW)_d) \end{equation} Where $U$ and $W$ are trainable weight matrices and $V$ is a trainable weight vector. For performance reasons, in our models, we used a slightly different attention function: \begin{equation}\label{eq:ourattention} A(x_i, y_j) = \sum_{d=1}^{n}V_d tanh((x_iU)_d) tanh((y_jW)_d) \end{equation} With our attention function, we can simultaneously compute the attention function on multiple source time steps and multiple target time steps using optimized matrix multiplications. We found little difference in quality between the two functions. \end{document}
An Interpretable Knowledge Transfer Model for Knowledge Base Completion
1704.05908
Table 5: Different λ’s effect on our model performance. The compared models are trained for 2000 epochs
[ "[BOLD] Method", "[BOLD] WN18 MR", "[BOLD] WN18 H10", "[BOLD] FB15k MR", "[BOLD] FB15k H10" ]
[ [ "[ITALIC] λ=0.0003", "[BOLD] 217", "95.0", "[BOLD] 68", "80.4" ], [ "[ITALIC] λ=0.001", "223", "[BOLD] 95.2", "73", "80.6" ], [ "[ITALIC] λ=0.003", "239", "[BOLD] 95.2", "82", "[BOLD] 80.9" ] ]
We compare how different values of λ influence our model’s performance in Table 5. With a larger λ, and hence a higher domain sampling probability, our model’s Hits@10 increases, but the mean rank also increases. The rise in mean rank is due to the higher probability of generating a valid triple as a negative sample, which pushes up the energy of valid triples and thus the overall rank of the correct entity. However, as the table shows, the higher Hits@10 indicates improved reasoning capability.
\documentclass[11pt,a4paper]{article} \usepackage[hyperref]{acl2017} \newcommand{\FIXME}[1]{\textcolor{red}{[#1]}} \aclfinalcopy % Uncomment this line for the final submission \def\aclpaperid{79} % Enter the acl Paper ID here \newcommand\BibTeX{B{\sc ib}\TeX} \title{An Interpretable Knowledge Transfer Model \\ for Knowledge Base Completion} \author{Qizhe Xie, Xuezhe Ma, Zihang Dai, Eduard Hovy \\ Language Technologies Institute \\ Carnegie Mellon University \\ Pittsburgh, PA 15213, USA\\ {\tt \{qzxie, xuezhem, dzihang, hovy\}@cs.cmu.edu} \\} \date{} \begin{document} \maketitle \begin{abstract} Knowledge bases are important resources for a variety of natural language processing tasks but suffer from incompleteness. We propose a novel embedding model, \emph{ITransF}, to perform knowledge base completion. Equipped with a sparse attention mechanism, ITransF discovers hidden concepts of relations and transfers statistical strength through the sharing of concepts. Moreover, the learned associations between relations and concepts, which are represented by sparse attention vectors, can be interpreted easily. We evaluate ITransF on two benchmark datasets---WN18 and FB15k---for knowledge base completion, and obtain improvements on both the mean rank and Hits@10 metrics, over all baselines that do not use additional information. \end{abstract} \section{Introduction} Knowledge bases (KB), such as WordNet~\citep{FellbaumC98}, Freebase~\citep{Bollacker:2008}, YAGO ~\citep{Suchanek:2007} and DBpedia~\citep{LehmannIJJKMHMK15}, are useful resources for many applications such as question answering~\citep{berant-EtAl:2013:EMNLP,yih-EtAl:2015:ACL-IJCNLP,dai-li-xu:2016:P16-1} and information extraction~\citep{mintz-EtAl:2009:ACLIJCNLP}. However, knowledge bases suffer from incompleteness despite their formidable sizes ~\citep{NIPS2013_5028,West:2014:KBC:2566486.2568032}, leading to a number of studies on automatic knowledge base completion (KBC)~\citep{NickelMTG15} or link prediction. The fundamental motivation behind these studies is that there exist some statistical regularities under the intertwined facts stored in the multi-relational knowledge base. By discovering generalizable regularities in known facts, missing ones may be recovered in a faithful way. Due to their excellent generalization capability, distributed representations, a.k.a. embeddings, have been popularized to address the KBC task~\citep{ICML2011Nickel_438,bordes:2011,Bordes2014SME,TransE,NIPS2013_5028,AAAI148531,guu-miller-liang:2015:EMNLP,STransE}. As a seminal work, \citet{TransE} proposes the TransE, which models the statistical regularities with linear translations between entity embeddings operated by a relation embedding. Implicitly, TransE assumes both entity embeddings and relation embeddings dwell in the same vector space, posing an unnecessarily strong prior. To relax this requirement, a variety of models first project the entity embeddings to a relation-dependent space~\citep{Bordes2014SME,ji-EtAl:2015:ACL-IJCNLP,AAAI159571,STransE}, and then model the translation property in the projected space. Typically, these relation-dependent spaces are characterized by the projection matrices unique to each relation. As a benefit, different aspects of the same entity can be temporarily emphasized or depressed as an effect of the projection. For instance, STransE~\citep{STransE} utilizes two projection matrices per relation, one for the head entity and the other for the tail entity.
\iffalse Of these studies, a number of neural network based techniques have emerged over the years to address the KBC task, among which embedding based models~\citep{ICML2011Nickel_438,bordes:2011,Bordes2014SME,TransE,NIPS2013_5028,AAAI148531,guu-miller-liang:2015:EMNLP,STransE} have stood out for its simplicity and effectiveness. \citet{TransE} proposed the TransE model that associated entities and relations with dense embedding vectors. To better model different aspects of the same entity, a variety of models map the entity embedding to a relation-dependent space~\citep{Bordes2014SME,ji-EtAl:2015:ACL-IJCNLP,AAAI159571,STransE}. For instance, STransE~\citep{STransE} projected the head entity and tail entity to a relation-dependent space by multiplying two relation-specific projection matrices. \fi Despite the superior performance of STransE compared to TransE, it is more prone to the data sparsity problem. Concretely, since the projection spaces are unique to each relation, projection matrices associated with rare relations can only be exposed to very few facts during training, resulting in poor generalization. For common relations, a similar issue exists. Without any restrictions on the number of projection matrices, logically related or conceptually similar relations may have distinct projection spaces, hindering the discovery, sharing, and generalization of statistical regularities. Previously, a line of research makes use of external information such as textual relations from web-scale corpora or node features~\citep{toutanova-EtAl:2015:EMNLP, toutanova-chen:2015:CVSC, nguyen2016neighborhood}, alleviating the sparsity problem. In parallel, recent work has proposed to model regularities beyond local facts by considering multi-relation paths~\citep{garciaduran-bordes-usunier:2015:EMNLP, lin-EtAl:2015:EMNLP1, implicit}. Since the number of paths grows exponentially with path length, as a side effect, path-based models enjoy many more training cases, suffering less from the problem. \iffalse However, for some relations, in practice, there are not enough data to estimate the projection matrices.%(repretition) due to the data sparsity problem in knowledge bases. This led to a vast amount of research on utilizing external information, such as textual relations from web-scale corpus~\citep{toutanova-EtAl:2015:EMNLP, toutanova-chen:2015:CVSC} and relation path~\citep{garciaduran-bordes-usunier:2015:EMNLP, lin-EtAl:2015:EMNLP1, implicit}, to enhance performance. %\FIXME{The performance decrease was not caused by data sparsity?} Unfortunately, such task-specific knowledge is costly to develop, making these models difficult to adapt to new tasks or new domains. \fi In this paper, we propose an interpretable knowledge transfer model (ITransF), which encourages the sharing of statistical regularities between the projection matrices of relations and alleviates the data sparsity problem. At the core of ITransF is a sparse attention mechanism, which learns to compose shared concept matrices into relation-specific projection matrices, leading to a better generalization property. Without any external resources, ITransF improves mean rank and Hits@10 on two benchmark datasets, over all previous approaches of the same kind. In addition, the parameter sharing is clearly indicated by the learned sparse attention vectors, enabling us to interpret how knowledge transfer is carried out. To induce the desired sparsity during optimization, we further introduce a block iterative optimization algorithm.
In summary, the contributions of this work are: (i) proposing a novel knowledge embedding model which enables knowledge transfer by learning to discover shared regularities; (ii) introducing a learning algorithm to directly optimize a sparse representation from which the knowledge transferring procedure is interpretable; (iii) showing the effectiveness of our model by outperforming baselines on two benchmark datasets for knowledge base completion task. \section{Notation and Previous Models} Let $E$ denote the set of entities and $R$ denote the set of relations. In knowledge base completion, given a training set $P$ of triples $(h, r, t)$ where $h,t\in E$ are the head and tail entities having a relation $r\in R$, e.g., (\textit{Steve Jobs}, \texttt{FounderOf}, \textit{Apple}), we want to predict missing facts such as (\textit{Steve Jobs}, \texttt{Profession}, \textit{Businessperson}). Most of the embedding models for knowledge base completion define an energy function $f_r(h,t)$ according to the fact's plausibility~\citep{bordes:2011,Bordes2014SME,TransE,NIPS2013_5028,AAAI148531,yang-etal-2015, guu-miller-liang:2015:EMNLP,STransE}. The models are learned to minimize energy $f_r(h,t)$ of a plausible triple $(h,r,t)$ and to maximize energy $f_r(h',t')$ of an implausible triple $(h',r,t')$. Motivated by the linear translation phenomenon observed in well trained word embeddings~\citep{mikolov2013distributed}, TransE~\citep{TransE} represents the head entity $h$, the relation $r$ and the tail entity $t$ with vectors $\mathbf{h}, \mathbf{r}$ and $\mathbf{t} \in \mathbb{R}^{n}$ respectively, which were trained so that $\mathbf{h}+\mathbf{r}\approx \mathbf{t}$. They define the energy function as $$f_r(h,t) = \| \mathbf{h} + \mathbf{r} - \mathbf{t} \|_{\ell}$$ where $\ell=1$ or $2$, which means either the $\ell_1$ or the $\ell_2$ norm of the vector $\mathbf{h} + \mathbf{r} - \mathbf{t}$ will be used depending on the performance on the validation set. To better model relation-specific aspects of the same entity, TransR~\citep{AAAI159571} uses projection matrices and projects the head entity and the tail entity to a relation-dependent space. STransE~\citep{STransE} extends TransR by employing different matrices for mapping the head and the tail entity. The energy function is $$f_r(h,t) = \|\mathbf{W}_{r,1}\mathbf{h} + \mathbf{r} - \mathbf{W}_{r,2}\mathbf{t} \|_{\ell}$$ However, not all relations have abundant data to estimate the relation specific matrices as most of the training samples are associated with only a few relations, leading to the data sparsity problem for rare relations. \section{Interpretable Knowledge Transfer} \subsection{Model} As discussed above, a fundamental weakness in TransR and STransE is that they equip each relation with a set of unique projection matrices, which not only introduces more parameters but also hinders knowledge sharing. Intuitively, many relations share some concepts with each other, although they are stored as independent symbols in KB. For example, the relation ``(somebody) won award for (some work)'' and ``(somebody) was nominated for (some work)'' both describe a person's high-quality work which wins an award or a nomination respectively. This phenomenon suggests that one relation actually represents a collection of real-world concepts, and one concept can be shared by several relations. 
Inspired by the existence of such lower-level concepts, instead of defining a unique set of projection matrices for every relation, we can alternatively define a small set of concept projection matrices and then compose them into customized projection matrices. Effectively, the relation-dependent translation space is then reduced to the smaller concept spaces. However, in general, we do not have prior knowledge about what concepts exist out there and how they are composed to form relations. Therefore, in ITransF, we propose to learn this information simultaneously from data, together with all knowledge embeddings. Following this idea, we first present the model details, then discuss the optimization techniques for training. \paragraph{Energy function} Specifically, we stack all the concept projection matrices to a 3-dimensional tensor $\mathbf{D}\in \mathbb{R}^{m \times n \times n}$, where $m$ is the pre-specified number of concept projection matrices and $n$ is the dimensionality of entity embeddings and relation embeddings. We let each relation select the most useful projection matrices from the tensor, where the selection is represented by an attention vector. The energy function of ITransF is defined as: \begin{equation} f_r(h,t) = \| \pmb{\alpha}_{r}^{H} \cdot \mathbf{D} \cdot \mathbf{h} + \mathbf{r} - \pmb{\alpha}_{r}^{T} \cdot \mathbf{D} \cdot \mathbf{t}\|_{\ell} \label{eq:energy function} \end{equation} where $\pmb{\alpha}_{r}^{H},\pmb{\alpha}_{r}^{T} \in [0,1]^m$, satisfying $\sum_i\pmb{\alpha}_{r,i}^{H}=\sum_i\pmb{\alpha}_{r,i}^{T}=1$, are normalized attention vectors used to compose all concept projection matrices in $\mathbf{D}$ by a convex combination. It is obvious that STransE can be expressed as a special case of our model when we use $m=2|R|$ concept matrices and set attention vectors to disjoint one-hot vectors. Hence our model space is a generalization of STransE. Note that we can safely use fewer concept matrices in ITransF and obtain better performance (see section \ref{sec:compress}), though STransE always requires $2|R|$ projection matrices. We follow previous work to minimize the following hinge loss function: \begin{equation} \mathcal{L}=\sum_{\substack{(h,r,t) \sim P,\\ (h',r,t') \sim N}} \left[ \gamma + f_{r}(h,t) -f_{r}(h',t') \right]_+ \label{eq:hinge} \end{equation} where $P$ is the training set consisting of correct triples, $N$ is the distribution of corrupted triples defined in section \ref{sec:sampling}, and $[\cdot]_+ = \max(\cdot, 0)$. Note that we have omitted the dependence of $N$ on $(h,r,t)$ to avoid clutter. We normalize the entity vectors $\mathbf{h},\mathbf{t}$, and the projected entity vectors $\pmb{\alpha}_{r}^{H} \cdot \mathbf{D}\cdot \mathbf{h}$ and $\pmb{\alpha}_{r}^{T} \cdot \mathbf{D} \cdot \mathbf{t}$ to have unit length after each update, which is an effective regularization method that benefits all models. \paragraph{Sparse attention vectors} In Eq.~\eqref{eq:energy function}, we have defined $\pmb{\alpha}_{r}^{H},\pmb{\alpha}_{r}^{T}$ to be some normalized vectors used for composition. With a dense attention vector, it is computationally expensive to perform the convex combination of $m$ matrices in each iteration. Moreover, a relation usually does not consist of all existing concepts in practice. Furthermore, when the attention vectors are sparse, it is often easier to interpret their behaviors and understand how concepts are shared by different relations. 
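As an illustration only (our numpy sketch, not the authors' Theano implementation), the energy in Eq.~\eqref{eq:energy function} can be computed by first collapsing the concept tensor with the attention vectors:
\begin{verbatim}
import numpy as np

def itransf_energy(h, r, t, D, alpha_H, alpha_T, ell=1):
    # D: (m, n, n) stacked concept projection matrices;
    # alpha_H, alpha_T: (m,) normalized attention vectors.
    W_H = np.tensordot(alpha_H, D, axes=1)   # convex combination of concept matrices, (n, n)
    W_T = np.tensordot(alpha_T, D, axes=1)
    return np.linalg.norm(W_H @ h + r - W_T @ t, ord=ell)

# With disjoint one-hot attention vectors this reduces to the STransE energy
# || W_{r,1} h + r - W_{r,2} t ||_ell, matching the special case noted above.
\end{verbatim}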
Motivated by these potential benefits, we further hope to learn sparse attention vectors in ITransF. However, directly posing $\ell_1$ regularization~\citep{tibshirani1996regression} on the attention vectors fails to produce sparse representations in our preliminary experiment, which motivates us to enforce $\ell_0$ constraints on $\pmb{\alpha}_{r}^T,\pmb{\alpha}_{r}^H$. In order to satisfy both the normalization condition and the $\ell_0$ constraints, we reparameterize the attention vectors in the following way: \begin{align*} \pmb{\alpha}_{r}^{H}&=\mathrm{SparseSoftmax}(\mathbf{v}_{r}^{H}, \mathbf{I}_{r}^{H}) \\ \pmb{\alpha}_{r}^{T}&=\mathrm{SparseSoftmax}(\mathbf{v}_{r}^{T}, \mathbf{I}_{r}^{T}) \end{align*} where $\mathbf{v}_{r}^{H}, \mathbf{v}_{r}^{T} \in \mathbb{R}^m$ are the pre-softmax scores, $\mathbf{I}_{r}^{H}, \mathbf{I}_{r}^{T}\in \{0,1\}^{m}$ are the sparse assignment vectors, indicating the non-zero entries of attention vectors, and the $\mathrm{SparseSoftmax}$ is defined as \begin{equation*} \mathrm{SparseSoftmax}(\mathbf{v}, \mathbf{I})_i=\frac{\exp(\mathbf{v}_i / \tau) \mathbf{I}_i}{\sum_j \exp(\mathbf{v}_j / \tau) \mathbf{I}_j} \end{equation*} with $\tau$ being the temperature of Softmax. With this reparameterization, $\mathbf{v}_{r}^{H}, \mathbf{v}_{r}^{T}$ and $\mathbf{I}_{r}^{H}, \mathbf{I}_{r}^{T}$ replace $\pmb{\alpha}_{r}^T,\pmb{\alpha}_{r}^H$ to become the real parameters of the model. Also, note that it is equivalent to pose the $\ell_0$ constraints on $\mathbf{I}_{r}^{H}, \mathbf{I}_{r}^{T}$ instead of $\pmb{\alpha}_{r}^T,\pmb{\alpha}_{r}^H$. Putting these modifications together, we can rewrite the optimization problem as \begin{equation} \begin{aligned} & {\text{minimize}} & & \mathcal{L} \\ & \text{subject to} & & \|\mathbf{I}_{r}^{H}\|_{0} \leq k,\|\mathbf{I}_{r}^{T}\|_{0} \leq k \end{aligned} \label{eq:l0_problem} \end{equation} where $\mathcal{L}$ is the loss function defined in Eq.~\eqref{eq:hinge}. \subsection{Block Iterative Optimization} Though sparseness is favorable in practice, it is generally NP-hard to find the optimal solution under $\ell_0$ constraints. Thus, we resort to an approximate algorithm in this work. For convenience, we refer to the parameters with and without the sparse constraints as the \textit{sparse} partition and the \textit{dense} partition, respectively. Based on this notion, the high-level idea of the approximate algorithm is to iteratively optimize one of the two partitions while holding the other one fixed. Since all parameters in the dense partition, including the embeddings, the projection matrices, and the pre-softmax scores, are fully differentiable with the sparse partition fixed, we can simply utilize SGD to optimize the dense partition. Then, the core difficulty lies in the step of optimizing the sparse partition (i.e. the sparse assignment vectors), during which we want the following two properties to hold: \begin{enumerate}[itemsep=-1mm] \item the sparsity required by the $\ell_0$ constraint is maintained, and \item the cost defined by Eq.~\eqref{eq:hinge} is decreased. \end{enumerate} Satisfying these two criteria closely resembles the original problem defined in Eq.~\eqref{eq:l0_problem}. However, the dramatic difference here is that with parameters in the dense partition regarded as constant, the cost function is decoupled w.r.t. each relation $r$.
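As a brief aside, the SparseSoftmax reparameterization above admits a short numpy sketch (ours; $\tau=1/4$ as in the implementation details reported later):
\begin{verbatim}
import numpy as np

def sparse_softmax(v, I, tau=0.25):
    # v: (m,) pre-softmax scores; I: (m,) binary assignment vector with at most k ones.
    scores = np.exp(v / tau) * I      # zero out entries not selected by I
    return scores / scores.sum()      # renormalize over the selected entries
\end{verbatim}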
In other words, the optimal choice of $\mathbf{I}_{r}^{H}, \mathbf{I}_{r}^{T}$ is independent of $\mathbf{I}_{r'}^{H}, \mathbf{I}_{r'}^{T}$ for any $r' \neq r$. Therefore, we only need to consider the optimization for a single relation $r$, which is essentially an assignment problem. Note, however, that $\mathbf{I}_{r}^{H}$ and $\mathbf{I}_{r}^{T}$ are still coupled; without this coupling, we would essentially be left with a knapsack problem. In principle, one can explore combinatorial optimization techniques to optimize $\mathbf{I}_{r}^{H}, \mathbf{I}_{r}^{T}$ jointly, which usually involve some iterative procedure. To avoid adding another inner loop to our algorithm, we turn to a simple but fast approximation method based on the following single-matrix cost. Specifically, for each relation $r$, we consider the induced cost $\mathcal{L}_{r,i}^{H}$ where only a single projection matrix $i$ is used for the head entity: \begin{equation*} \mathcal{L}_{r,i}^{H} = \sum_{\substack{(h,r,t) \sim P_r,\\ (h',r,t') \sim N_r}} \left[ \gamma + f_{r,i}^{H}(h,t) - f_{r,i}^{H}(h',t') \right]_+ \end{equation*} where $f_{r,i}^{H}(h,t) = \| \mathbf{D}_i \cdot \mathbf{h} + \mathbf{r} - \pmb{\alpha}_{r}^{T} \cdot \mathbf{D} \cdot \mathbf{t}\|$ is the corresponding energy function, and the subscript in $P_r$ and $N_r$ denotes the subsets with relation $r$. Intuitively, $\mathcal{L}_{r,i}^{H}$ measures how implausible $\mathbf{D}_i$ would be if it were the only projection matrix chosen for the head entity, given the current tail attention vector $\pmb{\alpha}_{r}^{T}$. Hence, $i^* = \arg\min_i \mathcal{L}_{r,i}^{H}$ gives us the best single projection matrix on the head side given $\pmb{\alpha}_{r}^{T}$. Now, in order to choose the best $k$ matrices, we basically ignore the interaction among projection matrices, and update $\mathbf{I}_{r}^{H}$ in the following way: \begin{equation*} \mathbf{I}_{r,i}^{H} \leftarrow \begin{cases} 1, &i \in \mathrm{argpartition}_{i}(\mathcal{L}_{r,i}^{H}, k) \\ 0, &\text{otherwise} \end{cases}%, \, \forall i \end{equation*} where the function $\mathrm{argpartition}_{i}(x_i, k)$ produces the index set of the lowest-$k$ values of $x_i$. Analogously, we can define the single-matrix cost $\mathcal{L}_{r,i}^{T}$ and the energy function $f_{r,i}^{T}(h,t)$ on the tail side in a symmetric way. Then, the update rule for $\mathbf{I}_{r}^{T}$ follows the same derivation. Admittedly, the approximation described here is relatively crude. But as we will show in Section \ref{sec:experiments}, the proposed algorithm yields good performance empirically. We leave the further improvement of the optimization method as future work. \iffalse Though sparseness is favorable in practice, even in linear regression, it has been shown to be an NP-hard problem to find the optimal solutions under $\ell_0$ constraints. %A lot of algorithms such as Approximated algorithms such as forward stepwise algorithm are proposed. Here we propose an approximated algorithm to solve it. We divide all of our parameters into two partitions: differentiable and non-differentiable, and we iteratively optimize those two. Differentiable parameters such as embeddings, projection matrices are optimized by SGD. Non-differentiable parameters are optimized by a greedy approximated process, aiming to minimize the cost function. Recall that we want the number of concepts associated with relations to be less than or equal to $k$, indicated by the $\ell_0$ constraint of attention vectors. We represent the mapping between relation $r$ and concepts by two indicator vectors $\mathbf{I}_{r}^{H}, \mathbf{I}_{r}^{T}\in \{-\infty ,1\}^{m}$, the indicator of non-zero entries in attention vectors.
The $\ell_0$ constraint is guaranteed as the number of $1$s in $\mathbf{I}_{r}^{H},\mathbf{I}_{r}^{T}$ is less than or equal to $k$. Those mapping vectors constitutes the non-differentiable parameters in our framework. Formally, the attention vectors are calculated as follows: $$\pmb{\alpha}_{r}^{H}=\mathrm{Softmax}(\mathbf{v}_{r}^{H} \circ \mathbf{I}_{r}^{H})$$ $$\pmb{\alpha}_{r}^{T}=\mathrm{Softmax}(\mathbf{v}_{r}^{T} \circ \mathbf{I}_{r}^{T})$$ \begin{equation*} \mathrm{Softmax}(\mathbf{x})_i=\frac{\exp(\mathbf{x}_i / \tau)}{\sum_j \exp(\mathbf{x}_j / \tau)} \end{equation*} where $\mathbf{v}_{r}^{H}, \mathbf{v}_{r}^{T} \in (0, \infty)^m$ are the parameters for attention, $\circ$ is element wise multiplication, $\tau$ is the temperature of softmax ($\tau$ is set to $1/4$ in our experiments). Then there are only $k$ non-zero elements in $\pmb{\alpha}_{r}^{H}$ and $\pmb{\alpha}_{r}^{T}$ since $exp(-\infty)=0$. The algorithm is: \begin{itemize} \item[(1)] Randomly initialize mapping vectors $\mathbf{I}_{r}^{H}, \mathbf{I}_{r}^{T}$, which is not differentiable. \item[(2)] Optimize differentiable parameters by SGD with fixed mapping vectors for several epochs. \item[(3)] If it is the early stage of training, fix the differentiable parameters learned in the previous step and optimize mapping vectors $\mathbf{I}_{r}^{H}, \mathbf{I}_{r}^{T}$ to minimize objective function. Go to step (2) if the model has not converged. \end{itemize} How do we optimize mapping vectors? A brute-force algorithm is to enumerate all possible values of mapping vectors. However, the time complexity of such a naive algorithm grows exponentially. Instead, we make several approximations to optimize the mapping vectors greedily. We define $J_{r,i}^{H}$ as an approximation to the cost $L$ when relation $r$ is mapped to concept $i$, i.e., $\textbf{I}_{r,i}^{H}=1$. We select the top $k$ concepts with smallest approximated cost when we optimize mapping vectors. The approximated cost takes the same hinge loss as the original cost function shown in Equation \ref{eq:hinge} with a concept specific energy function $f_{r,i}^{H}(h,t)$: \begin{equation*} J_{r,i}^{H}=\sum_{\substack{(h,r,t) \in P, \\ (h',r,t') \sim N(h,r,t)}}\max(\gamma + f_{r,i}^{H}(h,t) -f_{r,i}^{H}(h',t'), 0) \end{equation*} where the energy function is similar to the original function defined in Equation \ref{eq:energy function} except that relation $r$ is completely assigned with concept $i$: \begin{align*} f_{r,i}^{H}(h,t) &= \| \mathbf{D}_i \cdot \mathbf{h} + \mathbf{r} - \pmb{\alpha}_{r}^{T} \cdot \mathbf{D} \cdot \mathbf{t}\| \\ \end{align*} Similarly, the formulations for tail entities are defined as \begin{align*} f_{r,i}^{T}(h,t) &= \| \pmb{\alpha}_{r}^{H} \cdot \mathbf{D} \cdot \mathbf{h} + \mathbf{r} - \mathbf{D}_i \cdot \mathbf{t}\| \end{align*} $$J_{r,i}^{T}=\sum_{\substack{(h,r,t) \in P, \\ (h',r,t') \sim N(h,r,t)}}[\gamma + f_{r,i}^{T}(h,t) -f_{i}^{r,2}(h',t')]_+$$ The above process is a greedy algorithm. We make the following relaxations to ensure efficient and effective approximation: Firstly, concepts used to project head and tail entities are decoupled and selected independently. Secondly, $J_{r,i}^{H}$ and $J_{r,i}^{T}$ are evaluated on condition that concept $i$ is fully utilized, i.e., we ignore the interaction between concept $i$ and other concepts by setting attention $\pmb{\alpha}_{r,1,i}= 1$\footnote{The relaxation is made to reduce the computation complexity. 
Otherwise, to evaluate indicator vectors involving multiple matrices, we need to perform SGD to get the corresponding optimal values of $v_{r,1}$ and $v_{r,2}$. }. The greedy process works well empirically. We draw our inspiration from Expectation-Maximization (EM) ~\citep{dempster1977maximum} and LightRNN~\citep{LightRNN}. The above algorithm is similar to EM and LightRNN in the sense that some parameters can change rapidly based on the estimation of the corresponding cost. In other words, we are not changing and exploring the mapping vectors bit by bit but they can be reassigned with a completely different value in one step, leading to fast convergence. \fi \iffalse \begin{algorithm}[] Initialize $\mathbf{I}_{r}^{H}$,$\mathbf{I}_{r}^{T}$ randomly \\ \While {not convergence} { \For{\texttt{epoch} $= 1$ to $T_1$}{ Optimize $L$ by SGD on $\theta$ with $I_{r}^{H}, I_{r}^{T}$ fixed } $\texttt{tot\_epoch} = \texttt{tot\_epoch} + T_1$ \\ \If{$\texttt{tot\_epoch} \leq T_2$}{ { Find $\mathbf{I}_{r}^{(H)'}, \mathbf{I}_{r}^{(T)'}$ which approximately maximize $L$. \\ Set $\mathbf{I}_{r}^{H}=\mathbf{I}_{r}^{(H)'}, \mathbf{I}_{r}^{T}=\mathbf{I}_{r}^{(T)'}$. }} } \caption{Coordinate ascent optimization algorithm} \label{alg:opt} \end{algorithm} \fi \subsection{Corrupted Sample Generating Method} \label{sec:sampling} Recall that we need to sample a negative triple $(h',r,t')$ to compute the hinge loss shown in Eq.~\ref{eq:hinge}, given a positive triple $(h,r,t)\in P$. The distribution of negative triples is denoted by $N(h,r,t)$. Previous work~\citep{TransE, AAAI159571, yang-etal-2015,STransE} generally constructs a set of corrupted triples by replacing the head entity or tail entity with a random entity uniformly sampled from the KB. However, uniformly sampling corrupted entities may not be optimal. Often, the head and tail entities associated with a relation can only belong to a specific domain. When the corrupted entity comes from other domains, it is very easy for the model to induce a large energy gap between the true triple and the corrupted one. Once the energy gap exceeds $\gamma$, there is no training signal from this corrupted triple. In comparison, if the corrupted entity comes from the same domain, the task becomes harder for the model, leading to a more consistent training signal. Motivated by this observation, we propose to sample the corrupted head or tail from entities in the same domain with a probability $p_r$ and from the whole entity set with probability $1-p_r$. The choice of the relation-dependent probability $p_r$ is specified in Appendix \ref{sec:domain_sampling}. In the rest of the paper, we refer to the newly proposed sampling method as ``domain sampling''. \section{Experiments} \label{sec:experiments} \subsection{Setup} To evaluate link prediction, we conduct experiments on the WN18 (WordNet) and FB15k (Freebase) datasets introduced by \citet{TransE} and use the same training/validation/test split as in \citep{TransE}. Statistics of the two datasets are given in Table \ref{tab:datasets}. In the knowledge base completion task, we evaluate a model's performance at predicting the head entity or the tail entity given the relation and the other entity. For example, to predict the head given relation $r$ and tail $t$ in triple $(h,r,t)$, we compute the energy function $f_r(h', t)$ for each entity $h'$ in the knowledge base and rank all the entities according to the energy. We follow \citet{TransE} to report the \emph{filter} results, i.e., removing all other correct candidates $h'$ in ranking.
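A compact sketch of this filtered ranking protocol and the two reported metrics (our illustration, not the released evaluation script; scores are energies, so lower is better):
\begin{verbatim}
import numpy as np

def filtered_rank(energies, correct_id, other_true_ids):
    # energies: (num_entities,) energy of each candidate entity; lower = more plausible.
    order = np.argsort(energies)
    filtered = [e for e in order if e == correct_id or e not in other_true_ids]
    return filtered.index(correct_id) + 1

def mean_rank_and_hits10(ranks):
    ranks = np.asarray(ranks)
    return ranks.mean(), float((ranks <= 10).mean())
\end{verbatim}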
The rank of the correct entity is then obtained and we report the mean rank (mean of the predicted ranks) and Hits@10 (top $10$ accuracy). A lower mean rank or a higher Hits@10 indicates better performance. \subsection{Implementation Details} We initialize the projection matrices with identity matrices plus small noise sampled from the normal distribution $\mathcal{N}(0,\,0.005^2)$. The entity and relation vectors of ITransF are initialized by TransE~\citep{TransE}, following~\citet{AAAI159571, ji-EtAl:2015:ACL-IJCNLP, Garcia-DuranBUG15, garciaduran-bordes-usunier:2015:EMNLP, lin-EtAl:2015:EMNLP1}. We ran mini-batch SGD until convergence. We employ the {``\textit{Bernoulli}''} sampling method to generate incorrect triples as used in \citet{AAAI148531}, \citet{AAAI159571}, \citet{He:2015}, \citet{ji-EtAl:2015:ACL-IJCNLP} and \citet{lin-EtAl:2015:EMNLP1}. STransE~\citep{STransE} is the most similar knowledge embedding model to ours except that they use distinct projection matrices for each relation. We use the same hyperparameters as used in STransE and no significant improvement is observed when we alter hyperparameters. We set the margin $\gamma$ to $5$ and the embedding dimension $n$ to $50$ for WN18, and $\gamma = 1, n = 100$ for FB15k. We set the batch size to $20$ for WN18 and $1000$ for FB15k. The learning rate is $0.01$ on WN18 and $0.1$ on FB15k. We use $30$ matrices on WN18 and $300$ matrices on FB15k. All the models are implemented with Theano~\citep{bergstra2010theano}. The Softmax temperature is set to $1/4$. %\FIXME{T1, T2} \subsection{Results \& Analysis} The overall link prediction results\footnote{Note that although IRN~\citep{implicit} does not explicitly exploit path information, it performs multi-step inference through the multiple usages of external memory. When IRN is allowed to access memory once for each prediction, its Hits@10 is $80.7$, similar to models without path information.} are reported in Table \ref{tab:main}. Our model consistently outperforms previous models without external information on both metrics on WN18 and FB15k. On WN18, we even achieve a much better mean rank, with comparable Hits@10, than the current state-of-the-art model IRN, which employs external information. We can see that path information is very helpful on FB15k and models taking advantage of path information outperform intrinsic models by a significant margin. Indeed, a lot of facts are easier to recover with the help of multi-step inference. For example, if we know Barack Obama was born in Honolulu, a city in the United States, then we can easily infer that the nationality of Obama is the United States. %In our framework, since Obama+IsBornIn$\approx$Honolulu, Honolulu+IsCityIn$\approx$US. Then we can expect Obama+IsBornIn+IsCityIn$\approx$US. %Projection matrices are not exactly the same A straightforward way of extending our proposed model to a $k$-step path $P=\{r_i\}_{i=1}^{k}$ is to define a path energy function $\| \pmb{\alpha}_{P}^{H} \cdot \mathbf{D} \cdot \mathbf{h} + \sum_{r_i\in P}\mathbf{r}_i - \pmb{\alpha}_{P}^{T} \cdot \mathbf{D} \cdot \mathbf{t}\|_{\ell}$, where $\pmb{\alpha}_{P}^{H}$ is a concept association related to the path. We plan to extend our model to multi-step paths in the future. To provide a detailed understanding of why the proposed model achieves better performance, we present some further analysis in the sequel. \iffalse In many knowledge bases, a small number of relations enjoy the majority of data, while a lot of relations are rare and hard to deal with.
The statistics of relations on FB15k and WN18 are shown in Figure \ref{fig:stat}. We can clearly see that the distributions exhibit long tails, just like the Zipf's law for word frequency. \fi \paragraph{Performance on Rare Relations} In the proposed ITransF, we design an attention mechanism to encourage knowledge sharing across different relations. Naturally, facts associated with rare relations should benefit most from such sharing, boosting the overall performance. To verify this hypothesis, we investigate our model's performance on relations of different frequency. The overall distribution of relation frequencies resembles that of word frequencies, following Zipf's law. Since relation frequencies approximately follow a power-law distribution, their log frequencies are roughly linear in log rank. The statistics of relations on FB15k and WN18 are shown in Figure \ref{fig:stat}. We can clearly see that the distributions exhibit long tails, just like Zipf's law for word frequency. In order to study the performance on relations with different frequencies, we sort all relations by their frequency in the training set, and split them into 3 buckets so that each bucket covers a similar interval of log frequency. Within each bucket, we compare our model with STransE, as shown in Figure \ref{fig:rare}.\footnote{Domain sampling is not employed.} As we can see, on WN18, ITransF outperforms STransE by a significant margin on rare relations. In particular, in the last bin (rarest relations), the average Hits@10 increases from $74.4$ to $92.0$, showing the great benefits of transferring statistical strength from common relations to rare ones. The comparison on each relation is shown in Appendix \ref{sec:rare_WN}, where we can observe that the improvement is concentrated on rare relations. On FB15k, we can also observe a similar pattern, although the degree of improvement is less significant. We conjecture that the difference stems from the fact that many rare relations on FB15k have disjoint domains, which makes knowledge transfer through common concepts harder. \paragraph{Interpretability} In addition to the quantitative evidence supporting the effectiveness of knowledge sharing, we provide some intuitive examples to show how knowledge is shared in our model. As we mentioned earlier, the sparse attention vectors fully capture the association between relations and concepts and hence the knowledge transfer among relations. Thus, we visualize the attention vectors for several relations on both WN18 and FB15K in Figure \ref{fig:att}. For WN18, the words ``hyponym'' and ``hypernym'' refer to words with more specific or general meaning respectively. For example, PhD is a hyponym of student and student is a hypernym of PhD. As we can see, concepts associated with the head entities in one relation are also associated with the tail entities in its reverse relation. Further, ``instance\_hypernym'' is a special hypernym with the head entity being an instance, and the tail entity being an abstract notion. A typical example is $(\textit{New York}, \texttt{instance\_hypernym}, \textit{city})$. This connection has also been discovered by our model, indicated by the fact that ``instance\_hypernym(T)'' and ``hypernym(T)'' share a common concept matrix. Finally, for symmetric relations like ``similar\_to'', we see the head attention is identical to the tail attention, which matches our intuition well. On FB15k, we also see the sharing between reverse relations, as in ``(somebody) won\_award\_for (some work)'' and ``(some work) award\_winning\_work (somebody)''.
What's more, although the relations ``won\_award\_for'' and ``was\_nominated\_for'' share the same concepts, their attention distributions are different, suggesting distinct emphasis. Finally, symmetric relations like ``spouse'' behave similarly, as mentioned before. \paragraph{Model Compression} \label{sec:compress} A byproduct of the parameter-sharing mechanism employed by ITransF is a much more compact model with equal performance. Figure \ref{fig:num_of_matrix} plots the average performance of ITransF against the number of projection matrices $m$, together with two baseline models. On FB15k, when we reduce the number of matrices from $2200$ to $30$ ($\sim90\times$ compression), our model performance decreases by only $0.09\%$ on Hits@10, still outperforming STransE. Similarly, on WN18, ITransF continues to achieve the best performance when we reduce the number of concept projection matrices to $18$. \section{Analysis on Sparseness} Sparseness is desirable since it contributes to the interpretability and computational efficiency of our model. In this section, we investigate whether enforcing sparseness deteriorates model performance and compare our method with another sparse encoding method. \paragraph{Dense Attention w/o $\ell_1$ regularization} Although an $\ell_0$-constrained model usually enjoys many practical advantages, it may deteriorate model performance when applied improperly. Here, we show that our model with sparse attention achieves results similar to dense attention at a significantly lower computational cost. We also compare dense attention with $\ell_1$ regularization. We set the $\ell_1$ coefficient to $0.001$ in our experiments and do not apply Softmax, since the $\ell_1$ norm of a vector after Softmax is always $1$. We compare models in a setting where the computation time of the dense attention model is acceptable\footnote{With $300$ projection matrices, it takes $1h1m$ to run one epoch for a model with dense attention.}. We use $22$ weight matrices on WN18 and $15$ weight matrices on FB15k and train both models for $2000$ epochs. The results are reported in Table \ref{tab:dense}. Generally, ITransF with sparse attention has slightly better or comparable performance compared to dense attention. Further, we show the attention vectors of the model with $\ell_1$-regularized dense attention in Figure \ref{fig:att_l1}. We see that $\ell_1$ regularization does not produce sparse attention, especially on FB15k. \paragraph{Nonnegative Sparse Encoding} In the proposed model, we induce sparsity via a carefully designed iterative optimization procedure. Apart from this approach, one may utilize sparse encoding techniques to obtain sparseness based on the pretrained projection matrices from STransE. Concretely, stacking the $2|R|$ pretrained projection matrices into a 3-dimensional tensor $X\in \mathbb{R}^{2|R| \times n \times n}$, similar sparsity can be induced by solving an $\ell_1$-regularized tensor completion problem $\min_{\mathbf{A},\mathbf{D}} ||\mathbf{X}-\mathbf{DA}||_2^2 + \lambda \|\mathbf{A}\|_{\ell_1}$. Basically, $\mathbf{A}$ plays the same role as the attention vectors in our model. For more details, we refer readers to \citep{faruqui-EtAl:2015:ACL-IJCNLP}. For completeness, we compare our model with the aforementioned approach\footnote{We use the toolkit provided by \citep{faruqui-EtAl:2015:ACL-IJCNLP}.}. The comparison is summarized in Table \ref{tab:optimization}.
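For reference, a rough sketch of this baseline (ours): we use scikit-learn's SparseCoder as a stand-in for the nonnegative sparse coding toolkit of \citet{faruqui-EtAl:2015:ACL-IJCNLP}, with purely illustrative shapes and regularization strength, and without the nonnegativity constraint:
\begin{verbatim}
import numpy as np
from sklearn.decomposition import SparseCoder

n, m, num_rel_mats = 50, 18, 36            # illustrative sizes only (WN18-like)
X = np.random.randn(num_rel_mats, n, n)    # stand-in for the 2|R| pretrained STransE matrices
D = np.random.randn(m, n, n)               # stand-in for the basis (concept) matrices

# Flatten each n x n matrix into a row vector and solve an l1-regularized
# reconstruction, so that each row of A is a sparse code over the m basis matrices.
coder = SparseCoder(dictionary=D.reshape(m, -1),
                    transform_algorithm="lasso_lars", transform_alpha=0.1)
A = coder.transform(X.reshape(num_rel_mats, -1))   # (2|R|, m); plays the role of the attention vectors
\end{verbatim}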
On both benchmarks, ITransF achieves a significant improvement over sparse encoding on the pretrained model. This performance gap should be expected since the objective function of sparse encoding methods is to minimize the reconstruction loss rather than to optimize the criterion for link prediction. \iffalse To investigate whether our representation, we apply a non-negative sparse encoding method to obtain the sparse representation of projection matrices. We use the toolkit provided by \citep{faruqui-EtAl:2015:ACL-IJCNLP} and set the hyperparamters so that we obtain average degree of sparseness. We first train a STransE model which utilize separate projection matrices for different relations and stack all the projection matrices in a 3-Dimensional tensor $X\in \mathbb{R}^{2H\times n \times n}$ where $H$ is the number of relations. Then we minimize the following reconstruction loss \begin{equation} \begin{aligned} \min_{\mathbf{D},\mathbf{A}} ||\mathbf{X}-\mathbf{DA}||_2^2 + \lambda \Omega(\mathbf{A}) + \gamma ||\mathbf{D}||_2^2 \end{aligned} \end{equation} where $\mathbf{D}\in \mathbb{R}^{m \times n \times n}$ is the basis matrices and $\Omega$ is a regularizer to ensure sparseness representations. which utilize a $\ell_1$ regularizer. We \fi \section{Related Work} \label{sec:related_work} In KBC, CTransR~\citep{AAAI159571} enables relation embedding sharing across similar relations, but it clusters relations before training rather than learning the clustering in a principled way. Further, it does not solve the data sparsity problem because there is no sharing of projection matrices, which have a lot more parameters. Learning the association between semantic relations has been used in related problems such as relational similarity measurement~\citep{turney2012domain} and relation adaptation~\citep{bollegala2015embedding}. Data sparsity is a common problem in many fields. Transfer learning~\citep{pan2010survey} has shown promise for transferring knowledge and statistical strength across similar models or languages. For example, \citet{D16-1153} transfers models trained on resource-rich languages to low-resource languages by sharing parameters through common phonological features for named entity recognition. \citet{zoph-EtAl:2016:EMNLP2016} initialize from models trained on resource-rich languages to translate low-resource languages. Several works on obtaining sparse attention~\citep{martins2016softmax, makhzani2013k,OUTRAGEOUSLY} share a similar idea of sorting the values before softmax and only keeping the $K$ largest values. However, the sorting operation in these works is not GPU-friendly. The block iterative optimization algorithm in our work is inspired by LightRNN~\citep{LightRNN}. They allocate every word in the vocabulary in a table. A word is represented by a row vector and a column vector depending on its position in the table. They iteratively optimize the embeddings and the allocation of words in the table. \section{Conclusion and Future Work} In summary, we propose a knowledge embedding model which can discover shared hidden concepts, and design a learning algorithm to induce the interpretable sparse representation. Empirically, we show our model can improve the performance on two benchmark datasets without external resources, over all previous models of the same kind. In the future, we plan to enable ITransF to perform multi-step inference, and extend the sharing mechanism to entity and relation embeddings, further enhancing the statistical binding across parameters.
In addition, our framework can also be applied to multi-task learning, promoting a finer sharing among different tasks. \section*{Acknowledgments} We thank the anonymous reviewers and Graham Neubig for valuable comments. We thank Yulun Du, Paul Mitchell, Abhilasha Ravichander, Pengcheng Yin and Chunting Zhou for suggestions on the draft. We are also appreciative of the great working environment provided by the staff at LTI. This research was supported in part by DARPA grant FA8750-12-2-0342 funded under the DEFT program. \bibliographystyle{acl_natbib} \clearpage \appendix \section{Appendix} \subsection{Domain Sampling Probability} \label{sec:domain_sampling} In this section, we define the probability $p_r$ of generating a negative sample from the same domain, as mentioned in Section \ref{sec:sampling}. The probability cannot be too high, in order to avoid generating negative samples that are actually correct, since many facts are generally missing from KBs. %To avoid generating false negative samples, we apply domain sampling with a higher probability on relations with a sparse domain. Specifically, let $\mathrm{M}^H_r=\{h \mid \exists t (h,r,t) \in P\}$ and $\mathrm{M}^T_r=\{t \mid \exists h (h,r,t) \in P\}$ denote the head or tail domain of relation $r$. Suppose $N_r=\{(h,r,t) \in P\}$ is the induced set of edges with relation $r$. We define the probability $p_r$ as \begin{equation} p_r=\min\left(\frac{\lambda|\mathrm{M}^T_r| |\mathrm{M}^H_r|}{|N_r|}, 0.5\right) \label{eq:domain_sampling} \end{equation} Our motivation for this formulation is as follows: Suppose $O_r$ is the set that contains all truthful fact triples on relation $r$, i.e., all triples in the training set and all other missing correct triples. If we assume all fact triples within the domain have a uniform probability of being true, the probability of a random triple being correct is ${Pr((h,r,t)\in O_r \mid h\in \mathrm{M}^H_r, t \in \mathrm{M}^T_r) = \frac{|O_r|}{|\mathrm{M}^H_r||\mathrm{M}^T_r|}}$. Assume that each fact is missing with probability $1-\lambda$ (i.e., observed with probability $\lambda$); then $|N_r|=\lambda|O_r|$ and the above probability can be approximated by $ \frac{|N_r|}{\lambda|\mathrm{M}^H_r||\mathrm{M}^T_r|}$. We want the probability of generating a negative sample from the domain to be inversely proportional to the probability of the sample being true, so we define the probability as Eq. \ref{eq:domain_sampling}. The results in Section \ref{sec:experiments} are obtained with $\lambda$ set to $0.001$. We compare how different values of $\lambda$ influence our model's performance in Table~\ref{tab:domain_sampling}. With a larger $\lambda$, and hence a higher domain sampling probability, our model's Hits@10 increases, but the mean rank also increases. The rise in mean rank is due to the higher probability of generating a valid triple as a negative sample, which pushes up the energy of valid triples and thus the overall rank of the correct entity. However, as the table shows, the higher Hits@10 indicates improved reasoning capability. \subsection{Performance on individual relations of WN18} \label{sec:rare_WN} We plot the performance of ITransF and STransE on each relation. We see that the improvement is greater on rare relations.
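As a final illustration (ours, not part of the released implementation), the domain sampling probability of Appendix \ref{sec:domain_sampling} and the corresponding corrupted-tail sampler could be sketched as:
\begin{verbatim}
import random

def domain_sampling_prob(M_H_r, M_T_r, N_r, lam=0.001):
    # p_r = min(lam * |M_T_r| * |M_H_r| / |N_r|, 0.5), with lambda = 0.001 as reported.
    return min(lam * len(M_T_r) * len(M_H_r) / len(N_r), 0.5)

def corrupt_tail(h, r, t, M_T_r, all_entities, p_r):
    # Sample the corrupted tail from the relation's tail domain with probability p_r,
    # and from the whole entity set otherwise.
    pool = M_T_r if random.random() < p_r else all_entities
    return (h, r, random.choice(list(pool)))
\end{verbatim}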
\iffalse \subsection{Performance on different relations} \fi \end{document} \documentclass[11pt,a4paper]{article} \usepackage[hyperref]{acl2017} \newcommand{\FIXME}[1]{\textcolor{red}{[#1]}} \aclfinalcopy % Uncomment this line for the final sumathbfission \def\aclpaperid{79} % Enter the acl Paper ID here \newcommand\BibTeX{B{\sc ib}\TeX} \title{An Interpretable Knowledge Transfer Model \\ for Knowledge Base Completion} \author{Qizhe Xie, Xuezhe Ma, Zihang Dai, Eduard Hovy \\ Language Technologies Institute \\ Carnegie Mellon University \\ Pittsburgh, PA 15213, USA\\ {\tt \{qzxie, xuezhem, dzihang, hovy\}@cs.cmu.edu} \\} \date{} \begin{document} \maketitle \begin{abstract} Knowledge bases are important resources for a variety of natural language processing tasks but suffer from incompleteness. We propose a novel embedding model, \emph{ITransF}, to perform knowledge base completion. Equipped with a sparse attention mechanism, ITransF discovers hidden concepts of relations and transfer statistical strength through the sharing of concepts. Moreover, the learned associations between relations and concepts, which are represented by sparse attention vectors, can be interpreted easily. We evaluate ITransF on two benchmark datasets---WN18 and FB15k for knowledge base completion and obtains improvements on both the mean rank and Hits@10 metrics, over all baselines that do not use additional information. \end{abstract} \section{Introduction} Knowledge bases (KB), such as WordNet~\citep{FellbaumC98}, Freebase~\citep{Bollacker:2008}, YAGO ~\citep{Suchanek:2007} and DBpedia~\citep{LehmannIJJKMHMK15}, are useful resources for many applications such as question answering~\citep{berant-EtAl:2013:EMNLP,yih-EtAl:2015:ACL-IJCNLP,dai-li-xu:2016:P16-1} and information extraction~\citep{mintz-EtAl:2009:ACLIJCNLP}. However, knowledge bases suffer from incompleteness despite their formidable sizes ~\citep{NIPS2013_5028,West:2014:KBC:2566486.2568032}, leading to a number of studies on automatic knowledge base completion (KBC)~\citep{NickelMTG15} or link prediction. The fundamental motivation behind these studies is that there exist some statistical regularities under the intertwined facts stored in the multi-relational knowledge base. By discovering generalizable regularities in known facts, missing ones may be recovered in a faithful way. Due to its excellent generalization capability, distributed representations, a.k.a. embeddings, have been popularized to address the KBC task~\citep{ICML2011Nickel_438,bordes:2011,Bordes2014SME,TransE,NIPS2013_5028,AAAI148531,guu-miller-liang:2015:EMNLP,STransE}. As a seminal work, \citet{TransE} proposes the TransE, which models the statistical regularities with linear translations between entity embeddings operated by a relation embedding. Implicitly, TransE assumes both entity embeddings and relation embeddings dwell in the same vector space, posing an unnecessarily strong prior. To relax this requirement, a variety of models first project the entity embeddings to a relation-dependent space~\citep{Bordes2014SME,ji-EtAl:2015:ACL-IJCNLP,AAAI159571,STransE}, and then model the translation property in the projected space. Typically, these relation-dependent spaces are characterized by the projection matrices unique to each relation. As a benefit, different aspects of the same entity can be temporarily emphasized or depressed as an effect of the projection. 
For instance, STransE~\citep{STransE} utilizes two projection matrices per relation, one for the head entity and the other for the tail entity. \iffalse Of these studies, a number of neural network based techniques have emerged over the years to address the KBC task, among which embedding based models~\citep{ICML2011Nickel_438,bordes:2011,Bordes2014SME,TransE,NIPS2013_5028,AAAI148531,guu-miller-liang:2015:EMNLP,STransE} have stood out for its simplicity and effectiveness. \citet{TransE} proposed the TransE model that associated entities and relations with dense embedding vectors. To better model different aspects of the same entity, a variety of models map the entity embedding to a relation-dependent space~\citep{Bordes2014SME,ji-EtAl:2015:ACL-IJCNLP,AAAI159571,STransE}. For instance, STransE~\citep{STransE} projected the head entity and tail entity to a relation-dependent space by multiplying two relation-specific projection matrices. \fi Despite the superior performance of STransE compared to TransE, it is more prone to the data sparsity problem. Concretely, since the projection spaces are unique to each relation, projection matrices associated with rare relations can only be exposed to very few facts during training, resulting in poor generalization. For common relations, a similar issue exists. Without any restrictions on the number of projection matrices, logically related or conceptually similar relations may have distinct projection spaces, hindering the discovery, sharing, and generalization of statistical regularities. Previously, a line of research makes use of external information such as textual relations from web-scale corpus or node features~\citep{toutanova-EtAl:2015:EMNLP, toutanova-chen:2015:CVSC, nguyen2016neighborhood}, alleviating the sparsity problem. In parallel, recent work has proposed to model regularities beyond local facts by considering multi-relation paths~\citep{garciaduran-bordes-usunier:2015:EMNLP, lin-EtAl:2015:EMNLP1, implicit}. Since the number of paths grows exponentially with its length, as a side effect, path-based models enjoy much more training cases, suffering less from the problem. \iffalse However, for some relations, in practice, there are not enough data to estimate the projection matrices.%(repretition) due to the data sparsity problem in knowledge bases. This led to a vast amount of research on utilizing external information, such as textual relations from web-scale corpus~\citep{toutanova-EtAl:2015:EMNLP, toutanova-chen:2015:CVSC} and relation path~\citep{garciaduran-bordes-usunier:2015:EMNLP, lin-EtAl:2015:EMNLP1, implicit}, to enhance performance. %\FIXME{The performance decrease was not caused by data sparsity?} Unfortunately, such task-specific knowledge is costly to develop, making these models difficult to adapt to new tasks or new domains. \fi In this paper, we propose an interpretable knowledge transfer model (ITransF), which encourages the sharing of statistic regularities between the projection matrices of relations and alleviates the data sparsity problem. At the core of ITransF is a sparse attention mechanism, which learns to compose shared concept matrices into relation-specific projection matrices, leading to a better generalization property. Without any external resources, ITransF improves mean rank and Hits@10 on two benchmark datasets, over all previous approaches of the same kind. 
In addition, the parameter sharing is clearly indicated by the learned sparse attention vectors, enabling us to interpret how knowledge transfer is carried out. To induce the desired sparsity during optimization, we further introduce a block iterative optimization algorithm. In summary, the contributions of this work are: (i) proposing a novel knowledge embedding model which enables knowledge transfer by learning to discover shared regularities; (ii) introducing a learning algorithm to directly optimize a sparse representation from which the knowledge transferring procedure is interpretable; (iii) showing the effectiveness of our model by outperforming baselines on two benchmark datasets for knowledge base completion task. \section{Notation and Previous Models} Let $E$ denote the set of entities and $R$ denote the set of relations. In knowledge base completion, given a training set $P$ of triples $(h, r, t)$ where $h,t\in E$ are the head and tail entities having a relation $r\in R$, e.g., (\textit{Steve Jobs}, \texttt{FounderOf}, \textit{Apple}), we want to predict missing facts such as (\textit{Steve Jobs}, \texttt{Profession}, \textit{Businessperson}). Most of the embedding models for knowledge base completion define an energy function $f_r(h,t)$ according to the fact's plausibility~\citep{bordes:2011,Bordes2014SME,TransE,NIPS2013_5028,AAAI148531,yang-etal-2015, guu-miller-liang:2015:EMNLP,STransE}. The models are learned to minimize energy $f_r(h,t)$ of a plausible triple $(h,r,t)$ and to maximize energy $f_r(h',t')$ of an implausible triple $(h',r,t')$. Motivated by the linear translation phenomenon observed in well trained word embeddings~\citep{mikolov2013distributed}, TransE~\citep{TransE} represents the head entity $h$, the relation $r$ and the tail entity $t$ with vectors $\mathbf{h}, \mathbf{r}$ and $\mathbf{t} \in \mathbb{R}^{n}$ respectively, which were trained so that $\mathbf{h}+\mathbf{r}\approx \mathbf{t}$. They define the energy function as $$f_r(h,t) = \| \mathbf{h} + \mathbf{r} - \mathbf{t} \|_{\ell}$$ where $\ell=1$ or $2$, which means either the $\ell_1$ or the $\ell_2$ norm of the vector $\mathbf{h} + \mathbf{r} - \mathbf{t}$ will be used depending on the performance on the validation set. To better model relation-specific aspects of the same entity, TransR~\citep{AAAI159571} uses projection matrices and projects the head entity and the tail entity to a relation-dependent space. STransE~\citep{STransE} extends TransR by employing different matrices for mapping the head and the tail entity. The energy function is $$f_r(h,t) = \|\mathbf{W}_{r,1}\mathbf{h} + \mathbf{r} - \mathbf{W}_{r,2}\mathbf{t} \|_{\ell}$$ However, not all relations have abundant data to estimate the relation specific matrices as most of the training samples are associated with only a few relations, leading to the data sparsity problem for rare relations. \section{Interpretable Knowledge Transfer} \subsection{Model} As discussed above, a fundamental weakness in TransR and STransE is that they equip each relation with a set of unique projection matrices, which not only introduces more parameters but also hinders knowledge sharing. Intuitively, many relations share some concepts with each other, although they are stored as independent symbols in KB. For example, the relation ``(somebody) won award for (some work)'' and ``(somebody) was nominated for (some work)'' both describe a person's high-quality work which wins an award or a nomination respectively. 
This phenomenon suggests that one relation actually represents a collection of real-world concepts, and one concept can be shared by several relations. Inspired by the existence of such lower-level concepts, instead of defining a unique set of projection matrices for every relation, we can alternatively define a small set of concept projection matrices and then compose them into customized projection matrices. Effectively, the relation-dependent translation space is then reduced to the smaller concept spaces. However, in general, we do not have prior knowledge about what concepts exist out there and how they are composed to form relations. Therefore, in ITransF, we propose to learn this information simultaneously from data, together with all knowledge embeddings. Following this idea, we first present the model details, then discuss the optimization techniques for training. \paragraph{Energy function} Specifically, we stack all the concept projection matrices to a 3-dimensional tensor $\mathbf{D}\in \mathbb{R}^{m \times n \times n}$, where $m$ is the pre-specified number of concept projection matrices and $n$ is the dimensionality of entity embeddings and relation embeddings. We let each relation select the most useful projection matrices from the tensor, where the selection is represented by an attention vector. The energy function of ITransF is defined as: \begin{equation} f_r(h,t) = \| \pmb{\alpha}_{r}^{H} \cdot \mathbf{D} \cdot \mathbf{h} + \mathbf{r} - \pmb{\alpha}_{r}^{T} \cdot \mathbf{D} \cdot \mathbf{t}\|_{\ell} \label{eq:energy function} \end{equation} where $\pmb{\alpha}_{r}^{H},\pmb{\alpha}_{r}^{T} \in [0,1]^m$, satisfying $\sum_i\pmb{\alpha}_{r,i}^{H}=\sum_i\pmb{\alpha}_{r,i}^{T}=1$, are normalized attention vectors used to compose all concept projection matrices in $\mathbf{D}$ by a convex combination. It is obvious that STransE can be expressed as a special case of our model when we use $m=2|R|$ concept matrices and set attention vectors to disjoint one-hot vectors. Hence our model space is a generalization of STransE. Note that we can safely use fewer concept matrices in ITransF and obtain better performance (see section \ref{sec:compress}), though STransE always requires $2|R|$ projection matrices. We follow previous work to minimize the following hinge loss function: \begin{equation} \mathcal{L}=\sum_{\substack{(h,r,t) \sim P,\\ (h',r,t') \sim N}} \left[ \gamma + f_{r}(h,t) -f_{r}(h',t') \right]_+ \label{eq:hinge} \end{equation} where $P$ is the training set consisting of correct triples, $N$ is the distribution of corrupted triples defined in section \ref{sec:sampling}, and $[\cdot]_+ = \max(\cdot, 0)$. Note that we have omitted the dependence of $N$ on $(h,r,t)$ to avoid clutter. We normalize the entity vectors $\mathbf{h},\mathbf{t}$, and the projected entity vectors $\pmb{\alpha}_{r}^{H} \cdot \mathbf{D}\cdot \mathbf{h}$ and $\pmb{\alpha}_{r}^{T} \cdot \mathbf{D} \cdot \mathbf{t}$ to have unit length after each update, which is an effective regularization method that benefits all models. \paragraph{Sparse attention vectors} In Eq.~\eqref{eq:energy function}, we have defined $\pmb{\alpha}_{r}^{H},\pmb{\alpha}_{r}^{T}$ to be some normalized vectors used for composition. With a dense attention vector, it is computationally expensive to perform the convex combination of $m$ matrices in each iteration. Moreover, a relation usually does not consist of all existing concepts in practice. 
Furthermore, when the attention vectors are sparse, it is often easier to interpret their behavior and understand how concepts are shared by different relations. Motivated by these potential benefits, we further hope to learn sparse attention vectors in ITransF. However, directly posing $\ell_1$ regularization~\citep{tibshirani1996regression} on the attention vectors fails to produce sparse representations in our preliminary experiments, which motivates us to enforce $\ell_0$ constraints on $\pmb{\alpha}_{r}^T,\pmb{\alpha}_{r}^H$. In order to satisfy both the normalization condition and the $\ell_0$ constraints, we reparameterize the attention vectors in the following way: \begin{align*} \pmb{\alpha}_{r}^{H}&=\mathrm{SparseSoftmax}(\mathbf{v}_{r}^{H}, \mathbf{I}_{r}^{H}) \\ \pmb{\alpha}_{r}^{T}&=\mathrm{SparseSoftmax}(\mathbf{v}_{r}^{T}, \mathbf{I}_{r}^{T}) \end{align*} where $\mathbf{v}_{r}^{H}, \mathbf{v}_{r}^{T} \in \mathbb{R}^m$ are the pre-softmax scores, $\mathbf{I}_{r}^{H}, \mathbf{I}_{r}^{T}\in \{0,1\}^{m}$ are the sparse assignment vectors, indicating the non-zero entries of the attention vectors, and $\mathrm{SparseSoftmax}$ is defined as \begin{equation*} \mathrm{SparseSoftmax}(\mathbf{v}, \mathbf{I})_i=\frac{\exp(\mathbf{v}_i / \tau) \mathbf{I}_i}{\sum_j \exp(\mathbf{v}_j / \tau) \mathbf{I}_j} \end{equation*} with $\tau$ being the temperature of the Softmax. With this reparameterization, $\mathbf{v}_{r}^{H}, \mathbf{v}_{r}^{T}$ and $\mathbf{I}_{r}^{H}, \mathbf{I}_{r}^{T}$ replace $\pmb{\alpha}_{r}^T,\pmb{\alpha}_{r}^H$ to become the real parameters of the model. Also, note that posing the $\ell_0$ constraints on $\mathbf{I}_{r}^{H}, \mathbf{I}_{r}^{T}$ instead of $\pmb{\alpha}_{r}^T,\pmb{\alpha}_{r}^H$ is equivalent. Putting these modifications together, we can rewrite the optimization problem as \begin{equation} \begin{aligned} & {\text{minimize}} & & \mathcal{L} \\ & \text{subject to} & & \|\mathbf{I}_{r}^{H}\|_{0} \leq k,\|\mathbf{I}_{r}^{T}\|_{0} \leq k \end{aligned} \label{eq:l0_problem} \end{equation} where $\mathcal{L}$ is the loss function defined in Eq.~\eqref{eq:hinge}. \subsection{Block Iterative Optimization} Though sparseness is favorable in practice, it is generally NP-hard to find the optimal solution under $\ell_0$ constraints. Thus, we resort to an approximate algorithm in this work. For convenience, we refer to the parameters with and without the sparse constraints as the \textit{sparse} partition and the \textit{dense} partition, respectively. Based on this notion, the high-level idea of the approximate algorithm is to iteratively optimize one of the two partitions while holding the other one fixed. Since all parameters in the dense partition, including the embeddings, the projection matrices, and the pre-softmax scores, are fully differentiable with the sparse partition fixed, we can simply utilize SGD to optimize the dense partition. Then, the core difficulty lies in the step of optimizing the sparse partition (i.e., the sparse assignment vectors), during which we want the following two properties to hold: \begin{enumerate}[itemsep=-1mm] \item the sparsity required by the $\ell_0$ constraint is maintained, and \item the cost defined by Eq.~\eqref{eq:hinge} is decreased. \end{enumerate} Satisfying these two criteria seems to closely resemble the original problem defined in Eq.~\eqref{eq:l0_problem}. However, the dramatic difference here is that with parameters in the dense partition regarded as constant, the cost function is decoupled w.r.t. each relation $r$.
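As a brief aside, the $\mathrm{SparseSoftmax}$ reparameterization above can be sketched in a few lines. The variable names, the toy mask and the use of NumPy are illustrative assumptions rather than the authors' code; the temperature $\tau=1/4$ matches the value reported later in the experiments.
\begin{verbatim}
# Illustrative sketch of SparseSoftmax: a temperature softmax restricted to
# the entries switched on by the binary assignment vector I; all other
# attention entries are exactly zero. Toy values are assumptions.
import numpy as np

def sparse_softmax(v, I, tau=0.25):
    # v: pre-softmax scores, shape (m,); I: binary mask in {0,1}^m.
    scores = np.exp(v / tau) * I
    return scores / scores.sum()

m, k = 30, 3
rng = np.random.RandomState(0)
v_H = rng.randn(m)                                 # pre-softmax scores (head side)
I_H = np.zeros(m)
I_H[rng.choice(m, size=k, replace=False)] = 1.0    # ||I_H||_0 = k

a_H = sparse_softmax(v_H, I_H)
assert np.isclose(a_H.sum(), 1.0) and (a_H > 0).sum() == k
\end{verbatim}
We now return to the decoupling of the cost function across relations.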
In other words, the optimal choice of $\mathbf{I}_{r}^{H}, \mathbf{I}_{r}^{T}$ is independent of $\mathbf{I}_{r'}^{H}, \mathbf{I}_{r'}^{T}$ for any $r' \neq r$. Therefore, we only need to consider the optimization for a single relation $r$, which is essentially an assignment problem. Note, however, that $\mathbf{I}_{r}^{H}$ and $\mathbf{I}_{r}^{T}$ are still coupled; without this coupling, we would essentially be facing a knapsack problem. In principle, one can explore combinatorial optimization techniques to optimize $\mathbf{I}_{r}^{H}, \mathbf{I}_{r}^{T}$ jointly, which usually involve some iterative procedure. To avoid adding another inner loop to our algorithm, we turn to a simple but fast approximation method based on the following single-matrix cost. Specifically, for each relation $r$, we consider the induced cost $\mathcal{L}_{r,i}^{H}$ where only a single projection matrix $i$ is used for the head entity: \begin{equation*} \mathcal{L}_{r,i}^{H} = \sum_{\substack{(h,r,t) \sim P_r,\\ (h',r,t') \sim N_r}} \left[ \gamma + f_{r,i}^{H}(h,t) - f_{r,i}^{H}(h',t') \right]_+ \end{equation*} where $f_{r,i}^{H}(h,t) = \| \mathbf{D}_i \cdot \mathbf{h} + \mathbf{r} - \pmb{\alpha}_{r}^{T} \cdot \mathbf{D} \cdot \mathbf{t}\|$ is the corresponding energy function, and the subscript in $P_r$ and $N_r$ denotes the subsets with relation $r$. Intuitively, $\mathcal{L}_{r,i}^{H}$ measures how implausible $\mathbf{D}_i$ would be if, given the current tail attention vector $\pmb{\alpha}_{r}^{T}$, only one projection matrix could be chosen for the head entity. Hence, $i^* = \arg\min_i \mathcal{L}_{r,i}^{H}$ gives us the best single projection matrix on the head side given $\pmb{\alpha}_{r}^{T}$. Now, in order to choose the best $k$ matrices, we basically ignore the interaction among projection matrices, and update $\mathbf{I}_{r}^{H}$ in the following way: \begin{equation*} \mathbf{I}_{r,i}^{H} \leftarrow \begin{cases} 1, &i \in \mathrm{argpartition}_{i}(\mathcal{L}_{r,i}^{H}, k) \\ 0, &\text{otherwise} \end{cases}%, \, \forall i \end{equation*} where the function $\mathrm{argpartition}_{i}(x_i, k)$ produces the index set of the lowest-$k$ values of $x_i$. Analogously, we can define the single-matrix cost $\mathcal{L}_{r,i}^{T}$ and the energy function $f_{r,i}^{T}(h,t)$ on the tail side in a symmetric way. Then, the update rule for $\mathbf{I}_{r}^{T}$ follows the same derivation. Admittedly, the approximation described here is relatively crude. But as we will show in section \ref{sec:experiments}, the proposed algorithm yields good performance empirically. We leave the further improvement of the optimization method as future work. \iffalse Though sparseness is favorable in practice, even in linear regression, it has been shown to be an NP-hard problem to find the optimal solutions under $\ell_0$ constraints. %A lot of algorithms such as Approximated algorithms such as forward stepwise algorithm are proposed. Here we propose an approximated algorithm to solve it. We divide all of our parameters into two partitions: differentiable and non-differentiable, and we iteratively optimize those two. Differentiable parameters such as embeddings, projection matrices are optimized by SGD. Non-differentiable parameters are optimized by a greedy approximated process, aiming to minimize the cost function. Recall that we want the number of concepts associated with relations to be less than or equal to $k$, indicated by the $\ell_0$ constraint of attention vectors.
We represent the mapping between relation $r$ and concepts by two indicator vectors $\mathbf{I}_{r}^{H}, \mathbf{I}_{r}^{T}\in \{-\infty ,1\}^{m}$, the indicator of non-zero entries in attention vectors. The $\ell_0$ constraint is guaranteed as the number of $1$s in $\mathbf{I}_{r}^{H},\mathbf{I}_{r}^{T}$ is less than or equal to $k$. Those mapping vectors constitutes the non-differentiable parameters in our framework. Formally, the attention vectors are calculated as follows: $$\pmb{\alpha}_{r}^{H}=\mathrm{Softmax}(\mathbf{v}_{r}^{H} \circ \mathbf{I}_{r}^{H})$$ $$\pmb{\alpha}_{r}^{T}=\mathrm{Softmax}(\mathbf{v}_{r}^{T} \circ \mathbf{I}_{r}^{T})$$ \begin{equation*} \mathrm{Softmax}(\mathbf{x})_i=\frac{\exp(\mathbf{x}_i / \tau)}{\sum_j \exp(\mathbf{x}_j / \tau)} \end{equation*} where $\mathbf{v}_{r}^{H}, \mathbf{v}_{r}^{T} \in (0, \infty)^m$ are the parameters for attention, $\circ$ is element wise multiplication, $\tau$ is the temperature of softmax ($\tau$ is set to $1/4$ in our experiments). Then there are only $k$ non-zero elements in $\pmb{\alpha}_{r}^{H}$ and $\pmb{\alpha}_{r}^{T}$ since $exp(-\infty)=0$. The algorithm is: \begin{itemize} \item[(1)] Randomly initialize mapping vectors $\mathbf{I}_{r}^{H}, \mathbf{I}_{r}^{T}$, which is not differentiable. \item[(2)] Optimize differentiable parameters by SGD with fixed mapping vectors for several epochs. \item[(3)] If it is the early stage of training, fix the differentiable parameters learned in the previous step and optimize mapping vectors $\mathbf{I}_{r}^{H}, \mathbf{I}_{r}^{T}$ to minimize objective function. Go to step (2) if the model has not converged. \end{itemize} How do we optimize mapping vectors? A brute-force algorithm is to enumerate all possible values of mapping vectors. However, the time complexity of such a naive algorithm grows exponentially. Instead, we make several approximations to optimize the mapping vectors greedily. We define $J_{r,i}^{H}$ as an approximation to the cost $L$ when relation $r$ is mapped to concept $i$, i.e., $\textbf{I}_{r,i}^{H}=1$. We select the top $k$ concepts with smallest approximated cost when we optimize mapping vectors. The approximated cost takes the same hinge loss as the original cost function shown in Equation \ref{eq:hinge} with a concept specific energy function $f_{r,i}^{H}(h,t)$: \begin{equation*} J_{r,i}^{H}=\sum_{\substack{(h,r,t) \in P, \\ (h',r,t') \sim N(h,r,t)}}\max(\gamma + f_{r,i}^{H}(h,t) -f_{r,i}^{H}(h',t'), 0) \end{equation*} where the energy function is similar to the original function defined in Equation \ref{eq:energy function} except that relation $r$ is completely assigned with concept $i$: \begin{align*} f_{r,i}^{H}(h,t) &= \| \mathbf{D}_i \cdot \mathbf{h} + \mathbf{r} - \pmb{\alpha}_{r}^{T} \cdot \mathbf{D} \cdot \mathbf{t}\| \\ \end{align*} Similarly, the formulations for tail entities are defined as \begin{align*} f_{r,i}^{T}(h,t) &= \| \pmb{\alpha}_{r}^{H} \cdot \mathbf{D} \cdot \mathbf{h} + \mathbf{r} - \mathbf{D}_i \cdot \mathbf{t}\| \end{align*} $$J_{r,i}^{T}=\sum_{\substack{(h,r,t) \in P, \\ (h',r,t') \sim N(h,r,t)}}[\gamma + f_{r,i}^{T}(h,t) -f_{i}^{r,2}(h',t')]_+$$ The above process is a greedy algorithm. We make the following relaxations to ensure efficient and effective approximation: Firstly, concepts used to project head and tail entities are decoupled and selected independently. 
Secondly, $J_{r,i}^{H}$ and $J_{r,i}^{T}$ are evaluated on condition that concept $i$ is fully utilized, i.e., we ignore the interaction between concept $i$ and other concepts by setting attention $\pmb{\alpha}_{r,1,i}= 1$\footnote{The relaxation is made to reduce the computation complexity. Otherwise, to evaluate indicator vectors involving multiple matrices, we need to perform SGD to get the corresponding optimal values of $v_{r,1}$ and $v_{r,2}$. }. The greedy process works well empirically. We draw our inspiration from Expectation-Maximization (EM) ~\citep{dempster1977maximum} and LightRNN~\citep{LightRNN}. The above algorithm is similar to EM and LightRNN in the sense that some parameters can change rapidly based on the estimation of the corresponding cost. In other words, we are not changing and exploring the mapping vectors bit by bit but they can be reassigned with a completely different value in one step, leading to fast convergence. \fi \iffalse \begin{algorithm}[] Initialize $\mathbf{I}_{r}^{H}$,$\mathbf{I}_{r}^{T}$ randomly \\ \While {not convergence} { \For{\texttt{epoch} $= 1$ to $T_1$}{ Optimize $L$ by SGD on $\theta$ with $I_{r}^{H}, I_{r}^{T}$ fixed } $\texttt{tot\_epoch} = \texttt{tot\_epoch} + T_1$ \\ \If{$\texttt{tot\_epoch} \leq T_2$}{ { Find $\mathbf{I}_{r}^{(H)'}, \mathbf{I}_{r}^{(T)'}$ which approximately maximize $L$. \\ Set $\mathbf{I}_{r}^{H}=\mathbf{I}_{r}^{(H)'}, \mathbf{I}_{r}^{T}=\mathbf{I}_{r}^{(T)'}$. }} } \caption{Coordinate ascent optimization algorithm} \label{alg:opt} \end{algorithm} \fi \subsection{Corrupted Sample Generating Method} \label{sec:sampling} Recall that we need to sample a negative triple $(h',r,t')$ to compute hinge loss shown in Eq.~\ref{eq:hinge}, given a positive triple $(h,r,t)\in P$. The distribution of negative triple is denoted by $N(h,r,t)$. Previous work~\citep{TransE, AAAI159571, yang-etal-2015,STransE} generally constructs a set of corrupted triples by replacing the head entity or tail entity with a random entity uniformly sampled from the KB. However, uniformly sampling corrupted entities may not be optimal. Often, the head and tail entities associated a relation can only belong to a specific domain. When the corrupted entity comes from other domains, it is very easy for the model to induce a large energy gap between true triple and corrupted one. As the energy gap exceeds $\gamma$, there will be no training signal from this corrupted triple. In comparison, if the corrupted entity comes from the same domain, the task becomes harder for the model, leading to more consistent training signal. Motivated by this observation, we propose to sample corrupted head or tail from entities in the same domain with a probability $p_r$ and from the whole entity set with probability $1-p_r$. The choice of relation-dependent probability $p_r$ is specified in Appendix \ref{sec:domain_sampling}. In the rest of the paper, we refer to the new proposed sampling method as "domain sampling". \section{Experiments} \label{sec:experiments} \subsection{Setup} To evaluate link prediction, we conduct experiments on the WN18 (WordNet) and FB15k (Freebase) introduced by \citet{TransE} and use the same training/validation/test split as in \citep{TransE}. The information of the two datasets is given in Table \ref{tab:datasets}. In knowledge base completion task, we evaluate model's performance of predicting the head entity or the tail entity given the relation and the other entity. 
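Before turning to the evaluation protocol in detail, the sparse-partition update of the block iterative optimization described earlier can be sketched as follows. The single-matrix costs $\mathcal{L}_{r,i}^{H}$ are assumed to be precomputed (e.g., by evaluating the hinge loss of relation $r$ with each candidate matrix), and the toy numbers are purely illustrative; this is not the authors' implementation.
\begin{verbatim}
# Schematic sketch of the sparse-partition update from the "Block Iterative
# Optimization" subsection: for one relation r, score every concept matrix
# with the single-matrix cost L_{r,i}^H and keep the k lowest-cost ones.
import numpy as np

def update_assignment(single_matrix_costs, k):
    # single_matrix_costs: array of shape (m,), entry i = L_{r,i}^H.
    m = len(single_matrix_costs)
    I = np.zeros(m)
    best = np.argpartition(single_matrix_costs, k)[:k]   # k lowest costs
    I[best] = 1.0
    return I

costs = np.array([2.3, 0.4, 1.7, 0.9, 3.1])   # toy L_{r,i}^H values
print(update_assignment(costs, k=2))           # -> mask selecting indices 1 and 3
\end{verbatim}
The evaluation protocol is described next.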
For example, to predict head given relation $r$ and tail $t$ in triple $(h,r,t)$, we compute the energy function $f_r(h', t)$ for each entity $h'$ in the knowledge base and rank all the entities according to the energy. We follow \citet{TransE} to report the \emph{filter} results, i.e., removing all other correct candidates $h'$ in ranking. The rank of the correct entity is then obtained and we report the mean rank (mean of the predicted ranks) and Hits@10 (top $10$ accuracy). Lower mean rank or higher Hits@10 mean better performance. \subsection{Implementation Details} We initialize the projection matrices with identity matrices added with a small noise sampled from normal distribution $\mathcal{N}(0,\,0.005^2)$. The entity and relation vectors of ITransF are initialized by TransE~\citep{TransE}, following~\citet{AAAI159571, ji-EtAl:2015:ACL-IJCNLP, Garcia-DuranBUG15, garciaduran-bordes-usunier:2015:EMNLP, lin-EtAl:2015:EMNLP1}. We ran mini-batch SGD until convergence. We employ the {``\textit{Bernoulli}''} sampling method to generate incorrect triples as used in \citet{AAAI148531}, \citet{AAAI159571}, \citet{He:2015}, \citet{ji-EtAl:2015:ACL-IJCNLP} and \citet{lin-EtAl:2015:EMNLP1}. STransE~\citep{STransE} is the most similar knowledge embedding model to ours except that they use distinct projection matrices for each relation. We use the same hyperparameters as used in STransE and no significant improvement is observed when we alter hyperparameters. We set the margin $\gamma$ to $5$ and dimension of embedding $n$ to $50$ for WN18, and $\gamma = 1, n = 100$ for FB15k. We set the batch size to $20$ for WN18 and $1000$ for FB15k. The learning rate is $0.01$ on WN18 and $0.1$ on FB15k. We use $30$ matrices on WN18 and $300$ matrices on FB15k. All the models are implemented with Theano~\citep{bergstra2010theano}. The Softmax temperature is set to $1/4$. %\FIXME{T1, T2} \subsection{Results \& Analysis} The overall link prediction results\footnote{Note that although IRN~\citep{implicit} does not explicitly exploit path information, it performs multi-step inference through the multiple usages of external memory. When IRN is allowed to access memory once for each prediction, its Hits@10 is $80.7$, similar to models without path information.} are reported in Table \ref{tab:main}. Our model consistently outperforms previous models without external information on both the metrics of WN18 and FB15k. On WN18, we even achieve a much better mean rank with comparable Hits@10 than current state-of-the-art model IRN employing external information. We can see that path information is very helpful on FB15k and models taking advantage of path information outperform intrinsic models by a significant margin. Indeed, a lot of facts are easier to recover with the help of multi-step inference. For example, if we know Barack Obama is born in Honolulu, a city in the United States, then we easily know the nationality of Obama is the United States. %In our framework, since Obama+IsBornIn$\approx$Honolulu, Honolulu+IsCityIn$\approx$US. Then we can expect Obama+IsBornIn+IsCityIn$\approx$US. %Projection matrices are not exactly the same An straightforward way of extending our proposed model to $k$-step path $P=\{r_i\}_{i=1}^{k}$ is to define a path energy function $\| \pmb{\alpha}_{P}^{H} \cdot \mathbf{D} \cdot \mathbf{h} + \sum_{r_i\in P}\mathbf{r}_i - \pmb{\alpha}_{P}^{T} \cdot \mathbf{D} \cdot \mathbf{t}\|_{\ell}$, $\pmb{\alpha}_{P}^{H}$ is a concept association related to the path. 
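Since the paper only sketches this multi-step extension, the following illustrative snippet merely spells out how such a path energy could be computed, reusing the attention-composed projections from the single-relation case; the function and argument names are assumptions.
\begin{verbatim}
# Illustrative sketch of the k-step path energy suggested above: relation
# embeddings along the path are summed, and path-level attention vectors
# compose the concept tensor D, as in the single-relation case. The paper
# only proposes this extension, so the code is speculative.
import numpy as np

def path_energy(h, t, path_rel_embs, D, a_P_H, a_P_T, ell=1):
    # path_rel_embs: list of relation embeddings r_1, ..., r_k along the path.
    P_H = np.tensordot(a_P_H, D, axes=1)       # path-level head projection
    P_T = np.tensordot(a_P_T, D, axes=1)       # path-level tail projection
    r_sum = np.sum(path_rel_embs, axis=0)      # sum of relation translations
    return np.linalg.norm(P_H @ h + r_sum - P_T @ t, ord=ell)

# e.g. energy of a 2-step path: path_energy(h, t, [r1, r2], D, a_P_H, a_P_T)
\end{verbatim}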
We plan to extend our model to multi-step paths in the future. To provide a detailed understanding of why the proposed model achieves better performance, we present some further analysis in the sequel. \iffalse In many knowledge bases, a small number of relations enjoy the majority of data, while a lot of relations are rare and hard to deal with. The statistics of relations on FB15k and WN18 are shown in Figure \ref{fig:stat}. We can clearly see that the distributions exhibit long tails, just like the Zipf's law for word frequency. \fi \paragraph{Performance on Rare Relations} In the proposed ITransF, we design an attention mechanism to encourage knowledge sharing across different relations. Naturally, facts associated with rare relations should benefit most from such sharing, boosting the overall performance. To verify this hypothesis, we investigate our model's performance on relations with different frequencies. The overall distribution of relation frequencies resembles that of word frequencies, roughly following Zipf's law. Since the frequencies of relations approximately follow a power-law distribution, their log frequencies decrease roughly linearly when relations are sorted by frequency. The statistics of relations on FB15k and WN18 are shown in Figure \ref{fig:stat}. We can clearly see that the distributions exhibit long tails, just like the Zipf's law for word frequency. In order to study the performance of relations with different frequencies, we sort all relations by their frequency in the training set, and split them evenly into 3 buckets so that each bucket spans a similar interval of log frequency. Within each bucket, we compare our model with STransE, as shown in Figure \ref{fig:rare}.\footnote{Domain sampling is not employed.} As we can see, on WN18, ITransF outperforms STransE by a significant margin on rare relations. In particular, in the last bin (rarest relations), the average Hits@10 increases from $74.4$ to $92.0$, showing the great benefits of transferring statistical strength from common relations to rare ones. The comparison on each relation is shown in Appendix \ref{sec:rare_WN}, where we can observe that the improvement is greater on rare relations. On FB15k, we can also observe a similar pattern, although the degree of improvement is less significant. We conjecture that the difference is rooted in the fact that many rare relations on FB15k have disjoint domains, which makes knowledge transfer through common concepts harder. \paragraph{Interpretability} In addition to the quantitative evidence supporting the effectiveness of knowledge sharing, we provide some intuitive examples to show how knowledge is shared in our model. As we mentioned earlier, the sparse attention vectors fully capture the association between relations and concepts and hence the knowledge transfer among relations. Thus, we visualize the attention vectors for several relations on both WN18 and FB15k in Figure \ref{fig:att}. For WN18, the words ``hyponym'' and ``hypernym'' refer to words with more specific or more general meanings, respectively. For example, PhD is a hyponym of student and student is a hypernym of PhD. As we can see, concepts associated with the head entities in one relation are also associated with the tail entities in its reverse relation. Further, ``instance\_hypernym'' is a special hypernym with the head entity being an instance, and the tail entity being an abstract notion. A typical example is $(\textit{New York}, \texttt{instance\_hypernym}, \textit{city})$.
This connection has also been discovered by our model, indicated by the fact that ``instance\_hypernym(T)'' and ``hypernym(T)'' share a common concept matrix. Finally, for symmetric relations like ``similar\_to'', we see that the head attention is identical to the tail attention, which matches our intuition well. On FB15k, we also see the sharing between reverse relations, as in ``(somebody) won\_award\_for (some work)'' and ``(some work) award\_winning\_work (somebody)''. What's more, although the relations ``won\_award\_for'' and ``was\_nominated\_for'' share the same concepts, their attention distributions are different, suggesting distinct emphasis. Finally, symmetric relations like spouse behave similarly, as mentioned before. \paragraph{Model Compression} \label{sec:compress} A byproduct of the parameter sharing mechanism employed by ITransF is a much more compact model with equal performance. Figure \ref{fig:num_of_matrix} plots the average performance of ITransF against the number of projection matrices $m$, together with two baseline models. On FB15k, when we reduce the number of matrices from $2200$ to $30$ ($\sim90\times$ compression), our model performance decreases by only $0.09\%$ on Hits@10, still outperforming STransE. Similarly, on WN18, ITransF continues to achieve the best performance when we reduce the number of concept projection matrices to $18$. \section{Analysis on Sparseness} Sparseness is desirable since it contributes to the interpretability and computational efficiency of our model. In this section, we investigate whether enforcing sparseness deteriorates the model performance and compare our method with an alternative sparse encoding method. \paragraph{Dense Attention w/o $\ell_1$ regularization} Although an $\ell_0$-constrained model usually enjoys many practical advantages, it may deteriorate the model performance when applied improperly. Here, we show that our model employing sparse attention can achieve results similar to dense attention with a significantly lower computational burden. We also compare dense attention with $\ell_1$ regularization. We set the $\ell_1$ coefficient to $0.001$ in our experiments and do not apply Softmax since the $\ell_1$ norm of a vector after Softmax is always $1$. We compare models in a setting where the computation time of the dense attention model is acceptable\footnote{With $300$ projection matrices, it takes $1h1m$ to run one epoch for a model with dense attention.}. We use $22$ weight matrices on WN18 and $15$ weight matrices on FB15k and train both models for $2000$ epochs. The results are reported in Table \ref{tab:dense}. Generally, ITransF with sparse attention has slightly better or comparable performance compared to dense attention. Further, we show the attention vectors of the model with $\ell_1$-regularized dense attention in Figure \ref{fig:att_l1}. We see that $\ell_1$ regularization does not produce sparse attention, especially on FB15k. \paragraph{Nonnegative Sparse Encoding} In the proposed model, we induce sparsity by a carefully designed iterative optimization procedure. Apart from this approach, one may utilize sparse encoding techniques to obtain sparseness based on the pretrained projection matrices from STransE.
Concretely, stacking $|2R|$ pretrained projection matrices into a 3-dimensional tensor $X\in \mathbb{R}^{2|R| \times n \times n}$, similar sparsity can be induced by solving an $\ell_1$-regularized tensor completion problem $\min_{\mathbf{A},\mathbf{D}} ||\mathbf{X}-\mathbf{DA}||_2^2 + \lambda \|\mathbf{A}\|_{\ell_1}$. Basically, $\mathbf{A}$ plays the same role as the attention vectors in our model. For more details, we refer readers to \citep{faruqui-EtAl:2015:ACL-IJCNLP}. For completeness, we compare our model with the aforementioned approach\footnote{We use the toolkit provided by \citep{faruqui-EtAl:2015:ACL-IJCNLP}.}. The comparison is summarized in table \ref{tab:optimization}. On both benchmarks, ITransF achieves significant improvement against sparse encoding on pretrained model. This performance gap should be expected since the objective function of sparse encoding methods is to minimize the reconstruction loss rather than optimize the criterion for link prediction. \iffalse To investigate whether our representation, we apply a non-negative sparse encoding method to obtain the sparse representation of projection matrices. We use the toolkit provided by \citep{faruqui-EtAl:2015:ACL-IJCNLP} and set the hyperparamters so that we obtain average degree of sparseness. We first train a STransE model which utilize separate projection matrices for different relations and stack all the projection matrices in a 3-Dimensional tensor $X\in \mathbb{R}^{2H\times n \times n}$ where $H$ is the number of relations. Then we minimize the following reconstruction loss \begin{equation} \begin{aligned} \min_{\mathbf{D},\mathbf{A}} ||\mathbf{X}-\mathbf{DA}||_2^2 + \lambda \Omega(\mathbf{A}) + \gamma ||\mathbf{D}||_2^2 \end{aligned} \end{equation} where $\mathbf{D}\in \mathbb{R}^{m \times n \times n}$ is the basis matrices and $\Omega$ is a regularizer to ensure sparseness representations. which utilize a $\ell_1$ regularizer. We \fi \section{Related Work} \label{sec:related_work} In KBC, CTransR~\citep{AAAI159571} enables relation embedding sharing across similar relations, but they cluster relations before training rather than learning it in a principled way. Further, they do not solve the data sparsity problem because there is no sharing of projection matrices which have a lot more parameters. Learning the association between semantic relations has been used in related problems such as relational similarity measurement~\citep{turney2012domain} and relation adaptation~\citep{bollegala2015embedding}. Data sparsity is a common problem in many fields. Transfer learning~\citep{pan2010survey} has been shown to be promising to transfer knowledge and statistical strengths across similar models or languages. For example, \citet{D16-1153} transfers models on resource-rich languages to low resource languages by parameter sharing through common phonological features in name entity recognition. \citet{zoph-EtAl:2016:EMNLP2016} initialize from models trained by resource-rich languages to translate low-resource languages. Several works on obtaining a sparse attention~\citep{martins2016softmax, makhzani2013k,OUTRAGEOUSLY} share a similar idea of sorting the values before softmax and only keeping the $K$ largest values. However, the sorting operation in these works is not GPU-friendly. The block iterative optimization algorithm in our work is inspired by LightRNN~\citep{LightRNN}. They allocate every word in the vocabulary in a table. 
A word is represented by a row vector and a column vector depending on its position in the table. They iteratively optimize embeddings and allocation of words in tables. \section{Conclusion and Future Work} In summary, we propose a knowledge embedding model which can discover shared hidden concepts, and design a learning algorithm to induce the interpretable sparse representation. Empirically, we show our model can improve the performance on two benchmark datasets without external resources, over all previous models of the same kind. In the future, we plan to enable ITransF to perform multi-step inference, and extend the sharing mechanism to entity and relation embeddings, further enhancing the statistical binding across parameters. In addition, our framework can also be applied to multi-task learning, promoting a finer sharing among different tasks. \section*{Acknowledgments} We thank anonymous reviewers and Graham Neubig for valuable comments. We thank Yulun Du, Paul Mitchell, Abhilasha Ravichander, Pengcheng Yin and Chunting Zhou for suggestions on the draft. We are also appreciative for the great working environment provided by staff in LTI. This research was supported in part by DARPA grant FA8750-12-2-0342 funded under the DEFT program. \bibliographystyle{acl_natbib} \clearpage \appendix \section{Appendix} \subsection{Domain Sampling Probability} \label{sec:domain_sampling} In this section, we define the probability $p_r$ to generate a negative sample from the same domain mentioned in Section \ref{sec:sampling}. The probability cannot be too high to avoid generating negative samples that are actually correct, since there are generally a lot of facts missing in KBs. %To avoid generating false negative samples, we apply domain sampling with a higher probability on relations with a sparse domain. Specifically, let $\mathrm{M}^H_r=\{h \mid \exists t (h,r,t) \in P\}$ and $\mathrm{M}^T_r=\{t \mid \exists h (h,r,t) \in P\}$ denote the head or tail domain of relation $r$. Suppose $N_r=\{(h,r,t) \in P\}$ is the induced set of edges with relation $r$. We define the probability $p_r$ as \begin{equation} p_r=min(\frac{\lambda|\mathrm{M}^T_r| |\mathrm{M}^H_r|}{|N_r|}, 0.5) \label{eq:domain_sampling} \end{equation} Our motivation of such a formulation is as follows: Suppose $O_r$ is the set that contains all truthful fact triples on relation $r$, i.e., all triples in training set and all other missing correct triples. If we assume all fact triples within the domain has uniform probability of being true, the probability of a random triple being correct is ${Pr((h,r,t)\in O_r \mid h\in \mathrm{M}^H_r, t \in \mathrm{M}^T_r) = \frac{|O_r|}{|\mathrm{M}^H_r||\mathrm{M}^T_r|}}$ Assume that all facts are missing with a probability $\lambda$, then $|N_r|=\lambda|O_r|$ and the above probability can be approximated by $ \frac{|N_r|}{\lambda|\mathrm{M}^H_r||\mathrm{M}^T_r|}$. We want the probability of generating a negative sample from the domain to be inversely proportional to the probability of the sample being true, so we define the probability as Eq. \ref{eq:domain_sampling}. The results in section \ref{sec:experiments} are obtained with $\lambda$ set to $0.001$. We compare how different value of $\lambda$ would influence our model's performance in Table. \ref{tab:domain_sampling}. With large $\lambda$ and higher domain sampling probability, our model's Hits@10 increases while mean rank also increases. 
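As an illustration of the domain sampling probability defined above, the following sketch computes $p_r$ from the domain sizes and draws one corrupted tail accordingly; the data structures and toy counts are assumptions, not the authors' implementation.
\begin{verbatim}
# Illustrative sketch of the domain sampling probability above and of
# drawing one corrupted tail accordingly. Toy counts are assumptions.
import numpy as np

def domain_sampling_prob(num_heads, num_tails, num_triples, lam=0.001):
    # p_r = min(lambda * |M_T_r| * |M_H_r| / |N_r|, 0.5)
    return min(lam * num_tails * num_heads / num_triples, 0.5)

def corrupt_tail(tail_domain, all_entities, p_r, rng):
    # With probability p_r sample from the relation's tail domain,
    # otherwise from the whole entity set.
    pool = tail_domain if rng.rand() < p_r else all_entities
    return pool[rng.randint(len(pool))]

rng = np.random.RandomState(0)
p_r = domain_sampling_prob(num_heads=120, num_tails=80, num_triples=400)
print(p_r)   # 0.024 with the toy counts above
\end{verbatim}
The discussion of the effect of $\lambda$ on the two metrics continues below.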
The rise of the mean rank is due to the higher probability of generating a valid triple as a negative sample, which pushes up the energy of valid triples and thus leads to a higher overall rank for the correct entity. However, the reasoning capability is boosted, as reflected by the higher Hits@10 shown in the table. \subsection{Performance on individual relations of WN18} \label{sec:rare_WN} We plot the performance of ITransF and STransE on each relation. We see that the improvement is greater on rare relations. \iffalse \subsection{Performance on different relations} \fi \end{document}
An Interpretable Knowledge Transfer Model for Knowledge Base Completion
1704.05908
Table 2: Link prediction results on two datasets. Higher Hits@10 or lower Mean Rank indicates better performance. Following Nguyen et al. (2016b) and Shen et al. (2016), we divide the models into two groups. The first group contains intrinsic models without using extra information. The second group make use of additional information. Results in the brackets are another set of results STransE reported.
[ "[BOLD] Model", "[BOLD] Additional Information", "[BOLD] WN18 Mean Rank", "[BOLD] WN18 Hits@10", "[BOLD] FB15k Mean Rank", "[BOLD] FB15k Hits@10" ]
[ [ "SE Bordes et al. ( 2011 )", "No", "985", "80.5", "162", "39.8" ], [ "Unstructured Bordes et al. ( 2014 )", "No", "304", "38.2", "979", "6.3" ], [ "TransE (Bordes et al., 2013 )", "No", "251", "89.2", "125", "47.1" ], [ "TransH (Wang et al., 2014 )", "No", "303", "86.7", "87", "64.4" ], [ "TransR (Lin et al., 2015b )", "No", "225", "92.0", "77", "68.7" ], [ "CTransR (Lin et al., 2015b )", "No", "218", "92.3", "75", "70.2" ], [ "KG2E (He et al., 2015 )", "No", "348", "93.2", "59", "74.0" ], [ "TransD (Ji et al., 2015 )", "No", "212", "92.2", "91", "77.3" ], [ "TATEC (García-Durán et al., 2016 )", "No", "-", "-", "[BOLD] 58", "76.7" ], [ "NTN (Socher et al., 2013 )", "No", "-", "66.1", "-", "41.4" ], [ "DISTMULT (Yang et al., 2015 )", "No", "-", "94.2", "-", "57.7" ], [ "STransE (Nguyen et al., 2016b )", "No", "206 (244)", "93.4 (94.7)", "69", "79.7" ], [ "ITransF", "No", "[BOLD] 205", "94.2", "65", "81.0" ], [ "ITransF (domain sampling)", "No", "223", "[BOLD] 95.2", "77", "[BOLD] 81.4" ], [ "rTransE García-Durán et al. ( 2015 )", "Path", "-", "-", "50", "76.2" ], [ "PTransE Lin et al. ( 2015a )", "Path", "-", "-", "58", "84.6" ], [ "NLFeat Toutanova and Chen ( 2015 )", "Node + Link Features", "-", "94.3", "-", "87.0" ], [ "Random Walk (Wei et al., 2016 )", "Path", "-", "94.8", "-", "74.7" ], [ "IRN (Shen et al., 2016 )", "External Memory", "249", "[ITALIC] 95.3", "[ITALIC] 38", "[ITALIC] 92.7" ] ]
The overall link prediction results are reported in Table 2. Our model consistently outperforms previous models without external information on both metrics for WN18 and FB15k. On WN18, we even achieve a much better mean rank, with comparable Hits@10, than the current state-of-the-art model IRN, which employs external information.
\documentclass[11pt,a4paper]{article} \usepackage[hyperref]{acl2017} \newcommand{\FIXME}[1]{\textcolor{red}{[#1]}} \aclfinalcopy % Uncomment this line for the final sumathbfission \def\aclpaperid{79} % Enter the acl Paper ID here \newcommand\BibTeX{B{\sc ib}\TeX} \title{An Interpretable Knowledge Transfer Model \\ for Knowledge Base Completion} \author{Qizhe Xie, Xuezhe Ma, Zihang Dai, Eduard Hovy \\ Language Technologies Institute \\ Carnegie Mellon University \\ Pittsburgh, PA 15213, USA\\ {\tt \{qzxie, xuezhem, dzihang, hovy\}@cs.cmu.edu} \\} \date{} \begin{document} \maketitle \begin{abstract} Knowledge bases are important resources for a variety of natural language processing tasks but suffer from incompleteness. We propose a novel embedding model, \emph{ITransF}, to perform knowledge base completion. Equipped with a sparse attention mechanism, ITransF discovers hidden concepts of relations and transfer statistical strength through the sharing of concepts. Moreover, the learned associations between relations and concepts, which are represented by sparse attention vectors, can be interpreted easily. We evaluate ITransF on two benchmark datasets---WN18 and FB15k for knowledge base completion and obtains improvements on both the mean rank and Hits@10 metrics, over all baselines that do not use additional information. \end{abstract} \section{Introduction} Knowledge bases (KB), such as WordNet~\citep{FellbaumC98}, Freebase~\citep{Bollacker:2008}, YAGO ~\citep{Suchanek:2007} and DBpedia~\citep{LehmannIJJKMHMK15}, are useful resources for many applications such as question answering~\citep{berant-EtAl:2013:EMNLP,yih-EtAl:2015:ACL-IJCNLP,dai-li-xu:2016:P16-1} and information extraction~\citep{mintz-EtAl:2009:ACLIJCNLP}. However, knowledge bases suffer from incompleteness despite their formidable sizes ~\citep{NIPS2013_5028,West:2014:KBC:2566486.2568032}, leading to a number of studies on automatic knowledge base completion (KBC)~\citep{NickelMTG15} or link prediction. The fundamental motivation behind these studies is that there exist some statistical regularities under the intertwined facts stored in the multi-relational knowledge base. By discovering generalizable regularities in known facts, missing ones may be recovered in a faithful way. Due to its excellent generalization capability, distributed representations, a.k.a. embeddings, have been popularized to address the KBC task~\citep{ICML2011Nickel_438,bordes:2011,Bordes2014SME,TransE,NIPS2013_5028,AAAI148531,guu-miller-liang:2015:EMNLP,STransE}. As a seminal work, \citet{TransE} proposes the TransE, which models the statistical regularities with linear translations between entity embeddings operated by a relation embedding. Implicitly, TransE assumes both entity embeddings and relation embeddings dwell in the same vector space, posing an unnecessarily strong prior. To relax this requirement, a variety of models first project the entity embeddings to a relation-dependent space~\citep{Bordes2014SME,ji-EtAl:2015:ACL-IJCNLP,AAAI159571,STransE}, and then model the translation property in the projected space. Typically, these relation-dependent spaces are characterized by the projection matrices unique to each relation. As a benefit, different aspects of the same entity can be temporarily emphasized or depressed as an effect of the projection. For instance, STransE~\citep{STransE} utilizes two projection matrices per relation, one for the head entity and the other for the tail entity. 
\iffalse Of these studies, a number of neural network based techniques have emerged over the years to address the KBC task, among which embedding based models~\citep{ICML2011Nickel_438,bordes:2011,Bordes2014SME,TransE,NIPS2013_5028,AAAI148531,guu-miller-liang:2015:EMNLP,STransE} have stood out for its simplicity and effectiveness. \citet{TransE} proposed the TransE model that associated entities and relations with dense embedding vectors. To better model different aspects of the same entity, a variety of models map the entity embedding to a relation-dependent space~\citep{Bordes2014SME,ji-EtAl:2015:ACL-IJCNLP,AAAI159571,STransE}. For instance, STransE~\citep{STransE} projected the head entity and tail entity to a relation-dependent space by multiplying two relation-specific projection matrices. \fi Despite the superior performance of STransE compared to TransE, it is more prone to the data sparsity problem. Concretely, since the projection spaces are unique to each relation, projection matrices associated with rare relations can only be exposed to very few facts during training, resulting in poor generalization. For common relations, a similar issue exists. Without any restrictions on the number of projection matrices, logically related or conceptually similar relations may have distinct projection spaces, hindering the discovery, sharing, and generalization of statistical regularities. Previously, a line of research makes use of external information such as textual relations from web-scale corpus or node features~\citep{toutanova-EtAl:2015:EMNLP, toutanova-chen:2015:CVSC, nguyen2016neighborhood}, alleviating the sparsity problem. In parallel, recent work has proposed to model regularities beyond local facts by considering multi-relation paths~\citep{garciaduran-bordes-usunier:2015:EMNLP, lin-EtAl:2015:EMNLP1, implicit}. Since the number of paths grows exponentially with its length, as a side effect, path-based models enjoy much more training cases, suffering less from the problem. \iffalse However, for some relations, in practice, there are not enough data to estimate the projection matrices.%(repretition) due to the data sparsity problem in knowledge bases. This led to a vast amount of research on utilizing external information, such as textual relations from web-scale corpus~\citep{toutanova-EtAl:2015:EMNLP, toutanova-chen:2015:CVSC} and relation path~\citep{garciaduran-bordes-usunier:2015:EMNLP, lin-EtAl:2015:EMNLP1, implicit}, to enhance performance. %\FIXME{The performance decrease was not caused by data sparsity?} Unfortunately, such task-specific knowledge is costly to develop, making these models difficult to adapt to new tasks or new domains. \fi In this paper, we propose an interpretable knowledge transfer model (ITransF), which encourages the sharing of statistic regularities between the projection matrices of relations and alleviates the data sparsity problem. At the core of ITransF is a sparse attention mechanism, which learns to compose shared concept matrices into relation-specific projection matrices, leading to a better generalization property. Without any external resources, ITransF improves mean rank and Hits@10 on two benchmark datasets, over all previous approaches of the same kind. In addition, the parameter sharing is clearly indicated by the learned sparse attention vectors, enabling us to interpret how knowledge transfer is carried out. To induce the desired sparsity during optimization, we further introduce a block iterative optimization algorithm. 
In summary, the contributions of this work are: (i) proposing a novel knowledge embedding model which enables knowledge transfer by learning to discover shared regularities; (ii) introducing a learning algorithm to directly optimize a sparse representation from which the knowledge transferring procedure is interpretable; (iii) showing the effectiveness of our model by outperforming baselines on two benchmark datasets for knowledge base completion task. \section{Notation and Previous Models} Let $E$ denote the set of entities and $R$ denote the set of relations. In knowledge base completion, given a training set $P$ of triples $(h, r, t)$ where $h,t\in E$ are the head and tail entities having a relation $r\in R$, e.g., (\textit{Steve Jobs}, \texttt{FounderOf}, \textit{Apple}), we want to predict missing facts such as (\textit{Steve Jobs}, \texttt{Profession}, \textit{Businessperson}). Most of the embedding models for knowledge base completion define an energy function $f_r(h,t)$ according to the fact's plausibility~\citep{bordes:2011,Bordes2014SME,TransE,NIPS2013_5028,AAAI148531,yang-etal-2015, guu-miller-liang:2015:EMNLP,STransE}. The models are learned to minimize energy $f_r(h,t)$ of a plausible triple $(h,r,t)$ and to maximize energy $f_r(h',t')$ of an implausible triple $(h',r,t')$. Motivated by the linear translation phenomenon observed in well trained word embeddings~\citep{mikolov2013distributed}, TransE~\citep{TransE} represents the head entity $h$, the relation $r$ and the tail entity $t$ with vectors $\mathbf{h}, \mathbf{r}$ and $\mathbf{t} \in \mathbb{R}^{n}$ respectively, which were trained so that $\mathbf{h}+\mathbf{r}\approx \mathbf{t}$. They define the energy function as $$f_r(h,t) = \| \mathbf{h} + \mathbf{r} - \mathbf{t} \|_{\ell}$$ where $\ell=1$ or $2$, which means either the $\ell_1$ or the $\ell_2$ norm of the vector $\mathbf{h} + \mathbf{r} - \mathbf{t}$ will be used depending on the performance on the validation set. To better model relation-specific aspects of the same entity, TransR~\citep{AAAI159571} uses projection matrices and projects the head entity and the tail entity to a relation-dependent space. STransE~\citep{STransE} extends TransR by employing different matrices for mapping the head and the tail entity. The energy function is $$f_r(h,t) = \|\mathbf{W}_{r,1}\mathbf{h} + \mathbf{r} - \mathbf{W}_{r,2}\mathbf{t} \|_{\ell}$$ However, not all relations have abundant data to estimate the relation specific matrices as most of the training samples are associated with only a few relations, leading to the data sparsity problem for rare relations. \section{Interpretable Knowledge Transfer} \subsection{Model} As discussed above, a fundamental weakness in TransR and STransE is that they equip each relation with a set of unique projection matrices, which not only introduces more parameters but also hinders knowledge sharing. Intuitively, many relations share some concepts with each other, although they are stored as independent symbols in KB. For example, the relation ``(somebody) won award for (some work)'' and ``(somebody) was nominated for (some work)'' both describe a person's high-quality work which wins an award or a nomination respectively. This phenomenon suggests that one relation actually represents a collection of real-world concepts, and one concept can be shared by several relations. 
Inspired by the existence of such lower-level concepts, instead of defining a unique set of projection matrices for every relation, we can alternatively define a small set of concept projection matrices and then compose them into customized projection matrices. Effectively, the relation-dependent translation space is then reduced to the smaller concept spaces. However, in general, we do not have prior knowledge about what concepts exist out there and how they are composed to form relations. Therefore, in ITransF, we propose to learn this information simultaneously from data, together with all knowledge embeddings. Following this idea, we first present the model details, then discuss the optimization techniques for training. \paragraph{Energy function} Specifically, we stack all the concept projection matrices to a 3-dimensional tensor $\mathbf{D}\in \mathbb{R}^{m \times n \times n}$, where $m$ is the pre-specified number of concept projection matrices and $n$ is the dimensionality of entity embeddings and relation embeddings. We let each relation select the most useful projection matrices from the tensor, where the selection is represented by an attention vector. The energy function of ITransF is defined as: \begin{equation} f_r(h,t) = \| \pmb{\alpha}_{r}^{H} \cdot \mathbf{D} \cdot \mathbf{h} + \mathbf{r} - \pmb{\alpha}_{r}^{T} \cdot \mathbf{D} \cdot \mathbf{t}\|_{\ell} \label{eq:energy function} \end{equation} where $\pmb{\alpha}_{r}^{H},\pmb{\alpha}_{r}^{T} \in [0,1]^m$, satisfying $\sum_i\pmb{\alpha}_{r,i}^{H}=\sum_i\pmb{\alpha}_{r,i}^{T}=1$, are normalized attention vectors used to compose all concept projection matrices in $\mathbf{D}$ by a convex combination. It is obvious that STransE can be expressed as a special case of our model when we use $m=2|R|$ concept matrices and set attention vectors to disjoint one-hot vectors. Hence our model space is a generalization of STransE. Note that we can safely use fewer concept matrices in ITransF and obtain better performance (see section \ref{sec:compress}), though STransE always requires $2|R|$ projection matrices. We follow previous work to minimize the following hinge loss function: \begin{equation} \mathcal{L}=\sum_{\substack{(h,r,t) \sim P,\\ (h',r,t') \sim N}} \left[ \gamma + f_{r}(h,t) -f_{r}(h',t') \right]_+ \label{eq:hinge} \end{equation} where $P$ is the training set consisting of correct triples, $N$ is the distribution of corrupted triples defined in section \ref{sec:sampling}, and $[\cdot]_+ = \max(\cdot, 0)$. Note that we have omitted the dependence of $N$ on $(h,r,t)$ to avoid clutter. We normalize the entity vectors $\mathbf{h},\mathbf{t}$, and the projected entity vectors $\pmb{\alpha}_{r}^{H} \cdot \mathbf{D}\cdot \mathbf{h}$ and $\pmb{\alpha}_{r}^{T} \cdot \mathbf{D} \cdot \mathbf{t}$ to have unit length after each update, which is an effective regularization method that benefits all models. \paragraph{Sparse attention vectors} In Eq.~\eqref{eq:energy function}, we have defined $\pmb{\alpha}_{r}^{H},\pmb{\alpha}_{r}^{T}$ to be some normalized vectors used for composition. With a dense attention vector, it is computationally expensive to perform the convex combination of $m$ matrices in each iteration. Moreover, a relation usually does not consist of all existing concepts in practice. Furthermore, when the attention vectors are sparse, it is often easier to interpret their behaviors and understand how concepts are shared by different relations. 
Motivated by these potential benefits, we further hope to learn sparse attention vectors in ITransF. However, directly posing $\ell_1$ regularization~\citep{tibshirani1996regression} on the attention vectors fails to produce sparse representations in our preliminary experiment, which motivates us to enforce $\ell_0$ constraints on $\pmb{\alpha}_{r}^T,\pmb{\alpha}_{r}^H$. In order to satisfy both the normalization condition and the $\ell_0$ constraints, we reparameterize the attention vectors in the following way: \begin{align*} \pmb{\alpha}_{r}^{H}&=\mathrm{SparseSoftmax}(\mathbf{v}_{r}^{H}, \mathbf{I}_{r}^{H}) \\ \pmb{\alpha}_{r}^{T}&=\mathrm{SparseSoftmax}(\mathbf{v}_{r}^{T}, \mathbf{I}_{r}^{T}) \end{align*} where $\mathbf{v}_{r}^{H}, \mathbf{v}_{r}^{T} \in \mathbb{R}^m$ are the pre-softmax scores, $\mathbf{I}_{r}^{H}, \mathbf{I}_{r}^{T}\in \{0,1\}^{m}$ are the sparse assignment vectors, indicating the non-zero entries of attention vectors, and the $\mathrm{SparseSoftmax}$ is defined as \begin{equation*} \mathrm{SparseSoftmax}(\mathbf{v}, \mathbf{I})_i=\frac{\exp(\mathbf{v}_i / \tau) \mathbf{I}_i}{\sum_j \exp(\mathbf{v}_j / \tau) \mathbf{I}_j} \end{equation*} with $\tau$ being the temperature of Softmax. With this reparameterization, $\mathbf{v}_{r}^{H}, \mathbf{v}_{r}^{T}$ and $\mathbf{I}_{r}^{H}, \mathbf{I}_{r}^{T}$ replace $\pmb{\alpha}_{r}^T,\pmb{\alpha}_{r}^H$ to become the real parameters of the model. Also, note that it is equivalent to pose the $\ell_0$ constraints on $\mathbf{I}_{r}^{H}, \mathbf{I}_{r}^{T}$ instead of $\pmb{\alpha}_{r}^T,\pmb{\alpha}_{r}^H$. Putting these modifications together, we can rewrite the optimization problem as \begin{equation} \begin{aligned} & {\text{minimize}} & & \mathcal{L} \\ & \text{subject to} & & \|\mathbf{I}_{r}^{H}\|_{0} \leq k,\|\mathbf{I}_{r}^{T}\|_{0} \leq k \end{aligned} \label{eq:l0_problem} \end{equation} where $\mathcal{L}$ is the loss function defined in Eq.~\eqref{eq:hinge}. \subsection{Block Iterative Optimization} Though sparseness is favorable in practice, it is generally NP-hard to find the optimal solution under $\ell_0$ constraints. Thus, we resort to an approximated algorithm in this work. For convenience, we refer to the parameters with and without the sparse constraints as the \textit{sparse} partition and the \textit{dense} partition, respectively. Based on this notion, the high-level idea of the approximated algorithm is to iteratively optimize one of the two partitions while holding the other one fixed. Since all parameters in the dense partition, including the embeddings, the projection matrices, and the pre-softmax scores, are fully differentiable with the sparse partition fixed, we can simply utilize SGD to optimize the dense partition. Then, the core difficulty lies in the step of optimizing the sparse partition (i.e. the sparse assignment vectors), during which we want the following two properties to hold \begin{enumerate}[itemsep=-1mm] \item the sparsity required by the $\ell_0$ constaint is maintained, and \item the cost define by Eq.~\eqref{eq:hinge} is decreased. \end{enumerate} Satisfying the two criterion seems to highly resemble the original problem defined in Eq.~\eqref{eq:l0_problem}. However, the dramatic difference here is that with parameters in the dense partition regarded as constant, the cost function is decoupled w.r.t. each relation $r$. In other words, the optimal choice of $\mathbf{I}_{r}^{H}, \mathbf{I}_{r}^{T}$ is independent of $\mathbf{I}_{r'}^{H}, \mathbf{I}_{r'}^{T}$ for any $r' \neq r$. 
Therefore, we only need to consider the optimization for a single relation $r$, which is essentially an assignment problem. Note that, however, $\mathbf{I}_{r}^{H}$ and $\mathbf{I}_{r}^{T}$ are still coupled, without which we basically reach the situation in a backpack problem. In principle, one can explore combinatorial optimization techniques to optimize $\mathbf{I}_{r'}^{H}, \mathbf{I}_{r'}^{T}$ jointly, which usually involve some iterative procedure. To avoid adding another inner loop to our algorithm, we turn to a simple but fast approximation method based on the following single-matrix cost. Specifically, for each relation $r$, we consider the induced cost $\mathcal{L}_{r,i}^{H}$ where only a single projection matrix $i$ is used for the head entity: \begin{equation*} \mathcal{L}_{r,i}^{H} = \sum_{\substack{(h,r,t) \sim P_r,\\ (h',r,t') \sim N_r}} \left[ \gamma + f_{r,i}^{H}(h,t) - f_{r,i}^{H}(h',t') \right]_+ \end{equation*} where $f_{r,i}^{H}(h,t) = \| \mathbf{D}_i \cdot \mathbf{h} + \mathbf{r} - \pmb{\alpha}_{r}^{T} \cdot \mathbf{D} \cdot \mathbf{t}\|$ is the corresponding energy function, and the subscript in $P_r$ and $N_r$ denotes the subsets with relation $r$. Intuitively, $\mathcal{L}_{r,i}^{H}$ measures, given the current tail attention vector $\pmb{\alpha}_{r}^{T}$, if only one project matrix could be chosen for the head entity, how implausible $D_i$ would be. Hence, $i^* = \arg\min_i \mathcal{L}_{r,i}^{H}$ gives us the best single projection matrix on the head side given $\pmb{\alpha}_{r}^{T}$. Now, in order to choose the best $k$ matrices, we basically ignore the interaction among projection matrices, and update $\mathbf{I}_{r}^{H}$ in the following way: \begin{equation*} \mathbf{I}_{r,i}^{H} \leftarrow \begin{cases} 1, &i \in \mathrm{argpartition}_{i}(\mathcal{L}_{r,i}^{H}, k) \\ 0, &\text{otherwise} \end{cases}%, \, \forall i \end{equation*} where the function $\mathrm{argpartition}_{i}(x_i, k)$ produces the index set of the lowest-$k$ values of $x_i$. Analogously, we can define the single-matrix cost $\mathcal{L}_{r,i}^{T}$ and the energy function $f_{r,i}^{T}(h,t)$ on the tail side in a symmetric way. Then, the update rule for $\mathbf{I}_{r}^{H}$ follows the same derivation. Admittedly, the approximation described here is relatively crude. But as we will show in section \ref{sec:experiments}, the proposed algorithm yields good performance empirically. We leave the further improvement of the optimization method as future work. \iffalse Though sparseness is favorable in practice, even in linear regression, it has been shown to be an NP-hard problem to find the optimal solutions under $\ell_0$ constraints. %A lot of algorithms such as Approximated algorithms such as forward stepwise algorithm are proposed. Here we propose an approximated algorithm to solve it. We divide all of our parameters into two partitions: differentiable and non-differentiable, and we iteratively optimize those two. Differentiable parameters such as embeddings, projection matrices are optimized by SGD. Non-differentiable parameters are optimized by a greedy approximated process, aiming to minimize the cost function. Recall that we want the number of concepts associated with relations to be less than or equal to $k$, indicated by the $\ell_0$ constraint of attention vectors. We represent the mapping between relation $r$ and concepts by two indicator vectors $\mathbf{I}_{r}^{H}, \mathbf{I}_{r}^{T}\in \{-\infty ,1\}^{m}$, the indicator of non-zero entries in attention vectors. 
The $\ell_0$ constraint is guaranteed as the number of $1$s in $\mathbf{I}_{r}^{H},\mathbf{I}_{r}^{T}$ is less than or equal to $k$. Those mapping vectors constitutes the non-differentiable parameters in our framework. Formally, the attention vectors are calculated as follows: $$\pmb{\alpha}_{r}^{H}=\mathrm{Softmax}(\mathbf{v}_{r}^{H} \circ \mathbf{I}_{r}^{H})$$ $$\pmb{\alpha}_{r}^{T}=\mathrm{Softmax}(\mathbf{v}_{r}^{T} \circ \mathbf{I}_{r}^{T})$$ \begin{equation*} \mathrm{Softmax}(\mathbf{x})_i=\frac{\exp(\mathbf{x}_i / \tau)}{\sum_j \exp(\mathbf{x}_j / \tau)} \end{equation*} where $\mathbf{v}_{r}^{H}, \mathbf{v}_{r}^{T} \in (0, \infty)^m$ are the parameters for attention, $\circ$ is element wise multiplication, $\tau$ is the temperature of softmax ($\tau$ is set to $1/4$ in our experiments). Then there are only $k$ non-zero elements in $\pmb{\alpha}_{r}^{H}$ and $\pmb{\alpha}_{r}^{T}$ since $exp(-\infty)=0$. The algorithm is: \begin{itemize} \item[(1)] Randomly initialize mapping vectors $\mathbf{I}_{r}^{H}, \mathbf{I}_{r}^{T}$, which is not differentiable. \item[(2)] Optimize differentiable parameters by SGD with fixed mapping vectors for several epochs. \item[(3)] If it is the early stage of training, fix the differentiable parameters learned in the previous step and optimize mapping vectors $\mathbf{I}_{r}^{H}, \mathbf{I}_{r}^{T}$ to minimize objective function. Go to step (2) if the model has not converged. \end{itemize} How do we optimize mapping vectors? A brute-force algorithm is to enumerate all possible values of mapping vectors. However, the time complexity of such a naive algorithm grows exponentially. Instead, we make several approximations to optimize the mapping vectors greedily. We define $J_{r,i}^{H}$ as an approximation to the cost $L$ when relation $r$ is mapped to concept $i$, i.e., $\textbf{I}_{r,i}^{H}=1$. We select the top $k$ concepts with smallest approximated cost when we optimize mapping vectors. The approximated cost takes the same hinge loss as the original cost function shown in Equation \ref{eq:hinge} with a concept specific energy function $f_{r,i}^{H}(h,t)$: \begin{equation*} J_{r,i}^{H}=\sum_{\substack{(h,r,t) \in P, \\ (h',r,t') \sim N(h,r,t)}}\max(\gamma + f_{r,i}^{H}(h,t) -f_{r,i}^{H}(h',t'), 0) \end{equation*} where the energy function is similar to the original function defined in Equation \ref{eq:energy function} except that relation $r$ is completely assigned with concept $i$: \begin{align*} f_{r,i}^{H}(h,t) &= \| \mathbf{D}_i \cdot \mathbf{h} + \mathbf{r} - \pmb{\alpha}_{r}^{T} \cdot \mathbf{D} \cdot \mathbf{t}\| \\ \end{align*} Similarly, the formulations for tail entities are defined as \begin{align*} f_{r,i}^{T}(h,t) &= \| \pmb{\alpha}_{r}^{H} \cdot \mathbf{D} \cdot \mathbf{h} + \mathbf{r} - \mathbf{D}_i \cdot \mathbf{t}\| \end{align*} $$J_{r,i}^{T}=\sum_{\substack{(h,r,t) \in P, \\ (h',r,t') \sim N(h,r,t)}}[\gamma + f_{r,i}^{T}(h,t) -f_{i}^{r,2}(h',t')]_+$$ The above process is a greedy algorithm. We make the following relaxations to ensure efficient and effective approximation: Firstly, concepts used to project head and tail entities are decoupled and selected independently. Secondly, $J_{r,i}^{H}$ and $J_{r,i}^{T}$ are evaluated on condition that concept $i$ is fully utilized, i.e., we ignore the interaction between concept $i$ and other concepts by setting attention $\pmb{\alpha}_{r,1,i}= 1$\footnote{The relaxation is made to reduce the computation complexity. 
Otherwise, to evaluate indicator vectors involving multiple matrices, we need to perform SGD to get the corresponding optimal values of $v_{r,1}$ and $v_{r,2}$. }. The greedy process works well empirically. We draw our inspiration from Expectation-Maximization (EM) ~\citep{dempster1977maximum} and LightRNN~\citep{LightRNN}. The above algorithm is similar to EM and LightRNN in the sense that some parameters can change rapidly based on the estimation of the corresponding cost. In other words, we are not changing and exploring the mapping vectors bit by bit but they can be reassigned with a completely different value in one step, leading to fast convergence. \fi
\iffalse \begin{algorithm}[] Initialize $\mathbf{I}_{r}^{H}$,$\mathbf{I}_{r}^{T}$ randomly \\ \While {not convergence} { \For{\texttt{epoch} $= 1$ to $T_1$}{ Optimize $L$ by SGD on $\theta$ with $I_{r}^{H}, I_{r}^{T}$ fixed } $\texttt{tot\_epoch} = \texttt{tot\_epoch} + T_1$ \\ \If{$\texttt{tot\_epoch} \leq T_2$}{ { Find $\mathbf{I}_{r}^{(H)'}, \mathbf{I}_{r}^{(T)'}$ which approximately maximize $L$. \\ Set $\mathbf{I}_{r}^{H}=\mathbf{I}_{r}^{(H)'}, \mathbf{I}_{r}^{T}=\mathbf{I}_{r}^{(T)'}$. }} } \caption{Coordinate ascent optimization algorithm} \label{alg:opt} \end{algorithm} \fi
\subsection{Corrupted Sample Generating Method} \label{sec:sampling} Recall that we need to sample a negative triple $(h',r,t')$ to compute the hinge loss shown in Eq.~\ref{eq:hinge}, given a positive triple $(h,r,t)\in P$. The distribution of negative triples is denoted by $N(h,r,t)$. Previous work~\citep{TransE, AAAI159571, yang-etal-2015,STransE} generally constructs a set of corrupted triples by replacing the head entity or tail entity with a random entity uniformly sampled from the KB. However, uniformly sampling corrupted entities may not be optimal. Often, the head and tail entities associated with a relation can only belong to a specific domain. When the corrupted entity comes from another domain, it is very easy for the model to induce a large energy gap between the true triple and the corrupted one. As the energy gap exceeds $\gamma$, there will be no training signal from this corrupted triple. In comparison, if the corrupted entity comes from the same domain, the task becomes harder for the model, leading to a more consistent training signal. Motivated by this observation, we propose to sample the corrupted head or tail from entities in the same domain with a probability $p_r$ and from the whole entity set with probability $1-p_r$. The choice of the relation-dependent probability $p_r$ is specified in Appendix \ref{sec:domain_sampling}. In the rest of the paper, we refer to the newly proposed sampling method as ``domain sampling''.
\section{Experiments} \label{sec:experiments} \subsection{Setup} To evaluate link prediction, we conduct experiments on the WN18 (WordNet) and FB15k (Freebase) datasets introduced by \citet{TransE} and use the same training/validation/test split as in \citep{TransE}. The information of the two datasets is given in Table \ref{tab:datasets}. In the knowledge base completion task, we evaluate the model's performance at predicting the head entity or the tail entity given the relation and the other entity. For example, to predict the head entity given relation $r$ and tail $t$ in a triple $(h,r,t)$, we compute the energy function $f_r(h', t)$ for each entity $h'$ in the knowledge base and rank all the entities according to the energy. We follow \citet{TransE} to report the \emph{filter} results, i.e., removing all other correct candidates $h'$ from the ranking; a minimal illustrative sketch of this filtered ranking step is given below.
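The sketch below is only meant to make the filtered ranking step concrete; it is not the code used in our experiments. The toy energy function, entity list and fact set are hypothetical, and only the filtering and ranking logic mirrors the protocol just described.
\begin{verbatim}
# Minimal sketch of filtered ranking for head prediction (illustrative only).
import numpy as np

def filtered_head_rank(energy_fn, entities, known_facts, r, t, true_h):
    # Energy of every candidate head h'; lower energy = more plausible triple.
    energies = {h: energy_fn(h, r, t) for h in entities}
    # "Filter" setting: drop candidates h' != true_h whose triple (h', r, t)
    # is itself a known correct fact, so they cannot push the true head down.
    kept = [h for h in entities
            if h == true_h or (h, r, t) not in known_facts]
    ranked = sorted(kept, key=lambda h: energies[h])
    return ranked.index(true_h) + 1   # 1-based rank of the correct head

# Toy usage with a random (meaningless) energy function.
rng = np.random.default_rng(0)
entities = ["e%d" % i for i in range(100)]
known_facts = {("e3", "r1", "e7"), ("e5", "r1", "e7")}
rank = filtered_head_rank(lambda h, r, t: rng.random(),
                          entities, known_facts, "r1", "e7", "e3")
\end{verbatim}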
The rank of the correct entity is then obtained and we report the mean rank (the mean of the predicted ranks) and Hits@10 (top-$10$ accuracy). A lower mean rank or a higher Hits@10 indicates better performance.
\subsection{Implementation Details} We initialize the projection matrices with identity matrices to which we add a small noise sampled from the normal distribution $\mathcal{N}(0,\,0.005^2)$. The entity and relation vectors of ITransF are initialized by TransE~\citep{TransE}, following~\citet{AAAI159571, ji-EtAl:2015:ACL-IJCNLP, Garcia-DuranBUG15, garciaduran-bordes-usunier:2015:EMNLP, lin-EtAl:2015:EMNLP1}. We ran mini-batch SGD until convergence. We employ the {``\textit{Bernoulli}''} sampling method to generate incorrect triples, as used in \citet{AAAI148531}, \citet{AAAI159571}, \citet{He:2015}, \citet{ji-EtAl:2015:ACL-IJCNLP} and \citet{lin-EtAl:2015:EMNLP1}. STransE~\citep{STransE} is the knowledge embedding model most similar to ours, except that it uses distinct projection matrices for each relation. We use the same hyperparameters as STransE, and no significant improvement is observed when we alter them. We set the margin $\gamma$ to $5$ and the embedding dimension $n$ to $50$ for WN18, and $\gamma = 1, n = 100$ for FB15k. We set the batch size to $20$ for WN18 and $1000$ for FB15k. The learning rate is $0.01$ on WN18 and $0.1$ on FB15k. We use $30$ matrices on WN18 and $300$ matrices on FB15k. All the models are implemented with Theano~\citep{bergstra2010theano}. The Softmax temperature is set to $1/4$. %\FIXME{T1, T2}
\subsection{Results \& Analysis} The overall link prediction results\footnote{Note that although IRN~\citep{implicit} does not explicitly exploit path information, it performs multi-step inference through the multiple usages of external memory. When IRN is allowed to access memory once for each prediction, its Hits@10 is $80.7$, similar to models without path information.} are reported in Table \ref{tab:main}. Our model consistently outperforms previous models without external information on both metrics of WN18 and FB15k. On WN18, we even achieve a much better mean rank, with comparable Hits@10, than the current state-of-the-art model IRN, which employs external information. We can see that path information is very helpful on FB15k, and models taking advantage of path information outperform intrinsic models by a significant margin. Indeed, a lot of facts are easier to recover with the help of multi-step inference. For example, if we know Barack Obama is born in Honolulu, a city in the United States, then we easily know the nationality of Obama is the United States. %In our framework, since Obama+IsBornIn$\approx$Honolulu, Honolulu+IsCityIn$\approx$US. Then we can expect Obama+IsBornIn+IsCityIn$\approx$US.
%Projection matrices are not exactly the same
A straightforward way of extending our proposed model to a $k$-step path $P=\{r_i\}_{i=1}^{k}$ is to define a path energy function $\| \pmb{\alpha}_{P}^{H} \cdot \mathbf{D} \cdot \mathbf{h} + \sum_{r_i\in P}\mathbf{r}_i - \pmb{\alpha}_{P}^{T} \cdot \mathbf{D} \cdot \mathbf{t}\|_{\ell}$, where $\pmb{\alpha}_{P}^{H}$ is a concept association vector related to the path. We plan to extend our model to multi-step paths in the future. To provide a detailed understanding of why the proposed model achieves better performance, we present some further analysis in the sequel.
\iffalse In many knowledge bases, a small number of relations enjoy the majority of data, while a lot of relations are rare and hard to deal with.
The statistics of relations on FB15k and WN18 are shown in Figure \ref{fig:stat}. We can clearly see that the distributions exhibit long tails, just like the Zipf's law for word frequency. \fi
\paragraph{Performance on Rare Relations} In the proposed ITransF, we design an attention mechanism to encourage knowledge sharing across different relations. Naturally, facts associated with rare relations should benefit most from such sharing, boosting the overall performance. To verify this hypothesis, we investigate our model's performance on relations with different frequencies. The overall distribution of relation frequencies resembles that of word frequencies, subject to Zipf's law. Since the frequencies of relations approximately follow a power-law distribution, it is natural to compare relations on a log-frequency scale. The statistics of relations on FB15k and WN18 are shown in Figure \ref{fig:stat}. We can clearly see that the distributions exhibit long tails, just like Zipf's law for word frequency. In order to study the performance on relations with different frequencies, we sort all relations by their frequency in the training set, and split them evenly into 3 buckets so that each bucket covers a similar interval of log frequency. Within each bucket, we compare our model with STransE, as shown in Figure \ref{fig:rare}.\footnote{Domain sampling is not employed.} As we can see, on WN18, ITransF outperforms STransE by a significant margin on rare relations. In particular, in the last bin (rarest relations), the average Hits@10 increases from $74.4$ to $92.0$, showing the great benefits of transferring statistical strength from common relations to rare ones. The comparison on each relation is shown in Appendix \ref{sec:rare_WN}, where the same trend can be observed. On FB15k, we can also observe a similar pattern, although the degree of improvement is less significant. We conjecture that this difference is rooted in the fact that many rare relations on FB15k have disjoint domains, which makes knowledge transfer through common concepts harder.
\paragraph{Interpretability} In addition to the quantitative evidence supporting the effectiveness of knowledge sharing, we provide some intuitive examples to show how knowledge is shared in our model. As we mentioned earlier, the sparse attention vectors fully capture the association between relations and concepts and hence the knowledge transfer among relations. Thus, we visualize the attention vectors for several relations on both WN18 and FB15K in Figure \ref{fig:att}. For WN18, the words ``hyponym'' and ``hypernym'' refer to words with a more specific or more general meaning, respectively. For example, PhD is a hyponym of student and student is a hypernym of PhD. As we can see, concepts associated with the head entities in one relation are also associated with the tail entities in its reverse relation. Further, ``instance\_hypernym'' is a special hypernym with the head entity being an instance, and the tail entity being an abstract notion. A typical example is $(\textit{New York}, \texttt{instance\_hypernym}, \textit{city})$. This connection has also been discovered by our model, indicated by the fact that ``instance\_hypernym(T)'' and ``hypernym(T)'' share a common concept matrix. Finally, for symmetric relations like ``similar\_to'', we see that the head attention is identical to the tail attention, which matches our intuition well. On FB15k, we also see the sharing between reverse relations, as in ``(somebody) won\_award\_for (some work)'' and ``(some work) award\_winning\_work (somebody)''.
What's more, although the relations ``won\_award\_for'' and ``was\_nominated\_for'' share the same concepts, their attention distributions are different, suggesting distinct emphasis. Finally, symmetric relations like spouse behave similarly to the cases mentioned before.
\paragraph{Model Compression} \label{sec:compress} A byproduct of the parameter sharing mechanism employed by ITransF is a much more compact model with equal performance. Figure \ref{fig:num_of_matrix} plots the average performance of ITransF against the number of projection matrices $m$, together with two baseline models. On FB15k, when we reduce the number of matrices from $2200$ to $30$ ($\sim90\times$ compression), our model's performance decreases by only $0.09\%$ on Hits@10, still outperforming STransE. Similarly, on WN18, ITransF continues to achieve the best performance when we reduce the number of concept projection matrices to $18$.
\section{Analysis on Sparseness} Sparseness is desirable since it contributes to the interpretability and computational efficiency of our model. In this section, we investigate whether enforcing sparseness deteriorates the model performance, and compare our method with an alternative sparse encoding method.
\paragraph{Dense Attention w/o $\ell_1$ regularization} Although an $\ell_0$-constrained model usually enjoys many practical advantages, it may deteriorate the model performance when applied improperly. Here, we show that our model employing sparse attention can achieve results similar to dense attention with a significantly smaller computational burden. We also compare dense attention with $\ell_1$ regularization. We set the $\ell_1$ coefficient to $0.001$ in our experiments and do not apply Softmax, since the $\ell_1$ norm of a vector after Softmax is always $1$. We compare models in a setting where the computation time of the dense attention model is acceptable\footnote{With $300$ projection matrices, it takes $1h1m$ to run one epoch for a model with dense attention.}. We use $22$ weight matrices on WN18 and $15$ weight matrices on FB15k and train both models for $2000$ epochs. The results are reported in Table \ref{tab:dense}. Generally, ITransF with sparse attention has slightly better or comparable performance compared to dense attention. Further, we show the attention vectors of the model with $\ell_1$-regularized dense attention in Figure \ref{fig:att_l1}. We see that $\ell_1$ regularization does not produce sparse attention, especially on FB15k.
\paragraph{Nonnegative Sparse Encoding} In the proposed model, we induce the sparsity by a carefully designed iterative optimization procedure. Apart from this approach, one may utilize sparse encoding techniques to obtain sparseness based on the pretrained projection matrices from STransE. Concretely, stacking the $2|R|$ pretrained projection matrices into a 3-dimensional tensor $X\in \mathbb{R}^{2|R| \times n \times n}$, similar sparsity can be induced by solving an $\ell_1$-regularized tensor decomposition problem $\min_{\mathbf{A},\mathbf{D}} ||\mathbf{X}-\mathbf{DA}||_2^2 + \lambda \|\mathbf{A}\|_{\ell_1}$. Basically, $\mathbf{A}$ plays the same role as the attention vectors in our model. For more details, we refer readers to \citep{faruqui-EtAl:2015:ACL-IJCNLP}. For completeness, we compare our model with the aforementioned approach\footnote{We use the toolkit provided by \citep{faruqui-EtAl:2015:ACL-IJCNLP}.}. The comparison is summarized in Table \ref{tab:optimization}.
On both benchmarks, ITransF achieves a significant improvement over sparse encoding applied to the pretrained model. This performance gap is to be expected, since the objective function of sparse encoding methods is to minimize the reconstruction loss rather than to optimize the criterion for link prediction.
\iffalse To investigate whether our representation, we apply a non-negative sparse encoding method to obtain the sparse representation of projection matrices. We use the toolkit provided by \citep{faruqui-EtAl:2015:ACL-IJCNLP} and set the hyperparamters so that we obtain average degree of sparseness. We first train a STransE model which utilize separate projection matrices for different relations and stack all the projection matrices in a 3-Dimensional tensor $X\in \mathbb{R}^{2H\times n \times n}$ where $H$ is the number of relations. Then we minimize the following reconstruction loss \begin{equation} \begin{aligned} \min_{\mathbf{D},\mathbf{A}} ||\mathbf{X}-\mathbf{DA}||_2^2 + \lambda \Omega(\mathbf{A}) + \gamma ||\mathbf{D}||_2^2 \end{aligned} \end{equation} where $\mathbf{D}\in \mathbb{R}^{m \times n \times n}$ is the basis matrices and $\Omega$ is a regularizer to ensure sparseness representations. which utilize a $\ell_1$ regularizer. We \fi
\section{Related Work} \label{sec:related_work} In KBC, CTransR~\citep{AAAI159571} enables relation embedding sharing across similar relations, but it clusters relations before training rather than learning the clustering in a principled way. Further, it does not solve the data sparsity problem, because there is no sharing of the projection matrices, which contain many more parameters. Learning the association between semantic relations has also been explored in related problems such as relational similarity measurement~\citep{turney2012domain} and relation adaptation~\citep{bollegala2015embedding}. Data sparsity is a common problem in many fields. Transfer learning~\citep{pan2010survey} has been shown to be a promising way to transfer knowledge and statistical strength across similar models or languages. For example, \citet{D16-1153} transfers models from resource-rich languages to low-resource languages by parameter sharing through common phonological features in named entity recognition. \citet{zoph-EtAl:2016:EMNLP2016} initialize models for translating low-resource languages from models trained on resource-rich languages. Several works on obtaining sparse attention~\citep{martins2016softmax, makhzani2013k,OUTRAGEOUSLY} share a similar idea of sorting the values before softmax and only keeping the $K$ largest values. However, the sorting operation in these works is not GPU-friendly. The block iterative optimization algorithm in our work is inspired by LightRNN~\citep{LightRNN}, which allocates every word in the vocabulary to a position in a table. A word is represented by a row vector and a column vector depending on its position in the table. The embeddings and the allocation of words in the table are optimized iteratively.
\section{Conclusion and Future Work} In summary, we propose a knowledge embedding model which can discover shared hidden concepts, and design a learning algorithm to induce the interpretable sparse representation. Empirically, we show that our model can improve the performance on two benchmark datasets without external resources, over all previous models of the same kind. In the future, we plan to enable ITransF to perform multi-step inference, and to extend the sharing mechanism to entity and relation embeddings, further enhancing the statistical binding across parameters.
In addition, our framework can also be applied to multi-task learning, promoting a finer sharing among different tasks.
\section*{Acknowledgments} We thank the anonymous reviewers and Graham Neubig for valuable comments. We thank Yulun Du, Paul Mitchell, Abhilasha Ravichander, Pengcheng Yin and Chunting Zhou for suggestions on the draft. We are also appreciative of the great working environment provided by the staff in LTI. This research was supported in part by DARPA grant FA8750-12-2-0342 funded under the DEFT program.
\bibliographystyle{acl_natbib} \clearpage \appendix \section{Appendix} \subsection{Domain Sampling Probability} \label{sec:domain_sampling} In this section, we define the probability $p_r$, introduced in Section \ref{sec:sampling}, of generating a negative sample from the same domain. The probability cannot be too high, so as to avoid generating negative samples that are actually correct, since there are generally a lot of facts missing in KBs. %To avoid generating false negative samples, we apply domain sampling with a higher probability on relations with a sparse domain.
Specifically, let $\mathrm{M}^H_r=\{h \mid \exists t\, (h,r,t) \in P\}$ and $\mathrm{M}^T_r=\{t \mid \exists h\, (h,r,t) \in P\}$ denote the head and tail domains of relation $r$. Suppose $N_r=\{(h,r,t) \in P\}$ is the induced set of edges with relation $r$. We define the probability $p_r$ as \begin{equation} p_r=\min\left(\frac{\lambda|\mathrm{M}^T_r| |\mathrm{M}^H_r|}{|N_r|}, 0.5\right) \label{eq:domain_sampling} \end{equation} Our motivation for such a formulation is as follows: Suppose $O_r$ is the set that contains all truthful fact triples on relation $r$, i.e., all triples in the training set and all other missing correct triples. If we assume all fact triples within the domain have a uniform probability of being true, the probability of a random triple being correct is ${\Pr((h,r,t)\in O_r \mid h\in \mathrm{M}^H_r, t \in \mathrm{M}^T_r) = \frac{|O_r|}{|\mathrm{M}^H_r||\mathrm{M}^T_r|}}$. Assume that each true fact is observed (i.e., present in the training set) with probability $\lambda$; then $|N_r|=\lambda|O_r|$ and the above probability can be approximated by $ \frac{|N_r|}{\lambda|\mathrm{M}^H_r||\mathrm{M}^T_r|}$. We want the probability of generating a negative sample from the domain to be inversely proportional to the probability of the sample being true, so we define the probability as in Eq. \ref{eq:domain_sampling}. The results in section \ref{sec:experiments} are obtained with $\lambda$ set to $0.001$. We compare how different values of $\lambda$ influence our model's performance in Table \ref{tab:domain_sampling}. With a larger $\lambda$ and a higher domain sampling probability, our model's Hits@10 increases while the mean rank also increases. The rise of the mean rank is due to the higher probability of generating a valid triple as a negative sample, which causes the energy of valid triples to increase and leads to a higher overall rank of the correct entity. However, the reasoning capability is boosted, with higher Hits@10 as shown in the table.
\subsection{Performance on individual relations of WN18} \label{sec:rare_WN} We plot the performance of ITransF and STransE on each relation. We see that the improvement is greater on rare relations.
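To make the domain sampling procedure and Eq.~\ref{eq:domain_sampling} concrete, the following is a minimal sketch with a hypothetical toy triple store; the function names are ours and purely illustrative, not part of any released implementation.
\begin{verbatim}
# Illustrative sketch of domain sampling; toy data only.
import random

def domain_sampling_prob(train_triples, r, lam=0.001):
    heads = {h for (h, rel, t) in train_triples if rel == r}   # M^H_r
    tails = {t for (h, rel, t) in train_triples if rel == r}   # M^T_r
    n_r = sum(1 for (h, rel, t) in train_triples if rel == r)  # |N_r|
    return min(lam * len(heads) * len(tails) / n_r, 0.5)

def corrupt_tail(triple, train_triples, all_entities, lam=0.001):
    # A real implementation would also corrupt heads symmetrically and
    # avoid re-drawing the original tail; omitted here for brevity.
    h, r, t = triple
    p_r = domain_sampling_prob(train_triples, r, lam)
    if random.random() < p_r:   # sample from the relation's tail domain
        pool = [x for (hh, rel, x) in train_triples if rel == r]
    else:                       # sample from the whole entity set
        pool = list(all_entities)
    return (h, r, random.choice(pool))

# Toy usage.
train = [("a", "r1", "x"), ("b", "r1", "y"), ("c", "r2", "z")]
entities = {"a", "b", "c", "x", "y", "z"}
negative = corrupt_tail(("a", "r1", "x"), train, entities, lam=0.5)
\end{verbatim}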
\iffalse \subsection{Performance on different relations} \fi
\end{document}
\documentclass[11pt,a4paper]{article} \usepackage[hyperref]{acl2017} \newcommand{\FIXME}[1]{\textcolor{red}{[#1]}} \aclfinalcopy % Uncomment this line for the final submission
\def\aclpaperid{79} % Enter the acl Paper ID here
\newcommand\BibTeX{B{\sc ib}\TeX} \title{An Interpretable Knowledge Transfer Model \\ for Knowledge Base Completion} \author{Qizhe Xie, Xuezhe Ma, Zihang Dai, Eduard Hovy \\ Language Technologies Institute \\ Carnegie Mellon University \\ Pittsburgh, PA 15213, USA\\ {\tt \{qzxie, xuezhem, dzihang, hovy\}@cs.cmu.edu} \\} \date{} \begin{document} \maketitle
\begin{abstract} Knowledge bases are important resources for a variety of natural language processing tasks but suffer from incompleteness. We propose a novel embedding model, \emph{ITransF}, to perform knowledge base completion. Equipped with a sparse attention mechanism, ITransF discovers hidden concepts of relations and transfers statistical strength through the sharing of concepts. Moreover, the learned associations between relations and concepts, which are represented by sparse attention vectors, can be interpreted easily. We evaluate ITransF on two benchmark datasets---WN18 and FB15k for knowledge base completion and obtain improvements on both the mean rank and Hits@10 metrics, over all baselines that do not use additional information. \end{abstract}
\section{Introduction} Knowledge bases (KB), such as WordNet~\citep{FellbaumC98}, Freebase~\citep{Bollacker:2008}, YAGO ~\citep{Suchanek:2007} and DBpedia~\citep{LehmannIJJKMHMK15}, are useful resources for many applications such as question answering~\citep{berant-EtAl:2013:EMNLP,yih-EtAl:2015:ACL-IJCNLP,dai-li-xu:2016:P16-1} and information extraction~\citep{mintz-EtAl:2009:ACLIJCNLP}. However, knowledge bases suffer from incompleteness despite their formidable sizes ~\citep{NIPS2013_5028,West:2014:KBC:2566486.2568032}, leading to a number of studies on automatic knowledge base completion (KBC)~\citep{NickelMTG15} or link prediction. The fundamental motivation behind these studies is that there exist statistical regularities underlying the intertwined facts stored in the multi-relational knowledge base. By discovering generalizable regularities in known facts, missing ones may be recovered in a faithful way. Due to their excellent generalization capability, distributed representations, a.k.a. embeddings, have been popularized to address the KBC task~\citep{ICML2011Nickel_438,bordes:2011,Bordes2014SME,TransE,NIPS2013_5028,AAAI148531,guu-miller-liang:2015:EMNLP,STransE}. As a seminal work, \citet{TransE} proposes TransE, which models the statistical regularities with linear translations between entity embeddings operated by a relation embedding. Implicitly, TransE assumes that both entity embeddings and relation embeddings dwell in the same vector space, posing an unnecessarily strong prior. To relax this requirement, a variety of models first project the entity embeddings to a relation-dependent space~\citep{Bordes2014SME,ji-EtAl:2015:ACL-IJCNLP,AAAI159571,STransE}, and then model the translation property in the projected space. Typically, these relation-dependent spaces are characterized by projection matrices unique to each relation. As a benefit, different aspects of the same entity can be temporarily emphasized or suppressed as an effect of the projection.
For instance, STransE~\citep{STransE} utilizes two projection matrices per relation, one for the head entity and the other for the tail entity. \iffalse Of these studies, a number of neural network based techniques have emerged over the years to address the KBC task, among which embedding based models~\citep{ICML2011Nickel_438,bordes:2011,Bordes2014SME,TransE,NIPS2013_5028,AAAI148531,guu-miller-liang:2015:EMNLP,STransE} have stood out for its simplicity and effectiveness. \citet{TransE} proposed the TransE model that associated entities and relations with dense embedding vectors. To better model different aspects of the same entity, a variety of models map the entity embedding to a relation-dependent space~\citep{Bordes2014SME,ji-EtAl:2015:ACL-IJCNLP,AAAI159571,STransE}. For instance, STransE~\citep{STransE} projected the head entity and tail entity to a relation-dependent space by multiplying two relation-specific projection matrices. \fi Despite the superior performance of STransE compared to TransE, it is more prone to the data sparsity problem. Concretely, since the projection spaces are unique to each relation, projection matrices associated with rare relations can only be exposed to very few facts during training, resulting in poor generalization. For common relations, a similar issue exists. Without any restrictions on the number of projection matrices, logically related or conceptually similar relations may have distinct projection spaces, hindering the discovery, sharing, and generalization of statistical regularities. Previously, a line of research makes use of external information such as textual relations from web-scale corpus or node features~\citep{toutanova-EtAl:2015:EMNLP, toutanova-chen:2015:CVSC, nguyen2016neighborhood}, alleviating the sparsity problem. In parallel, recent work has proposed to model regularities beyond local facts by considering multi-relation paths~\citep{garciaduran-bordes-usunier:2015:EMNLP, lin-EtAl:2015:EMNLP1, implicit}. Since the number of paths grows exponentially with its length, as a side effect, path-based models enjoy much more training cases, suffering less from the problem. \iffalse However, for some relations, in practice, there are not enough data to estimate the projection matrices.%(repretition) due to the data sparsity problem in knowledge bases. This led to a vast amount of research on utilizing external information, such as textual relations from web-scale corpus~\citep{toutanova-EtAl:2015:EMNLP, toutanova-chen:2015:CVSC} and relation path~\citep{garciaduran-bordes-usunier:2015:EMNLP, lin-EtAl:2015:EMNLP1, implicit}, to enhance performance. %\FIXME{The performance decrease was not caused by data sparsity?} Unfortunately, such task-specific knowledge is costly to develop, making these models difficult to adapt to new tasks or new domains. \fi In this paper, we propose an interpretable knowledge transfer model (ITransF), which encourages the sharing of statistic regularities between the projection matrices of relations and alleviates the data sparsity problem. At the core of ITransF is a sparse attention mechanism, which learns to compose shared concept matrices into relation-specific projection matrices, leading to a better generalization property. Without any external resources, ITransF improves mean rank and Hits@10 on two benchmark datasets, over all previous approaches of the same kind. 
In addition, the parameter sharing is clearly indicated by the learned sparse attention vectors, enabling us to interpret how knowledge transfer is carried out. To induce the desired sparsity during optimization, we further introduce a block iterative optimization algorithm. In summary, the contributions of this work are: (i) proposing a novel knowledge embedding model which enables knowledge transfer by learning to discover shared regularities; (ii) introducing a learning algorithm to directly optimize a sparse representation from which the knowledge transferring procedure is interpretable; (iii) showing the effectiveness of our model by outperforming baselines on two benchmark datasets for knowledge base completion task. \section{Notation and Previous Models} Let $E$ denote the set of entities and $R$ denote the set of relations. In knowledge base completion, given a training set $P$ of triples $(h, r, t)$ where $h,t\in E$ are the head and tail entities having a relation $r\in R$, e.g., (\textit{Steve Jobs}, \texttt{FounderOf}, \textit{Apple}), we want to predict missing facts such as (\textit{Steve Jobs}, \texttt{Profession}, \textit{Businessperson}). Most of the embedding models for knowledge base completion define an energy function $f_r(h,t)$ according to the fact's plausibility~\citep{bordes:2011,Bordes2014SME,TransE,NIPS2013_5028,AAAI148531,yang-etal-2015, guu-miller-liang:2015:EMNLP,STransE}. The models are learned to minimize energy $f_r(h,t)$ of a plausible triple $(h,r,t)$ and to maximize energy $f_r(h',t')$ of an implausible triple $(h',r,t')$. Motivated by the linear translation phenomenon observed in well trained word embeddings~\citep{mikolov2013distributed}, TransE~\citep{TransE} represents the head entity $h$, the relation $r$ and the tail entity $t$ with vectors $\mathbf{h}, \mathbf{r}$ and $\mathbf{t} \in \mathbb{R}^{n}$ respectively, which were trained so that $\mathbf{h}+\mathbf{r}\approx \mathbf{t}$. They define the energy function as $$f_r(h,t) = \| \mathbf{h} + \mathbf{r} - \mathbf{t} \|_{\ell}$$ where $\ell=1$ or $2$, which means either the $\ell_1$ or the $\ell_2$ norm of the vector $\mathbf{h} + \mathbf{r} - \mathbf{t}$ will be used depending on the performance on the validation set. To better model relation-specific aspects of the same entity, TransR~\citep{AAAI159571} uses projection matrices and projects the head entity and the tail entity to a relation-dependent space. STransE~\citep{STransE} extends TransR by employing different matrices for mapping the head and the tail entity. The energy function is $$f_r(h,t) = \|\mathbf{W}_{r,1}\mathbf{h} + \mathbf{r} - \mathbf{W}_{r,2}\mathbf{t} \|_{\ell}$$ However, not all relations have abundant data to estimate the relation specific matrices as most of the training samples are associated with only a few relations, leading to the data sparsity problem for rare relations. \section{Interpretable Knowledge Transfer} \subsection{Model} As discussed above, a fundamental weakness in TransR and STransE is that they equip each relation with a set of unique projection matrices, which not only introduces more parameters but also hinders knowledge sharing. Intuitively, many relations share some concepts with each other, although they are stored as independent symbols in KB. For example, the relation ``(somebody) won award for (some work)'' and ``(somebody) was nominated for (some work)'' both describe a person's high-quality work which wins an award or a nomination respectively. 
This phenomenon suggests that one relation actually represents a collection of real-world concepts, and one concept can be shared by several relations. Inspired by the existence of such lower-level concepts, instead of defining a unique set of projection matrices for every relation, we can alternatively define a small set of concept projection matrices and then compose them into customized projection matrices. Effectively, the relation-dependent translation space is then reduced to the smaller concept spaces. However, in general, we do not have prior knowledge about what concepts exist out there and how they are composed to form relations. Therefore, in ITransF, we propose to learn this information simultaneously from data, together with all knowledge embeddings. Following this idea, we first present the model details, then discuss the optimization techniques for training. \paragraph{Energy function} Specifically, we stack all the concept projection matrices to a 3-dimensional tensor $\mathbf{D}\in \mathbb{R}^{m \times n \times n}$, where $m$ is the pre-specified number of concept projection matrices and $n$ is the dimensionality of entity embeddings and relation embeddings. We let each relation select the most useful projection matrices from the tensor, where the selection is represented by an attention vector. The energy function of ITransF is defined as: \begin{equation} f_r(h,t) = \| \pmb{\alpha}_{r}^{H} \cdot \mathbf{D} \cdot \mathbf{h} + \mathbf{r} - \pmb{\alpha}_{r}^{T} \cdot \mathbf{D} \cdot \mathbf{t}\|_{\ell} \label{eq:energy function} \end{equation} where $\pmb{\alpha}_{r}^{H},\pmb{\alpha}_{r}^{T} \in [0,1]^m$, satisfying $\sum_i\pmb{\alpha}_{r,i}^{H}=\sum_i\pmb{\alpha}_{r,i}^{T}=1$, are normalized attention vectors used to compose all concept projection matrices in $\mathbf{D}$ by a convex combination. It is obvious that STransE can be expressed as a special case of our model when we use $m=2|R|$ concept matrices and set attention vectors to disjoint one-hot vectors. Hence our model space is a generalization of STransE. Note that we can safely use fewer concept matrices in ITransF and obtain better performance (see section \ref{sec:compress}), though STransE always requires $2|R|$ projection matrices. We follow previous work to minimize the following hinge loss function: \begin{equation} \mathcal{L}=\sum_{\substack{(h,r,t) \sim P,\\ (h',r,t') \sim N}} \left[ \gamma + f_{r}(h,t) -f_{r}(h',t') \right]_+ \label{eq:hinge} \end{equation} where $P$ is the training set consisting of correct triples, $N$ is the distribution of corrupted triples defined in section \ref{sec:sampling}, and $[\cdot]_+ = \max(\cdot, 0)$. Note that we have omitted the dependence of $N$ on $(h,r,t)$ to avoid clutter. We normalize the entity vectors $\mathbf{h},\mathbf{t}$, and the projected entity vectors $\pmb{\alpha}_{r}^{H} \cdot \mathbf{D}\cdot \mathbf{h}$ and $\pmb{\alpha}_{r}^{T} \cdot \mathbf{D} \cdot \mathbf{t}$ to have unit length after each update, which is an effective regularization method that benefits all models. \paragraph{Sparse attention vectors} In Eq.~\eqref{eq:energy function}, we have defined $\pmb{\alpha}_{r}^{H},\pmb{\alpha}_{r}^{T}$ to be some normalized vectors used for composition. With a dense attention vector, it is computationally expensive to perform the convex combination of $m$ matrices in each iteration. Moreover, a relation usually does not consist of all existing concepts in practice. 
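As a brief aside, the following minimal sketch illustrates the energy function in Eq.~\eqref{eq:energy function} and the hinge loss in Eq.~\eqref{eq:hinge} with toy dimensions and random values; it only shows how the attention vectors combine the concept projection matrices and is not our training code.
\begin{verbatim}
# Toy illustration of the ITransF energy function and hinge loss.
import numpy as np

m, n = 4, 5                     # number of concept matrices, embedding dim
rng = np.random.default_rng(0)
D = rng.normal(size=(m, n, n))  # stacked concept projection matrices
h, r, t = rng.normal(size=(3, n))          # toy head, relation, tail vectors
h2, t2 = rng.normal(size=(2, n))           # toy corrupted head and tail
alpha_H = np.array([0.7, 0.3, 0.0, 0.0])   # attention over concepts (sums to 1)
alpha_T = np.array([0.0, 0.5, 0.5, 0.0])

def energy(h, r, t, alpha_H, alpha_T, D, ell=1):
    # f_r(h,t) = || alpha_H . D . h + r - alpha_T . D . t ||_ell
    proj_h = np.einsum("i,ijk,k->j", alpha_H, D, h)
    proj_t = np.einsum("i,ijk,k->j", alpha_T, D, t)
    return np.linalg.norm(proj_h + r - proj_t, ord=ell)

gamma = 5.0                     # toy margin
loss = max(gamma + energy(h, r, t, alpha_H, alpha_T, D)
                 - energy(h2, r, t2, alpha_H, alpha_T, D), 0.0)
\end{verbatim}
With $m=2|R|$ concept matrices and disjoint one-hot attention vectors, the same computation reduces to the STransE energy, as noted above.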
Furthermore, when the attention vectors are sparse, it is often easier to interpret their behaviors and understand how concepts are shared by different relations. Motivated by these potential benefits, we further hope to learn sparse attention vectors in ITransF. However, directly posing $\ell_1$ regularization~\citep{tibshirani1996regression} on the attention vectors fails to produce sparse representations in our preliminary experiments, which motivates us to enforce $\ell_0$ constraints on $\pmb{\alpha}_{r}^T,\pmb{\alpha}_{r}^H$. In order to satisfy both the normalization condition and the $\ell_0$ constraints, we reparameterize the attention vectors in the following way: \begin{align*} \pmb{\alpha}_{r}^{H}&=\mathrm{SparseSoftmax}(\mathbf{v}_{r}^{H}, \mathbf{I}_{r}^{H}) \\ \pmb{\alpha}_{r}^{T}&=\mathrm{SparseSoftmax}(\mathbf{v}_{r}^{T}, \mathbf{I}_{r}^{T}) \end{align*} where $\mathbf{v}_{r}^{H}, \mathbf{v}_{r}^{T} \in \mathbb{R}^m$ are the pre-softmax scores, $\mathbf{I}_{r}^{H}, \mathbf{I}_{r}^{T}\in \{0,1\}^{m}$ are the sparse assignment vectors, indicating the non-zero entries of the attention vectors, and $\mathrm{SparseSoftmax}$ is defined as \begin{equation*} \mathrm{SparseSoftmax}(\mathbf{v}, \mathbf{I})_i=\frac{\exp(\mathbf{v}_i / \tau) \mathbf{I}_i}{\sum_j \exp(\mathbf{v}_j / \tau) \mathbf{I}_j} \end{equation*} with $\tau$ being the temperature of the Softmax. With this reparameterization, $\mathbf{v}_{r}^{H}, \mathbf{v}_{r}^{T}$ and $\mathbf{I}_{r}^{H}, \mathbf{I}_{r}^{T}$ replace $\pmb{\alpha}_{r}^T,\pmb{\alpha}_{r}^H$ as the real parameters of the model. Also, note that it is equivalent to pose the $\ell_0$ constraints on $\mathbf{I}_{r}^{H}, \mathbf{I}_{r}^{T}$ instead of $\pmb{\alpha}_{r}^T,\pmb{\alpha}_{r}^H$. Putting these modifications together, we can rewrite the optimization problem as \begin{equation} \begin{aligned} & {\text{minimize}} & & \mathcal{L} \\ & \text{subject to} & & \|\mathbf{I}_{r}^{H}\|_{0} \leq k,\|\mathbf{I}_{r}^{T}\|_{0} \leq k \end{aligned} \label{eq:l0_problem} \end{equation} where $\mathcal{L}$ is the loss function defined in Eq.~\eqref{eq:hinge}.
\subsection{Block Iterative Optimization} Though sparseness is favorable in practice, it is generally NP-hard to find the optimal solution under $\ell_0$ constraints. Thus, we resort to an approximate algorithm in this work. For convenience, we refer to the parameters with and without the sparse constraints as the \textit{sparse} partition and the \textit{dense} partition, respectively. Based on this notion, the high-level idea of the approximate algorithm is to iteratively optimize one of the two partitions while holding the other one fixed. Since all parameters in the dense partition, including the embeddings, the projection matrices, and the pre-softmax scores, are fully differentiable with the sparse partition fixed, we can simply utilize SGD to optimize the dense partition. Then, the core difficulty lies in the step of optimizing the sparse partition (i.e. the sparse assignment vectors), during which we want the following two properties to hold \begin{enumerate}[itemsep=-1mm] \item the sparsity required by the $\ell_0$ constraint is maintained, and \item the cost defined by Eq.~\eqref{eq:hinge} is decreased. \end{enumerate} Satisfying the two criteria seems to highly resemble the original problem defined in Eq.~\eqref{eq:l0_problem}. However, the dramatic difference here is that with the parameters in the dense partition regarded as constant, the cost function is decoupled w.r.t. each relation $r$; a small illustrative sketch of the sparse reparameterization above is given below, before we turn to the resulting per-relation update.
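The sketch below is purely illustrative (toy values; the helper names are ours): it shows the $\mathrm{SparseSoftmax}$ computation and how an assignment vector with at most $k$ non-zero entries can be built from per-matrix costs with a partial sort, in the spirit of the per-relation update described next.
\begin{verbatim}
# Toy sketch of SparseSoftmax and of a top-k assignment vector.
import numpy as np

def sparse_softmax(v, I, tau=0.25):
    # SparseSoftmax(v, I)_i = exp(v_i / tau) * I_i / sum_j exp(v_j / tau) * I_j
    w = np.exp(v / tau) * I
    return w / w.sum()

def topk_assignment(costs, k):
    # I_i = 1 for the k lowest single-matrix costs, 0 otherwise (argpartition).
    I = np.zeros_like(costs)
    I[np.argpartition(costs, k)[:k]] = 1.0
    return I

v = np.array([0.3, -1.2, 0.8, 0.1])        # toy pre-softmax scores v_r
costs = np.array([4.0, 1.5, 3.2, 0.7])     # toy single-matrix costs
I = topk_assignment(costs, k=2)            # -> [0., 1., 0., 1.]
alpha = sparse_softmax(v, I)               # only entries with I_i = 1 are non-zero
\end{verbatim}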
In other words, the optimal choice of $\mathbf{I}_{r}^{H}, \mathbf{I}_{r}^{T}$ is independent of $\mathbf{I}_{r'}^{H}, \mathbf{I}_{r'}^{T}$ for any $r' \neq r$. Therefore, we only need to consider the optimization for a single relation $r$, which is essentially an assignment problem. Note, however, that $\mathbf{I}_{r}^{H}$ and $\mathbf{I}_{r}^{T}$ are still coupled; without this coupling we would basically be left with a knapsack problem. In principle, one can explore combinatorial optimization techniques to optimize $\mathbf{I}_{r}^{H}, \mathbf{I}_{r}^{T}$ jointly, which usually involve some iterative procedure. To avoid adding another inner loop to our algorithm, we turn to a simple but fast approximation method based on the following single-matrix cost. Specifically, for each relation $r$, we consider the induced cost $\mathcal{L}_{r,i}^{H}$ where only a single projection matrix $i$ is used for the head entity: \begin{equation*} \mathcal{L}_{r,i}^{H} = \sum_{\substack{(h,r,t) \sim P_r,\\ (h',r,t') \sim N_r}} \left[ \gamma + f_{r,i}^{H}(h,t) - f_{r,i}^{H}(h',t') \right]_+ \end{equation*} where $f_{r,i}^{H}(h,t) = \| \mathbf{D}_i \cdot \mathbf{h} + \mathbf{r} - \pmb{\alpha}_{r}^{T} \cdot \mathbf{D} \cdot \mathbf{t}\|$ is the corresponding energy function, and the subscript in $P_r$ and $N_r$ denotes the subsets with relation $r$. Intuitively, $\mathcal{L}_{r,i}^{H}$ measures, given the current tail attention vector $\pmb{\alpha}_{r}^{T}$, how implausible $\mathbf{D}_i$ would be if only one projection matrix could be chosen for the head entity. Hence, $i^* = \arg\min_i \mathcal{L}_{r,i}^{H}$ gives us the best single projection matrix on the head side given $\pmb{\alpha}_{r}^{T}$. Now, in order to choose the best $k$ matrices, we basically ignore the interaction among projection matrices, and update $\mathbf{I}_{r}^{H}$ in the following way: \begin{equation*} \mathbf{I}_{r,i}^{H} \leftarrow \begin{cases} 1, &i \in \mathrm{argpartition}_{i}(\mathcal{L}_{r,i}^{H}, k) \\ 0, &\text{otherwise} \end{cases}%, \, \forall i
\end{equation*} where the function $\mathrm{argpartition}_{i}(x_i, k)$ produces the index set of the lowest-$k$ values of $x_i$. Analogously, we can define the single-matrix cost $\mathcal{L}_{r,i}^{T}$ and the energy function $f_{r,i}^{T}(h,t)$ on the tail side in a symmetric way. Then, the update rule for $\mathbf{I}_{r}^{T}$ follows the same derivation. Admittedly, the approximation described here is relatively crude. But as we will show in section \ref{sec:experiments}, the proposed algorithm yields good performance empirically. We leave the further improvement of the optimization method as future work.
\iffalse Though sparseness is favorable in practice, even in linear regression, it has been shown to be an NP-hard problem to find the optimal solutions under $\ell_0$ constraints. %A lot of algorithms such as Approximated algorithms such as forward stepwise algorithm are proposed.
Here we propose an approximated algorithm to solve it. We divide all of our parameters into two partitions: differentiable and non-differentiable, and we iteratively optimize those two. Differentiable parameters such as embeddings, projection matrices are optimized by SGD. Non-differentiable parameters are optimized by a greedy approximated process, aiming to minimize the cost function. Recall that we want the number of concepts associated with relations to be less than or equal to $k$, indicated by the $\ell_0$ constraint of attention vectors.
We represent the mapping between relation $r$ and concepts by two indicator vectors $\mathbf{I}_{r}^{H}, \mathbf{I}_{r}^{T}\in \{-\infty ,1\}^{m}$, the indicator of non-zero entries in attention vectors. The $\ell_0$ constraint is guaranteed as the number of $1$s in $\mathbf{I}_{r}^{H},\mathbf{I}_{r}^{T}$ is less than or equal to $k$. Those mapping vectors constitutes the non-differentiable parameters in our framework. Formally, the attention vectors are calculated as follows: $$\pmb{\alpha}_{r}^{H}=\mathrm{Softmax}(\mathbf{v}_{r}^{H} \circ \mathbf{I}_{r}^{H})$$ $$\pmb{\alpha}_{r}^{T}=\mathrm{Softmax}(\mathbf{v}_{r}^{T} \circ \mathbf{I}_{r}^{T})$$ \begin{equation*} \mathrm{Softmax}(\mathbf{x})_i=\frac{\exp(\mathbf{x}_i / \tau)}{\sum_j \exp(\mathbf{x}_j / \tau)} \end{equation*} where $\mathbf{v}_{r}^{H}, \mathbf{v}_{r}^{T} \in (0, \infty)^m$ are the parameters for attention, $\circ$ is element wise multiplication, $\tau$ is the temperature of softmax ($\tau$ is set to $1/4$ in our experiments). Then there are only $k$ non-zero elements in $\pmb{\alpha}_{r}^{H}$ and $\pmb{\alpha}_{r}^{T}$ since $exp(-\infty)=0$. The algorithm is: \begin{itemize} \item[(1)] Randomly initialize mapping vectors $\mathbf{I}_{r}^{H}, \mathbf{I}_{r}^{T}$, which is not differentiable. \item[(2)] Optimize differentiable parameters by SGD with fixed mapping vectors for several epochs. \item[(3)] If it is the early stage of training, fix the differentiable parameters learned in the previous step and optimize mapping vectors $\mathbf{I}_{r}^{H}, \mathbf{I}_{r}^{T}$ to minimize objective function. Go to step (2) if the model has not converged. \end{itemize} How do we optimize mapping vectors? A brute-force algorithm is to enumerate all possible values of mapping vectors. However, the time complexity of such a naive algorithm grows exponentially. Instead, we make several approximations to optimize the mapping vectors greedily. We define $J_{r,i}^{H}$ as an approximation to the cost $L$ when relation $r$ is mapped to concept $i$, i.e., $\textbf{I}_{r,i}^{H}=1$. We select the top $k$ concepts with smallest approximated cost when we optimize mapping vectors. The approximated cost takes the same hinge loss as the original cost function shown in Equation \ref{eq:hinge} with a concept specific energy function $f_{r,i}^{H}(h,t)$: \begin{equation*} J_{r,i}^{H}=\sum_{\substack{(h,r,t) \in P, \\ (h',r,t') \sim N(h,r,t)}}\max(\gamma + f_{r,i}^{H}(h,t) -f_{r,i}^{H}(h',t'), 0) \end{equation*} where the energy function is similar to the original function defined in Equation \ref{eq:energy function} except that relation $r$ is completely assigned with concept $i$: \begin{align*} f_{r,i}^{H}(h,t) &= \| \mathbf{D}_i \cdot \mathbf{h} + \mathbf{r} - \pmb{\alpha}_{r}^{T} \cdot \mathbf{D} \cdot \mathbf{t}\| \\ \end{align*} Similarly, the formulations for tail entities are defined as \begin{align*} f_{r,i}^{T}(h,t) &= \| \pmb{\alpha}_{r}^{H} \cdot \mathbf{D} \cdot \mathbf{h} + \mathbf{r} - \mathbf{D}_i \cdot \mathbf{t}\| \end{align*} $$J_{r,i}^{T}=\sum_{\substack{(h,r,t) \in P, \\ (h',r,t') \sim N(h,r,t)}}[\gamma + f_{r,i}^{T}(h,t) -f_{i}^{r,2}(h',t')]_+$$ The above process is a greedy algorithm. We make the following relaxations to ensure efficient and effective approximation: Firstly, concepts used to project head and tail entities are decoupled and selected independently. 
Secondly, $J_{r,i}^{H}$ and $J_{r,i}^{T}$ are evaluated on condition that concept $i$ is fully utilized, i.e., we ignore the interaction between concept $i$ and other concepts by setting attention $\pmb{\alpha}_{r,1,i}= 1$\footnote{The relaxation is made to reduce the computation complexity. Otherwise, to evaluate indicator vectors involving multiple matrices, we need to perform SGD to get the corresponding optimal values of $v_{r,1}$ and $v_{r,2}$. }. The greedy process works well empirically. We draw our inspiration from Expectation-Maximization (EM) ~\citep{dempster1977maximum} and LightRNN~\citep{LightRNN}. The above algorithm is similar to EM and LightRNN in the sense that some parameters can change rapidly based on the estimation of the corresponding cost. In other words, we are not changing and exploring the mapping vectors bit by bit but they can be reassigned with a completely different value in one step, leading to fast convergence. \fi \iffalse \begin{algorithm}[] Initialize $\mathbf{I}_{r}^{H}$,$\mathbf{I}_{r}^{T}$ randomly \\ \While {not convergence} { \For{\texttt{epoch} $= 1$ to $T_1$}{ Optimize $L$ by SGD on $\theta$ with $I_{r}^{H}, I_{r}^{T}$ fixed } $\texttt{tot\_epoch} = \texttt{tot\_epoch} + T_1$ \\ \If{$\texttt{tot\_epoch} \leq T_2$}{ { Find $\mathbf{I}_{r}^{(H)'}, \mathbf{I}_{r}^{(T)'}$ which approximately maximize $L$. \\ Set $\mathbf{I}_{r}^{H}=\mathbf{I}_{r}^{(H)'}, \mathbf{I}_{r}^{T}=\mathbf{I}_{r}^{(T)'}$. }} } \caption{Coordinate ascent optimization algorithm} \label{alg:opt} \end{algorithm} \fi \subsection{Corrupted Sample Generating Method} \label{sec:sampling} Recall that we need to sample a negative triple $(h',r,t')$ to compute hinge loss shown in Eq.~\ref{eq:hinge}, given a positive triple $(h,r,t)\in P$. The distribution of negative triple is denoted by $N(h,r,t)$. Previous work~\citep{TransE, AAAI159571, yang-etal-2015,STransE} generally constructs a set of corrupted triples by replacing the head entity or tail entity with a random entity uniformly sampled from the KB. However, uniformly sampling corrupted entities may not be optimal. Often, the head and tail entities associated a relation can only belong to a specific domain. When the corrupted entity comes from other domains, it is very easy for the model to induce a large energy gap between true triple and corrupted one. As the energy gap exceeds $\gamma$, there will be no training signal from this corrupted triple. In comparison, if the corrupted entity comes from the same domain, the task becomes harder for the model, leading to more consistent training signal. Motivated by this observation, we propose to sample corrupted head or tail from entities in the same domain with a probability $p_r$ and from the whole entity set with probability $1-p_r$. The choice of relation-dependent probability $p_r$ is specified in Appendix \ref{sec:domain_sampling}. In the rest of the paper, we refer to the new proposed sampling method as "domain sampling". \section{Experiments} \label{sec:experiments} \subsection{Setup} To evaluate link prediction, we conduct experiments on the WN18 (WordNet) and FB15k (Freebase) introduced by \citet{TransE} and use the same training/validation/test split as in \citep{TransE}. The information of the two datasets is given in Table \ref{tab:datasets}. In knowledge base completion task, we evaluate model's performance of predicting the head entity or the tail entity given the relation and the other entity. 
For example, to predict the head entity given relation $r$ and tail $t$ in a triple $(h,r,t)$, we compute the energy function $f_r(h', t)$ for each entity $h'$ in the knowledge base and rank all the entities according to the energy. We follow \citet{TransE} to report the \emph{filter} results, i.e., removing all other correct candidates $h'$ from the ranking. The rank of the correct entity is then obtained and we report the mean rank (the mean of the predicted ranks) and Hits@10 (top-$10$ accuracy). A lower mean rank or a higher Hits@10 indicates better performance.
\subsection{Implementation Details} We initialize the projection matrices with identity matrices to which we add a small noise sampled from the normal distribution $\mathcal{N}(0,\,0.005^2)$. The entity and relation vectors of ITransF are initialized by TransE~\citep{TransE}, following~\citet{AAAI159571, ji-EtAl:2015:ACL-IJCNLP, Garcia-DuranBUG15, garciaduran-bordes-usunier:2015:EMNLP, lin-EtAl:2015:EMNLP1}. We ran mini-batch SGD until convergence. We employ the {``\textit{Bernoulli}''} sampling method to generate incorrect triples, as used in \citet{AAAI148531}, \citet{AAAI159571}, \citet{He:2015}, \citet{ji-EtAl:2015:ACL-IJCNLP} and \citet{lin-EtAl:2015:EMNLP1}. STransE~\citep{STransE} is the knowledge embedding model most similar to ours, except that it uses distinct projection matrices for each relation. We use the same hyperparameters as STransE, and no significant improvement is observed when we alter them. We set the margin $\gamma$ to $5$ and the embedding dimension $n$ to $50$ for WN18, and $\gamma = 1, n = 100$ for FB15k. We set the batch size to $20$ for WN18 and $1000$ for FB15k. The learning rate is $0.01$ on WN18 and $0.1$ on FB15k. We use $30$ matrices on WN18 and $300$ matrices on FB15k. All the models are implemented with Theano~\citep{bergstra2010theano}. The Softmax temperature is set to $1/4$. %\FIXME{T1, T2}
\subsection{Results \& Analysis} The overall link prediction results\footnote{Note that although IRN~\citep{implicit} does not explicitly exploit path information, it performs multi-step inference through the multiple usages of external memory. When IRN is allowed to access memory once for each prediction, its Hits@10 is $80.7$, similar to models without path information.} are reported in Table \ref{tab:main}. Our model consistently outperforms previous models without external information on both metrics of WN18 and FB15k. On WN18, we even achieve a much better mean rank, with comparable Hits@10, than the current state-of-the-art model IRN, which employs external information. We can see that path information is very helpful on FB15k, and models taking advantage of path information outperform intrinsic models by a significant margin. Indeed, a lot of facts are easier to recover with the help of multi-step inference. For example, if we know Barack Obama is born in Honolulu, a city in the United States, then we easily know the nationality of Obama is the United States. %In our framework, since Obama+IsBornIn$\approx$Honolulu, Honolulu+IsCityIn$\approx$US. Then we can expect Obama+IsBornIn+IsCityIn$\approx$US.
%Projection matrices are not exactly the same
A straightforward way of extending our proposed model to a $k$-step path $P=\{r_i\}_{i=1}^{k}$ is to define a path energy function $\| \pmb{\alpha}_{P}^{H} \cdot \mathbf{D} \cdot \mathbf{h} + \sum_{r_i\in P}\mathbf{r}_i - \pmb{\alpha}_{P}^{T} \cdot \mathbf{D} \cdot \mathbf{t}\|_{\ell}$, where $\pmb{\alpha}_{P}^{H}$ is a concept association vector related to the path.
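A minimal sketch of this path energy function follows; it is only one possible reading of the formula above, with toy values and with the path-level attention vectors assumed to be given (multi-step paths are not part of the model evaluated here).
\begin{verbatim}
# Toy sketch of the k-step path energy
# || alpha_P^H . D . h + sum_i r_i - alpha_P^T . D . t ||.
import numpy as np

m, n = 4, 5
rng = np.random.default_rng(0)
D = rng.normal(size=(m, n, n))             # concept projection matrices
h, t = rng.normal(size=(2, n))             # head and tail embeddings
path_rels = rng.normal(size=(3, n))        # embeddings of the relations on the path
alpha_P_H = np.array([0.5, 0.5, 0.0, 0.0]) # hypothetical path-level attention
alpha_P_T = np.array([0.0, 0.0, 1.0, 0.0])

proj_h = np.einsum("i,ijk,k->j", alpha_P_H, D, h)
proj_t = np.einsum("i,ijk,k->j", alpha_P_T, D, t)
path_energy = np.linalg.norm(proj_h + path_rels.sum(axis=0) - proj_t, ord=1)
\end{verbatim}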
We plan to extend our model to multi-step paths in the future. To provide a detailed understanding of why the proposed model achieves better performance, we present some further analysis in the sequel.
\iffalse In many knowledge bases, a small number of relations enjoy the majority of data, while a lot of relations are rare and hard to deal with. The statistics of relations on FB15k and WN18 are shown in Figure \ref{fig:stat}. We can clearly see that the distributions exhibit long tails, just like the Zipf's law for word frequency. \fi
\paragraph{Performance on Rare Relations} In the proposed ITransF, we design an attention mechanism to encourage knowledge sharing across different relations. Naturally, facts associated with rare relations should benefit most from such sharing, boosting the overall performance. To verify this hypothesis, we investigate our model's performance on relations with different frequencies. The overall distribution of relation frequencies resembles that of word frequencies, subject to Zipf's law. Since the frequencies of relations approximately follow a power-law distribution, it is natural to compare relations on a log-frequency scale. The statistics of relations on FB15k and WN18 are shown in Figure \ref{fig:stat}. We can clearly see that the distributions exhibit long tails, just like Zipf's law for word frequency. In order to study the performance on relations with different frequencies, we sort all relations by their frequency in the training set, and split them evenly into 3 buckets so that each bucket covers a similar interval of log frequency. Within each bucket, we compare our model with STransE, as shown in Figure \ref{fig:rare}.\footnote{Domain sampling is not employed.} As we can see, on WN18, ITransF outperforms STransE by a significant margin on rare relations. In particular, in the last bin (rarest relations), the average Hits@10 increases from $74.4$ to $92.0$, showing the great benefits of transferring statistical strength from common relations to rare ones. The comparison on each relation is shown in Appendix \ref{sec:rare_WN}, where the same trend can be observed. On FB15k, we can also observe a similar pattern, although the degree of improvement is less significant. We conjecture that this difference is rooted in the fact that many rare relations on FB15k have disjoint domains, which makes knowledge transfer through common concepts harder.
\paragraph{Interpretability} In addition to the quantitative evidence supporting the effectiveness of knowledge sharing, we provide some intuitive examples to show how knowledge is shared in our model. As we mentioned earlier, the sparse attention vectors fully capture the association between relations and concepts and hence the knowledge transfer among relations. Thus, we visualize the attention vectors for several relations on both WN18 and FB15K in Figure \ref{fig:att}. For WN18, the words ``hyponym'' and ``hypernym'' refer to words with a more specific or more general meaning, respectively. For example, PhD is a hyponym of student and student is a hypernym of PhD. As we can see, concepts associated with the head entities in one relation are also associated with the tail entities in its reverse relation. Further, ``instance\_hypernym'' is a special hypernym with the head entity being an instance, and the tail entity being an abstract notion. A typical example is $(\textit{New York}, \texttt{instance\_hypernym}, \textit{city})$.
This connection has also been discovered by our model, indicated by the fact that ``instance\_hypernym(T)'' and ``hypernym(T)'' share a common concept matrix. Finally, for symmetric relations like ``similar\_to'', we see the head attention is identical to the tail attention, which well matches our intuition. On FB15k, we also see the sharing between reverse relations, as in ``(somebody) won\_award\_for (some work)'' and ``(some work) award\_winning\_work (somebody)''. What's more, although relation ``won\_award\_for'' and ``was\_nominated\_for'' share the same concepts, their attention distributions are different, suggesting distinct emphasis. Finally, symmetric relations like spouse behave similarly as mentioned before. \begin{center} \end{center} \paragraph{Model Compression} \label{sec:compress} A byproduct of parameter sharing mechanism employed by ITransF is a much more compact model with equal performance. Figure \ref{fig:num_of_matrix} plots the average performance of ITransF against the number of projection matrices $m$, together with two baseline models. On FB15k, when we reduce the number of matrices from $2200$ to $30$ ($\sim90\times$ compression), our model performance decreases by only $0.09\%$ on Hits@10, still outperforming STransE. Similarly, on WN18, ITransF continues to achieve the best performance when we reduce the number of concept project matrices to $18$. \section{Analysis on Sparseness} Sparseness is desirable since it contribute to interpretability and computational efficiency of our model. We investigate whether enforcing sparseness would deteriorate the model performance and compare our method with another sparse encoding methods in this section. \paragraph{Dense Attention w/o $\ell_1$ regularization} Although $\ell_0$ constrained model usually enjoys many practical advantages, it may deteriorate the model performance when applied improperly. Here, we show that our model employing sparse attention can achieve similar results with dense attention with a significantly less computational burden. We also compare dense attention with $\ell_1$ regularization. We set the $\ell_1$ coefficient to $0.001$ in our experiments and does not apply Softmax since the $\ell_1$ of a vector after Softmax is always $1$. We compare models in a setting where the computation time of dense attention model is acceptable\footnote{With $300$ projection matrices, it takes $1h1m$ to run one epoch for a model with dense attention.}. We use $22$ weight matrices on WN18 and $15$ weight matrices on FB15k and train both the models for $2000$ epochs. The results are reported in Table \ref{tab:dense}. Generally, ITransF with sparse attention has slightly better or comparable performance comparing to dense attention. Further, we show the attention vectors of model with $\ell_1$ regularized dense attention in Figure \ref{fig:att_l1}. We see that $\ell_1$ regularization does not produce a sparse attention, especially on FB15k. \paragraph{Nonnegative Sparse Encoding} In the proposed model, we induce the sparsity by a carefully designed iterative optimization procedure. Apart from this approach, one may utilize sparse encoding techniques to obtain sparseness based on the pretrained projection matrices from STransE. 
\paragraph{Nonnegative Sparse Encoding}
In the proposed model, we induce the sparsity by a carefully designed iterative optimization procedure. Apart from this approach, one may utilize sparse encoding techniques to obtain sparseness based on the pretrained projection matrices from STransE. Concretely, stacking the $2|R|$ pretrained projection matrices into a 3-dimensional tensor $\mathbf{X}\in \mathbb{R}^{2|R| \times n \times n}$, similar sparsity can be induced by solving the $\ell_1$-regularized reconstruction problem $\min_{\mathbf{A},\mathbf{D}} ||\mathbf{X}-\mathbf{DA}||_2^2 + \lambda \|\mathbf{A}\|_{\ell_1}$. Basically, $\mathbf{A}$ plays the same role as the attention vectors in our model. For more details, we refer readers to \citet{faruqui-EtAl:2015:ACL-IJCNLP}. For completeness, we compare our model with the aforementioned approach\footnote{We use the toolkit provided by \citet{faruqui-EtAl:2015:ACL-IJCNLP}.}. The comparison is summarized in Table \ref{tab:optimization}. On both benchmarks, ITransF achieves a significant improvement over sparse encoding of the pretrained model. This performance gap is to be expected, since the objective of sparse encoding methods is to minimize the reconstruction loss rather than to optimize the criterion for link prediction.
\iffalse To investigate whether our representation, we apply a non-negative sparse encoding method to obtain the sparse representation of projection matrices. We use the toolkit provided by \citep{faruqui-EtAl:2015:ACL-IJCNLP} and set the hyperparamters so that we obtain average degree of sparseness. We first train a STransE model which utilize separate projection matrices for different relations and stack all the projection matrices in a 3-Dimensional tensor $X\in \mathbb{R}^{2H\times n \times n}$ where $H$ is the number of relations. Then we minimize the following reconstruction loss \begin{equation} \begin{aligned} \min_{\mathbf{D},\mathbf{A}} ||\mathbf{X}-\mathbf{DA}||_2^2 + \lambda \Omega(\mathbf{A}) + \gamma ||\mathbf{D}||_2^2 \end{aligned} \end{equation} where $\mathbf{D}\in \mathbb{R}^{m \times n \times n}$ is the basis matrices and $\Omega$ is a regularizer to ensure sparseness representations. which utilize a $\ell_1$ regularizer. We \fi
\section{Related Work}
\label{sec:related_work}
In KBC, CTransR~\citep{AAAI159571} enables relation embedding sharing across similar relations, but it clusters relations before training rather than learning the clustering in a principled way. Further, it does not solve the data sparsity problem, because there is no sharing of the projection matrices, which contain far more parameters. Learning the association between semantic relations has been used in related problems such as relational similarity measurement~\citep{turney2012domain} and relation adaptation~\citep{bollegala2015embedding}. Data sparsity is a common problem in many fields. Transfer learning~\citep{pan2010survey} has been shown to be a promising way to transfer knowledge and statistical strength across similar models or languages. For example, \citet{D16-1153} transfers models from resource-rich languages to low-resource languages by parameter sharing through common phonological features in named entity recognition. \citet{zoph-EtAl:2016:EMNLP2016} initialize from models trained on resource-rich languages to translate low-resource languages. Several works on obtaining sparse attention~\citep{martins2016softmax, makhzani2013k,OUTRAGEOUSLY} share a similar idea of sorting the values before the softmax and keeping only the $K$ largest values; a minimal sketch of this idea is given below. However, the sorting operation in these works is not GPU-friendly.
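The sketch below (a Python/numpy illustration of the general idea, not code from the cited works) keeps the $K$ largest scores and renormalizes; for brevity it uses np.argpartition instead of a full sort. Our SparseSoftmax differs in that the support is given by learned indicator vectors rather than being recomputed from the scores at every step.
\begin{verbatim}
import numpy as np

def topk_sparse_softmax(scores, k, tau=1.0):
    """Keep only the k largest scores, zero out the rest, then renormalize."""
    top = np.argpartition(scores, -k)[-k:]   # indices of the k largest scores
    mask = np.zeros_like(scores)
    mask[top] = 1.0
    weights = np.exp(scores / tau) * mask
    return weights / weights.sum()

scores = np.array([0.1, 2.0, -1.3, 0.7, 1.5])
print(topk_sparse_softmax(scores, k=2))      # non-zero only at positions 1 and 4
\end{verbatim}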
The block iterative optimization algorithm in our work is inspired by LightRNN~\citep{LightRNN}, which allocates every word in the vocabulary to a cell of a table: a word is represented by a row vector and a column vector depending on its position in the table, and the embeddings and the allocation of words in the table are optimized iteratively.
\section{Conclusion and Future Work}
In summary, we propose a knowledge embedding model which can discover shared hidden concepts, and design a learning algorithm to induce the interpretable sparse representation. Empirically, we show that our model improves performance on two benchmark datasets over all previous models of the same kind, without using external resources. In the future, we plan to enable ITransF to perform multi-step inference, and to extend the sharing mechanism to entity and relation embeddings, further enhancing the statistical binding across parameters. In addition, our framework can also be applied to multi-task learning, promoting a finer sharing among different tasks.
\section*{Acknowledgments}
We thank the anonymous reviewers and Graham Neubig for valuable comments. We thank Yulun Du, Paul Mitchell, Abhilasha Ravichander, Pengcheng Yin and Chunting Zhou for suggestions on the draft. We are also appreciative of the great working environment provided by the staff in LTI. This research was supported in part by DARPA grant FA8750-12-2-0342 funded under the DEFT program.
\bibliographystyle{acl_natbib}
\clearpage
\appendix
\section{Appendix}
\subsection{Domain Sampling Probability}
\label{sec:domain_sampling}
In this section, we define the probability $p_r$ of generating a negative sample from the same domain, as mentioned in Section \ref{sec:sampling}. This probability cannot be too high; otherwise we risk generating negative samples that are actually correct, since a large number of facts are generally missing from KBs.
%To avoid generating false negative samples, we apply domain sampling with a higher probability on relations with a sparse domain.
Specifically, let $\mathrm{M}^H_r=\{h \mid \exists t (h,r,t) \in P\}$ and $\mathrm{M}^T_r=\{t \mid \exists h (h,r,t) \in P\}$ denote the head and tail domains of relation $r$, and let $N_r=\{(h,r,t) \in P\}$ be the induced set of edges with relation $r$. We define the probability $p_r$ as
\begin{equation}
p_r=\min\left(\frac{\lambda|\mathrm{M}^T_r| |\mathrm{M}^H_r|}{|N_r|}, 0.5\right)
\label{eq:domain_sampling}
\end{equation}
Our motivation for this formulation is as follows. Suppose $O_r$ is the set that contains all truthful fact triples on relation $r$, i.e., all triples in the training set and all other missing correct triples. If we assume that all fact triples within the domain have a uniform probability of being true, the probability of a random triple from the domain being correct is ${Pr((h,r,t)\in O_r \mid h\in \mathrm{M}^H_r, t \in \mathrm{M}^T_r) = \frac{|O_r|}{|\mathrm{M}^H_r||\mathrm{M}^T_r|}}$. Assume further that each true triple is observed in the training set with probability $\lambda$ (i.e., most facts are missing); then $|N_r|=\lambda|O_r|$ and the above probability can be approximated by $\frac{|N_r|}{\lambda|\mathrm{M}^H_r||\mathrm{M}^T_r|}$. We want the probability of generating a negative sample from the domain to be inversely proportional to the probability of such a sample being true, so we define the probability as in Eq.~\ref{eq:domain_sampling}.
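As a concrete illustration of Eq.~\ref{eq:domain_sampling}, the following minimal sketch (a Python illustration with hypothetical variable names, not the code used for the experiments) computes the head domain, tail domain, edge count and resulting domain-sampling probability $p_r$ for every relation in a training set.
\begin{verbatim}
from collections import defaultdict

def domain_sampling_probabilities(triples, lam=0.001):
    """p_r = min(lam * |M^T_r| * |M^H_r| / |N_r|, 0.5) for each relation r."""
    heads = defaultdict(set)     # M^H_r: heads seen with relation r
    tails = defaultdict(set)     # M^T_r: tails seen with relation r
    edges = defaultdict(int)     # |N_r|: number of training triples with r
    for h, r, t in triples:
        heads[r].add(h)
        tails[r].add(t)
        edges[r] += 1
    return {r: min(lam * len(tails[r]) * len(heads[r]) / edges[r], 0.5)
            for r in edges}

toy = [("a", "r1", "x"), ("b", "r1", "y"), ("a", "r2", "x")]
print(domain_sampling_probabilities(toy, lam=0.1))
# r1: min(0.1*2*2/2, 0.5) = 0.2, r2: min(0.1*1*1/1, 0.5) = 0.1
\end{verbatim}
During training, a corrupted head or tail is then drawn from the corresponding domain with probability $p_r$ and from the whole entity set with probability $1-p_r$, as described in Section \ref{sec:sampling}.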
The results in Section \ref{sec:experiments} are obtained with $\lambda$ set to $0.001$. We compare how different values of $\lambda$ influence our model's performance in Table \ref{tab:domain_sampling}. With a larger $\lambda$, and hence a higher domain sampling probability, our model's Hits@10 increases while the mean rank also increases. The rise in mean rank is due to the higher probability of generating a valid triple as a negative sample: this increases the energy of valid triples, which in turn leads to a higher overall rank for the correct entity. However, the reasoning capability is boosted, as reflected by the higher Hits@10 shown in the table.
\subsection{Performance on individual relations of WN18}
\label{sec:rare_WN}
We plot the performance of ITransF and STransE on each relation. We see that the improvement is greater on rare relations.
\iffalse \subsection{Performance on different relations} \fi \end{document}
An Interpretable Knowledge Transfer Model for Knowledge Base Completion
1704.05908
Table 3: Performance of the model with dense attention vectors or sparse attention vectors. MR, H10 and Time denote mean rank, Hits@10 and training time per epoch, respectively
[ "[BOLD] Method", "[BOLD] WN18 MR", "[BOLD] WN18 H10", "[BOLD] WN18 Time", "[BOLD] FB15k MR", "[BOLD] FB15k H10", "[BOLD] FB15k Time" ]
[ [ "Dense", "[BOLD] 199", "94.0", "4m34s", "69", "79.4", "4m30s" ], [ "Dense + ℓ1", "228", "[BOLD] 94.2", "4m25s", "131", "78.9", "5m47s" ], [ "Sparse", "207", "94.1", "[BOLD] 2m32s", "[BOLD] 67", "[BOLD] 79.6", "[BOLD] 1m52s" ] ]
Generally, ITransF with sparse attention has slightly better or comparable performance compared to dense attention. We see that ℓ1 regularization does not produce sparse attention, especially on FB15k.
\documentclass[11pt,a4paper]{article} \usepackage[hyperref]{acl2017} \newcommand{\FIXME}[1]{\textcolor{red}{[#1]}} \aclfinalcopy % Uncomment this line for the final sumathbfission \def\aclpaperid{79} % Enter the acl Paper ID here \newcommand\BibTeX{B{\sc ib}\TeX} \title{An Interpretable Knowledge Transfer Model \\ for Knowledge Base Completion} \author{Qizhe Xie, Xuezhe Ma, Zihang Dai, Eduard Hovy \\ Language Technologies Institute \\ Carnegie Mellon University \\ Pittsburgh, PA 15213, USA\\ {\tt \{qzxie, xuezhem, dzihang, hovy\}@cs.cmu.edu} \\} \date{} \begin{document} \maketitle \begin{abstract} Knowledge bases are important resources for a variety of natural language processing tasks but suffer from incompleteness. We propose a novel embedding model, \emph{ITransF}, to perform knowledge base completion. Equipped with a sparse attention mechanism, ITransF discovers hidden concepts of relations and transfer statistical strength through the sharing of concepts. Moreover, the learned associations between relations and concepts, which are represented by sparse attention vectors, can be interpreted easily. We evaluate ITransF on two benchmark datasets---WN18 and FB15k for knowledge base completion and obtains improvements on both the mean rank and Hits@10 metrics, over all baselines that do not use additional information. \end{abstract} \section{Introduction} Knowledge bases (KB), such as WordNet~\citep{FellbaumC98}, Freebase~\citep{Bollacker:2008}, YAGO ~\citep{Suchanek:2007} and DBpedia~\citep{LehmannIJJKMHMK15}, are useful resources for many applications such as question answering~\citep{berant-EtAl:2013:EMNLP,yih-EtAl:2015:ACL-IJCNLP,dai-li-xu:2016:P16-1} and information extraction~\citep{mintz-EtAl:2009:ACLIJCNLP}. However, knowledge bases suffer from incompleteness despite their formidable sizes ~\citep{NIPS2013_5028,West:2014:KBC:2566486.2568032}, leading to a number of studies on automatic knowledge base completion (KBC)~\citep{NickelMTG15} or link prediction. The fundamental motivation behind these studies is that there exist some statistical regularities under the intertwined facts stored in the multi-relational knowledge base. By discovering generalizable regularities in known facts, missing ones may be recovered in a faithful way. Due to its excellent generalization capability, distributed representations, a.k.a. embeddings, have been popularized to address the KBC task~\citep{ICML2011Nickel_438,bordes:2011,Bordes2014SME,TransE,NIPS2013_5028,AAAI148531,guu-miller-liang:2015:EMNLP,STransE}. As a seminal work, \citet{TransE} proposes the TransE, which models the statistical regularities with linear translations between entity embeddings operated by a relation embedding. Implicitly, TransE assumes both entity embeddings and relation embeddings dwell in the same vector space, posing an unnecessarily strong prior. To relax this requirement, a variety of models first project the entity embeddings to a relation-dependent space~\citep{Bordes2014SME,ji-EtAl:2015:ACL-IJCNLP,AAAI159571,STransE}, and then model the translation property in the projected space. Typically, these relation-dependent spaces are characterized by the projection matrices unique to each relation. As a benefit, different aspects of the same entity can be temporarily emphasized or depressed as an effect of the projection. For instance, STransE~\citep{STransE} utilizes two projection matrices per relation, one for the head entity and the other for the tail entity. 
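For concreteness, the following minimal Python/numpy sketch (an illustration only; names and shapes are assumptions, and the energy function is defined formally later in the paper) shows the relation-specific two-matrix projection used by STransE:
\begin{verbatim}
import numpy as np

def stranse_energy(h, r, t, W1_r, W2_r, ell=1):
    """Energy ||W_{r,1} h + r - W_{r,2} t||: the head and tail embeddings are
    projected by two relation-specific matrices before the translation check."""
    return np.linalg.norm(W1_r @ h + r - W2_r @ t, ord=ell)

n = 4
h, r, t = np.random.randn(n), np.random.randn(n), np.random.randn(n)
W1_r, W2_r = np.eye(n), np.eye(n)   # one pair of matrices for each relation
print(stranse_energy(h, r, t, W1_r, W2_r))
\end{verbatim}
A lower energy indicates a more plausible triple; ITransF keeps this form but replaces the per-relation matrices with attention-weighted combinations of shared concept matrices.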
\iffalse Of these studies, a number of neural network based techniques have emerged over the years to address the KBC task, among which embedding based models~\citep{ICML2011Nickel_438,bordes:2011,Bordes2014SME,TransE,NIPS2013_5028,AAAI148531,guu-miller-liang:2015:EMNLP,STransE} have stood out for its simplicity and effectiveness. \citet{TransE} proposed the TransE model that associated entities and relations with dense embedding vectors. To better model different aspects of the same entity, a variety of models map the entity embedding to a relation-dependent space~\citep{Bordes2014SME,ji-EtAl:2015:ACL-IJCNLP,AAAI159571,STransE}. For instance, STransE~\citep{STransE} projected the head entity and tail entity to a relation-dependent space by multiplying two relation-specific projection matrices. \fi Despite the superior performance of STransE compared to TransE, it is more prone to the data sparsity problem. Concretely, since the projection spaces are unique to each relation, projection matrices associated with rare relations can only be exposed to very few facts during training, resulting in poor generalization. For common relations, a similar issue exists. Without any restrictions on the number of projection matrices, logically related or conceptually similar relations may have distinct projection spaces, hindering the discovery, sharing, and generalization of statistical regularities. Previously, a line of research makes use of external information such as textual relations from web-scale corpus or node features~\citep{toutanova-EtAl:2015:EMNLP, toutanova-chen:2015:CVSC, nguyen2016neighborhood}, alleviating the sparsity problem. In parallel, recent work has proposed to model regularities beyond local facts by considering multi-relation paths~\citep{garciaduran-bordes-usunier:2015:EMNLP, lin-EtAl:2015:EMNLP1, implicit}. Since the number of paths grows exponentially with its length, as a side effect, path-based models enjoy much more training cases, suffering less from the problem. \iffalse However, for some relations, in practice, there are not enough data to estimate the projection matrices.%(repretition) due to the data sparsity problem in knowledge bases. This led to a vast amount of research on utilizing external information, such as textual relations from web-scale corpus~\citep{toutanova-EtAl:2015:EMNLP, toutanova-chen:2015:CVSC} and relation path~\citep{garciaduran-bordes-usunier:2015:EMNLP, lin-EtAl:2015:EMNLP1, implicit}, to enhance performance. %\FIXME{The performance decrease was not caused by data sparsity?} Unfortunately, such task-specific knowledge is costly to develop, making these models difficult to adapt to new tasks or new domains. \fi In this paper, we propose an interpretable knowledge transfer model (ITransF), which encourages the sharing of statistic regularities between the projection matrices of relations and alleviates the data sparsity problem. At the core of ITransF is a sparse attention mechanism, which learns to compose shared concept matrices into relation-specific projection matrices, leading to a better generalization property. Without any external resources, ITransF improves mean rank and Hits@10 on two benchmark datasets, over all previous approaches of the same kind. In addition, the parameter sharing is clearly indicated by the learned sparse attention vectors, enabling us to interpret how knowledge transfer is carried out. To induce the desired sparsity during optimization, we further introduce a block iterative optimization algorithm. 
In summary, the contributions of this work are: (i) proposing a novel knowledge embedding model which enables knowledge transfer by learning to discover shared regularities; (ii) introducing a learning algorithm to directly optimize a sparse representation from which the knowledge transferring procedure is interpretable; (iii) showing the effectiveness of our model by outperforming baselines on two benchmark datasets for knowledge base completion task. \section{Notation and Previous Models} Let $E$ denote the set of entities and $R$ denote the set of relations. In knowledge base completion, given a training set $P$ of triples $(h, r, t)$ where $h,t\in E$ are the head and tail entities having a relation $r\in R$, e.g., (\textit{Steve Jobs}, \texttt{FounderOf}, \textit{Apple}), we want to predict missing facts such as (\textit{Steve Jobs}, \texttt{Profession}, \textit{Businessperson}). Most of the embedding models for knowledge base completion define an energy function $f_r(h,t)$ according to the fact's plausibility~\citep{bordes:2011,Bordes2014SME,TransE,NIPS2013_5028,AAAI148531,yang-etal-2015, guu-miller-liang:2015:EMNLP,STransE}. The models are learned to minimize energy $f_r(h,t)$ of a plausible triple $(h,r,t)$ and to maximize energy $f_r(h',t')$ of an implausible triple $(h',r,t')$. Motivated by the linear translation phenomenon observed in well trained word embeddings~\citep{mikolov2013distributed}, TransE~\citep{TransE} represents the head entity $h$, the relation $r$ and the tail entity $t$ with vectors $\mathbf{h}, \mathbf{r}$ and $\mathbf{t} \in \mathbb{R}^{n}$ respectively, which were trained so that $\mathbf{h}+\mathbf{r}\approx \mathbf{t}$. They define the energy function as $$f_r(h,t) = \| \mathbf{h} + \mathbf{r} - \mathbf{t} \|_{\ell}$$ where $\ell=1$ or $2$, which means either the $\ell_1$ or the $\ell_2$ norm of the vector $\mathbf{h} + \mathbf{r} - \mathbf{t}$ will be used depending on the performance on the validation set. To better model relation-specific aspects of the same entity, TransR~\citep{AAAI159571} uses projection matrices and projects the head entity and the tail entity to a relation-dependent space. STransE~\citep{STransE} extends TransR by employing different matrices for mapping the head and the tail entity. The energy function is $$f_r(h,t) = \|\mathbf{W}_{r,1}\mathbf{h} + \mathbf{r} - \mathbf{W}_{r,2}\mathbf{t} \|_{\ell}$$ However, not all relations have abundant data to estimate the relation specific matrices as most of the training samples are associated with only a few relations, leading to the data sparsity problem for rare relations. \section{Interpretable Knowledge Transfer} \subsection{Model} As discussed above, a fundamental weakness in TransR and STransE is that they equip each relation with a set of unique projection matrices, which not only introduces more parameters but also hinders knowledge sharing. Intuitively, many relations share some concepts with each other, although they are stored as independent symbols in KB. For example, the relation ``(somebody) won award for (some work)'' and ``(somebody) was nominated for (some work)'' both describe a person's high-quality work which wins an award or a nomination respectively. This phenomenon suggests that one relation actually represents a collection of real-world concepts, and one concept can be shared by several relations. 
Inspired by the existence of such lower-level concepts, instead of defining a unique set of projection matrices for every relation, we can alternatively define a small set of concept projection matrices and then compose them into customized projection matrices. Effectively, the relation-dependent translation space is then reduced to the smaller concept spaces. However, in general, we do not have prior knowledge about what concepts exist out there and how they are composed to form relations. Therefore, in ITransF, we propose to learn this information simultaneously from data, together with all knowledge embeddings. Following this idea, we first present the model details, then discuss the optimization techniques for training. \paragraph{Energy function} Specifically, we stack all the concept projection matrices to a 3-dimensional tensor $\mathbf{D}\in \mathbb{R}^{m \times n \times n}$, where $m$ is the pre-specified number of concept projection matrices and $n$ is the dimensionality of entity embeddings and relation embeddings. We let each relation select the most useful projection matrices from the tensor, where the selection is represented by an attention vector. The energy function of ITransF is defined as: \begin{equation} f_r(h,t) = \| \pmb{\alpha}_{r}^{H} \cdot \mathbf{D} \cdot \mathbf{h} + \mathbf{r} - \pmb{\alpha}_{r}^{T} \cdot \mathbf{D} \cdot \mathbf{t}\|_{\ell} \label{eq:energy function} \end{equation} where $\pmb{\alpha}_{r}^{H},\pmb{\alpha}_{r}^{T} \in [0,1]^m$, satisfying $\sum_i\pmb{\alpha}_{r,i}^{H}=\sum_i\pmb{\alpha}_{r,i}^{T}=1$, are normalized attention vectors used to compose all concept projection matrices in $\mathbf{D}$ by a convex combination. It is obvious that STransE can be expressed as a special case of our model when we use $m=2|R|$ concept matrices and set attention vectors to disjoint one-hot vectors. Hence our model space is a generalization of STransE. Note that we can safely use fewer concept matrices in ITransF and obtain better performance (see section \ref{sec:compress}), though STransE always requires $2|R|$ projection matrices. We follow previous work to minimize the following hinge loss function: \begin{equation} \mathcal{L}=\sum_{\substack{(h,r,t) \sim P,\\ (h',r,t') \sim N}} \left[ \gamma + f_{r}(h,t) -f_{r}(h',t') \right]_+ \label{eq:hinge} \end{equation} where $P$ is the training set consisting of correct triples, $N$ is the distribution of corrupted triples defined in section \ref{sec:sampling}, and $[\cdot]_+ = \max(\cdot, 0)$. Note that we have omitted the dependence of $N$ on $(h,r,t)$ to avoid clutter. We normalize the entity vectors $\mathbf{h},\mathbf{t}$, and the projected entity vectors $\pmb{\alpha}_{r}^{H} \cdot \mathbf{D}\cdot \mathbf{h}$ and $\pmb{\alpha}_{r}^{T} \cdot \mathbf{D} \cdot \mathbf{t}$ to have unit length after each update, which is an effective regularization method that benefits all models. \paragraph{Sparse attention vectors} In Eq.~\eqref{eq:energy function}, we have defined $\pmb{\alpha}_{r}^{H},\pmb{\alpha}_{r}^{T}$ to be some normalized vectors used for composition. With a dense attention vector, it is computationally expensive to perform the convex combination of $m$ matrices in each iteration. Moreover, a relation usually does not consist of all existing concepts in practice. Furthermore, when the attention vectors are sparse, it is often easier to interpret their behaviors and understand how concepts are shared by different relations. 
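As a concrete illustration of the energy function in Eq.~\eqref{eq:energy function}, the following minimal Python/numpy sketch (an illustration only; the experiments in this paper are implemented in Theano) composes the projection matrices for the head and the tail from the shared concept tensor $\mathbf{D}$ via the attention vectors:
\begin{verbatim}
import numpy as np

def itransf_energy(h, r, t, D, alpha_H, alpha_T, ell=1):
    """Energy of ITransF: || alpha_H . D . h + r - alpha_T . D . t ||.

    D:                (m, n, n) stack of concept projection matrices.
    alpha_H, alpha_T: length-m attention vectors (non-negative, sum to one).
    """
    W_H = np.tensordot(alpha_H, D, axes=1)   # convex combination -> (n, n)
    W_T = np.tensordot(alpha_T, D, axes=1)
    return np.linalg.norm(W_H @ h + r - W_T @ t, ord=ell)

m, n = 6, 4
D = np.random.randn(m, n, n)
h, r, t = np.random.randn(n), np.random.randn(n), np.random.randn(n)
alpha = np.zeros(m)
alpha[[0, 2]] = 0.5                          # a sparse attention vector (k = 2)
print(itransf_energy(h, r, t, D, alpha, alpha))
\end{verbatim}
Setting $m=2|R|$ and using disjoint one-hot attention vectors recovers STransE as a special case, as noted above.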
Motivated by these potential benefits, we further hope to learn sparse attention vectors in ITransF. However, directly posing $\ell_1$ regularization~\citep{tibshirani1996regression} on the attention vectors fails to produce sparse representations in our preliminary experiment, which motivates us to enforce $\ell_0$ constraints on $\pmb{\alpha}_{r}^T,\pmb{\alpha}_{r}^H$. In order to satisfy both the normalization condition and the $\ell_0$ constraints, we reparameterize the attention vectors in the following way: \begin{align*} \pmb{\alpha}_{r}^{H}&=\mathrm{SparseSoftmax}(\mathbf{v}_{r}^{H}, \mathbf{I}_{r}^{H}) \\ \pmb{\alpha}_{r}^{T}&=\mathrm{SparseSoftmax}(\mathbf{v}_{r}^{T}, \mathbf{I}_{r}^{T}) \end{align*} where $\mathbf{v}_{r}^{H}, \mathbf{v}_{r}^{T} \in \mathbb{R}^m$ are the pre-softmax scores, $\mathbf{I}_{r}^{H}, \mathbf{I}_{r}^{T}\in \{0,1\}^{m}$ are the sparse assignment vectors, indicating the non-zero entries of attention vectors, and the $\mathrm{SparseSoftmax}$ is defined as \begin{equation*} \mathrm{SparseSoftmax}(\mathbf{v}, \mathbf{I})_i=\frac{\exp(\mathbf{v}_i / \tau) \mathbf{I}_i}{\sum_j \exp(\mathbf{v}_j / \tau) \mathbf{I}_j} \end{equation*} with $\tau$ being the temperature of Softmax. With this reparameterization, $\mathbf{v}_{r}^{H}, \mathbf{v}_{r}^{T}$ and $\mathbf{I}_{r}^{H}, \mathbf{I}_{r}^{T}$ replace $\pmb{\alpha}_{r}^T,\pmb{\alpha}_{r}^H$ to become the real parameters of the model. Also, note that it is equivalent to pose the $\ell_0$ constraints on $\mathbf{I}_{r}^{H}, \mathbf{I}_{r}^{T}$ instead of $\pmb{\alpha}_{r}^T,\pmb{\alpha}_{r}^H$. Putting these modifications together, we can rewrite the optimization problem as \begin{equation} \begin{aligned} & {\text{minimize}} & & \mathcal{L} \\ & \text{subject to} & & \|\mathbf{I}_{r}^{H}\|_{0} \leq k,\|\mathbf{I}_{r}^{T}\|_{0} \leq k \end{aligned} \label{eq:l0_problem} \end{equation} where $\mathcal{L}$ is the loss function defined in Eq.~\eqref{eq:hinge}. \subsection{Block Iterative Optimization} Though sparseness is favorable in practice, it is generally NP-hard to find the optimal solution under $\ell_0$ constraints. Thus, we resort to an approximated algorithm in this work. For convenience, we refer to the parameters with and without the sparse constraints as the \textit{sparse} partition and the \textit{dense} partition, respectively. Based on this notion, the high-level idea of the approximated algorithm is to iteratively optimize one of the two partitions while holding the other one fixed. Since all parameters in the dense partition, including the embeddings, the projection matrices, and the pre-softmax scores, are fully differentiable with the sparse partition fixed, we can simply utilize SGD to optimize the dense partition. Then, the core difficulty lies in the step of optimizing the sparse partition (i.e. the sparse assignment vectors), during which we want the following two properties to hold \begin{enumerate}[itemsep=-1mm] \item the sparsity required by the $\ell_0$ constaint is maintained, and \item the cost define by Eq.~\eqref{eq:hinge} is decreased. \end{enumerate} Satisfying the two criterion seems to highly resemble the original problem defined in Eq.~\eqref{eq:l0_problem}. However, the dramatic difference here is that with parameters in the dense partition regarded as constant, the cost function is decoupled w.r.t. each relation $r$. In other words, the optimal choice of $\mathbf{I}_{r}^{H}, \mathbf{I}_{r}^{T}$ is independent of $\mathbf{I}_{r'}^{H}, \mathbf{I}_{r'}^{T}$ for any $r' \neq r$. 
Therefore, we only need to consider the optimization for a single relation $r$, which is essentially an assignment problem. Note that, however, $\mathbf{I}_{r}^{H}$ and $\mathbf{I}_{r}^{T}$ are still coupled, without which we basically reach the situation in a backpack problem. In principle, one can explore combinatorial optimization techniques to optimize $\mathbf{I}_{r'}^{H}, \mathbf{I}_{r'}^{T}$ jointly, which usually involve some iterative procedure. To avoid adding another inner loop to our algorithm, we turn to a simple but fast approximation method based on the following single-matrix cost. Specifically, for each relation $r$, we consider the induced cost $\mathcal{L}_{r,i}^{H}$ where only a single projection matrix $i$ is used for the head entity: \begin{equation*} \mathcal{L}_{r,i}^{H} = \sum_{\substack{(h,r,t) \sim P_r,\\ (h',r,t') \sim N_r}} \left[ \gamma + f_{r,i}^{H}(h,t) - f_{r,i}^{H}(h',t') \right]_+ \end{equation*} where $f_{r,i}^{H}(h,t) = \| \mathbf{D}_i \cdot \mathbf{h} + \mathbf{r} - \pmb{\alpha}_{r}^{T} \cdot \mathbf{D} \cdot \mathbf{t}\|$ is the corresponding energy function, and the subscript in $P_r$ and $N_r$ denotes the subsets with relation $r$. Intuitively, $\mathcal{L}_{r,i}^{H}$ measures, given the current tail attention vector $\pmb{\alpha}_{r}^{T}$, if only one project matrix could be chosen for the head entity, how implausible $D_i$ would be. Hence, $i^* = \arg\min_i \mathcal{L}_{r,i}^{H}$ gives us the best single projection matrix on the head side given $\pmb{\alpha}_{r}^{T}$. Now, in order to choose the best $k$ matrices, we basically ignore the interaction among projection matrices, and update $\mathbf{I}_{r}^{H}$ in the following way: \begin{equation*} \mathbf{I}_{r,i}^{H} \leftarrow \begin{cases} 1, &i \in \mathrm{argpartition}_{i}(\mathcal{L}_{r,i}^{H}, k) \\ 0, &\text{otherwise} \end{cases}%, \, \forall i \end{equation*} where the function $\mathrm{argpartition}_{i}(x_i, k)$ produces the index set of the lowest-$k$ values of $x_i$. Analogously, we can define the single-matrix cost $\mathcal{L}_{r,i}^{T}$ and the energy function $f_{r,i}^{T}(h,t)$ on the tail side in a symmetric way. Then, the update rule for $\mathbf{I}_{r}^{H}$ follows the same derivation. Admittedly, the approximation described here is relatively crude. But as we will show in section \ref{sec:experiments}, the proposed algorithm yields good performance empirically. We leave the further improvement of the optimization method as future work. \iffalse Though sparseness is favorable in practice, even in linear regression, it has been shown to be an NP-hard problem to find the optimal solutions under $\ell_0$ constraints. %A lot of algorithms such as Approximated algorithms such as forward stepwise algorithm are proposed. Here we propose an approximated algorithm to solve it. We divide all of our parameters into two partitions: differentiable and non-differentiable, and we iteratively optimize those two. Differentiable parameters such as embeddings, projection matrices are optimized by SGD. Non-differentiable parameters are optimized by a greedy approximated process, aiming to minimize the cost function. Recall that we want the number of concepts associated with relations to be less than or equal to $k$, indicated by the $\ell_0$ constraint of attention vectors. We represent the mapping between relation $r$ and concepts by two indicator vectors $\mathbf{I}_{r}^{H}, \mathbf{I}_{r}^{T}\in \{-\infty ,1\}^{m}$, the indicator of non-zero entries in attention vectors. 
The $\ell_0$ constraint is guaranteed as the number of $1$s in $\mathbf{I}_{r}^{H},\mathbf{I}_{r}^{T}$ is less than or equal to $k$. Those mapping vectors constitutes the non-differentiable parameters in our framework. Formally, the attention vectors are calculated as follows: $$\pmb{\alpha}_{r}^{H}=\mathrm{Softmax}(\mathbf{v}_{r}^{H} \circ \mathbf{I}_{r}^{H})$$ $$\pmb{\alpha}_{r}^{T}=\mathrm{Softmax}(\mathbf{v}_{r}^{T} \circ \mathbf{I}_{r}^{T})$$ \begin{equation*} \mathrm{Softmax}(\mathbf{x})_i=\frac{\exp(\mathbf{x}_i / \tau)}{\sum_j \exp(\mathbf{x}_j / \tau)} \end{equation*} where $\mathbf{v}_{r}^{H}, \mathbf{v}_{r}^{T} \in (0, \infty)^m$ are the parameters for attention, $\circ$ is element wise multiplication, $\tau$ is the temperature of softmax ($\tau$ is set to $1/4$ in our experiments). Then there are only $k$ non-zero elements in $\pmb{\alpha}_{r}^{H}$ and $\pmb{\alpha}_{r}^{T}$ since $exp(-\infty)=0$. The algorithm is: \begin{itemize} \item[(1)] Randomly initialize mapping vectors $\mathbf{I}_{r}^{H}, \mathbf{I}_{r}^{T}$, which is not differentiable. \item[(2)] Optimize differentiable parameters by SGD with fixed mapping vectors for several epochs. \item[(3)] If it is the early stage of training, fix the differentiable parameters learned in the previous step and optimize mapping vectors $\mathbf{I}_{r}^{H}, \mathbf{I}_{r}^{T}$ to minimize objective function. Go to step (2) if the model has not converged. \end{itemize} How do we optimize mapping vectors? A brute-force algorithm is to enumerate all possible values of mapping vectors. However, the time complexity of such a naive algorithm grows exponentially. Instead, we make several approximations to optimize the mapping vectors greedily. We define $J_{r,i}^{H}$ as an approximation to the cost $L$ when relation $r$ is mapped to concept $i$, i.e., $\textbf{I}_{r,i}^{H}=1$. We select the top $k$ concepts with smallest approximated cost when we optimize mapping vectors. The approximated cost takes the same hinge loss as the original cost function shown in Equation \ref{eq:hinge} with a concept specific energy function $f_{r,i}^{H}(h,t)$: \begin{equation*} J_{r,i}^{H}=\sum_{\substack{(h,r,t) \in P, \\ (h',r,t') \sim N(h,r,t)}}\max(\gamma + f_{r,i}^{H}(h,t) -f_{r,i}^{H}(h',t'), 0) \end{equation*} where the energy function is similar to the original function defined in Equation \ref{eq:energy function} except that relation $r$ is completely assigned with concept $i$: \begin{align*} f_{r,i}^{H}(h,t) &= \| \mathbf{D}_i \cdot \mathbf{h} + \mathbf{r} - \pmb{\alpha}_{r}^{T} \cdot \mathbf{D} \cdot \mathbf{t}\| \\ \end{align*} Similarly, the formulations for tail entities are defined as \begin{align*} f_{r,i}^{T}(h,t) &= \| \pmb{\alpha}_{r}^{H} \cdot \mathbf{D} \cdot \mathbf{h} + \mathbf{r} - \mathbf{D}_i \cdot \mathbf{t}\| \end{align*} $$J_{r,i}^{T}=\sum_{\substack{(h,r,t) \in P, \\ (h',r,t') \sim N(h,r,t)}}[\gamma + f_{r,i}^{T}(h,t) -f_{i}^{r,2}(h',t')]_+$$ The above process is a greedy algorithm. We make the following relaxations to ensure efficient and effective approximation: Firstly, concepts used to project head and tail entities are decoupled and selected independently. Secondly, $J_{r,i}^{H}$ and $J_{r,i}^{T}$ are evaluated on condition that concept $i$ is fully utilized, i.e., we ignore the interaction between concept $i$ and other concepts by setting attention $\pmb{\alpha}_{r,1,i}= 1$\footnote{The relaxation is made to reduce the computation complexity. 
Otherwise, to evaluate indicator vectors involving multiple matrices, we need to perform SGD to get the corresponding optimal values of $v_{r,1}$ and $v_{r,2}$. }. The greedy process works well empirically. We draw our inspiration from Expectation-Maximization (EM) ~\citep{dempster1977maximum} and LightRNN~\citep{LightRNN}. The above algorithm is similar to EM and LightRNN in the sense that some parameters can change rapidly based on the estimation of the corresponding cost. In other words, we are not changing and exploring the mapping vectors bit by bit but they can be reassigned with a completely different value in one step, leading to fast convergence. \fi \iffalse \begin{algorithm}[] Initialize $\mathbf{I}_{r}^{H}$,$\mathbf{I}_{r}^{T}$ randomly \\ \While {not convergence} { \For{\texttt{epoch} $= 1$ to $T_1$}{ Optimize $L$ by SGD on $\theta$ with $I_{r}^{H}, I_{r}^{T}$ fixed } $\texttt{tot\_epoch} = \texttt{tot\_epoch} + T_1$ \\ \If{$\texttt{tot\_epoch} \leq T_2$}{ { Find $\mathbf{I}_{r}^{(H)'}, \mathbf{I}_{r}^{(T)'}$ which approximately maximize $L$. \\ Set $\mathbf{I}_{r}^{H}=\mathbf{I}_{r}^{(H)'}, \mathbf{I}_{r}^{T}=\mathbf{I}_{r}^{(T)'}$. }} } \caption{Coordinate ascent optimization algorithm} \label{alg:opt} \end{algorithm} \fi \subsection{Corrupted Sample Generating Method} \label{sec:sampling} Recall that we need to sample a negative triple $(h',r,t')$ to compute hinge loss shown in Eq.~\ref{eq:hinge}, given a positive triple $(h,r,t)\in P$. The distribution of negative triple is denoted by $N(h,r,t)$. Previous work~\citep{TransE, AAAI159571, yang-etal-2015,STransE} generally constructs a set of corrupted triples by replacing the head entity or tail entity with a random entity uniformly sampled from the KB. However, uniformly sampling corrupted entities may not be optimal. Often, the head and tail entities associated a relation can only belong to a specific domain. When the corrupted entity comes from other domains, it is very easy for the model to induce a large energy gap between true triple and corrupted one. As the energy gap exceeds $\gamma$, there will be no training signal from this corrupted triple. In comparison, if the corrupted entity comes from the same domain, the task becomes harder for the model, leading to more consistent training signal. Motivated by this observation, we propose to sample corrupted head or tail from entities in the same domain with a probability $p_r$ and from the whole entity set with probability $1-p_r$. The choice of relation-dependent probability $p_r$ is specified in Appendix \ref{sec:domain_sampling}. In the rest of the paper, we refer to the new proposed sampling method as "domain sampling". \section{Experiments} \label{sec:experiments} \subsection{Setup} To evaluate link prediction, we conduct experiments on the WN18 (WordNet) and FB15k (Freebase) introduced by \citet{TransE} and use the same training/validation/test split as in \citep{TransE}. The information of the two datasets is given in Table \ref{tab:datasets}. In knowledge base completion task, we evaluate model's performance of predicting the head entity or the tail entity given the relation and the other entity. For example, to predict head given relation $r$ and tail $t$ in triple $(h,r,t)$, we compute the energy function $f_r(h', t)$ for each entity $h'$ in the knowledge base and rank all the entities according to the energy. We follow \citet{TransE} to report the \emph{filter} results, i.e., removing all other correct candidates $h'$ in ranking. 
The rank of the correct entity is then obtained and we report the mean rank (mean of the predicted ranks) and Hits@10 (top $10$ accuracy). Lower mean rank or higher Hits@10 mean better performance. \subsection{Implementation Details} We initialize the projection matrices with identity matrices added with a small noise sampled from normal distribution $\mathcal{N}(0,\,0.005^2)$. The entity and relation vectors of ITransF are initialized by TransE~\citep{TransE}, following~\citet{AAAI159571, ji-EtAl:2015:ACL-IJCNLP, Garcia-DuranBUG15, garciaduran-bordes-usunier:2015:EMNLP, lin-EtAl:2015:EMNLP1}. We ran mini-batch SGD until convergence. We employ the {``\textit{Bernoulli}''} sampling method to generate incorrect triples as used in \citet{AAAI148531}, \citet{AAAI159571}, \citet{He:2015}, \citet{ji-EtAl:2015:ACL-IJCNLP} and \citet{lin-EtAl:2015:EMNLP1}. STransE~\citep{STransE} is the most similar knowledge embedding model to ours except that they use distinct projection matrices for each relation. We use the same hyperparameters as used in STransE and no significant improvement is observed when we alter hyperparameters. We set the margin $\gamma$ to $5$ and dimension of embedding $n$ to $50$ for WN18, and $\gamma = 1, n = 100$ for FB15k. We set the batch size to $20$ for WN18 and $1000$ for FB15k. The learning rate is $0.01$ on WN18 and $0.1$ on FB15k. We use $30$ matrices on WN18 and $300$ matrices on FB15k. All the models are implemented with Theano~\citep{bergstra2010theano}. The Softmax temperature is set to $1/4$. %\FIXME{T1, T2} \subsection{Results \& Analysis} The overall link prediction results\footnote{Note that although IRN~\citep{implicit} does not explicitly exploit path information, it performs multi-step inference through the multiple usages of external memory. When IRN is allowed to access memory once for each prediction, its Hits@10 is $80.7$, similar to models without path information.} are reported in Table \ref{tab:main}. Our model consistently outperforms previous models without external information on both the metrics of WN18 and FB15k. On WN18, we even achieve a much better mean rank with comparable Hits@10 than current state-of-the-art model IRN employing external information. We can see that path information is very helpful on FB15k and models taking advantage of path information outperform intrinsic models by a significant margin. Indeed, a lot of facts are easier to recover with the help of multi-step inference. For example, if we know Barack Obama is born in Honolulu, a city in the United States, then we easily know the nationality of Obama is the United States. %In our framework, since Obama+IsBornIn$\approx$Honolulu, Honolulu+IsCityIn$\approx$US. Then we can expect Obama+IsBornIn+IsCityIn$\approx$US. %Projection matrices are not exactly the same An straightforward way of extending our proposed model to $k$-step path $P=\{r_i\}_{i=1}^{k}$ is to define a path energy function $\| \pmb{\alpha}_{P}^{H} \cdot \mathbf{D} \cdot \mathbf{h} + \sum_{r_i\in P}\mathbf{r}_i - \pmb{\alpha}_{P}^{T} \cdot \mathbf{D} \cdot \mathbf{t}\|_{\ell}$, $\pmb{\alpha}_{P}^{H}$ is a concept association related to the path. We plan to extend our model to multi-step path in the future. To provide a detailed understanding why the proposed model achieves better performance, we present some further analysis in the sequel. \iffalse In many knowledge bases, a small number of relations enjoy the majority of data, while a lot of relations are rare and hard to deal with. 
The statistics of relations on FB15k and WN18 are shown in Figure \ref{fig:stat}. We can clearly see that the distributions exhibit long tails, just like the Zipf's law for word frequency. \fi \paragraph{Performance on Rare Relations} In the proposed ITransF, we design an attention mechanism to encourage knowledge sharing across different relations. Naturally, facts associated with rare relations should benefit most from such sharing, boosting the overall performance. To verify this hypothesis, we investigate our model's performance on relations with different frequency. The overall distribution of relation frequencies resembles that of word frequencies, subject to the zipf's law. Since the frequencies of relations approximately follow a power distribution, their log frequencies are linear. The statistics of relations on FB15k and WN18 are shown in Figure \ref{fig:stat}. We can clearly see that the distributions exhibit long tails, just like the Zipf's law for word frequency. In order to study the performance of relations with different frequencies, we sort all relations by their frequency in the training set, and split them into 3 buckets evenly so that each bucket has a similar interval length of log frequency. Within each bucket, we compare our model with STransE, as shown in Figure \ref{fig:rare}.\footnote{Domain sampling is not employed.} As we can see, on WN18, ITransF outperforms STransE by a significant margin on rare relations. In particular, in the last bin (rarest relations), the average Hits@10 increases from $74.4$ to $92.0$, showing the great benefits of transferring statistical strength from common relations to rare ones. The comparison on each relation is shown in Appendix \ref{sec:rare_WN} where we can observe tha. On FB15k, we can also observe a similar pattern, although the degree of improvement is less significant. We conjecture the difference roots in the fact that many rare relations on FB15k have disjoint domains, knowledge transfer through common concepts is harder. \paragraph{Interpretability} In addition to the quantitative evidence supporting the effectiveness of knowledge sharing, we provide some intuitive examples to show how knowledge is shared in our model. As we mentioned earlier, the sparse attention vectors fully capture the association between relations and concepts and hence the knowledge transfer among relations. Thus, we visualize the attention vectors for several relations on both WN18 and FB15K in Figure \ref{fig:att}. For WN18, the words ``hyponym'' and ``hypernym'' refer to words with more specific or general meaning respectively. For example, PhD is a hyponym of student and student is a hypernym of PhD. As we can see, concepts associated with the head entities in one relation are also associated with the tail entities in its reverse relation. Further, ``instance\_hypernym'' is a special hypernym with the head entity being an instance, and the tail entity being an abstract notion. A typical example is $(\textit{New York}, \texttt{instance\_hypernym}, \textit{city})$. This connection has also been discovered by our model, indicated by the fact that ``instance\_hypernym(T)'' and ``hypernym(T)'' share a common concept matrix. Finally, for symmetric relations like ``similar\_to'', we see the head attention is identical to the tail attention, which well matches our intuition. On FB15k, we also see the sharing between reverse relations, as in ``(somebody) won\_award\_for (some work)'' and ``(some work) award\_winning\_work (somebody)''. 
What's more, although relation ``won\_award\_for'' and ``was\_nominated\_for'' share the same concepts, their attention distributions are different, suggesting distinct emphasis. Finally, symmetric relations like spouse behave similarly as mentioned before. \begin{center} \end{center} \paragraph{Model Compression} \label{sec:compress} A byproduct of parameter sharing mechanism employed by ITransF is a much more compact model with equal performance. Figure \ref{fig:num_of_matrix} plots the average performance of ITransF against the number of projection matrices $m$, together with two baseline models. On FB15k, when we reduce the number of matrices from $2200$ to $30$ ($\sim90\times$ compression), our model performance decreases by only $0.09\%$ on Hits@10, still outperforming STransE. Similarly, on WN18, ITransF continues to achieve the best performance when we reduce the number of concept project matrices to $18$. \section{Analysis on Sparseness} Sparseness is desirable since it contribute to interpretability and computational efficiency of our model. We investigate whether enforcing sparseness would deteriorate the model performance and compare our method with another sparse encoding methods in this section. \paragraph{Dense Attention w/o $\ell_1$ regularization} Although $\ell_0$ constrained model usually enjoys many practical advantages, it may deteriorate the model performance when applied improperly. Here, we show that our model employing sparse attention can achieve similar results with dense attention with a significantly less computational burden. We also compare dense attention with $\ell_1$ regularization. We set the $\ell_1$ coefficient to $0.001$ in our experiments and does not apply Softmax since the $\ell_1$ of a vector after Softmax is always $1$. We compare models in a setting where the computation time of dense attention model is acceptable\footnote{With $300$ projection matrices, it takes $1h1m$ to run one epoch for a model with dense attention.}. We use $22$ weight matrices on WN18 and $15$ weight matrices on FB15k and train both the models for $2000$ epochs. The results are reported in Table \ref{tab:dense}. Generally, ITransF with sparse attention has slightly better or comparable performance comparing to dense attention. Further, we show the attention vectors of model with $\ell_1$ regularized dense attention in Figure \ref{fig:att_l1}. We see that $\ell_1$ regularization does not produce a sparse attention, especially on FB15k. \paragraph{Nonnegative Sparse Encoding} In the proposed model, we induce the sparsity by a carefully designed iterative optimization procedure. Apart from this approach, one may utilize sparse encoding techniques to obtain sparseness based on the pretrained projection matrices from STransE. Concretely, stacking $|2R|$ pretrained projection matrices into a 3-dimensional tensor $X\in \mathbb{R}^{2|R| \times n \times n}$, similar sparsity can be induced by solving an $\ell_1$-regularized tensor completion problem $\min_{\mathbf{A},\mathbf{D}} ||\mathbf{X}-\mathbf{DA}||_2^2 + \lambda \|\mathbf{A}\|_{\ell_1}$. Basically, $\mathbf{A}$ plays the same role as the attention vectors in our model. For more details, we refer readers to \citep{faruqui-EtAl:2015:ACL-IJCNLP}. For completeness, we compare our model with the aforementioned approach\footnote{We use the toolkit provided by \citep{faruqui-EtAl:2015:ACL-IJCNLP}.}. The comparison is summarized in table \ref{tab:optimization}. 
On both benchmarks, ITransF achieves significant improvement against sparse encoding on pretrained model. This performance gap should be expected since the objective function of sparse encoding methods is to minimize the reconstruction loss rather than optimize the criterion for link prediction. \iffalse To investigate whether our representation, we apply a non-negative sparse encoding method to obtain the sparse representation of projection matrices. We use the toolkit provided by \citep{faruqui-EtAl:2015:ACL-IJCNLP} and set the hyperparamters so that we obtain average degree of sparseness. We first train a STransE model which utilize separate projection matrices for different relations and stack all the projection matrices in a 3-Dimensional tensor $X\in \mathbb{R}^{2H\times n \times n}$ where $H$ is the number of relations. Then we minimize the following reconstruction loss \begin{equation} \begin{aligned} \min_{\mathbf{D},\mathbf{A}} ||\mathbf{X}-\mathbf{DA}||_2^2 + \lambda \Omega(\mathbf{A}) + \gamma ||\mathbf{D}||_2^2 \end{aligned} \end{equation} where $\mathbf{D}\in \mathbb{R}^{m \times n \times n}$ is the basis matrices and $\Omega$ is a regularizer to ensure sparseness representations. which utilize a $\ell_1$ regularizer. We \fi \section{Related Work} \label{sec:related_work} In KBC, CTransR~\citep{AAAI159571} enables relation embedding sharing across similar relations, but they cluster relations before training rather than learning it in a principled way. Further, they do not solve the data sparsity problem because there is no sharing of projection matrices which have a lot more parameters. Learning the association between semantic relations has been used in related problems such as relational similarity measurement~\citep{turney2012domain} and relation adaptation~\citep{bollegala2015embedding}. Data sparsity is a common problem in many fields. Transfer learning~\citep{pan2010survey} has been shown to be promising to transfer knowledge and statistical strengths across similar models or languages. For example, \citet{D16-1153} transfers models on resource-rich languages to low resource languages by parameter sharing through common phonological features in name entity recognition. \citet{zoph-EtAl:2016:EMNLP2016} initialize from models trained by resource-rich languages to translate low-resource languages. Several works on obtaining a sparse attention~\citep{martins2016softmax, makhzani2013k,OUTRAGEOUSLY} share a similar idea of sorting the values before softmax and only keeping the $K$ largest values. However, the sorting operation in these works is not GPU-friendly. The block iterative optimization algorithm in our work is inspired by LightRNN~\citep{LightRNN}. They allocate every word in the vocabulary in a table. A word is represented by a row vector and a column vector depending on its position in the table. They iteratively optimize embeddings and allocation of words in tables. \section{Conclusion and Future Work} In summary, we propose a knowledge embedding model which can discover shared hidden concepts, and design a learning algorithm to induce the interpretable sparse representation. Empirically, we show our model can improve the performance on two benchmark datasets without external resources, over all previous models of the same kind. In the future, we plan to enable ITransF to perform multi-step inference, and extend the sharing mechanism to entity and relation embeddings, further enhancing the statistical binding across parameters. 
In addition, our framework can also be applied to multi-task learning, promoting a finer sharing among different tasks. \section*{Acknowledgments} We thank anonymous reviewers and Graham Neubig for valuable comments. We thank Yulun Du, Paul Mitchell, Abhilasha Ravichander, Pengcheng Yin and Chunting Zhou for suggestions on the draft. We are also appreciative for the great working environment provided by staff in LTI. This research was supported in part by DARPA grant FA8750-12-2-0342 funded under the DEFT program. \bibliographystyle{acl_natbib} \clearpage \appendix \section{Appendix} \subsection{Domain Sampling Probability} \label{sec:domain_sampling} In this section, we define the probability $p_r$ to generate a negative sample from the same domain mentioned in Section \ref{sec:sampling}. The probability cannot be too high to avoid generating negative samples that are actually correct, since there are generally a lot of facts missing in KBs. %To avoid generating false negative samples, we apply domain sampling with a higher probability on relations with a sparse domain. Specifically, let $\mathrm{M}^H_r=\{h \mid \exists t (h,r,t) \in P\}$ and $\mathrm{M}^T_r=\{t \mid \exists h (h,r,t) \in P\}$ denote the head or tail domain of relation $r$. Suppose $N_r=\{(h,r,t) \in P\}$ is the induced set of edges with relation $r$. We define the probability $p_r$ as \begin{equation} p_r=min(\frac{\lambda|\mathrm{M}^T_r| |\mathrm{M}^H_r|}{|N_r|}, 0.5) \label{eq:domain_sampling} \end{equation} Our motivation of such a formulation is as follows: Suppose $O_r$ is the set that contains all truthful fact triples on relation $r$, i.e., all triples in training set and all other missing correct triples. If we assume all fact triples within the domain has uniform probability of being true, the probability of a random triple being correct is ${Pr((h,r,t)\in O_r \mid h\in \mathrm{M}^H_r, t \in \mathrm{M}^T_r) = \frac{|O_r|}{|\mathrm{M}^H_r||\mathrm{M}^T_r|}}$ Assume that all facts are missing with a probability $\lambda$, then $|N_r|=\lambda|O_r|$ and the above probability can be approximated by $ \frac{|N_r|}{\lambda|\mathrm{M}^H_r||\mathrm{M}^T_r|}$. We want the probability of generating a negative sample from the domain to be inversely proportional to the probability of the sample being true, so we define the probability as Eq. \ref{eq:domain_sampling}. The results in section \ref{sec:experiments} are obtained with $\lambda$ set to $0.001$. We compare how different value of $\lambda$ would influence our model's performance in Table. \ref{tab:domain_sampling}. With large $\lambda$ and higher domain sampling probability, our model's Hits@10 increases while mean rank also increases. The rise of mean rank is due to higher probability of generating a valid triple as a negative sample causing the energy of a valid triple to increase, which leads to a higher overall rank of a correct entity. However, the reasoning capability is boosted with higher Hits@10 as shown in the table. \subsection{Performance on individual relations of WN18} \label{sec:rare_WN} We plot the performance of ITransF and STransE on each relation. We see that the improvement is greater on rare relations. 
\iffalse \subsection{Performance on different relations} \fi \end{document} \documentclass[11pt,a4paper]{article} \usepackage[hyperref]{acl2017} \newcommand{\FIXME}[1]{\textcolor{red}{[#1]}} \aclfinalcopy % Uncomment this line for the final sumathbfission \def\aclpaperid{79} % Enter the acl Paper ID here \newcommand\BibTeX{B{\sc ib}\TeX} \title{An Interpretable Knowledge Transfer Model \\ for Knowledge Base Completion} \author{Qizhe Xie, Xuezhe Ma, Zihang Dai, Eduard Hovy \\ Language Technologies Institute \\ Carnegie Mellon University \\ Pittsburgh, PA 15213, USA\\ {\tt \{qzxie, xuezhem, dzihang, hovy\}@cs.cmu.edu} \\} \date{} \begin{document} \maketitle \begin{abstract} Knowledge bases are important resources for a variety of natural language processing tasks but suffer from incompleteness. We propose a novel embedding model, \emph{ITransF}, to perform knowledge base completion. Equipped with a sparse attention mechanism, ITransF discovers hidden concepts of relations and transfer statistical strength through the sharing of concepts. Moreover, the learned associations between relations and concepts, which are represented by sparse attention vectors, can be interpreted easily. We evaluate ITransF on two benchmark datasets---WN18 and FB15k for knowledge base completion and obtains improvements on both the mean rank and Hits@10 metrics, over all baselines that do not use additional information. \end{abstract} \section{Introduction} Knowledge bases (KB), such as WordNet~\citep{FellbaumC98}, Freebase~\citep{Bollacker:2008}, YAGO ~\citep{Suchanek:2007} and DBpedia~\citep{LehmannIJJKMHMK15}, are useful resources for many applications such as question answering~\citep{berant-EtAl:2013:EMNLP,yih-EtAl:2015:ACL-IJCNLP,dai-li-xu:2016:P16-1} and information extraction~\citep{mintz-EtAl:2009:ACLIJCNLP}. However, knowledge bases suffer from incompleteness despite their formidable sizes ~\citep{NIPS2013_5028,West:2014:KBC:2566486.2568032}, leading to a number of studies on automatic knowledge base completion (KBC)~\citep{NickelMTG15} or link prediction. The fundamental motivation behind these studies is that there exist some statistical regularities under the intertwined facts stored in the multi-relational knowledge base. By discovering generalizable regularities in known facts, missing ones may be recovered in a faithful way. Due to its excellent generalization capability, distributed representations, a.k.a. embeddings, have been popularized to address the KBC task~\citep{ICML2011Nickel_438,bordes:2011,Bordes2014SME,TransE,NIPS2013_5028,AAAI148531,guu-miller-liang:2015:EMNLP,STransE}. As a seminal work, \citet{TransE} proposes the TransE, which models the statistical regularities with linear translations between entity embeddings operated by a relation embedding. Implicitly, TransE assumes both entity embeddings and relation embeddings dwell in the same vector space, posing an unnecessarily strong prior. To relax this requirement, a variety of models first project the entity embeddings to a relation-dependent space~\citep{Bordes2014SME,ji-EtAl:2015:ACL-IJCNLP,AAAI159571,STransE}, and then model the translation property in the projected space. Typically, these relation-dependent spaces are characterized by the projection matrices unique to each relation. As a benefit, different aspects of the same entity can be temporarily emphasized or depressed as an effect of the projection. 
For instance, STransE~\citep{STransE} utilizes two projection matrices per relation, one for the head entity and the other for the tail entity. \iffalse Of these studies, a number of neural network based techniques have emerged over the years to address the KBC task, among which embedding based models~\citep{ICML2011Nickel_438,bordes:2011,Bordes2014SME,TransE,NIPS2013_5028,AAAI148531,guu-miller-liang:2015:EMNLP,STransE} have stood out for its simplicity and effectiveness. \citet{TransE} proposed the TransE model that associated entities and relations with dense embedding vectors. To better model different aspects of the same entity, a variety of models map the entity embedding to a relation-dependent space~\citep{Bordes2014SME,ji-EtAl:2015:ACL-IJCNLP,AAAI159571,STransE}. For instance, STransE~\citep{STransE} projected the head entity and tail entity to a relation-dependent space by multiplying two relation-specific projection matrices. \fi Despite the superior performance of STransE compared to TransE, it is more prone to the data sparsity problem. Concretely, since the projection spaces are unique to each relation, projection matrices associated with rare relations can only be exposed to very few facts during training, resulting in poor generalization. For common relations, a similar issue exists. Without any restrictions on the number of projection matrices, logically related or conceptually similar relations may have distinct projection spaces, hindering the discovery, sharing, and generalization of statistical regularities. Previously, a line of research makes use of external information such as textual relations from web-scale corpus or node features~\citep{toutanova-EtAl:2015:EMNLP, toutanova-chen:2015:CVSC, nguyen2016neighborhood}, alleviating the sparsity problem. In parallel, recent work has proposed to model regularities beyond local facts by considering multi-relation paths~\citep{garciaduran-bordes-usunier:2015:EMNLP, lin-EtAl:2015:EMNLP1, implicit}. Since the number of paths grows exponentially with its length, as a side effect, path-based models enjoy much more training cases, suffering less from the problem. \iffalse However, for some relations, in practice, there are not enough data to estimate the projection matrices.%(repretition) due to the data sparsity problem in knowledge bases. This led to a vast amount of research on utilizing external information, such as textual relations from web-scale corpus~\citep{toutanova-EtAl:2015:EMNLP, toutanova-chen:2015:CVSC} and relation path~\citep{garciaduran-bordes-usunier:2015:EMNLP, lin-EtAl:2015:EMNLP1, implicit}, to enhance performance. %\FIXME{The performance decrease was not caused by data sparsity?} Unfortunately, such task-specific knowledge is costly to develop, making these models difficult to adapt to new tasks or new domains. \fi In this paper, we propose an interpretable knowledge transfer model (ITransF), which encourages the sharing of statistic regularities between the projection matrices of relations and alleviates the data sparsity problem. At the core of ITransF is a sparse attention mechanism, which learns to compose shared concept matrices into relation-specific projection matrices, leading to a better generalization property. Without any external resources, ITransF improves mean rank and Hits@10 on two benchmark datasets, over all previous approaches of the same kind. 
In addition, the parameter sharing is clearly indicated by the learned sparse attention vectors, enabling us to interpret how knowledge transfer is carried out. To induce the desired sparsity during optimization, we further introduce a block iterative optimization algorithm. In summary, the contributions of this work are: (i) proposing a novel knowledge embedding model which enables knowledge transfer by learning to discover shared regularities; (ii) introducing a learning algorithm to directly optimize a sparse representation from which the knowledge transferring procedure is interpretable; (iii) showing the effectiveness of our model by outperforming baselines on two benchmark datasets for knowledge base completion task. \section{Notation and Previous Models} Let $E$ denote the set of entities and $R$ denote the set of relations. In knowledge base completion, given a training set $P$ of triples $(h, r, t)$ where $h,t\in E$ are the head and tail entities having a relation $r\in R$, e.g., (\textit{Steve Jobs}, \texttt{FounderOf}, \textit{Apple}), we want to predict missing facts such as (\textit{Steve Jobs}, \texttt{Profession}, \textit{Businessperson}). Most of the embedding models for knowledge base completion define an energy function $f_r(h,t)$ according to the fact's plausibility~\citep{bordes:2011,Bordes2014SME,TransE,NIPS2013_5028,AAAI148531,yang-etal-2015, guu-miller-liang:2015:EMNLP,STransE}. The models are learned to minimize energy $f_r(h,t)$ of a plausible triple $(h,r,t)$ and to maximize energy $f_r(h',t')$ of an implausible triple $(h',r,t')$. Motivated by the linear translation phenomenon observed in well trained word embeddings~\citep{mikolov2013distributed}, TransE~\citep{TransE} represents the head entity $h$, the relation $r$ and the tail entity $t$ with vectors $\mathbf{h}, \mathbf{r}$ and $\mathbf{t} \in \mathbb{R}^{n}$ respectively, which were trained so that $\mathbf{h}+\mathbf{r}\approx \mathbf{t}$. They define the energy function as $$f_r(h,t) = \| \mathbf{h} + \mathbf{r} - \mathbf{t} \|_{\ell}$$ where $\ell=1$ or $2$, which means either the $\ell_1$ or the $\ell_2$ norm of the vector $\mathbf{h} + \mathbf{r} - \mathbf{t}$ will be used depending on the performance on the validation set. To better model relation-specific aspects of the same entity, TransR~\citep{AAAI159571} uses projection matrices and projects the head entity and the tail entity to a relation-dependent space. STransE~\citep{STransE} extends TransR by employing different matrices for mapping the head and the tail entity. The energy function is $$f_r(h,t) = \|\mathbf{W}_{r,1}\mathbf{h} + \mathbf{r} - \mathbf{W}_{r,2}\mathbf{t} \|_{\ell}$$ However, not all relations have abundant data to estimate the relation specific matrices as most of the training samples are associated with only a few relations, leading to the data sparsity problem for rare relations. \section{Interpretable Knowledge Transfer} \subsection{Model} As discussed above, a fundamental weakness in TransR and STransE is that they equip each relation with a set of unique projection matrices, which not only introduces more parameters but also hinders knowledge sharing. Intuitively, many relations share some concepts with each other, although they are stored as independent symbols in KB. For example, the relation ``(somebody) won award for (some work)'' and ``(somebody) was nominated for (some work)'' both describe a person's high-quality work which wins an award or a nomination respectively. 
This phenomenon suggests that one relation actually represents a collection of real-world concepts, and one concept can be shared by several relations. Inspired by the existence of such lower-level concepts, instead of defining a unique set of projection matrices for every relation, we can alternatively define a small set of concept projection matrices and then compose them into customized projection matrices. Effectively, the relation-dependent translation space is then reduced to the smaller concept spaces. However, in general, we do not have prior knowledge about what concepts exist out there and how they are composed to form relations. Therefore, in ITransF, we propose to learn this information simultaneously from data, together with all knowledge embeddings. Following this idea, we first present the model details, then discuss the optimization techniques for training. \paragraph{Energy function} Specifically, we stack all the concept projection matrices to a 3-dimensional tensor $\mathbf{D}\in \mathbb{R}^{m \times n \times n}$, where $m$ is the pre-specified number of concept projection matrices and $n$ is the dimensionality of entity embeddings and relation embeddings. We let each relation select the most useful projection matrices from the tensor, where the selection is represented by an attention vector. The energy function of ITransF is defined as: \begin{equation} f_r(h,t) = \| \pmb{\alpha}_{r}^{H} \cdot \mathbf{D} \cdot \mathbf{h} + \mathbf{r} - \pmb{\alpha}_{r}^{T} \cdot \mathbf{D} \cdot \mathbf{t}\|_{\ell} \label{eq:energy function} \end{equation} where $\pmb{\alpha}_{r}^{H},\pmb{\alpha}_{r}^{T} \in [0,1]^m$, satisfying $\sum_i\pmb{\alpha}_{r,i}^{H}=\sum_i\pmb{\alpha}_{r,i}^{T}=1$, are normalized attention vectors used to compose all concept projection matrices in $\mathbf{D}$ by a convex combination. It is obvious that STransE can be expressed as a special case of our model when we use $m=2|R|$ concept matrices and set attention vectors to disjoint one-hot vectors. Hence our model space is a generalization of STransE. Note that we can safely use fewer concept matrices in ITransF and obtain better performance (see section \ref{sec:compress}), though STransE always requires $2|R|$ projection matrices. We follow previous work to minimize the following hinge loss function: \begin{equation} \mathcal{L}=\sum_{\substack{(h,r,t) \sim P,\\ (h',r,t') \sim N}} \left[ \gamma + f_{r}(h,t) -f_{r}(h',t') \right]_+ \label{eq:hinge} \end{equation} where $P$ is the training set consisting of correct triples, $N$ is the distribution of corrupted triples defined in section \ref{sec:sampling}, and $[\cdot]_+ = \max(\cdot, 0)$. Note that we have omitted the dependence of $N$ on $(h,r,t)$ to avoid clutter. We normalize the entity vectors $\mathbf{h},\mathbf{t}$, and the projected entity vectors $\pmb{\alpha}_{r}^{H} \cdot \mathbf{D}\cdot \mathbf{h}$ and $\pmb{\alpha}_{r}^{T} \cdot \mathbf{D} \cdot \mathbf{t}$ to have unit length after each update, which is an effective regularization method that benefits all models. \paragraph{Sparse attention vectors} In Eq.~\eqref{eq:energy function}, we have defined $\pmb{\alpha}_{r}^{H},\pmb{\alpha}_{r}^{T}$ to be some normalized vectors used for composition. With a dense attention vector, it is computationally expensive to perform the convex combination of $m$ matrices in each iteration. Moreover, a relation usually does not consist of all existing concepts in practice. 
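As a concrete reference for the model just defined, the following numpy sketch (an illustrative reimplementation, not the authors' Theano code; names are ours) spells out the energy of Eq.~\eqref{eq:energy function}, its STransE special case, and the hinge loss of Eq.~\eqref{eq:hinge}; entity and relation vectors have dimension $n$, $\mathbf{D}$ holds $m$ concept matrices of size $n \times n$, and the attention vectors are assumed given.

\begin{verbatim}
import numpy as np

def compose_projection(alpha, D, e):
    # (sum_i alpha_i * D_i) e : convex combination of the m concept matrices
    # in D (shape (m, n, n)), applied to an entity vector e of shape (n,).
    return np.einsum('i,ijk,k->j', alpha, D, e)

def itransf_energy(h, r, t, alpha_h, alpha_t, D, ell=1):
    # Eq. (1):  || alpha_r^H . D . h + r - alpha_r^T . D . t ||_ell
    diff = compose_projection(alpha_h, D, h) + r - compose_projection(alpha_t, D, t)
    return np.linalg.norm(diff, ord=ell)

def stranse_energy(h, r, t, W1, W2, ell=1):
    # The STransE baseline || W_{r,1} h + r - W_{r,2} t ||_ell, recovered
    # from Eq. (1) when the attention vectors are disjoint one-hot vectors.
    return np.linalg.norm(W1 @ h + r - W2 @ t, ord=ell)

def hinge_loss(pos_energies, neg_energies, gamma):
    # Eq. (2): sum of [ gamma + f_r(h,t) - f_r(h',t') ]_+ over sampled pairs.
    return np.maximum(gamma + pos_energies - neg_energies, 0.0).sum()
\end{verbatim}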
Furthermore, when the attention vectors are sparse, it is often easier to interpret their behaviors and understand how concepts are shared by different relations. Motivated by these potential benefits, we further hope to learn sparse attention vectors in ITransF. However, directly posing $\ell_1$ regularization~\citep{tibshirani1996regression} on the attention vectors fails to produce sparse representations in our preliminary experiment, which motivates us to enforce $\ell_0$ constraints on $\pmb{\alpha}_{r}^T,\pmb{\alpha}_{r}^H$. In order to satisfy both the normalization condition and the $\ell_0$ constraints, we reparameterize the attention vectors in the following way: \begin{align*} \pmb{\alpha}_{r}^{H}&=\mathrm{SparseSoftmax}(\mathbf{v}_{r}^{H}, \mathbf{I}_{r}^{H}) \\ \pmb{\alpha}_{r}^{T}&=\mathrm{SparseSoftmax}(\mathbf{v}_{r}^{T}, \mathbf{I}_{r}^{T}) \end{align*} where $\mathbf{v}_{r}^{H}, \mathbf{v}_{r}^{T} \in \mathbb{R}^m$ are the pre-softmax scores, $\mathbf{I}_{r}^{H}, \mathbf{I}_{r}^{T}\in \{0,1\}^{m}$ are the sparse assignment vectors, indicating the non-zero entries of attention vectors, and the $\mathrm{SparseSoftmax}$ is defined as \begin{equation*} \mathrm{SparseSoftmax}(\mathbf{v}, \mathbf{I})_i=\frac{\exp(\mathbf{v}_i / \tau) \mathbf{I}_i}{\sum_j \exp(\mathbf{v}_j / \tau) \mathbf{I}_j} \end{equation*} with $\tau$ being the temperature of Softmax. With this reparameterization, $\mathbf{v}_{r}^{H}, \mathbf{v}_{r}^{T}$ and $\mathbf{I}_{r}^{H}, \mathbf{I}_{r}^{T}$ replace $\pmb{\alpha}_{r}^T,\pmb{\alpha}_{r}^H$ to become the real parameters of the model. Also, note that it is equivalent to pose the $\ell_0$ constraints on $\mathbf{I}_{r}^{H}, \mathbf{I}_{r}^{T}$ instead of $\pmb{\alpha}_{r}^T,\pmb{\alpha}_{r}^H$. Putting these modifications together, we can rewrite the optimization problem as \begin{equation} \begin{aligned} & {\text{minimize}} & & \mathcal{L} \\ & \text{subject to} & & \|\mathbf{I}_{r}^{H}\|_{0} \leq k,\|\mathbf{I}_{r}^{T}\|_{0} \leq k \end{aligned} \label{eq:l0_problem} \end{equation} where $\mathcal{L}$ is the loss function defined in Eq.~\eqref{eq:hinge}. \subsection{Block Iterative Optimization} Though sparseness is favorable in practice, it is generally NP-hard to find the optimal solution under $\ell_0$ constraints. Thus, we resort to an approximated algorithm in this work. For convenience, we refer to the parameters with and without the sparse constraints as the \textit{sparse} partition and the \textit{dense} partition, respectively. Based on this notion, the high-level idea of the approximated algorithm is to iteratively optimize one of the two partitions while holding the other one fixed. Since all parameters in the dense partition, including the embeddings, the projection matrices, and the pre-softmax scores, are fully differentiable with the sparse partition fixed, we can simply utilize SGD to optimize the dense partition. Then, the core difficulty lies in the step of optimizing the sparse partition (i.e. the sparse assignment vectors), during which we want the following two properties to hold \begin{enumerate}[itemsep=-1mm] \item the sparsity required by the $\ell_0$ constaint is maintained, and \item the cost define by Eq.~\eqref{eq:hinge} is decreased. \end{enumerate} Satisfying the two criterion seems to highly resemble the original problem defined in Eq.~\eqref{eq:l0_problem}. However, the dramatic difference here is that with parameters in the dense partition regarded as constant, the cost function is decoupled w.r.t. each relation $r$. 
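As a concrete reference for the reparameterization above, a minimal numpy sketch of $\mathrm{SparseSoftmax}$ is given below (an illustration, not the released implementation); the temperature default follows the value reported in the implementation details.

\begin{verbatim}
import numpy as np

def sparse_softmax(v, I, tau=0.25):
    # v: (m,) pre-softmax scores; I: (m,) binary assignment vector with at
    # most k non-zero entries; tau: softmax temperature (1/4 in the paper).
    # Masked-out concepts receive exactly zero attention, and the remaining
    # scores are renormalised to sum to one.
    masked = np.where(I > 0, v / tau, -np.inf)
    masked = masked - masked.max()   # numerical stability
    z = np.exp(masked)               # exp(-inf) = 0 for masked entries
    return z / z.sum()
\end{verbatim}

For instance, with $m=30$ concepts and an assignment vector containing four ones, the returned attention vector is supported on exactly those four concepts; with $\mathbf{I}$ held fixed, the attention remains a differentiable function of the scores $\mathbf{v}$.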
In other words, the optimal choice of $\mathbf{I}_{r}^{H}, \mathbf{I}_{r}^{T}$ is independent of $\mathbf{I}_{r'}^{H}, \mathbf{I}_{r'}^{T}$ for any $r' \neq r$. Therefore, we only need to consider the optimization for a single relation $r$, which is essentially an assignment problem. Note that, however, $\mathbf{I}_{r}^{H}$ and $\mathbf{I}_{r}^{T}$ are still coupled, without which we basically reach the situation in a backpack problem. In principle, one can explore combinatorial optimization techniques to optimize $\mathbf{I}_{r'}^{H}, \mathbf{I}_{r'}^{T}$ jointly, which usually involve some iterative procedure. To avoid adding another inner loop to our algorithm, we turn to a simple but fast approximation method based on the following single-matrix cost. Specifically, for each relation $r$, we consider the induced cost $\mathcal{L}_{r,i}^{H}$ where only a single projection matrix $i$ is used for the head entity: \begin{equation*} \mathcal{L}_{r,i}^{H} = \sum_{\substack{(h,r,t) \sim P_r,\\ (h',r,t') \sim N_r}} \left[ \gamma + f_{r,i}^{H}(h,t) - f_{r,i}^{H}(h',t') \right]_+ \end{equation*} where $f_{r,i}^{H}(h,t) = \| \mathbf{D}_i \cdot \mathbf{h} + \mathbf{r} - \pmb{\alpha}_{r}^{T} \cdot \mathbf{D} \cdot \mathbf{t}\|$ is the corresponding energy function, and the subscript in $P_r$ and $N_r$ denotes the subsets with relation $r$. Intuitively, $\mathcal{L}_{r,i}^{H}$ measures, given the current tail attention vector $\pmb{\alpha}_{r}^{T}$, if only one project matrix could be chosen for the head entity, how implausible $D_i$ would be. Hence, $i^* = \arg\min_i \mathcal{L}_{r,i}^{H}$ gives us the best single projection matrix on the head side given $\pmb{\alpha}_{r}^{T}$. Now, in order to choose the best $k$ matrices, we basically ignore the interaction among projection matrices, and update $\mathbf{I}_{r}^{H}$ in the following way: \begin{equation*} \mathbf{I}_{r,i}^{H} \leftarrow \begin{cases} 1, &i \in \mathrm{argpartition}_{i}(\mathcal{L}_{r,i}^{H}, k) \\ 0, &\text{otherwise} \end{cases}%, \, \forall i \end{equation*} where the function $\mathrm{argpartition}_{i}(x_i, k)$ produces the index set of the lowest-$k$ values of $x_i$. Analogously, we can define the single-matrix cost $\mathcal{L}_{r,i}^{T}$ and the energy function $f_{r,i}^{T}(h,t)$ on the tail side in a symmetric way. Then, the update rule for $\mathbf{I}_{r}^{H}$ follows the same derivation. Admittedly, the approximation described here is relatively crude. But as we will show in section \ref{sec:experiments}, the proposed algorithm yields good performance empirically. We leave the further improvement of the optimization method as future work. \iffalse Though sparseness is favorable in practice, even in linear regression, it has been shown to be an NP-hard problem to find the optimal solutions under $\ell_0$ constraints. %A lot of algorithms such as Approximated algorithms such as forward stepwise algorithm are proposed. Here we propose an approximated algorithm to solve it. We divide all of our parameters into two partitions: differentiable and non-differentiable, and we iteratively optimize those two. Differentiable parameters such as embeddings, projection matrices are optimized by SGD. Non-differentiable parameters are optimized by a greedy approximated process, aiming to minimize the cost function. Recall that we want the number of concepts associated with relations to be less than or equal to $k$, indicated by the $\ell_0$ constraint of attention vectors. 
We represent the mapping between relation $r$ and concepts by two indicator vectors $\mathbf{I}_{r}^{H}, \mathbf{I}_{r}^{T}\in \{-\infty ,1\}^{m}$, the indicator of non-zero entries in attention vectors. The $\ell_0$ constraint is guaranteed as the number of $1$s in $\mathbf{I}_{r}^{H},\mathbf{I}_{r}^{T}$ is less than or equal to $k$. Those mapping vectors constitutes the non-differentiable parameters in our framework. Formally, the attention vectors are calculated as follows: $$\pmb{\alpha}_{r}^{H}=\mathrm{Softmax}(\mathbf{v}_{r}^{H} \circ \mathbf{I}_{r}^{H})$$ $$\pmb{\alpha}_{r}^{T}=\mathrm{Softmax}(\mathbf{v}_{r}^{T} \circ \mathbf{I}_{r}^{T})$$ \begin{equation*} \mathrm{Softmax}(\mathbf{x})_i=\frac{\exp(\mathbf{x}_i / \tau)}{\sum_j \exp(\mathbf{x}_j / \tau)} \end{equation*} where $\mathbf{v}_{r}^{H}, \mathbf{v}_{r}^{T} \in (0, \infty)^m$ are the parameters for attention, $\circ$ is element wise multiplication, $\tau$ is the temperature of softmax ($\tau$ is set to $1/4$ in our experiments). Then there are only $k$ non-zero elements in $\pmb{\alpha}_{r}^{H}$ and $\pmb{\alpha}_{r}^{T}$ since $exp(-\infty)=0$. The algorithm is: \begin{itemize} \item[(1)] Randomly initialize mapping vectors $\mathbf{I}_{r}^{H}, \mathbf{I}_{r}^{T}$, which is not differentiable. \item[(2)] Optimize differentiable parameters by SGD with fixed mapping vectors for several epochs. \item[(3)] If it is the early stage of training, fix the differentiable parameters learned in the previous step and optimize mapping vectors $\mathbf{I}_{r}^{H}, \mathbf{I}_{r}^{T}$ to minimize objective function. Go to step (2) if the model has not converged. \end{itemize} How do we optimize mapping vectors? A brute-force algorithm is to enumerate all possible values of mapping vectors. However, the time complexity of such a naive algorithm grows exponentially. Instead, we make several approximations to optimize the mapping vectors greedily. We define $J_{r,i}^{H}$ as an approximation to the cost $L$ when relation $r$ is mapped to concept $i$, i.e., $\textbf{I}_{r,i}^{H}=1$. We select the top $k$ concepts with smallest approximated cost when we optimize mapping vectors. The approximated cost takes the same hinge loss as the original cost function shown in Equation \ref{eq:hinge} with a concept specific energy function $f_{r,i}^{H}(h,t)$: \begin{equation*} J_{r,i}^{H}=\sum_{\substack{(h,r,t) \in P, \\ (h',r,t') \sim N(h,r,t)}}\max(\gamma + f_{r,i}^{H}(h,t) -f_{r,i}^{H}(h',t'), 0) \end{equation*} where the energy function is similar to the original function defined in Equation \ref{eq:energy function} except that relation $r$ is completely assigned with concept $i$: \begin{align*} f_{r,i}^{H}(h,t) &= \| \mathbf{D}_i \cdot \mathbf{h} + \mathbf{r} - \pmb{\alpha}_{r}^{T} \cdot \mathbf{D} \cdot \mathbf{t}\| \\ \end{align*} Similarly, the formulations for tail entities are defined as \begin{align*} f_{r,i}^{T}(h,t) &= \| \pmb{\alpha}_{r}^{H} \cdot \mathbf{D} \cdot \mathbf{h} + \mathbf{r} - \mathbf{D}_i \cdot \mathbf{t}\| \end{align*} $$J_{r,i}^{T}=\sum_{\substack{(h,r,t) \in P, \\ (h',r,t') \sim N(h,r,t)}}[\gamma + f_{r,i}^{T}(h,t) -f_{i}^{r,2}(h',t')]_+$$ The above process is a greedy algorithm. We make the following relaxations to ensure efficient and effective approximation: Firstly, concepts used to project head and tail entities are decoupled and selected independently. 
Secondly, $J_{r,i}^{H}$ and $J_{r,i}^{T}$ are evaluated on condition that concept $i$ is fully utilized, i.e., we ignore the interaction between concept $i$ and other concepts by setting attention $\pmb{\alpha}_{r,1,i}= 1$\footnote{The relaxation is made to reduce the computation complexity. Otherwise, to evaluate indicator vectors involving multiple matrices, we need to perform SGD to get the corresponding optimal values of $v_{r,1}$ and $v_{r,2}$. }. The greedy process works well empirically. We draw our inspiration from Expectation-Maximization (EM) ~\citep{dempster1977maximum} and LightRNN~\citep{LightRNN}. The above algorithm is similar to EM and LightRNN in the sense that some parameters can change rapidly based on the estimation of the corresponding cost. In other words, we are not changing and exploring the mapping vectors bit by bit but they can be reassigned with a completely different value in one step, leading to fast convergence. \fi \iffalse \begin{algorithm}[] Initialize $\mathbf{I}_{r}^{H}$,$\mathbf{I}_{r}^{T}$ randomly \\ \While {not convergence} { \For{\texttt{epoch} $= 1$ to $T_1$}{ Optimize $L$ by SGD on $\theta$ with $I_{r}^{H}, I_{r}^{T}$ fixed } $\texttt{tot\_epoch} = \texttt{tot\_epoch} + T_1$ \\ \If{$\texttt{tot\_epoch} \leq T_2$}{ { Find $\mathbf{I}_{r}^{(H)'}, \mathbf{I}_{r}^{(T)'}$ which approximately maximize $L$. \\ Set $\mathbf{I}_{r}^{H}=\mathbf{I}_{r}^{(H)'}, \mathbf{I}_{r}^{T}=\mathbf{I}_{r}^{(T)'}$. }} } \caption{Coordinate ascent optimization algorithm} \label{alg:opt} \end{algorithm} \fi \subsection{Corrupted Sample Generating Method} \label{sec:sampling} Recall that we need to sample a negative triple $(h',r,t')$ to compute hinge loss shown in Eq.~\ref{eq:hinge}, given a positive triple $(h,r,t)\in P$. The distribution of negative triple is denoted by $N(h,r,t)$. Previous work~\citep{TransE, AAAI159571, yang-etal-2015,STransE} generally constructs a set of corrupted triples by replacing the head entity or tail entity with a random entity uniformly sampled from the KB. However, uniformly sampling corrupted entities may not be optimal. Often, the head and tail entities associated a relation can only belong to a specific domain. When the corrupted entity comes from other domains, it is very easy for the model to induce a large energy gap between true triple and corrupted one. As the energy gap exceeds $\gamma$, there will be no training signal from this corrupted triple. In comparison, if the corrupted entity comes from the same domain, the task becomes harder for the model, leading to more consistent training signal. Motivated by this observation, we propose to sample corrupted head or tail from entities in the same domain with a probability $p_r$ and from the whole entity set with probability $1-p_r$. The choice of relation-dependent probability $p_r$ is specified in Appendix \ref{sec:domain_sampling}. In the rest of the paper, we refer to the new proposed sampling method as "domain sampling". \section{Experiments} \label{sec:experiments} \subsection{Setup} To evaluate link prediction, we conduct experiments on the WN18 (WordNet) and FB15k (Freebase) introduced by \citet{TransE} and use the same training/validation/test split as in \citep{TransE}. The information of the two datasets is given in Table \ref{tab:datasets}. In knowledge base completion task, we evaluate model's performance of predicting the head entity or the tail entity given the relation and the other entity. 
For example, to predict head given relation $r$ and tail $t$ in triple $(h,r,t)$, we compute the energy function $f_r(h', t)$ for each entity $h'$ in the knowledge base and rank all the entities according to the energy. We follow \citet{TransE} to report the \emph{filter} results, i.e., removing all other correct candidates $h'$ in ranking. The rank of the correct entity is then obtained and we report the mean rank (mean of the predicted ranks) and Hits@10 (top $10$ accuracy). Lower mean rank or higher Hits@10 mean better performance. \subsection{Implementation Details} We initialize the projection matrices with identity matrices added with a small noise sampled from normal distribution $\mathcal{N}(0,\,0.005^2)$. The entity and relation vectors of ITransF are initialized by TransE~\citep{TransE}, following~\citet{AAAI159571, ji-EtAl:2015:ACL-IJCNLP, Garcia-DuranBUG15, garciaduran-bordes-usunier:2015:EMNLP, lin-EtAl:2015:EMNLP1}. We ran mini-batch SGD until convergence. We employ the {``\textit{Bernoulli}''} sampling method to generate incorrect triples as used in \citet{AAAI148531}, \citet{AAAI159571}, \citet{He:2015}, \citet{ji-EtAl:2015:ACL-IJCNLP} and \citet{lin-EtAl:2015:EMNLP1}. STransE~\citep{STransE} is the most similar knowledge embedding model to ours except that they use distinct projection matrices for each relation. We use the same hyperparameters as used in STransE and no significant improvement is observed when we alter hyperparameters. We set the margin $\gamma$ to $5$ and dimension of embedding $n$ to $50$ for WN18, and $\gamma = 1, n = 100$ for FB15k. We set the batch size to $20$ for WN18 and $1000$ for FB15k. The learning rate is $0.01$ on WN18 and $0.1$ on FB15k. We use $30$ matrices on WN18 and $300$ matrices on FB15k. All the models are implemented with Theano~\citep{bergstra2010theano}. The Softmax temperature is set to $1/4$. %\FIXME{T1, T2} \subsection{Results \& Analysis} The overall link prediction results\footnote{Note that although IRN~\citep{implicit} does not explicitly exploit path information, it performs multi-step inference through the multiple usages of external memory. When IRN is allowed to access memory once for each prediction, its Hits@10 is $80.7$, similar to models without path information.} are reported in Table \ref{tab:main}. Our model consistently outperforms previous models without external information on both the metrics of WN18 and FB15k. On WN18, we even achieve a much better mean rank with comparable Hits@10 than current state-of-the-art model IRN employing external information. We can see that path information is very helpful on FB15k and models taking advantage of path information outperform intrinsic models by a significant margin. Indeed, a lot of facts are easier to recover with the help of multi-step inference. For example, if we know Barack Obama is born in Honolulu, a city in the United States, then we easily know the nationality of Obama is the United States. %In our framework, since Obama+IsBornIn$\approx$Honolulu, Honolulu+IsCityIn$\approx$US. Then we can expect Obama+IsBornIn+IsCityIn$\approx$US. %Projection matrices are not exactly the same An straightforward way of extending our proposed model to $k$-step path $P=\{r_i\}_{i=1}^{k}$ is to define a path energy function $\| \pmb{\alpha}_{P}^{H} \cdot \mathbf{D} \cdot \mathbf{h} + \sum_{r_i\in P}\mathbf{r}_i - \pmb{\alpha}_{P}^{T} \cdot \mathbf{D} \cdot \mathbf{t}\|_{\ell}$, $\pmb{\alpha}_{P}^{H}$ is a concept association related to the path. 
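Backing up to the evaluation protocol described at the beginning of this section, a schematic sketch of the filtered rank and the two reported metrics is shown below (an illustration, not the actual evaluation script); \texttt{energy\_fn} stands for any trained scoring function $f_r$, and \texttt{known\_triples} for the set of correct triples used for filtering.

\begin{verbatim}
import numpy as np

def filtered_rank(energy_fn, h, r, t, entities, known_triples,
                  corrupt_head=True):
    # Rank the correct entity among all candidates by energy, removing
    # every other candidate that also forms a known correct triple,
    # as in the "filter" setting described above.
    target = h if corrupt_head else t
    scored = []
    for e in entities:
        cand = (e, r, t) if corrupt_head else (h, r, e)
        if e != target and cand in known_triples:
            continue                  # filter other correct answers
        scored.append((energy_fn(*cand), e))
    scored.sort()                     # lower energy = more plausible
    return 1 + [e for _, e in scored].index(target)

def mean_rank_and_hits(ranks, k=10):
    ranks = np.asarray(ranks, dtype=float)
    return ranks.mean(), 100.0 * (ranks <= k).mean()
\end{verbatim}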
We plan to extend our model to multi-step paths in future work. To provide a detailed understanding of why the proposed model achieves better performance, we present some further analysis in the sequel. \iffalse In many knowledge bases, a small number of relations enjoy the majority of data, while a lot of relations are rare and hard to deal with. The statistics of relations on FB15k and WN18 are shown in Figure \ref{fig:stat}. We can clearly see that the distributions exhibit long tails, just like the Zipf's law for word frequency. \fi \paragraph{Performance on Rare Relations} In the proposed ITransF, we design an attention mechanism to encourage knowledge sharing across different relations. Naturally, facts associated with rare relations should benefit most from such sharing, boosting the overall performance. To verify this hypothesis, we investigate our model's performance on relations with different frequencies. The overall distribution of relation frequencies resembles that of word frequencies and approximately follows a power law; the statistics of relations on FB15k and WN18 are shown in Figure \ref{fig:stat}, where the distributions clearly exhibit long tails, just like Zipf's law for word frequency. In order to study the performance on relations with different frequencies, we sort all relations by their frequency in the training set and split them into 3 buckets evenly, so that each bucket spans a similar interval of log frequency. Within each bucket, we compare our model with STransE, as shown in Figure \ref{fig:rare}.\footnote{Domain sampling is not employed.} As we can see, on WN18, ITransF outperforms STransE by a significant margin on rare relations. In particular, in the last bin (rarest relations), the average Hits@10 increases from $74.4$ to $92.0$, showing the great benefit of transferring statistical strength from common relations to rare ones. The comparison on each relation is shown in Appendix \ref{sec:rare_WN}, where we can observe that the improvement is greater on rare relations. On FB15k, we can also observe a similar pattern, although the degree of improvement is less significant. We conjecture that the difference is rooted in the fact that many rare relations on FB15k have disjoint domains, which makes knowledge transfer through common concepts harder. \paragraph{Interpretability} In addition to the quantitative evidence supporting the effectiveness of knowledge sharing, we provide some intuitive examples to show how knowledge is shared in our model. As mentioned earlier, the sparse attention vectors fully capture the association between relations and concepts and hence the knowledge transfer among relations. Thus, we visualize the attention vectors for several relations on both WN18 and FB15k in Figure \ref{fig:att}. For WN18, the words ``hyponym'' and ``hypernym'' refer to words with more specific or more general meanings, respectively. For example, PhD is a hyponym of student and student is a hypernym of PhD. As we can see, concepts associated with the head entities in one relation are also associated with the tail entities in its reverse relation. Further, ``instance\_hypernym'' is a special hypernym with the head entity being an instance and the tail entity being an abstract notion. A typical example is $(\textit{New York}, \texttt{instance\_hypernym}, \textit{city})$. 
This connection has also been discovered by our model, indicated by the fact that ``instance\_hypernym(T)'' and ``hypernym(T)'' share a common concept matrix. Finally, for symmetric relations like ``similar\_to'', we see that the head attention is identical to the tail attention, which matches our intuition well. On FB15k, we also see sharing between reverse relations, as in ``(somebody) won\_award\_for (some work)'' and ``(some work) award\_winning\_work (somebody)''. What is more, although the relations ``won\_award\_for'' and ``was\_nominated\_for'' share the same concepts, their attention distributions are different, suggesting distinct emphasis. Likewise, symmetric relations like spouse behave similarly, as mentioned before. \paragraph{Model Compression} \label{sec:compress} A byproduct of the parameter sharing mechanism employed by ITransF is a much more compact model with equal performance. Figure \ref{fig:num_of_matrix} plots the average performance of ITransF against the number of projection matrices $m$, together with two baseline models. On FB15k, when we reduce the number of matrices from $2200$ to $30$ ($\sim90\times$ compression), our model's performance decreases by only $0.09\%$ on Hits@10, still outperforming STransE. Similarly, on WN18, ITransF continues to achieve the best performance when we reduce the number of concept projection matrices to $18$. \section{Analysis on Sparseness} Sparseness is desirable since it contributes to the interpretability and computational efficiency of our model. In this section, we investigate whether enforcing sparseness deteriorates the model performance, and we compare our method with an alternative sparse encoding method. \paragraph{Dense Attention w/o $\ell_1$ regularization} Although an $\ell_0$-constrained model usually enjoys many practical advantages, it may deteriorate the model performance when applied improperly. Here, we show that our model employing sparse attention can achieve results similar to dense attention with a significantly smaller computational burden. We also compare dense attention with $\ell_1$ regularization. We set the $\ell_1$ coefficient to $0.001$ in our experiments and do not apply Softmax, since the $\ell_1$ norm of a vector after Softmax is always $1$. We compare models in a setting where the computation time of the dense attention model is acceptable\footnote{With $300$ projection matrices, it takes 1 hour and 1 minute to run one epoch for a model with dense attention.}. We use $22$ weight matrices on WN18 and $15$ weight matrices on FB15k and train both models for $2000$ epochs. The results are reported in Table \ref{tab:dense}. Generally, ITransF with sparse attention has slightly better or comparable performance compared to dense attention. Further, we show the attention vectors of the model with $\ell_1$-regularized dense attention in Figure \ref{fig:att_l1}. We see that $\ell_1$ regularization does not produce sparse attention, especially on FB15k. \paragraph{Nonnegative Sparse Encoding} In the proposed model, we induce sparsity via a carefully designed iterative optimization procedure. Apart from this approach, one may utilize sparse encoding techniques to obtain sparseness based on the pretrained projection matrices from STransE. 
Concretely, stacking $|2R|$ pretrained projection matrices into a 3-dimensional tensor $X\in \mathbb{R}^{2|R| \times n \times n}$, similar sparsity can be induced by solving an $\ell_1$-regularized tensor completion problem $\min_{\mathbf{A},\mathbf{D}} ||\mathbf{X}-\mathbf{DA}||_2^2 + \lambda \|\mathbf{A}\|_{\ell_1}$. Basically, $\mathbf{A}$ plays the same role as the attention vectors in our model. For more details, we refer readers to \citep{faruqui-EtAl:2015:ACL-IJCNLP}. For completeness, we compare our model with the aforementioned approach\footnote{We use the toolkit provided by \citep{faruqui-EtAl:2015:ACL-IJCNLP}.}. The comparison is summarized in table \ref{tab:optimization}. On both benchmarks, ITransF achieves significant improvement against sparse encoding on pretrained model. This performance gap should be expected since the objective function of sparse encoding methods is to minimize the reconstruction loss rather than optimize the criterion for link prediction. \iffalse To investigate whether our representation, we apply a non-negative sparse encoding method to obtain the sparse representation of projection matrices. We use the toolkit provided by \citep{faruqui-EtAl:2015:ACL-IJCNLP} and set the hyperparamters so that we obtain average degree of sparseness. We first train a STransE model which utilize separate projection matrices for different relations and stack all the projection matrices in a 3-Dimensional tensor $X\in \mathbb{R}^{2H\times n \times n}$ where $H$ is the number of relations. Then we minimize the following reconstruction loss \begin{equation} \begin{aligned} \min_{\mathbf{D},\mathbf{A}} ||\mathbf{X}-\mathbf{DA}||_2^2 + \lambda \Omega(\mathbf{A}) + \gamma ||\mathbf{D}||_2^2 \end{aligned} \end{equation} where $\mathbf{D}\in \mathbb{R}^{m \times n \times n}$ is the basis matrices and $\Omega$ is a regularizer to ensure sparseness representations. which utilize a $\ell_1$ regularizer. We \fi \section{Related Work} \label{sec:related_work} In KBC, CTransR~\citep{AAAI159571} enables relation embedding sharing across similar relations, but they cluster relations before training rather than learning it in a principled way. Further, they do not solve the data sparsity problem because there is no sharing of projection matrices which have a lot more parameters. Learning the association between semantic relations has been used in related problems such as relational similarity measurement~\citep{turney2012domain} and relation adaptation~\citep{bollegala2015embedding}. Data sparsity is a common problem in many fields. Transfer learning~\citep{pan2010survey} has been shown to be promising to transfer knowledge and statistical strengths across similar models or languages. For example, \citet{D16-1153} transfers models on resource-rich languages to low resource languages by parameter sharing through common phonological features in name entity recognition. \citet{zoph-EtAl:2016:EMNLP2016} initialize from models trained by resource-rich languages to translate low-resource languages. Several works on obtaining a sparse attention~\citep{martins2016softmax, makhzani2013k,OUTRAGEOUSLY} share a similar idea of sorting the values before softmax and only keeping the $K$ largest values. However, the sorting operation in these works is not GPU-friendly. The block iterative optimization algorithm in our work is inspired by LightRNN~\citep{LightRNN}. They allocate every word in the vocabulary in a table. 
A word is represented by a row vector and a column vector depending on its position in the table. They iteratively optimize embeddings and allocation of words in tables. \section{Conclusion and Future Work} In summary, we propose a knowledge embedding model which can discover shared hidden concepts, and design a learning algorithm to induce the interpretable sparse representation. Empirically, we show our model can improve the performance on two benchmark datasets without external resources, over all previous models of the same kind. In the future, we plan to enable ITransF to perform multi-step inference, and extend the sharing mechanism to entity and relation embeddings, further enhancing the statistical binding across parameters. In addition, our framework can also be applied to multi-task learning, promoting a finer sharing among different tasks. \section*{Acknowledgments} We thank anonymous reviewers and Graham Neubig for valuable comments. We thank Yulun Du, Paul Mitchell, Abhilasha Ravichander, Pengcheng Yin and Chunting Zhou for suggestions on the draft. We are also appreciative for the great working environment provided by staff in LTI. This research was supported in part by DARPA grant FA8750-12-2-0342 funded under the DEFT program. \bibliographystyle{acl_natbib} \clearpage \appendix \section{Appendix} \subsection{Domain Sampling Probability} \label{sec:domain_sampling} In this section, we define the probability $p_r$ to generate a negative sample from the same domain mentioned in Section \ref{sec:sampling}. The probability cannot be too high to avoid generating negative samples that are actually correct, since there are generally a lot of facts missing in KBs. %To avoid generating false negative samples, we apply domain sampling with a higher probability on relations with a sparse domain. Specifically, let $\mathrm{M}^H_r=\{h \mid \exists t (h,r,t) \in P\}$ and $\mathrm{M}^T_r=\{t \mid \exists h (h,r,t) \in P\}$ denote the head or tail domain of relation $r$. Suppose $N_r=\{(h,r,t) \in P\}$ is the induced set of edges with relation $r$. We define the probability $p_r$ as \begin{equation} p_r=min(\frac{\lambda|\mathrm{M}^T_r| |\mathrm{M}^H_r|}{|N_r|}, 0.5) \label{eq:domain_sampling} \end{equation} Our motivation of such a formulation is as follows: Suppose $O_r$ is the set that contains all truthful fact triples on relation $r$, i.e., all triples in training set and all other missing correct triples. If we assume all fact triples within the domain has uniform probability of being true, the probability of a random triple being correct is ${Pr((h,r,t)\in O_r \mid h\in \mathrm{M}^H_r, t \in \mathrm{M}^T_r) = \frac{|O_r|}{|\mathrm{M}^H_r||\mathrm{M}^T_r|}}$ Assume that all facts are missing with a probability $\lambda$, then $|N_r|=\lambda|O_r|$ and the above probability can be approximated by $ \frac{|N_r|}{\lambda|\mathrm{M}^H_r||\mathrm{M}^T_r|}$. We want the probability of generating a negative sample from the domain to be inversely proportional to the probability of the sample being true, so we define the probability as Eq. \ref{eq:domain_sampling}. The results in section \ref{sec:experiments} are obtained with $\lambda$ set to $0.001$. We compare how different value of $\lambda$ would influence our model's performance in Table. \ref{tab:domain_sampling}. With large $\lambda$ and higher domain sampling probability, our model's Hits@10 increases while mean rank also increases. 
The rise of mean rank is due to higher probability of generating a valid triple as a negative sample causing the energy of a valid triple to increase, which leads to a higher overall rank of a correct entity. However, the reasoning capability is boosted with higher Hits@10 as shown in the table. \subsection{Performance on individual relations of WN18} \label{sec:rare_WN} We plot the performance of ITransF and STransE on each relation. We see that the improvement is greater on rare relations. \iffalse \subsection{Performance on different relations} \fi \end{document}
An Interpretable Knowledge Transfer Model for Knowledge Base Completion
1704.05908
Table 4: Different methods to obtain sparse representations
[ "[BOLD] Method", "[BOLD] WN18 MR", "[BOLD] WN18 H10", "[BOLD] FB15k MR", "[BOLD] FB15k H10" ]
[ [ "Sparse Encoding", "211", "86.6", "66", "79.1" ], [ "ITransF", "[BOLD] 205", "[BOLD] 94.2", "[BOLD] 65", "[BOLD] 81.0" ] ]
On both benchmarks, ITransF achieves a significant improvement over sparse encoding on the pretrained model. This performance gap is expected, since the objective of sparse encoding methods is to minimize the reconstruction loss rather than to optimize the criterion for link prediction.
\documentclass[11pt,a4paper]{article} \usepackage[hyperref]{acl2017} \newcommand{\FIXME}[1]{\textcolor{red}{[#1]}} \aclfinalcopy % Uncomment this line for the final sumathbfission \def\aclpaperid{79} % Enter the acl Paper ID here \newcommand\BibTeX{B{\sc ib}\TeX} \title{An Interpretable Knowledge Transfer Model \\ for Knowledge Base Completion} \author{Qizhe Xie, Xuezhe Ma, Zihang Dai, Eduard Hovy \\ Language Technologies Institute \\ Carnegie Mellon University \\ Pittsburgh, PA 15213, USA\\ {\tt \{qzxie, xuezhem, dzihang, hovy\}@cs.cmu.edu} \\} \date{} \begin{document} \maketitle \begin{abstract} Knowledge bases are important resources for a variety of natural language processing tasks but suffer from incompleteness. We propose a novel embedding model, \emph{ITransF}, to perform knowledge base completion. Equipped with a sparse attention mechanism, ITransF discovers hidden concepts of relations and transfer statistical strength through the sharing of concepts. Moreover, the learned associations between relations and concepts, which are represented by sparse attention vectors, can be interpreted easily. We evaluate ITransF on two benchmark datasets---WN18 and FB15k for knowledge base completion and obtains improvements on both the mean rank and Hits@10 metrics, over all baselines that do not use additional information. \end{abstract} \section{Introduction} Knowledge bases (KB), such as WordNet~\citep{FellbaumC98}, Freebase~\citep{Bollacker:2008}, YAGO ~\citep{Suchanek:2007} and DBpedia~\citep{LehmannIJJKMHMK15}, are useful resources for many applications such as question answering~\citep{berant-EtAl:2013:EMNLP,yih-EtAl:2015:ACL-IJCNLP,dai-li-xu:2016:P16-1} and information extraction~\citep{mintz-EtAl:2009:ACLIJCNLP}. However, knowledge bases suffer from incompleteness despite their formidable sizes ~\citep{NIPS2013_5028,West:2014:KBC:2566486.2568032}, leading to a number of studies on automatic knowledge base completion (KBC)~\citep{NickelMTG15} or link prediction. The fundamental motivation behind these studies is that there exist some statistical regularities under the intertwined facts stored in the multi-relational knowledge base. By discovering generalizable regularities in known facts, missing ones may be recovered in a faithful way. Due to its excellent generalization capability, distributed representations, a.k.a. embeddings, have been popularized to address the KBC task~\citep{ICML2011Nickel_438,bordes:2011,Bordes2014SME,TransE,NIPS2013_5028,AAAI148531,guu-miller-liang:2015:EMNLP,STransE}. As a seminal work, \citet{TransE} proposes the TransE, which models the statistical regularities with linear translations between entity embeddings operated by a relation embedding. Implicitly, TransE assumes both entity embeddings and relation embeddings dwell in the same vector space, posing an unnecessarily strong prior. To relax this requirement, a variety of models first project the entity embeddings to a relation-dependent space~\citep{Bordes2014SME,ji-EtAl:2015:ACL-IJCNLP,AAAI159571,STransE}, and then model the translation property in the projected space. Typically, these relation-dependent spaces are characterized by the projection matrices unique to each relation. As a benefit, different aspects of the same entity can be temporarily emphasized or depressed as an effect of the projection. For instance, STransE~\citep{STransE} utilizes two projection matrices per relation, one for the head entity and the other for the tail entity. 
\iffalse Of these studies, a number of neural network based techniques have emerged over the years to address the KBC task, among which embedding based models~\citep{ICML2011Nickel_438,bordes:2011,Bordes2014SME,TransE,NIPS2013_5028,AAAI148531,guu-miller-liang:2015:EMNLP,STransE} have stood out for its simplicity and effectiveness. \citet{TransE} proposed the TransE model that associated entities and relations with dense embedding vectors. To better model different aspects of the same entity, a variety of models map the entity embedding to a relation-dependent space~\citep{Bordes2014SME,ji-EtAl:2015:ACL-IJCNLP,AAAI159571,STransE}. For instance, STransE~\citep{STransE} projected the head entity and tail entity to a relation-dependent space by multiplying two relation-specific projection matrices. \fi Despite the superior performance of STransE compared to TransE, it is more prone to the data sparsity problem. Concretely, since the projection spaces are unique to each relation, projection matrices associated with rare relations can only be exposed to very few facts during training, resulting in poor generalization. For common relations, a similar issue exists. Without any restrictions on the number of projection matrices, logically related or conceptually similar relations may have distinct projection spaces, hindering the discovery, sharing, and generalization of statistical regularities. Previously, a line of research makes use of external information such as textual relations from web-scale corpus or node features~\citep{toutanova-EtAl:2015:EMNLP, toutanova-chen:2015:CVSC, nguyen2016neighborhood}, alleviating the sparsity problem. In parallel, recent work has proposed to model regularities beyond local facts by considering multi-relation paths~\citep{garciaduran-bordes-usunier:2015:EMNLP, lin-EtAl:2015:EMNLP1, implicit}. Since the number of paths grows exponentially with its length, as a side effect, path-based models enjoy much more training cases, suffering less from the problem. \iffalse However, for some relations, in practice, there are not enough data to estimate the projection matrices.%(repretition) due to the data sparsity problem in knowledge bases. This led to a vast amount of research on utilizing external information, such as textual relations from web-scale corpus~\citep{toutanova-EtAl:2015:EMNLP, toutanova-chen:2015:CVSC} and relation path~\citep{garciaduran-bordes-usunier:2015:EMNLP, lin-EtAl:2015:EMNLP1, implicit}, to enhance performance. %\FIXME{The performance decrease was not caused by data sparsity?} Unfortunately, such task-specific knowledge is costly to develop, making these models difficult to adapt to new tasks or new domains. \fi In this paper, we propose an interpretable knowledge transfer model (ITransF), which encourages the sharing of statistic regularities between the projection matrices of relations and alleviates the data sparsity problem. At the core of ITransF is a sparse attention mechanism, which learns to compose shared concept matrices into relation-specific projection matrices, leading to a better generalization property. Without any external resources, ITransF improves mean rank and Hits@10 on two benchmark datasets, over all previous approaches of the same kind. In addition, the parameter sharing is clearly indicated by the learned sparse attention vectors, enabling us to interpret how knowledge transfer is carried out. To induce the desired sparsity during optimization, we further introduce a block iterative optimization algorithm. 
In summary, the contributions of this work are: (i) proposing a novel knowledge embedding model which enables knowledge transfer by learning to discover shared regularities; (ii) introducing a learning algorithm to directly optimize a sparse representation from which the knowledge transferring procedure is interpretable; (iii) showing the effectiveness of our model by outperforming baselines on two benchmark datasets for knowledge base completion task. \section{Notation and Previous Models} Let $E$ denote the set of entities and $R$ denote the set of relations. In knowledge base completion, given a training set $P$ of triples $(h, r, t)$ where $h,t\in E$ are the head and tail entities having a relation $r\in R$, e.g., (\textit{Steve Jobs}, \texttt{FounderOf}, \textit{Apple}), we want to predict missing facts such as (\textit{Steve Jobs}, \texttt{Profession}, \textit{Businessperson}). Most of the embedding models for knowledge base completion define an energy function $f_r(h,t)$ according to the fact's plausibility~\citep{bordes:2011,Bordes2014SME,TransE,NIPS2013_5028,AAAI148531,yang-etal-2015, guu-miller-liang:2015:EMNLP,STransE}. The models are learned to minimize energy $f_r(h,t)$ of a plausible triple $(h,r,t)$ and to maximize energy $f_r(h',t')$ of an implausible triple $(h',r,t')$. Motivated by the linear translation phenomenon observed in well trained word embeddings~\citep{mikolov2013distributed}, TransE~\citep{TransE} represents the head entity $h$, the relation $r$ and the tail entity $t$ with vectors $\mathbf{h}, \mathbf{r}$ and $\mathbf{t} \in \mathbb{R}^{n}$ respectively, which were trained so that $\mathbf{h}+\mathbf{r}\approx \mathbf{t}$. They define the energy function as $$f_r(h,t) = \| \mathbf{h} + \mathbf{r} - \mathbf{t} \|_{\ell}$$ where $\ell=1$ or $2$, which means either the $\ell_1$ or the $\ell_2$ norm of the vector $\mathbf{h} + \mathbf{r} - \mathbf{t}$ will be used depending on the performance on the validation set. To better model relation-specific aspects of the same entity, TransR~\citep{AAAI159571} uses projection matrices and projects the head entity and the tail entity to a relation-dependent space. STransE~\citep{STransE} extends TransR by employing different matrices for mapping the head and the tail entity. The energy function is $$f_r(h,t) = \|\mathbf{W}_{r,1}\mathbf{h} + \mathbf{r} - \mathbf{W}_{r,2}\mathbf{t} \|_{\ell}$$ However, not all relations have abundant data to estimate the relation specific matrices as most of the training samples are associated with only a few relations, leading to the data sparsity problem for rare relations. \section{Interpretable Knowledge Transfer} \subsection{Model} As discussed above, a fundamental weakness in TransR and STransE is that they equip each relation with a set of unique projection matrices, which not only introduces more parameters but also hinders knowledge sharing. Intuitively, many relations share some concepts with each other, although they are stored as independent symbols in KB. For example, the relation ``(somebody) won award for (some work)'' and ``(somebody) was nominated for (some work)'' both describe a person's high-quality work which wins an award or a nomination respectively. This phenomenon suggests that one relation actually represents a collection of real-world concepts, and one concept can be shared by several relations. 
Inspired by the existence of such lower-level concepts, instead of defining a unique set of projection matrices for every relation, we can alternatively define a small set of concept projection matrices and then compose them into customized projection matrices. Effectively, the relation-dependent translation space is then reduced to the smaller concept spaces. However, in general, we do not have prior knowledge about what concepts exist out there and how they are composed to form relations. Therefore, in ITransF, we propose to learn this information simultaneously from data, together with all knowledge embeddings. Following this idea, we first present the model details, then discuss the optimization techniques for training. \paragraph{Energy function} Specifically, we stack all the concept projection matrices to a 3-dimensional tensor $\mathbf{D}\in \mathbb{R}^{m \times n \times n}$, where $m$ is the pre-specified number of concept projection matrices and $n$ is the dimensionality of entity embeddings and relation embeddings. We let each relation select the most useful projection matrices from the tensor, where the selection is represented by an attention vector. The energy function of ITransF is defined as: \begin{equation} f_r(h,t) = \| \pmb{\alpha}_{r}^{H} \cdot \mathbf{D} \cdot \mathbf{h} + \mathbf{r} - \pmb{\alpha}_{r}^{T} \cdot \mathbf{D} \cdot \mathbf{t}\|_{\ell} \label{eq:energy function} \end{equation} where $\pmb{\alpha}_{r}^{H},\pmb{\alpha}_{r}^{T} \in [0,1]^m$, satisfying $\sum_i\pmb{\alpha}_{r,i}^{H}=\sum_i\pmb{\alpha}_{r,i}^{T}=1$, are normalized attention vectors used to compose all concept projection matrices in $\mathbf{D}$ by a convex combination. It is obvious that STransE can be expressed as a special case of our model when we use $m=2|R|$ concept matrices and set attention vectors to disjoint one-hot vectors. Hence our model space is a generalization of STransE. Note that we can safely use fewer concept matrices in ITransF and obtain better performance (see section \ref{sec:compress}), though STransE always requires $2|R|$ projection matrices. We follow previous work to minimize the following hinge loss function: \begin{equation} \mathcal{L}=\sum_{\substack{(h,r,t) \sim P,\\ (h',r,t') \sim N}} \left[ \gamma + f_{r}(h,t) -f_{r}(h',t') \right]_+ \label{eq:hinge} \end{equation} where $P$ is the training set consisting of correct triples, $N$ is the distribution of corrupted triples defined in section \ref{sec:sampling}, and $[\cdot]_+ = \max(\cdot, 0)$. Note that we have omitted the dependence of $N$ on $(h,r,t)$ to avoid clutter. We normalize the entity vectors $\mathbf{h},\mathbf{t}$, and the projected entity vectors $\pmb{\alpha}_{r}^{H} \cdot \mathbf{D}\cdot \mathbf{h}$ and $\pmb{\alpha}_{r}^{T} \cdot \mathbf{D} \cdot \mathbf{t}$ to have unit length after each update, which is an effective regularization method that benefits all models. \paragraph{Sparse attention vectors} In Eq.~\eqref{eq:energy function}, we have defined $\pmb{\alpha}_{r}^{H},\pmb{\alpha}_{r}^{T}$ to be some normalized vectors used for composition. With a dense attention vector, it is computationally expensive to perform the convex combination of $m$ matrices in each iteration. Moreover, a relation usually does not consist of all existing concepts in practice. Furthermore, when the attention vectors are sparse, it is often easier to interpret their behaviors and understand how concepts are shared by different relations. 
Motivated by these potential benefits, we further hope to learn sparse attention vectors in ITransF. However, directly posing $\ell_1$ regularization~\citep{tibshirani1996regression} on the attention vectors fails to produce sparse representations in our preliminary experiment, which motivates us to enforce $\ell_0$ constraints on $\pmb{\alpha}_{r}^T,\pmb{\alpha}_{r}^H$. In order to satisfy both the normalization condition and the $\ell_0$ constraints, we reparameterize the attention vectors in the following way: \begin{align*} \pmb{\alpha}_{r}^{H}&=\mathrm{SparseSoftmax}(\mathbf{v}_{r}^{H}, \mathbf{I}_{r}^{H}) \\ \pmb{\alpha}_{r}^{T}&=\mathrm{SparseSoftmax}(\mathbf{v}_{r}^{T}, \mathbf{I}_{r}^{T}) \end{align*} where $\mathbf{v}_{r}^{H}, \mathbf{v}_{r}^{T} \in \mathbb{R}^m$ are the pre-softmax scores, $\mathbf{I}_{r}^{H}, \mathbf{I}_{r}^{T}\in \{0,1\}^{m}$ are the sparse assignment vectors, indicating the non-zero entries of attention vectors, and the $\mathrm{SparseSoftmax}$ is defined as \begin{equation*} \mathrm{SparseSoftmax}(\mathbf{v}, \mathbf{I})_i=\frac{\exp(\mathbf{v}_i / \tau) \mathbf{I}_i}{\sum_j \exp(\mathbf{v}_j / \tau) \mathbf{I}_j} \end{equation*} with $\tau$ being the temperature of Softmax. With this reparameterization, $\mathbf{v}_{r}^{H}, \mathbf{v}_{r}^{T}$ and $\mathbf{I}_{r}^{H}, \mathbf{I}_{r}^{T}$ replace $\pmb{\alpha}_{r}^T,\pmb{\alpha}_{r}^H$ to become the real parameters of the model. Also, note that it is equivalent to pose the $\ell_0$ constraints on $\mathbf{I}_{r}^{H}, \mathbf{I}_{r}^{T}$ instead of $\pmb{\alpha}_{r}^T,\pmb{\alpha}_{r}^H$. Putting these modifications together, we can rewrite the optimization problem as \begin{equation} \begin{aligned} & {\text{minimize}} & & \mathcal{L} \\ & \text{subject to} & & \|\mathbf{I}_{r}^{H}\|_{0} \leq k,\|\mathbf{I}_{r}^{T}\|_{0} \leq k \end{aligned} \label{eq:l0_problem} \end{equation} where $\mathcal{L}$ is the loss function defined in Eq.~\eqref{eq:hinge}. \subsection{Block Iterative Optimization} Though sparseness is favorable in practice, it is generally NP-hard to find the optimal solution under $\ell_0$ constraints. Thus, we resort to an approximated algorithm in this work. For convenience, we refer to the parameters with and without the sparse constraints as the \textit{sparse} partition and the \textit{dense} partition, respectively. Based on this notion, the high-level idea of the approximated algorithm is to iteratively optimize one of the two partitions while holding the other one fixed. Since all parameters in the dense partition, including the embeddings, the projection matrices, and the pre-softmax scores, are fully differentiable with the sparse partition fixed, we can simply utilize SGD to optimize the dense partition. Then, the core difficulty lies in the step of optimizing the sparse partition (i.e. the sparse assignment vectors), during which we want the following two properties to hold \begin{enumerate}[itemsep=-1mm] \item the sparsity required by the $\ell_0$ constaint is maintained, and \item the cost define by Eq.~\eqref{eq:hinge} is decreased. \end{enumerate} Satisfying the two criterion seems to highly resemble the original problem defined in Eq.~\eqref{eq:l0_problem}. However, the dramatic difference here is that with parameters in the dense partition regarded as constant, the cost function is decoupled w.r.t. each relation $r$. In other words, the optimal choice of $\mathbf{I}_{r}^{H}, \mathbf{I}_{r}^{T}$ is independent of $\mathbf{I}_{r'}^{H}, \mathbf{I}_{r'}^{T}$ for any $r' \neq r$. 
\subsection{Block Iterative Optimization} Though sparseness is favorable in practice, it is generally NP-hard to find the optimal solution under $\ell_0$ constraints. Thus, we resort to an approximate algorithm in this work. For convenience, we refer to the parameters with and without the sparse constraints as the \textit{sparse} partition and the \textit{dense} partition, respectively. Based on this notion, the high-level idea of the approximate algorithm is to iteratively optimize one of the two partitions while holding the other one fixed. Since all parameters in the dense partition, including the embeddings, the projection matrices, and the pre-softmax scores, are fully differentiable with the sparse partition fixed, we can simply utilize SGD to optimize the dense partition. The core difficulty thus lies in the step of optimizing the sparse partition (i.e., the sparse assignment vectors), during which we want the following two properties to hold: \begin{enumerate}[itemsep=-1mm] \item the sparsity required by the $\ell_0$ constraint is maintained, and \item the cost defined by Eq.~\eqref{eq:hinge} is decreased. \end{enumerate} Satisfying these two criteria seems to closely resemble the original problem defined in Eq.~\eqref{eq:l0_problem}. However, the crucial difference is that, with the parameters in the dense partition regarded as constant, the cost function is decoupled w.r.t. each relation $r$. In other words, the optimal choice of $\mathbf{I}_{r}^{H}, \mathbf{I}_{r}^{T}$ is independent of $\mathbf{I}_{r'}^{H}, \mathbf{I}_{r'}^{T}$ for any $r' \neq r$.

Therefore, we only need to consider the optimization for a single relation $r$, which is essentially an assignment problem. Note, however, that $\mathbf{I}_{r}^{H}$ and $\mathbf{I}_{r}^{T}$ are still coupled; without this coupling, we would essentially be left with a knapsack-style selection problem. In principle, one can explore combinatorial optimization techniques to optimize $\mathbf{I}_{r}^{H}, \mathbf{I}_{r}^{T}$ jointly, which usually involves some iterative procedure. To avoid adding another inner loop to our algorithm, we turn to a simple but fast approximation method based on the following single-matrix cost.

Specifically, for each relation $r$, we consider the induced cost $\mathcal{L}_{r,i}^{H}$ where only a single projection matrix $i$ is used for the head entity: \begin{equation*} \mathcal{L}_{r,i}^{H} = \sum_{\substack{(h,r,t) \sim P_r,\\ (h',r,t') \sim N_r}} \left[ \gamma + f_{r,i}^{H}(h,t) - f_{r,i}^{H}(h',t') \right]_+ \end{equation*} where $f_{r,i}^{H}(h,t) = \| \mathbf{D}_i \cdot \mathbf{h} + \mathbf{r} - \pmb{\alpha}_{r}^{T} \cdot \mathbf{D} \cdot \mathbf{t}\|$ is the corresponding energy function, and the subscript in $P_r$ and $N_r$ denotes the subsets with relation $r$. Intuitively, $\mathcal{L}_{r,i}^{H}$ measures, given the current tail attention vector $\pmb{\alpha}_{r}^{T}$, how implausible $\mathbf{D}_i$ would be if only one projection matrix could be chosen for the head entity. Hence, $i^* = \arg\min_i \mathcal{L}_{r,i}^{H}$ gives us the best single projection matrix on the head side given $\pmb{\alpha}_{r}^{T}$. Now, in order to choose the best $k$ matrices, we simply ignore the interaction among projection matrices and update $\mathbf{I}_{r}^{H}$ in the following way: \begin{equation*} \mathbf{I}_{r,i}^{H} \leftarrow \begin{cases} 1, &i \in \mathrm{argpartition}_{i}(\mathcal{L}_{r,i}^{H}, k) \\ 0, &\text{otherwise} \end{cases} \end{equation*} where the function $\mathrm{argpartition}_{i}(x_i, k)$ produces the index set of the lowest-$k$ values of $x_i$. Analogously, we can define the single-matrix cost $\mathcal{L}_{r,i}^{T}$ and the energy function $f_{r,i}^{T}(h,t)$ on the tail side in a symmetric way, and the update rule for $\mathbf{I}_{r}^{T}$ follows the same derivation. Admittedly, the approximation described here is relatively crude, but as we will show in Section~\ref{sec:experiments}, the proposed algorithm yields good performance empirically. We leave further improvement of the optimization method as future work.
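A minimal sketch of the head-side update is given below; the pairing of positive and corrupted examples, the shapes, and the margin value are illustrative assumptions rather than the exact implementation. The tail-side update is obtained by swapping the roles of head and tail.

\begin{verbatim}
# Sketch of the head-side sparse-partition update for one relation r, following
# the single-matrix approximation above. Shapes, the pairing of positive and
# negative examples, and the margin are illustrative assumptions.
import numpy as np

def single_matrix_cost(D_i, r_vec, W_T, pos_pairs, neg_pairs, gamma, ell):
    cost = 0.0
    for (h, t), (h_n, t_n) in zip(pos_pairs, neg_pairs):   # aligned pos/neg pairs
        f_pos = np.linalg.norm(D_i @ h + r_vec - W_T @ t, ord=ell)
        f_neg = np.linalg.norm(D_i @ h_n + r_vec - W_T @ t_n, ord=ell)
        cost += max(gamma + f_pos - f_neg, 0.0)            # hinge loss term
    return cost

def update_head_assignment(D, r_vec, alpha_T, pos_pairs, neg_pairs, k,
                           gamma=1.0, ell=1):
    W_T = np.tensordot(alpha_T, D, axes=1)     # tail projection held fixed
    costs = np.array([single_matrix_cost(D_i, r_vec, W_T, pos_pairs, neg_pairs,
                                         gamma, ell) for D_i in D])
    I_new = np.zeros(len(D))
    I_new[np.argpartition(costs, k - 1)[:k]] = 1.0   # keep the k lowest costs
    return I_new
\end{verbatim}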
\subsection{Corrupted Sample Generating Method} \label{sec:sampling} Recall that we need to sample a negative triple $(h',r,t')$ to compute the hinge loss shown in Eq.~\ref{eq:hinge}, given a positive triple $(h,r,t)\in P$. The distribution of negative triples is denoted by $N(h,r,t)$. Previous work~\citep{TransE, AAAI159571, yang-etal-2015,STransE} generally constructs a set of corrupted triples by replacing the head entity or the tail entity with a random entity uniformly sampled from the KB.

However, uniformly sampling corrupted entities may not be optimal. Often, the head and tail entities associated with a relation can only belong to a specific domain. When the corrupted entity comes from another domain, it is very easy for the model to induce a large energy gap between the true triple and the corrupted one. Once the energy gap exceeds $\gamma$, there is no training signal from this corrupted triple. In comparison, if the corrupted entity comes from the same domain, the task becomes harder for the model, leading to a more consistent training signal.

Motivated by this observation, we propose to sample the corrupted head or tail from entities in the same domain with probability $p_r$ and from the whole entity set with probability $1-p_r$. The choice of the relation-dependent probability $p_r$ is specified in Appendix \ref{sec:domain_sampling}. In the rest of the paper, we refer to the proposed sampling method as ``domain sampling''.

\section{Experiments} \label{sec:experiments} \subsection{Setup} To evaluate link prediction, we conduct experiments on the WN18 (WordNet) and FB15k (Freebase) datasets introduced by \citet{TransE} and use the same training/validation/test split as in \citep{TransE}. The statistics of the two datasets are given in Table \ref{tab:datasets}.

In the knowledge base completion task, we evaluate the model's performance in predicting the head entity or the tail entity given the relation and the other entity. For example, to predict the head given relation $r$ and tail $t$ in a triple $(h,r,t)$, we compute the energy $f_r(h', t)$ for every entity $h'$ in the knowledge base and rank all entities according to this energy. We follow \citet{TransE} and report the \emph{filtered} results, i.e., all other correct candidates $h'$ are removed from the ranking. The rank of the correct entity is then obtained, and we report the mean rank (the mean of the predicted ranks) and Hits@10 (top-$10$ accuracy). A lower mean rank and a higher Hits@10 indicate better performance.
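For reference, the following sketch illustrates this filtered ranking protocol for head prediction (the tail side is symmetric); the abstract \texttt{energy\_fn} and the container types are assumptions of the sketch. In practice the ranking is computed with batched matrix operations; the loop form is used only for clarity.

\begin{verbatim}
# Illustrative sketch of the filtered ranking protocol (head prediction only;
# the tail side is symmetric). `energy_fn` and the data structures are assumed.
import numpy as np

def rank_head(h_true, r, t, entities, energy_fn, known_triples):
    e_true = energy_fn(h_true, r, t)
    rank = 1
    for h_cand in entities:
        if h_cand == h_true or (h_cand, r, t) in known_triples:
            continue                      # filter out other correct candidates
        if energy_fn(h_cand, r, t) < e_true:
            rank += 1
    return rank

def evaluate(test_triples, entities, energy_fn, known_triples):
    ranks = [rank_head(h, r, t, entities, energy_fn, known_triples)
             for (h, r, t) in test_triples]
    return float(np.mean(ranks)), float(np.mean([rk <= 10 for rk in ranks]))
\end{verbatim}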
\subsection{Implementation Details} We initialize the projection matrices with identity matrices perturbed by small noise sampled from the normal distribution $\mathcal{N}(0,\,0.005^2)$. The entity and relation vectors of ITransF are initialized by TransE~\citep{TransE}, following~\citet{AAAI159571, ji-EtAl:2015:ACL-IJCNLP, Garcia-DuranBUG15, garciaduran-bordes-usunier:2015:EMNLP, lin-EtAl:2015:EMNLP1}. We run mini-batch SGD until convergence. We employ the ``\textit{Bernoulli}'' sampling method to generate incorrect triples, as used in \citet{AAAI148531}, \citet{AAAI159571}, \citet{He:2015}, \citet{ji-EtAl:2015:ACL-IJCNLP} and \citet{lin-EtAl:2015:EMNLP1}.

STransE~\citep{STransE} is the knowledge embedding model most similar to ours, except that it uses distinct projection matrices for each relation. We use the same hyperparameters as STransE, and no significant improvement was observed when we altered them. We set the margin $\gamma$ to $5$ and the embedding dimension $n$ to $50$ for WN18, and $\gamma = 1, n = 100$ for FB15k. We set the batch size to $20$ for WN18 and $1000$ for FB15k. The learning rate is $0.01$ on WN18 and $0.1$ on FB15k. We use $30$ matrices on WN18 and $300$ matrices on FB15k. All the models are implemented with Theano~\citep{bergstra2010theano}. The Softmax temperature is set to $1/4$.

\subsection{Results \& Analysis} The overall link prediction results\footnote{Note that although IRN~\citep{implicit} does not explicitly exploit path information, it performs multi-step inference through multiple usages of external memory. When IRN is allowed to access memory once per prediction, its Hits@10 is $80.7$, similar to models without path information.} are reported in Table \ref{tab:main}. Our model consistently outperforms previous models without external information on both metrics on WN18 and FB15k. On WN18, we even achieve a much better mean rank, with comparable Hits@10, than the current state-of-the-art model IRN, which employs external information.

We can see that path information is very helpful on FB15k, and models taking advantage of path information outperform intrinsic models by a significant margin. Indeed, a lot of facts are easier to recover with the help of multi-step inference. For example, if we know that Barack Obama was born in Honolulu, a city in the United States, then we can easily infer that Obama's nationality is the United States. A straightforward way of extending our proposed model to a $k$-step path $P=\{r_i\}_{i=1}^{k}$ is to define a path energy function $\| \pmb{\alpha}_{P}^{H} \cdot \mathbf{D} \cdot \mathbf{h} + \sum_{r_i\in P}\mathbf{r}_i - \pmb{\alpha}_{P}^{T} \cdot \mathbf{D} \cdot \mathbf{t}\|_{\ell}$, where $\pmb{\alpha}_{P}^{H}$ and $\pmb{\alpha}_{P}^{T}$ are concept associations specific to the path. We plan to extend our model to multi-step paths in future work. To provide a more detailed understanding of why the proposed model achieves better performance, we present further analysis in the sequel.
\paragraph{Performance on Rare Relations} In the proposed ITransF, we design an attention mechanism to encourage knowledge sharing across different relations. Naturally, facts associated with rare relations should benefit most from such sharing, boosting the overall performance. To verify this hypothesis, we investigate our model's performance on relations with different frequencies. The overall distribution of relation frequencies resembles that of word frequencies and follows Zipf's law: since relation frequencies approximately follow a power-law distribution, their log frequencies decay roughly linearly with log rank. The statistics of relations on FB15k and WN18 are shown in Figure \ref{fig:stat}, where we can clearly see that the distributions exhibit long tails.

In order to study the performance on relations with different frequencies, we sort all relations by their frequency in the training set and split them evenly into 3 buckets, so that each bucket covers a similar interval of log frequency. Within each bucket, we compare our model with STransE, as shown in Figure \ref{fig:rare}.\footnote{Domain sampling is not employed.} As we can see, on WN18, ITransF outperforms STransE by a significant margin on rare relations. In particular, in the last bin (the rarest relations), the average Hits@10 increases from $74.4$ to $92.0$, showing the great benefit of transferring statistical strength from common relations to rare ones. The per-relation comparison is shown in Appendix \ref{sec:rare_WN}, where we can observe that the improvement is concentrated on rare relations. On FB15k, we observe a similar pattern, although the degree of improvement is less significant. We conjecture that the difference is rooted in the fact that many rare relations on FB15k have disjoint domains, which makes knowledge transfer through common concepts harder.

\paragraph{Interpretability} In addition to the quantitative evidence supporting the effectiveness of knowledge sharing, we provide some intuitive examples to show how knowledge is shared in our model. As mentioned earlier, the sparse attention vectors fully capture the association between relations and concepts, and hence the knowledge transfer among relations. Thus, we visualize the attention vectors for several relations on both WN18 and FB15k in Figure \ref{fig:att}.

For WN18, the words ``hyponym'' and ``hypernym'' refer to words with a more specific or a more general meaning, respectively. For example, PhD is a hyponym of student and student is a hypernym of PhD. As we can see, concepts associated with the head entities in one relation are also associated with the tail entities in its reverse relation. Further, ``instance\_hypernym'' is a special hypernym with the head entity being an instance and the tail entity being an abstract notion. A typical example is $(\textit{New York}, \texttt{instance\_hypernym}, \textit{city})$. This connection has also been discovered by our model, as indicated by the fact that ``instance\_hypernym(T)'' and ``hypernym(T)'' share a common concept matrix. Finally, for symmetric relations like ``similar\_to'', we see that the head attention is identical to the tail attention, which matches our intuition well.

On FB15k, we also see sharing between reverse relations, as in ``(somebody) won\_award\_for (some work)'' and ``(some work) award\_winning\_work (somebody)''.
Moreover, although the relations ``won\_award\_for'' and ``was\_nominated\_for'' share the same concepts, their attention distributions are different, suggesting distinct emphasis. Finally, symmetric relations like spouse behave as described before.

\paragraph{Model Compression} \label{sec:compress} A byproduct of the parameter-sharing mechanism employed by ITransF is a much more compact model with comparable performance. Figure \ref{fig:num_of_matrix} plots the average performance of ITransF against the number of projection matrices $m$, together with two baseline models. On FB15k, when we reduce the number of matrices from $2200$ to $30$ ($\sim90\times$ compression), our model's performance decreases by only $0.09\%$ on Hits@10, still outperforming STransE. Similarly, on WN18, ITransF continues to achieve the best performance when we reduce the number of concept projection matrices to $18$.

\section{Analysis on Sparseness} Sparseness is desirable since it contributes to the interpretability and computational efficiency of our model. In this section, we investigate whether enforcing sparseness deteriorates the model performance, and we compare our method with another sparse encoding method.

\paragraph{Dense Attention with and without $\ell_1$ Regularization} Although an $\ell_0$-constrained model usually enjoys many practical advantages, it may deteriorate the model performance when applied improperly. Here, we show that our model with sparse attention achieves results similar to dense attention at a significantly lower computational cost. We also compare dense attention with $\ell_1$ regularization. We set the $\ell_1$ coefficient to $0.001$ in our experiments and do not apply Softmax, since the $\ell_1$ norm of a vector after Softmax is always $1$. We compare the models in a setting where the computation time of the dense attention model is acceptable\footnote{With $300$ projection matrices, it takes 1h1m to run one epoch for a model with dense attention.}. We use $22$ weight matrices on WN18 and $15$ weight matrices on FB15k and train both models for $2000$ epochs.

The results are reported in Table \ref{tab:dense}. Generally, ITransF with sparse attention has slightly better or comparable performance compared to dense attention. Further, we show the attention vectors of the model with $\ell_1$-regularized dense attention in Figure \ref{fig:att_l1}. We see that $\ell_1$ regularization does not produce sparse attention, especially on FB15k.

\paragraph{Nonnegative Sparse Encoding} In the proposed model, we induce sparsity through a carefully designed iterative optimization procedure. Apart from this approach, one may utilize sparse encoding techniques to obtain sparseness based on the pretrained projection matrices from STransE. Concretely, stacking the $2|R|$ pretrained projection matrices into a 3-dimensional tensor $X\in \mathbb{R}^{2|R| \times n \times n}$, similar sparsity can be induced by solving the $\ell_1$-regularized decomposition problem $\min_{\mathbf{A},\mathbf{D}} ||\mathbf{X}-\mathbf{DA}||_2^2 + \lambda \|\mathbf{A}\|_{\ell_1}$. Basically, $\mathbf{A}$ plays the same role as the attention vectors in our model. For more details, we refer readers to \citet{faruqui-EtAl:2015:ACL-IJCNLP}. For completeness, we compare our model with this approach\footnote{We use the toolkit provided by \citet{faruqui-EtAl:2015:ACL-IJCNLP}.}. The comparison is summarized in Table \ref{tab:optimization}.
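As a rough illustration of this alternative, the sketch below alternates a projected proximal step on the nonnegative codes $\mathbf{A}$ with a least-squares refit of the basis $\mathbf{D}$ for the objective above. It is not the toolkit of \citet{faruqui-EtAl:2015:ACL-IJCNLP}; the step size, penalty and iteration count are placeholders.

\begin{verbatim}
# Rough sketch of the sparse-coding alternative above: decompose the stacked,
# flattened pretrained matrices X into a basis D and nonnegative sparse codes A
# via projected ISTA / least squares. Hyperparameters are placeholders.
import numpy as np

def sparse_encode(X, m, lam=0.1, eta=1e-3, iters=500, seed=0):
    rng = np.random.default_rng(seed)
    num_rel, n2 = X.shape                 # X: (2|R|, n*n) flattened matrices
    A = np.abs(rng.standard_normal((num_rel, m))) * 0.01   # sparse codes
    D = rng.standard_normal((m, n2))                       # concept basis
    for _ in range(iters):
        # proximal gradient step on A for ||X - A D||_F^2 + lam*||A||_1, A >= 0
        grad = (A @ D - X) @ D.T
        A = np.maximum(A - eta * grad - eta * lam, 0.0)
        # ridge-regularized least-squares refit of D with A fixed
        D = np.linalg.solve(A.T @ A + 1e-6 * np.eye(m), A.T @ X)
    return A, D   # each row of A plays the role of an attention vector
\end{verbatim}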
On both benchmarks, ITransF achieves a significant improvement over sparse encoding of the pretrained matrices. This performance gap is to be expected, since the objective of sparse encoding is to minimize the reconstruction loss rather than to optimize the criterion for link prediction.

\section{Related Work} \label{sec:related_work} In KBC, CTransR~\citep{AAAI159571} enables relation embedding sharing across similar relations, but it clusters relations before training rather than learning the clustering in a principled way. Further, it does not solve the data sparsity problem because there is no sharing of the projection matrices, which account for far more parameters. Learning the association between semantic relations has been used in related problems such as relational similarity measurement~\citep{turney2012domain} and relation adaptation~\citep{bollegala2015embedding}.

Data sparsity is a common problem in many fields. Transfer learning~\citep{pan2010survey} has shown promise for transferring knowledge and statistical strength across similar models or languages. For example, \citet{D16-1153} transfers models from resource-rich languages to low-resource languages by sharing parameters through common phonological features in named entity recognition. \citet{zoph-EtAl:2016:EMNLP2016} initialize low-resource translation models from models trained on resource-rich languages.

Several works on obtaining sparse attention~\citep{martins2016softmax, makhzani2013k,OUTRAGEOUSLY} share the similar idea of sorting the values before softmax and only keeping the $K$ largest values. However, the sorting operation in these works is not GPU-friendly.

The block iterative optimization algorithm in our work is inspired by LightRNN~\citep{LightRNN}, which allocates every word in the vocabulary to a position in a table. A word is represented by a row vector and a column vector depending on its position in the table, and the embeddings and the allocation of words in the table are optimized iteratively.

\section{Conclusion and Future Work} In summary, we propose a knowledge embedding model which can discover shared hidden concepts, and we design a learning algorithm to induce an interpretable sparse representation. Empirically, we show that our model improves performance on two benchmark datasets, without external resources, over all previous models of the same kind. In the future, we plan to enable ITransF to perform multi-step inference and to extend the sharing mechanism to entity and relation embeddings, further enhancing the statistical binding across parameters.
In addition, our framework can also be applied to multi-task learning, promoting finer-grained sharing among different tasks.

\section*{Acknowledgments} We thank the anonymous reviewers and Graham Neubig for valuable comments. We thank Yulun Du, Paul Mitchell, Abhilasha Ravichander, Pengcheng Yin and Chunting Zhou for suggestions on the draft. We are also appreciative of the great working environment provided by the staff of LTI. This research was supported in part by DARPA grant FA8750-12-2-0342 funded under the DEFT program.

\bibliographystyle{acl_natbib}

\clearpage \appendix \section{Appendix}

\subsection{Domain Sampling Probability} \label{sec:domain_sampling} In this section, we define the probability $p_r$ of generating a negative sample from the same domain, as mentioned in Section \ref{sec:sampling}. This probability cannot be too high; otherwise we would risk generating negative samples that are actually correct, since a lot of facts are generally missing from KBs.

Specifically, let $\mathrm{M}^H_r=\{h \mid \exists t\, (h,r,t) \in P\}$ and $\mathrm{M}^T_r=\{t \mid \exists h\, (h,r,t) \in P\}$ denote the head and tail domains of relation $r$, and let $N_r=\{(h,r,t) \in P\}$ be the induced set of edges with relation $r$. We define the probability $p_r$ as \begin{equation} p_r=\min\Big(\frac{\lambda|\mathrm{M}^T_r| |\mathrm{M}^H_r|}{|N_r|}, 0.5\Big) \label{eq:domain_sampling} \end{equation}

The motivation for this formulation is as follows. Suppose $O_r$ is the set of all truthful fact triples for relation $r$, i.e., all triples in the training set together with all missing correct triples. If we assume that all fact triples within the domain have a uniform probability of being true, the probability of a random in-domain triple being correct is $\Pr((h,r,t)\in O_r \mid h\in \mathrm{M}^H_r, t \in \mathrm{M}^T_r) = \frac{|O_r|}{|\mathrm{M}^H_r||\mathrm{M}^T_r|}$. Assume further that each true fact is observed in the training set with probability $\lambda$; then $|N_r|=\lambda|O_r|$ and the above probability can be approximated by $\frac{|N_r|}{\lambda|\mathrm{M}^H_r||\mathrm{M}^T_r|}$. We want the probability of generating a negative sample from the domain to be inversely proportional to the probability of the sample being true, so we define $p_r$ as in Eq.~\eqref{eq:domain_sampling}. The results in Section \ref{sec:experiments} are obtained with $\lambda$ set to $0.001$.

We compare how different values of $\lambda$ influence our model's performance in Table~\ref{tab:domain_sampling}. With a larger $\lambda$, and hence a higher domain sampling probability, our model's Hits@10 increases, but the mean rank also increases. The rise in mean rank is due to the higher probability of generating a valid triple as a negative sample, which pushes up the energy of valid triples and leads to a higher overall rank for the correct entity. However, the reasoning capability is boosted, as reflected by the higher Hits@10 shown in the table.

\subsection{Performance on individual relations of WN18} \label{sec:rare_WN} We plot the performance of ITransF and STransE on each relation. We see that the improvement is greater on rare relations.
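For completeness, the sketch below ties together the domain sampling procedure of Section \ref{sec:sampling} and the probability $p_r$ of Eq.~\eqref{eq:domain_sampling}. The data structures and the uniform choice of which side to corrupt are simplifying assumptions; the experiments use the ``\textit{Bernoulli}'' strategy to decide whether to corrupt the head or the tail.

\begin{verbatim}
# Illustrative sketch of domain sampling; the data structures and the uniform
# head/tail choice are simplifying assumptions (the experiments use the
# "Bernoulli" strategy to pick which side to corrupt).
import random

def domain_probability(head_domain, tail_domain, triples_r, lam=0.001):
    # p_r = min(lam * |M_r^H| * |M_r^T| / |N_r|, 0.5), cf. Eq. (domain_sampling)
    return min(lam * len(head_domain) * len(tail_domain) / len(triples_r), 0.5)

def corrupt(h, r, t, head_domain, tail_domain, all_entities, p_r):
    corrupt_head = random.random() < 0.5
    in_domain = random.random() < p_r
    if corrupt_head:
        pool = head_domain if in_domain else all_entities
        return (random.choice(list(pool)), r, t)
    pool = tail_domain if in_domain else all_entities
    return (h, r, random.choice(list(pool)))
\end{verbatim}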
\end{document}
For instance, STransE~\citep{STransE} utilizes two projection matrices per relation, one for the head entity and the other for the tail entity. \iffalse Of these studies, a number of neural network based techniques have emerged over the years to address the KBC task, among which embedding based models~\citep{ICML2011Nickel_438,bordes:2011,Bordes2014SME,TransE,NIPS2013_5028,AAAI148531,guu-miller-liang:2015:EMNLP,STransE} have stood out for its simplicity and effectiveness. \citet{TransE} proposed the TransE model that associated entities and relations with dense embedding vectors. To better model different aspects of the same entity, a variety of models map the entity embedding to a relation-dependent space~\citep{Bordes2014SME,ji-EtAl:2015:ACL-IJCNLP,AAAI159571,STransE}. For instance, STransE~\citep{STransE} projected the head entity and tail entity to a relation-dependent space by multiplying two relation-specific projection matrices. \fi Despite the superior performance of STransE compared to TransE, it is more prone to the data sparsity problem. Concretely, since the projection spaces are unique to each relation, projection matrices associated with rare relations can only be exposed to very few facts during training, resulting in poor generalization. For common relations, a similar issue exists. Without any restrictions on the number of projection matrices, logically related or conceptually similar relations may have distinct projection spaces, hindering the discovery, sharing, and generalization of statistical regularities. Previously, a line of research makes use of external information such as textual relations from web-scale corpus or node features~\citep{toutanova-EtAl:2015:EMNLP, toutanova-chen:2015:CVSC, nguyen2016neighborhood}, alleviating the sparsity problem. In parallel, recent work has proposed to model regularities beyond local facts by considering multi-relation paths~\citep{garciaduran-bordes-usunier:2015:EMNLP, lin-EtAl:2015:EMNLP1, implicit}. Since the number of paths grows exponentially with its length, as a side effect, path-based models enjoy much more training cases, suffering less from the problem. \iffalse However, for some relations, in practice, there are not enough data to estimate the projection matrices.%(repretition) due to the data sparsity problem in knowledge bases. This led to a vast amount of research on utilizing external information, such as textual relations from web-scale corpus~\citep{toutanova-EtAl:2015:EMNLP, toutanova-chen:2015:CVSC} and relation path~\citep{garciaduran-bordes-usunier:2015:EMNLP, lin-EtAl:2015:EMNLP1, implicit}, to enhance performance. %\FIXME{The performance decrease was not caused by data sparsity?} Unfortunately, such task-specific knowledge is costly to develop, making these models difficult to adapt to new tasks or new domains. \fi In this paper, we propose an interpretable knowledge transfer model (ITransF), which encourages the sharing of statistic regularities between the projection matrices of relations and alleviates the data sparsity problem. At the core of ITransF is a sparse attention mechanism, which learns to compose shared concept matrices into relation-specific projection matrices, leading to a better generalization property. Without any external resources, ITransF improves mean rank and Hits@10 on two benchmark datasets, over all previous approaches of the same kind. 
In addition, the parameter sharing is clearly indicated by the learned sparse attention vectors, enabling us to interpret how knowledge transfer is carried out. To induce the desired sparsity during optimization, we further introduce a block iterative optimization algorithm. In summary, the contributions of this work are: (i) proposing a novel knowledge embedding model which enables knowledge transfer by learning to discover shared regularities; (ii) introducing a learning algorithm to directly optimize a sparse representation from which the knowledge transferring procedure is interpretable; (iii) showing the effectiveness of our model by outperforming baselines on two benchmark datasets for knowledge base completion task. \section{Notation and Previous Models} Let $E$ denote the set of entities and $R$ denote the set of relations. In knowledge base completion, given a training set $P$ of triples $(h, r, t)$ where $h,t\in E$ are the head and tail entities having a relation $r\in R$, e.g., (\textit{Steve Jobs}, \texttt{FounderOf}, \textit{Apple}), we want to predict missing facts such as (\textit{Steve Jobs}, \texttt{Profession}, \textit{Businessperson}). Most of the embedding models for knowledge base completion define an energy function $f_r(h,t)$ according to the fact's plausibility~\citep{bordes:2011,Bordes2014SME,TransE,NIPS2013_5028,AAAI148531,yang-etal-2015, guu-miller-liang:2015:EMNLP,STransE}. The models are learned to minimize energy $f_r(h,t)$ of a plausible triple $(h,r,t)$ and to maximize energy $f_r(h',t')$ of an implausible triple $(h',r,t')$. Motivated by the linear translation phenomenon observed in well trained word embeddings~\citep{mikolov2013distributed}, TransE~\citep{TransE} represents the head entity $h$, the relation $r$ and the tail entity $t$ with vectors $\mathbf{h}, \mathbf{r}$ and $\mathbf{t} \in \mathbb{R}^{n}$ respectively, which were trained so that $\mathbf{h}+\mathbf{r}\approx \mathbf{t}$. They define the energy function as $$f_r(h,t) = \| \mathbf{h} + \mathbf{r} - \mathbf{t} \|_{\ell}$$ where $\ell=1$ or $2$, which means either the $\ell_1$ or the $\ell_2$ norm of the vector $\mathbf{h} + \mathbf{r} - \mathbf{t}$ will be used depending on the performance on the validation set. To better model relation-specific aspects of the same entity, TransR~\citep{AAAI159571} uses projection matrices and projects the head entity and the tail entity to a relation-dependent space. STransE~\citep{STransE} extends TransR by employing different matrices for mapping the head and the tail entity. The energy function is $$f_r(h,t) = \|\mathbf{W}_{r,1}\mathbf{h} + \mathbf{r} - \mathbf{W}_{r,2}\mathbf{t} \|_{\ell}$$ However, not all relations have abundant data to estimate the relation specific matrices as most of the training samples are associated with only a few relations, leading to the data sparsity problem for rare relations. \section{Interpretable Knowledge Transfer} \subsection{Model} As discussed above, a fundamental weakness in TransR and STransE is that they equip each relation with a set of unique projection matrices, which not only introduces more parameters but also hinders knowledge sharing. Intuitively, many relations share some concepts with each other, although they are stored as independent symbols in KB. For example, the relation ``(somebody) won award for (some work)'' and ``(somebody) was nominated for (some work)'' both describe a person's high-quality work which wins an award or a nomination respectively. 
This phenomenon suggests that one relation actually represents a collection of real-world concepts, and one concept can be shared by several relations. Inspired by the existence of such lower-level concepts, instead of defining a unique set of projection matrices for every relation, we can alternatively define a small set of concept projection matrices and then compose them into customized projection matrices. Effectively, the relation-dependent translation space is then reduced to the smaller concept spaces. However, in general, we do not have prior knowledge about what concepts exist out there and how they are composed to form relations. Therefore, in ITransF, we propose to learn this information simultaneously from data, together with all knowledge embeddings. Following this idea, we first present the model details, then discuss the optimization techniques for training. \paragraph{Energy function} Specifically, we stack all the concept projection matrices to a 3-dimensional tensor $\mathbf{D}\in \mathbb{R}^{m \times n \times n}$, where $m$ is the pre-specified number of concept projection matrices and $n$ is the dimensionality of entity embeddings and relation embeddings. We let each relation select the most useful projection matrices from the tensor, where the selection is represented by an attention vector. The energy function of ITransF is defined as: \begin{equation} f_r(h,t) = \| \pmb{\alpha}_{r}^{H} \cdot \mathbf{D} \cdot \mathbf{h} + \mathbf{r} - \pmb{\alpha}_{r}^{T} \cdot \mathbf{D} \cdot \mathbf{t}\|_{\ell} \label{eq:energy function} \end{equation} where $\pmb{\alpha}_{r}^{H},\pmb{\alpha}_{r}^{T} \in [0,1]^m$, satisfying $\sum_i\pmb{\alpha}_{r,i}^{H}=\sum_i\pmb{\alpha}_{r,i}^{T}=1$, are normalized attention vectors used to compose all concept projection matrices in $\mathbf{D}$ by a convex combination. It is obvious that STransE can be expressed as a special case of our model when we use $m=2|R|$ concept matrices and set attention vectors to disjoint one-hot vectors. Hence our model space is a generalization of STransE. Note that we can safely use fewer concept matrices in ITransF and obtain better performance (see section \ref{sec:compress}), though STransE always requires $2|R|$ projection matrices. We follow previous work to minimize the following hinge loss function: \begin{equation} \mathcal{L}=\sum_{\substack{(h,r,t) \sim P,\\ (h',r,t') \sim N}} \left[ \gamma + f_{r}(h,t) -f_{r}(h',t') \right]_+ \label{eq:hinge} \end{equation} where $P$ is the training set consisting of correct triples, $N$ is the distribution of corrupted triples defined in section \ref{sec:sampling}, and $[\cdot]_+ = \max(\cdot, 0)$. Note that we have omitted the dependence of $N$ on $(h,r,t)$ to avoid clutter. We normalize the entity vectors $\mathbf{h},\mathbf{t}$, and the projected entity vectors $\pmb{\alpha}_{r}^{H} \cdot \mathbf{D}\cdot \mathbf{h}$ and $\pmb{\alpha}_{r}^{T} \cdot \mathbf{D} \cdot \mathbf{t}$ to have unit length after each update, which is an effective regularization method that benefits all models. \paragraph{Sparse attention vectors} In Eq.~\eqref{eq:energy function}, we have defined $\pmb{\alpha}_{r}^{H},\pmb{\alpha}_{r}^{T}$ to be some normalized vectors used for composition. With a dense attention vector, it is computationally expensive to perform the convex combination of $m$ matrices in each iteration. Moreover, a relation usually does not consist of all existing concepts in practice. 
Furthermore, when the attention vectors are sparse, it is often easier to interpret their behaviors and understand how concepts are shared by different relations. Motivated by these potential benefits, we further hope to learn sparse attention vectors in ITransF. However, directly posing $\ell_1$ regularization~\citep{tibshirani1996regression} on the attention vectors fails to produce sparse representations in our preliminary experiment, which motivates us to enforce $\ell_0$ constraints on $\pmb{\alpha}_{r}^T,\pmb{\alpha}_{r}^H$. In order to satisfy both the normalization condition and the $\ell_0$ constraints, we reparameterize the attention vectors in the following way: \begin{align*} \pmb{\alpha}_{r}^{H}&=\mathrm{SparseSoftmax}(\mathbf{v}_{r}^{H}, \mathbf{I}_{r}^{H}) \\ \pmb{\alpha}_{r}^{T}&=\mathrm{SparseSoftmax}(\mathbf{v}_{r}^{T}, \mathbf{I}_{r}^{T}) \end{align*} where $\mathbf{v}_{r}^{H}, \mathbf{v}_{r}^{T} \in \mathbb{R}^m$ are the pre-softmax scores, $\mathbf{I}_{r}^{H}, \mathbf{I}_{r}^{T}\in \{0,1\}^{m}$ are the sparse assignment vectors, indicating the non-zero entries of attention vectors, and the $\mathrm{SparseSoftmax}$ is defined as \begin{equation*} \mathrm{SparseSoftmax}(\mathbf{v}, \mathbf{I})_i=\frac{\exp(\mathbf{v}_i / \tau) \mathbf{I}_i}{\sum_j \exp(\mathbf{v}_j / \tau) \mathbf{I}_j} \end{equation*} with $\tau$ being the temperature of Softmax. With this reparameterization, $\mathbf{v}_{r}^{H}, \mathbf{v}_{r}^{T}$ and $\mathbf{I}_{r}^{H}, \mathbf{I}_{r}^{T}$ replace $\pmb{\alpha}_{r}^T,\pmb{\alpha}_{r}^H$ to become the real parameters of the model. Also, note that it is equivalent to pose the $\ell_0$ constraints on $\mathbf{I}_{r}^{H}, \mathbf{I}_{r}^{T}$ instead of $\pmb{\alpha}_{r}^T,\pmb{\alpha}_{r}^H$. Putting these modifications together, we can rewrite the optimization problem as \begin{equation} \begin{aligned} & {\text{minimize}} & & \mathcal{L} \\ & \text{subject to} & & \|\mathbf{I}_{r}^{H}\|_{0} \leq k,\|\mathbf{I}_{r}^{T}\|_{0} \leq k \end{aligned} \label{eq:l0_problem} \end{equation} where $\mathcal{L}$ is the loss function defined in Eq.~\eqref{eq:hinge}. \subsection{Block Iterative Optimization} Though sparseness is favorable in practice, it is generally NP-hard to find the optimal solution under $\ell_0$ constraints. Thus, we resort to an approximated algorithm in this work. For convenience, we refer to the parameters with and without the sparse constraints as the \textit{sparse} partition and the \textit{dense} partition, respectively. Based on this notion, the high-level idea of the approximated algorithm is to iteratively optimize one of the two partitions while holding the other one fixed. Since all parameters in the dense partition, including the embeddings, the projection matrices, and the pre-softmax scores, are fully differentiable with the sparse partition fixed, we can simply utilize SGD to optimize the dense partition. Then, the core difficulty lies in the step of optimizing the sparse partition (i.e. the sparse assignment vectors), during which we want the following two properties to hold \begin{enumerate}[itemsep=-1mm] \item the sparsity required by the $\ell_0$ constaint is maintained, and \item the cost define by Eq.~\eqref{eq:hinge} is decreased. \end{enumerate} Satisfying the two criterion seems to highly resemble the original problem defined in Eq.~\eqref{eq:l0_problem}. However, the dramatic difference here is that with parameters in the dense partition regarded as constant, the cost function is decoupled w.r.t. each relation $r$. 
In other words, the optimal choice of $\mathbf{I}_{r}^{H}, \mathbf{I}_{r}^{T}$ is independent of $\mathbf{I}_{r'}^{H}, \mathbf{I}_{r'}^{T}$ for any $r' \neq r$. Therefore, we only need to consider the optimization for a single relation $r$, which is essentially an assignment problem. Note that, however, $\mathbf{I}_{r}^{H}$ and $\mathbf{I}_{r}^{T}$ are still coupled, without which we basically reach the situation in a backpack problem. In principle, one can explore combinatorial optimization techniques to optimize $\mathbf{I}_{r'}^{H}, \mathbf{I}_{r'}^{T}$ jointly, which usually involve some iterative procedure. To avoid adding another inner loop to our algorithm, we turn to a simple but fast approximation method based on the following single-matrix cost. Specifically, for each relation $r$, we consider the induced cost $\mathcal{L}_{r,i}^{H}$ where only a single projection matrix $i$ is used for the head entity: \begin{equation*} \mathcal{L}_{r,i}^{H} = \sum_{\substack{(h,r,t) \sim P_r,\\ (h',r,t') \sim N_r}} \left[ \gamma + f_{r,i}^{H}(h,t) - f_{r,i}^{H}(h',t') \right]_+ \end{equation*} where $f_{r,i}^{H}(h,t) = \| \mathbf{D}_i \cdot \mathbf{h} + \mathbf{r} - \pmb{\alpha}_{r}^{T} \cdot \mathbf{D} \cdot \mathbf{t}\|$ is the corresponding energy function, and the subscript in $P_r$ and $N_r$ denotes the subsets with relation $r$. Intuitively, $\mathcal{L}_{r,i}^{H}$ measures, given the current tail attention vector $\pmb{\alpha}_{r}^{T}$, if only one project matrix could be chosen for the head entity, how implausible $D_i$ would be. Hence, $i^* = \arg\min_i \mathcal{L}_{r,i}^{H}$ gives us the best single projection matrix on the head side given $\pmb{\alpha}_{r}^{T}$. Now, in order to choose the best $k$ matrices, we basically ignore the interaction among projection matrices, and update $\mathbf{I}_{r}^{H}$ in the following way: \begin{equation*} \mathbf{I}_{r,i}^{H} \leftarrow \begin{cases} 1, &i \in \mathrm{argpartition}_{i}(\mathcal{L}_{r,i}^{H}, k) \\ 0, &\text{otherwise} \end{cases}%, \, \forall i \end{equation*} where the function $\mathrm{argpartition}_{i}(x_i, k)$ produces the index set of the lowest-$k$ values of $x_i$. Analogously, we can define the single-matrix cost $\mathcal{L}_{r,i}^{T}$ and the energy function $f_{r,i}^{T}(h,t)$ on the tail side in a symmetric way. Then, the update rule for $\mathbf{I}_{r}^{H}$ follows the same derivation. Admittedly, the approximation described here is relatively crude. But as we will show in section \ref{sec:experiments}, the proposed algorithm yields good performance empirically. We leave the further improvement of the optimization method as future work. \iffalse Though sparseness is favorable in practice, even in linear regression, it has been shown to be an NP-hard problem to find the optimal solutions under $\ell_0$ constraints. %A lot of algorithms such as Approximated algorithms such as forward stepwise algorithm are proposed. Here we propose an approximated algorithm to solve it. We divide all of our parameters into two partitions: differentiable and non-differentiable, and we iteratively optimize those two. Differentiable parameters such as embeddings, projection matrices are optimized by SGD. Non-differentiable parameters are optimized by a greedy approximated process, aiming to minimize the cost function. Recall that we want the number of concepts associated with relations to be less than or equal to $k$, indicated by the $\ell_0$ constraint of attention vectors. 
We represent the mapping between relation $r$ and concepts by two indicator vectors $\mathbf{I}_{r}^{H}, \mathbf{I}_{r}^{T}\in \{-\infty ,1\}^{m}$, the indicator of non-zero entries in attention vectors. The $\ell_0$ constraint is guaranteed as the number of $1$s in $\mathbf{I}_{r}^{H},\mathbf{I}_{r}^{T}$ is less than or equal to $k$. Those mapping vectors constitutes the non-differentiable parameters in our framework. Formally, the attention vectors are calculated as follows: $$\pmb{\alpha}_{r}^{H}=\mathrm{Softmax}(\mathbf{v}_{r}^{H} \circ \mathbf{I}_{r}^{H})$$ $$\pmb{\alpha}_{r}^{T}=\mathrm{Softmax}(\mathbf{v}_{r}^{T} \circ \mathbf{I}_{r}^{T})$$ \begin{equation*} \mathrm{Softmax}(\mathbf{x})_i=\frac{\exp(\mathbf{x}_i / \tau)}{\sum_j \exp(\mathbf{x}_j / \tau)} \end{equation*} where $\mathbf{v}_{r}^{H}, \mathbf{v}_{r}^{T} \in (0, \infty)^m$ are the parameters for attention, $\circ$ is element wise multiplication, $\tau$ is the temperature of softmax ($\tau$ is set to $1/4$ in our experiments). Then there are only $k$ non-zero elements in $\pmb{\alpha}_{r}^{H}$ and $\pmb{\alpha}_{r}^{T}$ since $exp(-\infty)=0$. The algorithm is: \begin{itemize} \item[(1)] Randomly initialize mapping vectors $\mathbf{I}_{r}^{H}, \mathbf{I}_{r}^{T}$, which is not differentiable. \item[(2)] Optimize differentiable parameters by SGD with fixed mapping vectors for several epochs. \item[(3)] If it is the early stage of training, fix the differentiable parameters learned in the previous step and optimize mapping vectors $\mathbf{I}_{r}^{H}, \mathbf{I}_{r}^{T}$ to minimize objective function. Go to step (2) if the model has not converged. \end{itemize} How do we optimize mapping vectors? A brute-force algorithm is to enumerate all possible values of mapping vectors. However, the time complexity of such a naive algorithm grows exponentially. Instead, we make several approximations to optimize the mapping vectors greedily. We define $J_{r,i}^{H}$ as an approximation to the cost $L$ when relation $r$ is mapped to concept $i$, i.e., $\textbf{I}_{r,i}^{H}=1$. We select the top $k$ concepts with smallest approximated cost when we optimize mapping vectors. The approximated cost takes the same hinge loss as the original cost function shown in Equation \ref{eq:hinge} with a concept specific energy function $f_{r,i}^{H}(h,t)$: \begin{equation*} J_{r,i}^{H}=\sum_{\substack{(h,r,t) \in P, \\ (h',r,t') \sim N(h,r,t)}}\max(\gamma + f_{r,i}^{H}(h,t) -f_{r,i}^{H}(h',t'), 0) \end{equation*} where the energy function is similar to the original function defined in Equation \ref{eq:energy function} except that relation $r$ is completely assigned with concept $i$: \begin{align*} f_{r,i}^{H}(h,t) &= \| \mathbf{D}_i \cdot \mathbf{h} + \mathbf{r} - \pmb{\alpha}_{r}^{T} \cdot \mathbf{D} \cdot \mathbf{t}\| \\ \end{align*} Similarly, the formulations for tail entities are defined as \begin{align*} f_{r,i}^{T}(h,t) &= \| \pmb{\alpha}_{r}^{H} \cdot \mathbf{D} \cdot \mathbf{h} + \mathbf{r} - \mathbf{D}_i \cdot \mathbf{t}\| \end{align*} $$J_{r,i}^{T}=\sum_{\substack{(h,r,t) \in P, \\ (h',r,t') \sim N(h,r,t)}}[\gamma + f_{r,i}^{T}(h,t) -f_{i}^{r,2}(h',t')]_+$$ The above process is a greedy algorithm. We make the following relaxations to ensure efficient and effective approximation: Firstly, concepts used to project head and tail entities are decoupled and selected independently. 
Secondly, $J_{r,i}^{H}$ and $J_{r,i}^{T}$ are evaluated on condition that concept $i$ is fully utilized, i.e., we ignore the interaction between concept $i$ and other concepts by setting attention $\pmb{\alpha}_{r,1,i}= 1$\footnote{The relaxation is made to reduce the computation complexity. Otherwise, to evaluate indicator vectors involving multiple matrices, we need to perform SGD to get the corresponding optimal values of $v_{r,1}$ and $v_{r,2}$. }. The greedy process works well empirically. We draw our inspiration from Expectation-Maximization (EM) ~\citep{dempster1977maximum} and LightRNN~\citep{LightRNN}. The above algorithm is similar to EM and LightRNN in the sense that some parameters can change rapidly based on the estimation of the corresponding cost. In other words, we are not changing and exploring the mapping vectors bit by bit but they can be reassigned with a completely different value in one step, leading to fast convergence. \fi \iffalse \begin{algorithm}[] Initialize $\mathbf{I}_{r}^{H}$,$\mathbf{I}_{r}^{T}$ randomly \\ \While {not convergence} { \For{\texttt{epoch} $= 1$ to $T_1$}{ Optimize $L$ by SGD on $\theta$ with $I_{r}^{H}, I_{r}^{T}$ fixed } $\texttt{tot\_epoch} = \texttt{tot\_epoch} + T_1$ \\ \If{$\texttt{tot\_epoch} \leq T_2$}{ { Find $\mathbf{I}_{r}^{(H)'}, \mathbf{I}_{r}^{(T)'}$ which approximately maximize $L$. \\ Set $\mathbf{I}_{r}^{H}=\mathbf{I}_{r}^{(H)'}, \mathbf{I}_{r}^{T}=\mathbf{I}_{r}^{(T)'}$. }} } \caption{Coordinate ascent optimization algorithm} \label{alg:opt} \end{algorithm} \fi \subsection{Corrupted Sample Generating Method} \label{sec:sampling} Recall that we need to sample a negative triple $(h',r,t')$ to compute hinge loss shown in Eq.~\ref{eq:hinge}, given a positive triple $(h,r,t)\in P$. The distribution of negative triple is denoted by $N(h,r,t)$. Previous work~\citep{TransE, AAAI159571, yang-etal-2015,STransE} generally constructs a set of corrupted triples by replacing the head entity or tail entity with a random entity uniformly sampled from the KB. However, uniformly sampling corrupted entities may not be optimal. Often, the head and tail entities associated a relation can only belong to a specific domain. When the corrupted entity comes from other domains, it is very easy for the model to induce a large energy gap between true triple and corrupted one. As the energy gap exceeds $\gamma$, there will be no training signal from this corrupted triple. In comparison, if the corrupted entity comes from the same domain, the task becomes harder for the model, leading to more consistent training signal. Motivated by this observation, we propose to sample corrupted head or tail from entities in the same domain with a probability $p_r$ and from the whole entity set with probability $1-p_r$. The choice of relation-dependent probability $p_r$ is specified in Appendix \ref{sec:domain_sampling}. In the rest of the paper, we refer to the new proposed sampling method as "domain sampling". \section{Experiments} \label{sec:experiments} \subsection{Setup} To evaluate link prediction, we conduct experiments on the WN18 (WordNet) and FB15k (Freebase) introduced by \citet{TransE} and use the same training/validation/test split as in \citep{TransE}. The information of the two datasets is given in Table \ref{tab:datasets}. In knowledge base completion task, we evaluate model's performance of predicting the head entity or the tail entity given the relation and the other entity. 
For example, to predict head given relation $r$ and tail $t$ in triple $(h,r,t)$, we compute the energy function $f_r(h', t)$ for each entity $h'$ in the knowledge base and rank all the entities according to the energy. We follow \citet{TransE} to report the \emph{filter} results, i.e., removing all other correct candidates $h'$ in ranking. The rank of the correct entity is then obtained and we report the mean rank (mean of the predicted ranks) and Hits@10 (top $10$ accuracy). Lower mean rank or higher Hits@10 mean better performance. \subsection{Implementation Details} We initialize the projection matrices with identity matrices added with a small noise sampled from normal distribution $\mathcal{N}(0,\,0.005^2)$. The entity and relation vectors of ITransF are initialized by TransE~\citep{TransE}, following~\citet{AAAI159571, ji-EtAl:2015:ACL-IJCNLP, Garcia-DuranBUG15, garciaduran-bordes-usunier:2015:EMNLP, lin-EtAl:2015:EMNLP1}. We ran mini-batch SGD until convergence. We employ the {``\textit{Bernoulli}''} sampling method to generate incorrect triples as used in \citet{AAAI148531}, \citet{AAAI159571}, \citet{He:2015}, \citet{ji-EtAl:2015:ACL-IJCNLP} and \citet{lin-EtAl:2015:EMNLP1}. STransE~\citep{STransE} is the most similar knowledge embedding model to ours except that they use distinct projection matrices for each relation. We use the same hyperparameters as used in STransE and no significant improvement is observed when we alter hyperparameters. We set the margin $\gamma$ to $5$ and dimension of embedding $n$ to $50$ for WN18, and $\gamma = 1, n = 100$ for FB15k. We set the batch size to $20$ for WN18 and $1000$ for FB15k. The learning rate is $0.01$ on WN18 and $0.1$ on FB15k. We use $30$ matrices on WN18 and $300$ matrices on FB15k. All the models are implemented with Theano~\citep{bergstra2010theano}. The Softmax temperature is set to $1/4$. %\FIXME{T1, T2} \subsection{Results \& Analysis} The overall link prediction results\footnote{Note that although IRN~\citep{implicit} does not explicitly exploit path information, it performs multi-step inference through the multiple usages of external memory. When IRN is allowed to access memory once for each prediction, its Hits@10 is $80.7$, similar to models without path information.} are reported in Table \ref{tab:main}. Our model consistently outperforms previous models without external information on both the metrics of WN18 and FB15k. On WN18, we even achieve a much better mean rank with comparable Hits@10 than current state-of-the-art model IRN employing external information. We can see that path information is very helpful on FB15k and models taking advantage of path information outperform intrinsic models by a significant margin. Indeed, a lot of facts are easier to recover with the help of multi-step inference. For example, if we know Barack Obama is born in Honolulu, a city in the United States, then we easily know the nationality of Obama is the United States. %In our framework, since Obama+IsBornIn$\approx$Honolulu, Honolulu+IsCityIn$\approx$US. Then we can expect Obama+IsBornIn+IsCityIn$\approx$US. %Projection matrices are not exactly the same An straightforward way of extending our proposed model to $k$-step path $P=\{r_i\}_{i=1}^{k}$ is to define a path energy function $\| \pmb{\alpha}_{P}^{H} \cdot \mathbf{D} \cdot \mathbf{h} + \sum_{r_i\in P}\mathbf{r}_i - \pmb{\alpha}_{P}^{T} \cdot \mathbf{D} \cdot \mathbf{t}\|_{\ell}$, $\pmb{\alpha}_{P}^{H}$ is a concept association related to the path. 
We plan to extend our model to multi-step paths in future work. To provide a detailed understanding of why the proposed model achieves better performance, we present further analysis in the sequel. \iffalse In many knowledge bases, a small number of relations enjoy the majority of data, while a lot of relations are rare and hard to deal with. The statistics of relations on FB15k and WN18 are shown in Figure \ref{fig:stat}. We can clearly see that the distributions exhibit long tails, just like the Zipf's law for word frequency. \fi \paragraph{Performance on Rare Relations} In the proposed ITransF, we design an attention mechanism to encourage knowledge sharing across different relations. Naturally, facts associated with rare relations should benefit most from such sharing, boosting the overall performance. To verify this hypothesis, we investigate our model's performance on relations with different frequencies. The overall distribution of relation frequencies resembles that of word frequencies and is subject to Zipf's law: since relation frequencies approximately follow a power-law distribution, their log frequencies are approximately linear in log rank. The statistics of relations on FB15k and WN18 are shown in Figure \ref{fig:stat}. We can clearly see that the distributions exhibit long tails, just like Zipf's law for word frequency. In order to study the performance of relations with different frequencies, we sort all relations by their frequency in the training set, and split them into 3 buckets evenly so that each bucket has a similar interval length of log frequency. Within each bucket, we compare our model with STransE, as shown in Figure \ref{fig:rare}.\footnote{Domain sampling is not employed.} As we can see, on WN18, ITransF outperforms STransE by a significant margin on rare relations. In particular, in the last bin (rarest relations), the average Hits@10 increases from $74.4$ to $92.0$, showing the great benefits of transferring statistical strength from common relations to rare ones. The comparison on each relation is shown in Appendix \ref{sec:rare_WN}, where we can observe that the improvement is greater on rare relations. On FB15k, we can also observe a similar pattern, although the degree of improvement is less significant. We conjecture that this difference is rooted in the fact that many rare relations on FB15k have disjoint domains, so knowledge transfer through common concepts is harder. \paragraph{Interpretability} In addition to the quantitative evidence supporting the effectiveness of knowledge sharing, we provide some intuitive examples to show how knowledge is shared in our model. As we mentioned earlier, the sparse attention vectors fully capture the association between relations and concepts and hence the knowledge transfer among relations. Thus, we visualize the attention vectors for several relations on both WN18 and FB15k in Figure \ref{fig:att}. For WN18, the words ``hyponym'' and ``hypernym'' refer to words with a more specific or more general meaning, respectively. For example, PhD is a hyponym of student and student is a hypernym of PhD. As we can see, concepts associated with the head entities in one relation are also associated with the tail entities in its reverse relation. Further, ``instance\_hypernym'' is a special hypernym with the head entity being an instance, and the tail entity being an abstract notion. A typical example is $(\textit{New York}, \texttt{instance\_hypernym}, \textit{city})$.
This connection has also been discovered by our model, indicated by the fact that ``instance\_hypernym(T)'' and ``hypernym(T)'' share a common concept matrix. Finally, for symmetric relations like ``similar\_to'', we see that the head attention is identical to the tail attention, which matches our intuition well. On FB15k, we also see sharing between reverse relations, as in ``(somebody) won\_award\_for (some work)'' and ``(some work) award\_winning\_work (somebody)''. Moreover, although the relations ``won\_award\_for'' and ``was\_nominated\_for'' share the same concepts, their attention distributions are different, suggesting distinct emphases. Finally, symmetric relations like ``spouse'' behave similarly to those mentioned before. \paragraph{Model Compression} \label{sec:compress} A byproduct of the parameter-sharing mechanism employed by ITransF is a much more compact model with equal performance. Figure \ref{fig:num_of_matrix} plots the average performance of ITransF against the number of projection matrices $m$, together with two baseline models. On FB15k, when we reduce the number of matrices from $2200$ to $30$ ($\sim90\times$ compression), our model performance decreases by only $0.09\%$ on Hits@10, still outperforming STransE. Similarly, on WN18, ITransF continues to achieve the best performance when we reduce the number of concept projection matrices to $18$. \section{Analysis on Sparseness} Sparseness is desirable since it contributes to the interpretability and computational efficiency of our model. In this section, we investigate whether enforcing sparseness deteriorates the model performance and compare our method with another sparse encoding method. \paragraph{Dense Attention w/o $\ell_1$ regularization} Although an $\ell_0$-constrained model usually enjoys many practical advantages, it may deteriorate the model performance when applied improperly. Here, we show that our model employing sparse attention can achieve results similar to dense attention with a significantly smaller computational burden. We also compare against dense attention with $\ell_1$ regularization. We set the $\ell_1$ coefficient to $0.001$ in our experiments and do not apply Softmax, since the $\ell_1$ norm of a vector after Softmax is always $1$. We compare models in a setting where the computation time of the dense attention model is acceptable\footnote{With $300$ projection matrices, it takes $1h1m$ to run one epoch for a model with dense attention.}. We use $22$ weight matrices on WN18 and $15$ weight matrices on FB15k and train both models for $2000$ epochs. The results are reported in Table \ref{tab:dense}. Generally, ITransF with sparse attention has slightly better or comparable performance compared to dense attention. Further, we show the attention vectors of the model with $\ell_1$-regularized dense attention in Figure \ref{fig:att_l1}. We see that $\ell_1$ regularization does not produce sparse attention, especially on FB15k. \paragraph{Nonnegative Sparse Encoding} In the proposed model, we induce sparsity by a carefully designed iterative optimization procedure. Apart from this approach, one may utilize sparse encoding techniques to obtain sparseness based on the pretrained projection matrices from STransE.
Concretely, one can stack the $2|R|$ pretrained projection matrices into a 3-dimensional tensor $X\in \mathbb{R}^{2|R| \times n \times n}$ and induce similar sparsity by solving an $\ell_1$-regularized tensor completion problem $\min_{\mathbf{A},\mathbf{D}} ||\mathbf{X}-\mathbf{DA}||_2^2 + \lambda \|\mathbf{A}\|_{\ell_1}$. Basically, $\mathbf{A}$ plays the same role as the attention vectors in our model. For more details, we refer readers to \citet{faruqui-EtAl:2015:ACL-IJCNLP}. For completeness, we compare our model with the aforementioned approach\footnote{We use the toolkit provided by \citet{faruqui-EtAl:2015:ACL-IJCNLP}.}. The comparison is summarized in Table \ref{tab:optimization}. On both benchmarks, ITransF achieves a significant improvement over sparse encoding on the pretrained model. This performance gap is to be expected, since the objective of sparse encoding methods is to minimize the reconstruction loss rather than to optimize the criterion for link prediction. \iffalse To investigate whether our representation, we apply a non-negative sparse encoding method to obtain the sparse representation of projection matrices. We use the toolkit provided by \citep{faruqui-EtAl:2015:ACL-IJCNLP} and set the hyperparamters so that we obtain average degree of sparseness. We first train a STransE model which utilize separate projection matrices for different relations and stack all the projection matrices in a 3-Dimensional tensor $X\in \mathbb{R}^{2H\times n \times n}$ where $H$ is the number of relations. Then we minimize the following reconstruction loss \begin{equation} \begin{aligned} \min_{\mathbf{D},\mathbf{A}} ||\mathbf{X}-\mathbf{DA}||_2^2 + \lambda \Omega(\mathbf{A}) + \gamma ||\mathbf{D}||_2^2 \end{aligned} \end{equation} where $\mathbf{D}\in \mathbb{R}^{m \times n \times n}$ is the basis matrices and $\Omega$ is a regularizer to ensure sparseness representations. which utilize a $\ell_1$ regularizer. We \fi \section{Related Work} \label{sec:related_work} In KBC, CTransR~\citep{AAAI159571} enables relation embedding sharing across similar relations, but it clusters relations before training rather than learning the clustering in a principled way. Further, it does not solve the data sparsity problem because there is no sharing of the projection matrices, which have far more parameters. Learning the association between semantic relations has been used in related problems such as relational similarity measurement~\citep{turney2012domain} and relation adaptation~\citep{bollegala2015embedding}. Data sparsity is a common problem in many fields. Transfer learning~\citep{pan2010survey} has been shown to be a promising way to transfer knowledge and statistical strength across similar models or languages. For example, \citet{D16-1153} transfers models trained on resource-rich languages to low-resource languages by parameter sharing through common phonological features in named entity recognition. \citet{zoph-EtAl:2016:EMNLP2016} initialize low-resource translation models from models trained on resource-rich languages. Several works on obtaining sparse attention~\citep{martins2016softmax, makhzani2013k,OUTRAGEOUSLY} share the similar idea of sorting the values before the softmax and keeping only the $K$ largest values. However, the sorting operation in these works is not GPU-friendly. The block iterative optimization algorithm in our work is inspired by LightRNN~\citep{LightRNN}. They allocate every word in the vocabulary to a cell in a table.
A word is represented by a row vector and a column vector depending on its position in the table. They iteratively optimize the embeddings and the allocation of words in the table. \section{Conclusion and Future Work} In summary, we propose a knowledge embedding model which can discover shared hidden concepts, and design a learning algorithm to induce an interpretable sparse representation. Empirically, we show that our model improves performance on two benchmark datasets, without external resources, over all previous models of the same kind. In the future, we plan to enable ITransF to perform multi-step inference, and extend the sharing mechanism to entity and relation embeddings, further enhancing the statistical binding across parameters. In addition, our framework can also be applied to multi-task learning, promoting a finer sharing among different tasks. \section*{Acknowledgments} We thank the anonymous reviewers and Graham Neubig for valuable comments. We thank Yulun Du, Paul Mitchell, Abhilasha Ravichander, Pengcheng Yin and Chunting Zhou for suggestions on the draft. We are also appreciative of the great working environment provided by the staff in LTI. This research was supported in part by DARPA grant FA8750-12-2-0342 funded under the DEFT program. \bibliographystyle{acl_natbib} \clearpage \appendix \section{Appendix} \subsection{Domain Sampling Probability} \label{sec:domain_sampling} In this section, we define the probability $p_r$ of generating a negative sample from the same domain, as mentioned in Section \ref{sec:sampling}. The probability cannot be too high, in order to avoid generating negative samples that are actually correct, since there are generally a lot of facts missing from KBs.
%To avoid generating false negative samples, we apply domain sampling with a higher probability on relations with a sparse domain.
Specifically, let $\mathrm{M}^H_r=\{h \mid \exists t (h,r,t) \in P\}$ and $\mathrm{M}^T_r=\{t \mid \exists h (h,r,t) \in P\}$ denote the head and tail domains of relation $r$. Suppose $N_r=\{(h,r,t) \in P\}$ is the induced set of edges with relation $r$. We define the probability $p_r$ as \begin{equation} p_r=\min\Big(\frac{\lambda|\mathrm{M}^T_r| |\mathrm{M}^H_r|}{|N_r|}, 0.5\Big) \label{eq:domain_sampling} \end{equation} Our motivation for this formulation is as follows. Suppose $O_r$ is the set that contains all truthful fact triples for relation $r$, i.e., all triples in the training set and all other missing correct triples. If we assume all fact triples within the domain have a uniform probability of being true, the probability of a random triple being correct is ${\Pr((h,r,t)\in O_r \mid h\in \mathrm{M}^H_r, t \in \mathrm{M}^T_r) = \frac{|O_r|}{|\mathrm{M}^H_r||\mathrm{M}^T_r|}}$. If we further assume that each true fact is present in the training set with probability $\lambda$ (i.e., missing with probability $1-\lambda$), then $|N_r|=\lambda|O_r|$ and the above probability can be approximated by $ \frac{|N_r|}{\lambda|\mathrm{M}^H_r||\mathrm{M}^T_r|}$. We want the probability of generating a negative sample from the domain to be inversely proportional to the probability of the sample being true, so we define the probability as in Eq.~\ref{eq:domain_sampling}. The results in Section \ref{sec:experiments} are obtained with $\lambda$ set to $0.001$. We compare how different values of $\lambda$ influence our model's performance in Table~\ref{tab:domain_sampling}. With a larger $\lambda$ and hence a higher domain sampling probability, our model's Hits@10 increases while the mean rank also increases.
The rise in mean rank is due to the higher probability of generating a valid triple as a negative sample, which causes the energy of valid triples to increase and thus leads to a higher overall rank for the correct entity. However, as the higher Hits@10 in the table shows, the reasoning capability is boosted. \subsection{Performance on individual relations of WN18} \label{sec:rare_WN} We plot the performance of ITransF and STransE on each relation. We see that the improvement is greater on rare relations. \iffalse \subsection{Performance on different relations} \fi \end{document}
Structured Attention Networks
1702.00887
Table 2: Performance (average length to failure %) of models on the tree-transduction task.
[ "Depth", "No Atten", "Simple", "Structured" ]
[ [ "2", "7.6", "87.4", "99.2" ], [ "3", "4.1", "49.6", "87.0" ], [ "4", "2.8", "23.3", "64.5" ], [ "5", "2.1", "15.0", "30.8" ], [ "6", "1.5", "8.5", "18.2" ] ]
Note that this task is fairly difficult as the encoder is quite simple. The baseline model (unsurprisingly) performs poorly as it has no information about the source ordering. The simple attention model performs better, but is significantly outperformed by the structured model with a tree structure bias. We hypothesize that the model is partially reconstructing the arithmetic tree.
\documentclass{article} % For LaTeX2e \usepackage[font=small,labelfont=bf]{caption} \usepackage[noend]{algpseudocode} \usetikzlibrary{matrix,arrows,backgrounds,calc,patterns,positioning,fit,shapes} \usepackage[titletoc,title]{appendix} \DeclareMathOperator*{\argmax}{argmax} \DeclareMathOperator*{\argmin}{argmin} \DeclareMathOperator{\softmax}{softmax} \DeclareMathOperator{\logadd}{logadd} \DeclareMathOperator{\sign}{sign} \DeclareMathOperator{\signexp}{signexp} \DeclareMathOperator{\sigmoid}{sigmoid} \DeclareMathOperator{\softparent}{soft-parent} \DeclareMathOperator{\parent}{parent} \DeclareMathOperator{\head}{head} \DeclareMathOperator{\softhead}{soft-head} \DeclareMathOperator{\simf}{sim} \DeclareMathOperator{\relu}{ReLU} \DeclareMathOperator{\lstm}{LSTM} \DeclareMathOperator{\rnn}{RNN} \DeclareMathOperator{\mlp}{MLP} \newcommand{\oplusgets}{\gets_{\oplus}} \newcommand{\pgrad}{\nabla_{p}^\mathcal{L}} \newcommand{\alphagrad}{\nabla_{\alpha}^\mathcal{L}} \newcommand{\betagrad}{\nabla_{\beta}^\mathcal{L}} \newcommand{\thetagrad}{\log \nabla_{\theta}^\mathcal{L}} \newcommand{\xvec}{\mathbf{x}} \newcommand{\yvec}{\mathbf{y}} \newcommand{\cvec}{\mathbf{c}} \newcommand{\mvec}{\mathbf{m}} \newcommand{\zvec}{\mathbf{z}} \newcommand{\qvec}{\mathbf{q}} \newcommand{\svec}{\mathbf{s}} \newcommand{\tvec}{\mathbf{t}} \newcommand{\mcL}{\mathcal{L}} \newcommand{\mcT}{\mathcal{T}} \newcommand{\mcY}{\mathcal{Y}} \newcommand{\mcV}{\mathcal{V}} \newcommand{\mcC}{\mathcal{C}} \newcommand{\mcA}{\mathcal{A}} \newcommand{\mcZ}{\mathcal{Z}} \newcommand{\mcX}{\mathcal{X}} \newcommand{\context}{\mathbf{y}_{\mathrm{c}}} \newcommand{\embcontext}{\mathbf{\tilde{y}}_{\mathrm{c}}} \newcommand{\inpcontext}{\mathbf{\tilde{x}}} \newcommand{\start}{\mathbf{\tilde{y}}_{\mathrm{c0}}} \newcommand{\End}{\mathrm{\texttt{</s>}}} \newcommand{\Uvec}{\mathbf{U}} \newcommand{\Evec}{\mathbf{E}} \newcommand{\E}{\mathbb{E}} \newcommand{\Gvec}{\mathbf{G}} \newcommand{\Fvec}{\mathbf{F}} \newcommand{\Pvec}{\mathbf{P}} \newcommand{\pvec}{\mathbf{p}} \newcommand{\Vvec}{\mathbf{V}} \newcommand{\Wvec}{\mathbf{W}} \newcommand{\hvec}{\mathbf{h}} \newcommand{\wvec}{\mathbf{w}} \newcommand{\uvec}{\mathbf{u}} \newcommand{\vvec}{\mathbf{v}} \newcommand{\bvec}{\mathbf{b}} \newcommand{\reals}{\mathbb{R}} \newcommand{\ind}{\mathbbm{1}} \newcommand\given{\,|\,} \title{Structured Attention Networks} \author{Yoon Kim\thanks{Equal contribution.} \hspace{5mm} Carl Denton$^*$ \hspace{5mm} Luong Hoang \hspace{5mm} Alexander M. Rush \\ \texttt{\small \{yoonkim@seas,carldenton@college,lhoang@g,srush@seas\}.harvard.edu}\\ School of Engineering and Applied Sciences\\ Harvard University\\ Cambridge, MA 02138, USA \\ } \newcommand{\fix}{\marginpar{FIX}} \newcommand{\new}{\marginpar{NEW}} \iclrfinalcopy % Uncomment for camera-ready version \begin{document} \maketitle \begin{abstract} Attention networks have proven to be an effective approach for embedding categorical inference within a deep neural network. However, for many tasks we may want to model richer structural dependencies without abandoning end-to-end training. In this work, we experiment with incorporating richer structural distributions, encoded using graphical models, within deep networks. We show that these structured attention networks are simple extensions of the basic attention procedure, and that they allow for extending attention beyond the standard soft-selection approach, such as attending to partial segmentations or to subtrees. 
We experiment with two different classes of structured attention networks: a linear-chain conditional random field and a graph-based parsing model, and describe how these models can be practically implemented as neural network layers. Experiments show that this approach is effective for incorporating structural biases, and structured attention networks outperform baseline attention models on a variety of synthetic and real tasks: tree transduction, neural machine translation, question answering, and natural language inference. We further find that models trained in this way learn interesting unsupervised hidden representations that generalize simple attention. \end{abstract} \section{Introduction} Attention networks are now a standard part of the deep learning toolkit, contributing to impressive results in neural machine translation \citep{Bahdanau2015,Luong2015}, image captioning \citep{Xu2015}, speech recognition \citep{Chorowski2015,Chan2015}, question answering \citep{Hermann2015,Sukhbaatar2015}, and algorithm-learning \citep{Graves2014,Vinyals2015c}, among many other applications (see \cite{Cho2015} for a comprehensive review). This approach alleviates the bottleneck of compressing a source into a fixed-dimensional vector by equipping a model with variable-length memory \citep{Weston2014,Graves2014,Graves2016}, thereby providing random access into the source as needed. Attention is implemented as a hidden layer which computes a categorical distribution (or hierarchy of categorical distributions) to make a soft-selection over source elements. Noting the empirical effectiveness of attention networks, we also observe that the standard attention-based architecture does not directly model any \textit{structural dependencies} that may exist among the source elements, and instead relies completely on the hidden layers of the network. While one might argue that these structural dependencies can be learned implicitly by a deep model with enough data, in practice, it may be useful to provide a structural bias. Modeling structural dependencies at the final, \textit{output} layer has been shown to be important in many deep learning applications, most notably in seminal work on graph transformers \citep{LeCun1998}, key work on NLP \citep{Collobert2011}, and in many other areas \citep[\textit{inter alia}]{Peng2009,Do2010,Jaderberg2014b,Chen2015b,Durrett2015,Lample2016}. In this work, we consider applications which may require structural dependencies at the attention layer, and develop \textit{internal} structured layers for modeling these directly. This approach generalizes categorical soft-selection attention layers by specifying possible structural dependencies in a soft manner. Key applications will be the development of an attention function that segments the source input into subsequences and one that takes into account the latent recursive structure (i.e. parse tree) of a source sentence. Our approach views the attention mechanism as a graphical model over a set of latent variables. The standard attention network can be seen as an expectation of an annotation function with respect to a single latent variable whose categorical distribution is parameterized to be a function of the source. In the general case we can specify a graphical model over multiple latent variables whose edges encode the desired structure. Computing forward attention requires performing inference to obtain the expectation of the annotation function, i.e. the \textit{context vector}. 
This expectation is computed over an exponentially-sized set of structures (through the machinery of graphical models/structured prediction), hence the name \textit{structured attention} network. Notably, each step of this process (including inference) is differentiable, so the model can be trained end-to-end without having to resort to deep policy gradient methods \citep{schulman2015gradient}. The differentiability of inference algorithms over graphical models has previously been noted by various researchers \citep{Li2009,Domke2011,Stoyanov2011,Stoyanov2012,Gormley2015}, primarily outside the area of deep learning. For example, \citet{Gormley2015} treat an entire graphical model as a differentiable circuit and backpropagate risk through variational inference (loopy belief propagation) for minimum risk training of dependency parsers. Our contribution is to combine these ideas to produce structured \textit{internal} attention layers within deep networks, noting that these approaches allow us to use the resulting marginals to create new features, as long as we do so in a differentiable way. We focus on two classes of structured attention: linear-chain conditional random fields (CRFs) \citep{Lafferty2001} and first-order graph-based dependency parsers \citep{Eisner1996}. The initial work of \cite{Bahdanau2015} was particularly interesting in the context of machine translation, as the model was able to implicitly learn an \textit{alignment model as a hidden layer}, effectively embedding inference into a neural network. In a similar vein, under our framework the model has the capacity to learn a \textit{segmenter as a hidden layer} or a \textit{parser as a hidden layer}, without ever having to see a segmented sentence or a parse tree. Our experiments apply this approach to a difficult synthetic reordering task, as well as to machine translation, question answering, and natural language inference. We find that models trained with structured attention outperform standard attention models. Analysis of the learned representations further reveals that interesting structures emerge as an internal layer of the model. All code is available at \url{http://github.com/harvardnlp/struct-attn}. \section{Background: Attention Networks} A standard neural network consists of a series of non-linear transformation layers, where each layer produces a fixed-dimensional hidden representation. For tasks with large input spaces, this paradigm makes it hard to control the interaction between components. For example, in machine translation, the source consists of an entire sentence, and the output is a prediction for each word in the translated sentence. Utilizing a standard network leads to an information bottleneck, where one hidden layer must encode the entire source sentence. Attention provides an alternative approach.\footnote{Another line of work involves marginalizing over latent variables (e.g. latent alignments) for sequence-to-sequence transduction \citep{Kong2016,Lu2016,Yu2016,Yu2017}.} An attention network maintains a set of hidden representations that scale with the size of the source. The model uses an internal inference step to perform a soft-selection over these representations. This method allows the model to maintain a variable-length memory and has been shown to be crucially important for scaling systems for many tasks.
Formally, let $x = [x_1, \dots, x_n]$ represent a sequence of inputs, let $q$ be a query, and let $z$ be a categorical latent variable with sample space $\{1, \ldots, n\}$ that encodes the desired selection among these inputs. Our aim is to produce a \textit{context} $c$ based on the sequence and the query. To do so, we assume access to an \textit{attention distribution} $z \sim p(z \given x, q)$, where we condition $p$ on the inputs $x$ and the query $q$. The \textit{context} over a sequence is defined as an expectation, $c = \E_{z \sim p(z \given x, q)} [f(x, z)]$, where $f(x, z)$ is an \textit{annotation function}. Attention of this form can be applied over any type of input; however, we will primarily be concerned with ``deep'' networks, where both the annotation function and attention distribution are parameterized with neural networks, and the context produced is a vector fed to a downstream network. For example, consider the case of attention-based neural machine translation \citep{Bahdanau2015}. Here the sequence of inputs $[\mathbf{x}_1, \ldots, \mathbf{x}_n]$ are the hidden states of a recurrent neural network (RNN), running over the words in the source sentence, $\mathbf{q}$ is the RNN hidden state of the target decoder (i.e. the vector representation of the query $q$), and $z$ represents the source position to be attended to for translation. The attention distribution $p$ is simply $p(z = i \given x, q) = \softmax(\theta_i)$ where $\theta \in \reals^n$ is a parameterized potential typically based on a neural network, e.g. $\theta_i = \mlp([\mathbf{x}_i; \qvec])$. The annotation function is defined to simply return the selected hidden state, $f(\mathbf{x}, z) = \mathbf{x}_z$. The context vector can then be computed using a simple sum, \begin{equation}\label{vanilla-attn} \mathbf{c} = \E_{z \sim p(z \given x, q)} [f( x, z)] = \sum_{i=1}^n p(z = i \given x, q) \mathbf{x}_i \end{equation} Other tasks such as question answering use attention in a similar manner, for instance by replacing the source $[x_1, \ldots, x_n]$ with a set of potential facts and $q$ with a representation of the question. In summary, we interpret the attention mechanism as taking the expectation of an annotation function $f(x,z)$ with respect to a latent variable $z \sim p$, where $p$ is parameterized to be a function of $x$ and $q$. \section{Structured Attention} Attention networks simulate selection from a set using a soft model. In this work we consider generalizing selection to richer types of attention, such as selecting chunks, segmenting inputs, or even attending to latent subtrees. One interpretation of this attention is as using soft-selection that considers all possible structures over the input, of which there may be exponentially many possibilities. Of course, this expectation can no longer be computed using a simple sum, and we need to incorporate the machinery of inference directly into our neural network. Define a structured attention model as being an attention model where $z$ is now a vector of discrete latent variables $[z_1, \ldots, z_m]$ and the attention distribution $p(z \given x, q)$ is defined as a \textit{conditional random field} (CRF), specifying the independence structure of the $z$ variables. Formally, we assume an undirected graph structure with $m$ vertices. The CRF is parameterized with clique (log-)potentials $\theta_C(z_{C}) \in \reals$, where $z_C$ indicates the subset of $z$ given by clique $C$.
Under this definition, the attention probability is defined as, $p(z \given x, q; \theta) = \softmax(\sum_C \theta_C(z_C))$, where for symmetry we use $\softmax$ in a general sense, i.e. $\softmax(g(z)) = \frac{1}{Z} \exp(g(z))$ where $Z = \sum_{z'} \exp(g(z'))$ is the implied partition function. In practice we use a neural CRF, where $\theta$ comes from a deep model over $x, q$. In structured attention, we also assume that the annotation function $f$ factors (at least) into clique annotation functions $f(x, z) = \sum_C f_C(x, z_C)$. Under standard conditions on the conditional independence structure, inference techniques from graphical models can be used to compute the forward-pass expectations and the context: \[c = \E_{z \sim p(z \given x, q)} [f(x, z)] = \sum_{C} \E_{z \sim p(z_C \given x, q)} [f_C(x, z_C)]\] \subsection{Example 1: Subsequence Selection} \label{sec:subselect} Suppose instead of soft-selecting a single input, we wanted to explicitly model the selection of contiguous subsequences. We could naively apply categorical attention over all subsequences, or hope the model learns a multi-modal distribution to combine neighboring words. Structured attention provides an alternate approach. Concretely, let $m =n$, define $z$ to be a random vector $z = [z_1, \dots, z_n]$ with $z_i \in \{0, 1\}$, and define our annotation function to be, $f(x,z) = \sum_{i=1}^n f_{i} (x,z_{i})$ where $f_{i} (x,z_i) = \ind \{ z_i = 1\} \xvec_i$. The explicit expectation is then, \begin{equation}\label{struct-attn} \E_{z_1, \dots, z_n }[f(x,z)] = \sum_{i=1}^n p(z_i = 1 \given x, q) \xvec_i \end{equation} Equation (\ref{struct-attn}) is similar to equation (\ref{vanilla-attn})---both are a linear combination of the input representations where the scalar is between $[0,1]$ and represents how much attention should be focused on each input. However, (2) is fundamentally different in two ways: (i) it allows for multiple inputs (or no inputs) to be selected for a given query; (ii) we can incorporate structural dependencies across the $z_i$'s. For instance, we can model the distribution over $z$ with a linear-chain CRF with pairwise edges, \begin{align}\label{linear-chain} p(z_1, \dots, z_n \given x, q) = \softmax \left( \sum_{i=1}^{n-1} \theta_{i,i+1}(z_i, z_{i+1}) \right) \end{align} where $\theta_{k,l}$ is the pairwise potential for $z_i = k$ and $z_{i+1} = l$. This model is shown in Figure~\ref{fig:seq}c. Compare this model to the standard attention in Figure~\ref{fig:seq}a, or to a simple Bernoulli (sigmoid) selection method, $p(z_i = 1 \given x, q) = \sigmoid(\theta_{i}) $, shown in Figure~\ref{fig:seq}b. All three of these methods can use potentials from the same neural network or RNN that takes $x$ and $q$ as inputs. In the case of the linear-chain CRF in~(\ref{linear-chain}), the marginal distribution $p(z_i = 1 \given x)$ can be calculated efficiently in linear-time for all $i$ using message-passing, i.e. the forward-backward algorithm. These marginals allow us to calculate (\ref{struct-attn}), and in doing so we implicitly sum over an exponentially-sized set of structures (i.e. all binary sequences of length $n$) through dynamic programming. We refer to this type of attention layer as a \emph{segmentation attention} layer. Note that the forward-backward algorithm is being used as parameterized \textit{pooling} (as opposed to output computation), and can be thought of as generalizing the standard attention softmax. 
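To make the segmentation attention layer concrete, the following illustrative Python/NumPy sketch computes the node marginals $p(z_i = 1 \given x, q)$ of the pairwise linear-chain CRF in equation (\ref{linear-chain}) with a log-space forward-backward pass and uses them to form the context vector of equation (\ref{struct-attn}); the toy potentials, dot-product query scoring, and variable names are our own assumptions rather than the released implementation.
\begin{verbatim}
import numpy as np

def lse(v):                      # stable log-sum-exp of a 1-D array
    m = np.max(v)
    return m + np.log(np.sum(np.exp(v - m)))

def chain_marginals(theta):
    """Node marginals p(z_i = 1) of a binary linear-chain CRF whose score is
    the sum of pairwise log-potentials theta[i, z_i, z_{i+1}], with theta of
    shape (n-1, 2, 2), computed by forward-backward in log space."""
    n = theta.shape[0] + 1
    alpha = np.zeros((n, 2))                 # forward messages
    beta = np.zeros((n, 2))                  # backward messages
    for i in range(1, n):
        for z in range(2):
            alpha[i, z] = lse(alpha[i - 1] + theta[i - 1, :, z])
    for i in range(n - 2, -1, -1):
        for z in range(2):
            beta[i, z] = lse(theta[i, z, :] + beta[i + 1])
    log_Z = lse(alpha[-1])                   # beta at the last position is zero
    return np.exp(alpha[:, 1] + beta[:, 1] - log_Z)

# toy segmentation attention: x_i are input vectors, q is the query
rng = np.random.RandomState(0)
n, d = 6, 4
x, q = rng.randn(n, d), rng.randn(d)
theta = rng.randn(n - 1, 2, 2)      # in practice a neural net of (x, q)
p_on = chain_marginals(theta)       # p(z_i = 1 | x, q)

scores = x @ q                      # toy potentials for standard attention
c_simple = np.exp(scores - lse(scores)) @ x   # categorical soft-selection
c_struct = p_on @ x                 # structured (segmentation) attention
\end{verbatim}
Although the sketch uses explicit loops, every operation inside it is an ordinary differentiable arithmetic step.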
Crucially, this generalization from vector softmax to forward-backward is just a series of differentiable steps,\footnote{As are other dynamic programming algorithms for inference in graphical models, such as (loopy and non-loopy) belief propagation.} and we can compute gradients of its output (marginals) with respect to its input (potentials). This will allow the structured attention model to be trained end-to-end as part of a deep model. \subsection{Example 2: Syntactic Tree Selection} This same approach can be used for more involved structural dependencies. One popular structure for natural language tasks is a dependency tree, which enforces a structural bias on the recursive dependencies common in many languages. In particular, a dependency tree enforces that each word in a source sentence is assigned exactly one parent word (\textit{head word}), and that these assignments do not cross (projective structure). Employing this bias encourages the system to make a soft-selection based on learned syntactic dependencies, without requiring linguistic annotations or a pipelined decision. A dependency parser can be partially formalized as a graphical model with the following cliques \citep{Smith2008}: latent variables $z_{ij} \in \{0,1\}$ for all $i \ne j$, which indicate that the $i$-th word is the parent of the $j$-th word (i.e. $x_i \rightarrow x_j$); and a special global constraint that rules out configurations of $z_{ij}$'s that violate parsing constraints (e.g. one head, projectivity). The parameters to the graph-based CRF dependency parser are the potentials $\theta_{ij}$, which reflect the score of selecting $x_i$ as the parent of $x_j$. The probability of a parse tree $z$ given the sentence $x = [x_1, \ldots, x_n]$ is, \begin{equation} p(z \given x, q)= \softmax \left(\ind\{z\ \text{is valid}\} \sum_{i \neq j} \ind\{z_{ij} = 1\} \theta_{ij} \right) \end{equation} where $z$ is represented as a vector of $z_{ij}$'s for all $i \ne j$. It is possible to calculate the marginal probability of each edge $p(z_{ij} = 1\given x, q)$ for all $i, j$ in $O(n^3)$ time using the inside-outside algorithm \citep{Baker1979} on the data structures of \citet{Eisner1996}. The parsing constraints ensure that each word has exactly one head (i.e. $\sum_{i=1}^n z_{ij} = 1$). Therefore, if we want to utilize the \emph{soft-head} selection of a position $j$, the context vector is defined as: \begin{align*} f_j(x, z) = \sum_{i=1}^n \ind\{z_{ij} = 1\} \xvec_i & & \cvec_j = \E_z [f_j(x, z)] = \sum_{i=1}^n p(z_{ij} = 1 \given x, q) \xvec_i \end{align*} Note that in this case the annotation function has the subscript $j$ to produce a context vector for each word in the sentence. Similar types of attention can be applied for other tree properties (e.g. soft-children). We refer to this type of attention layer as a \emph{syntactic attention} layer. \subsection{End-to-End Training}\label{sec:e2e} Graphical models of this form have been widely used as the final layer of deep models. Our contribution is to argue that these networks can be added within deep networks in place of simple attention layers. The whole model can then be trained end-to-end. The main complication in utilizing this approach within the network itself is the need to backpropagate the gradients through an inference algorithm as part of the structured attention network. Past work has demonstrated the techniques necessary for this approach (see \citet{Stoyanov2011}), but to our knowledge it is very rarely employed.
Consider the case of the simple linear-chain CRF layer from equation (\ref{linear-chain}). Figure~\ref{fig:fb} (left) shows the standard forward-backward algorithm for computing the marginals $p(z_i = 1\given x, q; \theta)$. If we treat the forward-backward algorithm as a neural network layer, its inputs are the potentials $\theta$, and its outputs after the forward pass are these marginals.\footnote{Confusingly, ``forward'' in this case is different from its use in the \textit{forward}-backward algorithm, as the marginals themselves are the output. However, the two uses of the term are actually quite related. The forward-backward algorithm can be interpreted as a forward and backpropagation pass on the log partition function. See \citet{Eisner2016} for further details (appropriately titled ``Inside-Outside and Forward-Backward Algorithms Are Just Backprop''). As such, our full approach can be seen as computing second-order information. This interpretation is central to \citet{Li2009}.} To backpropagate a loss through this layer we need to compute the gradient of the loss $\mcL$ with respect to $\theta$, $\nabla_{\theta}^\mcL$, as a function of the gradient of the loss with respect to the marginals, $\nabla_{p}^\mcL$.\footnote{In general we use $\nabla^a_b$ to denote the Jacobian of $a$ with respect to $b$.} As the forward-backward algorithm consists of differentiable steps, this function can be derived using reverse-mode automatic differentiation of the forward-backward algorithm itself. Note that this reverse-mode algorithm conveniently has a parallel structure to the forward version, and can also be implemented using dynamic programming. \begin{wraptable}{r}{0.54\textwidth} \small \centering \begin{tabular}{cc|cc|cc} \toprule & & \multicolumn{2}{c|}{$\oplus$} & \multicolumn{2}{c}{$\otimes$} \\ $s_a$ & $s_b$ & $ l_{a+b} $ & $s_{a+b}$ & $ l_{a\cdot b}$ & $s_{a \cdot b}$\\ \midrule $+$ & $+$ & $l_a+\log (1 + d)$& $+$ & $l_a+l_b$ &$+$ \\ $+$ & $-$ & $l_a+\log (1 - d)$& $+$ & $l_a+l_b$ &$-$ \\ $-$ & $+$ & $l_a+\log (1 - d)$& $-$ & $l_a+l_b$ &$-$ \\ $-$ & $-$ & $l_a+\log (1 + d)$& $-$ & $l_a+l_b$ &$+$ \\ \bottomrule \end{tabular} \caption{\label{tab:dlog} \small Signed log-space semifield (from \cite{Li2009}). Each real number $a$ is represented as a pair $( l_a, s_a )$ where $l_a = \log |a|$ and $s_a = \sign(a)$. Therefore $a = s_a \exp(l_a)$. For the above we let $d = \exp(l_b- l_a)$ and assume $|a| > |b|$. } \end{wraptable} However, in practice, one cannot simply use current off-the-shelf tools for this task. For one, efficiency is quite important for these models and so the benefits of hand-optimizing the reverse-mode implementation still outweigh the simplicity of automatic differentiation. Secondly, numerical precision becomes a major issue for structured attention networks. For computing the forward-pass and the marginals, it is important to use the standard log-space semifield over $\mathbb{R}\cup\{\pm \infty\}$ with binary operations $(\oplus = \logadd, \otimes = +)$ to avoid underflow of probabilities. For computing the backward-pass, we need to remain in log-space, but also handle the log of negative values (since $\pgrad$ could be negative). This requires extending to the \textit{signed} log-space semifield over $\left[\mathbb{R}\cup\{\pm \infty\}\right] \times \{+, -\}$ with special $+$/$-$ operations. Table~\ref{tab:dlog}, based on \cite{Li2009}, demonstrates how to handle this issue, and Figure~\ref{fig:fb} (right) describes backpropagation through the forward-backward algorithm.
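For concreteness, a minimal Python sketch of the signed log-space operations in Table~\ref{tab:dlog} is given below; the pair $(l_a, s_a)$ encodes $a = s_a \exp(l_a)$, and the function and variable names are our own illustration rather than the optimized implementation used in the experiments.
\begin{verbatim}
import math

def signed_log(a):
    """Encode a real number a as (log|a|, sign(a))."""
    return (math.log(abs(a)) if a != 0 else float('-inf'),
            1.0 if a >= 0 else -1.0)

def decode(la, sa):
    return sa * math.exp(la)

def s_mul(la, sa, lb, sb):
    """(a * b) in signed log space: add the logs, multiply the signs."""
    return la + lb, sa * sb

def s_add(la, sa, lb, sb):
    """(a + b) in signed log space, following Table 1; we first swap so that
    |a| >= |b| and then use d = exp(l_b - l_a)."""
    if lb > la:
        la, sa, lb, sb = lb, sb, la, sa
    d = math.exp(lb - la)
    if sa == sb:
        return la + math.log1p(d), sa          # same sign: log(1 + d)
    return la + math.log1p(-d), sa             # opposite sign: log(1 - d)

# quick check against ordinary arithmetic
a, b = 0.3, -1.7
la, sa = signed_log(a); lb, sb = signed_log(b)
assert abs(decode(*s_add(la, sa, lb, sb)) - (a + b)) < 1e-9
assert abs(decode(*s_mul(la, sa, lb, sb)) - (a * b)) < 1e-9
\end{verbatim}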
For dependency parsing, the forward pass can be computed using the inside-outside implementation of Eisner's algorithm \citep{Eisner1996}. Similarly, the backpropagation parallels the inside-outside structure. Forward/backward pass through the inside-outside algorithm is described in Appendix~\ref{app:io}. \section{Experiments} We experiment with three instantiations of structured attention networks on four different tasks: (a) a simple, synthetic tree manipulation task using the syntactic attention layer, (b) machine translation with segmentation attention (i.e. two-state linear-chain CRF), (c) question answering using an $n$-state linear-chain CRF for multi-step inference over $n$ facts, and (d) natural language inference with syntactic tree attention. These experiments are not intended to boost the state-of-the-art for these tasks but to test whether these methods can be trained effectively in an end-to-end fashion, can yield improvements over standard selection-based attention, and can learn plausible latent structures. All model architectures, hyperparameters, and training details are further described in Appendix~\ref{app:model}. \subsection{Tree Transduction} The first set of experiments look at a tree-transduction task. These experiments use synthetic data to explore a failure case of soft-selection attention models. The task is to learn to convert a random formula given in prefix notation to one in infix notation, e.g., \begin{small} \begin{align*} (\,\,\,*\,\,\,(\,\,\,+\,\,\,(\,\,\,+\,\,\,15\,\,\,7\,\,\,)\,\,\,1\,\,\,8\,\,\,)\,\,\,(\,\,\,+\,\,\,19\,\,\,0\,\,\,11\,\,\,)\,\,\,) \,\, \Rightarrow (\,\,(\,\,15\,\,+\,\,7\,\,\,)\,\,+\,\,1\,\,+\,\,8\,\,\,)\,\,*\,\,(\,\,\,19\,\,+\,\,0\,\,+\,\,11\,\,\,) \end{align*} \end{small} The alphabet consists of symbols $\{(, ),+,*\}$, numbers between $0$ and $20$, and a special root symbol $\$$. This task is used as a preliminary task to see if the model is able to learn the implicit tree structure on the source side. The model itself is an encoder-decoder model, where the encoder is defined below and the decoder is an LSTM. See Appendix~\ref{app:tree} for the full model. Training uses $15$K prefix-infix pairs where the maximum nesting depth is set to be between $2$-$4$ (the above example has depth $3$), with $5$K pairs in each depth bucket. The number of expressions in each parenthesis is limited to be at most $4$. Test uses $1$K unseen sequences with depth between $2$-$6$ (note specifically deeper than train), with $200$ sequences for each depth. The performance is measured as the average proportion of correct target tokens produced until the first failure (as in \cite{Grefenstette2015}). For experiments we try using different forms of \textit{self}-attention over embedding-only encoders. Let $\mathbf{x}_j$ be an embedding for each source symbol; our three variants of the source representation $\hat{\xvec}_j$ are: (a) \textit{no atten}, just symbol embeddings by themselves, i.e. $\hat{\xvec}_j = \mathbf{x}_j$; (b) \textit{simple} attention, symbol embeddings and soft-pairing for each symbol, i.e. $ \hat{\xvec}_j = [\mathbf{x}_j; \mathbf{c}_j]$ where $ \mathbf{c}_j = \sum_{i=1}^n \softmax( \theta_{ij}) \mathbf{x}_i$ is calculated using soft-selection; (c) \textit{structured} attention, symbol embeddings and soft-parent, i.e. $\hat{\xvec}_j = [\mathbf{x}_j; \mathbf{c}_j]$ where $ \mathbf{c}_j = \sum_{i=1}^n p(z_{ij} = 1 \given x) \mathbf{x}_i $ is calculated using parsing marginals, obtained from the syntactic attention layer. 
None of these models use an explicit query value---the potentials come from running a bidirectional LSTM over the source, producing hidden vectors $\hvec_i$, and then computing \[\theta_{ij} = \tanh(\mathbf{s}^\top \tanh(\mathbf{W}_1 \mathbf{h}_i + \mathbf{W}_2 \mathbf{h}_j + \mathbf{b}))\] \noindent where $\mathbf{s}, \mathbf{b}, \mathbf{W}_1, \mathbf{W}_2$ are parameters (see Appendix~\ref{app:parsing}). \begin{wraptable}{l}{0.43\textwidth} \small \begin{tabular}{c ccc} \toprule Depth & No Atten & Simple & Structured \\ \midrule $2$ & $7.6$ & $87.4$ & $99.2$ \\ $3$ & $4.1$ & $49.6$ & $87.0$ \\ $4$ & $2.8$ & $23.3$ & $64.5$ \\ $5$ & $2.1$ & $15.0$ & $30.8$ \\ $6$ & $1.5$ & $8.5$ & $18.2$ \\ \bottomrule \end{tabular} \caption{\label{tree-perf} \small Performance (average length to failure \%) of models on the tree-transduction task.} \end{wraptable} The source representations $[\hat{\xvec}_1, \dots, \hat{\xvec}_n]$ are attended over using the standard attention mechanism at each decoding step by an LSTM decoder.\footnote{Thus there are two attention mechanisms at work under this setup. First, structured attention over the source only to obtain soft-parents for each symbol (i.e. self-attention). Second, standard softmax alignment attention over the source representations during decoding.} Additionally, symbol embedding parameters are shared between the parsing LSTM and the source encoder. \paragraph{Results} Table~\ref{tree-perf} has the results for the task. Note that this task is fairly difficult as the encoder is quite simple. The baseline model (unsurprisingly) performs poorly as it has no information about the source ordering. The simple attention model performs better, but is significantly outperformed by the structured model with a tree structure bias. We hypothesize that the model is partially reconstructing the arithmetic tree. Figure~\ref{tree-viz} shows the attention distribution for the simple/structured models on the same source sequence, which indicates that the structured model is able to learn boundaries (i.e. parentheses). \subsection{Neural Machine Translation} Our second set of experiments uses a full neural machine translation model utilizing attention over subsequences. Here both the encoder and decoder are LSTMs, and we replace the standard simple attention with a segmentation attention layer. We experiment with two settings: translating directly from unsegmented Japanese characters to English words (effectively using structured attention to perform soft word segmentation), and translating from segmented Japanese words to English words (which can be interpreted as doing \emph{phrase-based} neural machine translation). Japanese word segmentation is done using the KyTea toolkit \citep{Neubig2011}. The data comes from the Workshop on Asian Translation (WAT) \citep{wat2016}. We randomly pick $500$K sentences from the original training set (of $3$M sentences) where the Japanese sentence was at most $50$ characters and the English sentence was at most $50$ words. We apply the same length filter on the provided validation/test sets for evaluation. The vocabulary consists of all tokens that occurred at least $10$ times in the training corpus.
The segmentation attention layer is a two-state CRF where the unary potentials at the $j$-th decoder step are parameterized as \[ \theta_i(k)= \begin{cases} \hvec_i \Wvec \hvec_j',& k = 1 \\ 0, &k = 0 \end{cases} \] Here $[\hvec_1, \dots, \hvec_n]$ are the encoder hidden states and $\mathbf{h}_j'$ is the $j$-th decoder hidden state (i.e. the query vector). The pairwise potentials are parameterized linearly with $\mathbf{b}$, i.e., altogether \[ \theta_{i,i+1}(z_i, z_{i+1}) = \theta_i(z_i) + \theta_{i+1}(z_{i+1}) + \mathbf{b}_{z_i, z_{i+1}} \] Therefore the segmentation attention layer requires just $4$ additional parameters. Appendix~\ref{app:nmt} describes the full model architecture. We experiment with three attention configurations: (a) standard {\it simple} attention, i.e. $\cvec_j = \sum_{i=1}^n \softmax(\theta_i) \hvec_i $; (b) \textit{sigmoid} attention: multiple selection with Bernoulli random variables, i.e. $\cvec_j = \sum_{i=1}^n \sigmoid(\theta_i) \hvec_i$; (c) \textit{structured} attention, encoded with normalized CRF marginals, \begin{align*} \cvec_j = \sum_{i=1}^n \frac{p(z_i=1 \given x, q)}{\gamma} \hvec_i & & \gamma = \frac{1}{\lambda} \sum_{i=1}^n p(z_i =1 \given x, q) \end{align*} The normalization term $\gamma$ is not ideal, but we found it to be helpful for stable training.\footnote{With standard expectation (i.e. $\cvec_j = \sum_{i=1}^n p(z_i=1 \given x, q) \hvec_i$) we empirically observed the marginals to quickly saturate. We tried various strategies to overcome this, such as putting an $l_2$ penalty on the unary potentials and initializing with a pretrained sigmoid attention model, but simply normalizing the marginals proved to be the most effective. However, this changes the interpretation of the context vector as the expectation of an annotation function in this case.} Here $\lambda$ is a hyperparameter (we use $\lambda = 2$), and we further add an $l_2$ penalty of $0.005$ on the pairwise potentials $\bvec$. These values were found via grid search on the validation set. \begin{wraptable}{l}{0.43\textwidth} \small \begin{tabular}{c ccc} \toprule & Simple & Sigmoid & Structured \\ \midrule \textsc{Char} & $12.6$ & $13.1$ & $14.6$ \\ \textsc{Word} & $14.1$ & $13.8$ & $14.3$ \\ \bottomrule \end{tabular} \caption{\label{nmt-perf}\small Translation performance as measured by BLEU (higher is better) on character-to-word and word-to-word Japanese-English translation for the three different models.} \end{wraptable} \paragraph{Results} Results for the translation task on the test set are given in Table~\ref{nmt-perf}. Sigmoid attention outperforms simple (softmax) attention on the character-to-word task, potentially because it is able to learn many-to-one alignments. On the word-to-word task, the opposite is true, with simple attention outperforming sigmoid attention. Structured attention outperforms both models on both tasks, although improvements on the word-to-word task are modest and unlikely to be statistically significant. For further analysis, Figure~\ref{fig:vis3} shows a visualization of the different attention mechanisms on the character-to-word setup. The simple model generally focuses attention heavily on a single character. In contrast, the sigmoid and structured models are able to spread their attention distributions over contiguous subsequences. The structured attention model learns additional parameters (i.e. $\bvec$) to smooth out this type of attention.
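As a small, self-contained illustration of configuration (c), the Python sketch below builds the unary and pairwise potentials exactly as parameterized above and, for a toy sentence length, recovers the marginals by brute-force enumeration over all binary sequences before forming the normalized context vector; a real implementation would use forward-backward instead of enumeration, and all tensor names and sizes here are our own toy assumptions.
\begin{verbatim}
import itertools
import numpy as np

rng = np.random.RandomState(0)
n, l, lam = 5, 8, 2.0
H = rng.randn(n, l)          # encoder states h_1..h_n
h_dec = rng.randn(l)         # decoder state h'_j (the query)
W = rng.randn(l, l)          # bilinear map for the unary potentials
b = rng.randn(2, 2)          # pairwise potentials b_{z_i, z_{i+1}}

# theta_i(0) = 0, theta_i(1) = h_i W h'_j
unary = np.stack([np.zeros(n), H @ W @ h_dec], axis=1)

def score(z):                # sum_i theta_{i,i+1}(z_i, z_{i+1})
    return sum(unary[i, z[i]] + unary[i + 1, z[i + 1]] + b[z[i], z[i + 1]]
               for i in range(n - 1))

# brute-force marginals p(z_i = 1 | x, q) over all 2^n assignments (toy n only)
assignments = list(itertools.product([0, 1], repeat=n))
scores = np.array([score(z) for z in assignments])
probs = np.exp(scores - scores.max()); probs /= probs.sum()
p_on = np.zeros(n)
for z, p in zip(assignments, probs):
    p_on += p * np.array(z)

gamma = p_on.sum() / lam                 # normalizer gamma from the text
c_j = (p_on / gamma) @ H                 # structured attention context vector
\end{verbatim}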
\subsection{Question Answering} Our third experiment is on question answering (QA) with the linear-chain CRF attention layer for inference over multiple facts. We use the bAbI dataset \citep{Weston2015}, where the input is a set of sentences/facts paired with a question, and the answer is a single token. For many of the tasks the model has to attend to multiple supporting facts to arrive at the correct answer (see Figure~\ref{fig:vis4} for an example), and existing approaches use multiple `hops' to greedily attend to different facts. We experiment with employing structured attention to perform inference in a non-greedy way. As the ground truth supporting facts are given in the dataset, we are able to assess the model's inference accuracy. The baseline (simple) attention model is the End-To-End Memory Network \citep{Sukhbaatar2015} (MemN2N), which we briefly describe here. See Appendix~\ref{app:qa} for full model details. Let $\xvec_1, \dots, \xvec_n$ be the input embedding vectors for the $n$ sentences/facts and let $\mathbf{q}$ be the query embedding. In MemN2N, $z_k$ is the random variable for the sentence to select at the $k$-th inference step (i.e. $k$-th hop), and thus $z_k \in \{1, \dots, n\}$. The probability distribution over $z_k$ is given by $p(z_k = i \given x, q) = \softmax((\xvec_i^k)^\top\qvec^k)$, and the context vector is given by $\cvec^k = \sum_{i=1}^n p(z_k = i \given x, q) \mathbf{o}_i^k$, where $\xvec_i^k, \mathbf{o}_i^k$ are the input and output embedding for the $i$-th sentence at the $k$-th hop, respectively. The $k$-th context vector is used to modify the query $\qvec^{k+1} = \qvec^k + \cvec^k$, and this process repeats for $k = 1, \dots, K$ (for $k=1$ we have $\xvec_i^k = \xvec_i, \qvec^k = \qvec, \cvec^k = \mathbf{0}$). The $K$-th context and query vectors are used to obtain the final answer. The attention mechanism for a $K$-hop MemN2N network can therefore be interpreted as a greedy selection of a length-$K$ sequence of facts (i.e. $z_1, \dots, z_K$). For structured attention, we use an $n$-state, $K$-step linear-chain CRF.\footnote{Note that this differs from the segmentation attention for the neural machine translation experiments described above, which was a $K$-state (with $K =2$), $n$-step linear-chain CRF.} We experiment with two different settings: (a) a unary CRF model with node potentials $$\theta_k(i) = (\xvec_i^k)^\top \mathbf{q}^k$$ and (b) a binary CRF model with pairwise potentials $$\theta_{k,k+1}(i, j) = (\mathbf{x}_i^k)^\top\qvec^k + (\mathbf{x}_i^k)^\top \xvec_j^{k + 1} + (\mathbf{x}_j^{k + 1})^\top \mathbf{q}^{k + 1}$$ The binary CRF model is designed to test the model's ability to perform sequential reasoning. For both (a) and (b), a \emph{single} context vector is computed: $\mathbf{c} = \sum_{z_1,\ldots,z_K} p(z_1,\ldots,z_K \given x,q) f(x,z)$ (unlike MemN2N which computes $K$ context vectors). Evaluating $\mathbf{c}$ requires summing over all $n^K$ possible sequences of length $K$, which may not be practical for large values of $K$. However, if $f(x,z)$ factors over the components of $z$ (e.g. $f(x,z)= \sum_{k=1}^K f_k(x,z_k)$) then one can rewrite the above sum in terms of marginals: $\mathbf{c} = \sum_{k=1}^K \sum_{i=1}^n p(z_{k} = i \given x,q) f_{k}(x,z_{k})$. In our experiments, we use $f_k(x,z_k) = \mathbf{o}_{z_k}^k$. All three models are described in further detail in Appendix~\ref{app:qa}. \paragraph{Results} We use the version of the dataset with $1$K questions for each task. 
Since all models reduce to the same network for tasks with $1$ supporting fact, they are excluded from our experiments. The number of hops (i.e. $K$) is task-dependent, and the number of memories (i.e. $n$) is limited to be at most $25$ (note that many questions have fewer than $25$ facts---e.g. the example in Figure~\ref{fig:vis4} has $9$ facts). Due to high variance in model performance, we train $20$ models with different initializations for each task and report the test accuracy of the model that performed the best on a $10\%$ held-out validation set (as is typically done for bAbI tasks). Results of the three different models are shown in Table~\ref{tab:results}. For correct answer selection (Ans $\%$), we find that MemN2N and the Binary CRF model perform similarly while the Unary CRF model does worse, indicating the importance of including pairwise potentials. We also assess each model's ability to attend to the correct supporting facts in Table~\ref{tab:results} (Fact $\%$). Since ground truth supporting facts are provided for each query, we can check the sequence accuracy of supporting facts for each model (i.e. the rate of selecting the exact correct sequence of facts) by taking the highest probability sequence $\hat{z} = \argmax p(z_1, \dots, z_K \given x, q)$ from the model and checking against the ground truth. Overall, the Binary CRF is able to recover supporting facts better than MemN2N. This improvement is significant and can be up to two-fold, as seen for tasks $2$, $11$, $13$ \& $17$. However, we observed that on many tasks it is sufficient to select only the last (or first) fact correctly to predict the answer, and thus higher sequence selection accuracy does not necessarily imply better answer accuracy (and vice versa). For example, all three models get $100 \%$ answer accuracy on task $15$ but have different supporting fact accuracies. Finally, in Figure~\ref{fig:vis4} we visualize the output edge marginals produced by the Binary CRF model for a single question in task $16$. In this instance, the model is uncertain but ultimately able to select the right sequence of facts $5 \rightarrow 6 \rightarrow 8$. \subsection{Natural Language Inference} The final experiment looks at the task of natural language inference (NLI) with the syntactic attention layer. In NLI, the model is given two sentences (hypothesis/premise) and has to predict their relationship: entailment, contradiction, or neutral. For this task, we use the Stanford NLI dataset \citep{Bowman2015} and base our approach on the decomposable attention model of \cite{Parikh2016}. This model takes in the matrix of word embeddings as the input for each sentence and performs \textit{inter-sentence} attention to predict the answer. Appendix~\ref{app:nli} describes the full model. As in the transduction task, we focus on modifying the input representation to take into account soft parents via self-attention (i.e. \textit{intra-sentence} attention). In addition to the three baselines described for tree transduction (No Attention, Simple, Structured), we also explore two additional settings: (d) \textit{hard} pipeline parent selection, i.e.
$\hat{\mathbf{x}}_j = [\mathbf{x}_j; \mathbf{x}_{\head(j)}]$, where $\head(j)$ is the index of $x_j$'s parent\footnote{The parents are obtained from running the dependency parser of \cite{Andor2016}, available at \\ \url{https://github.com/tensorflow/models/tree/master/syntaxnet}}; (e) \textit{pretrained} structured attention: structured attention where the parsing layer is pretrained for one epoch on a parsed dataset (which was enough for convergence). \paragraph{Results} Results of our models are shown in Table~\ref{tab:main}. Simple attention improves upon the no-attention model, and this is consistent with improvements observed by \cite{Parikh2016} with their intra-sentence attention model. The pipelined model with hard parents also slightly improves upon the baseline. Structured attention outperforms both models, though surprisingly, pretraining the syntactic attention layer on the parse trees performs worse than training it from scratch---it is possible that the pretrained attention is too strict for this task. We also obtain the hard parse for an example sentence by running the Viterbi algorithm on the syntactic attention layer with the non-pretrained model: \begin{center} \includegraphics[scale=0.8]{tikz1.pdf} \end{center} Despite being trained without ever being exposed to an explicit parse tree, the syntactic attention layer learns an almost plausible dependency structure. In the above example, it is able to correctly identify the main verb \texttt{fighting}, but makes mistakes on determiners (e.g. the head of \texttt{The} should be \texttt{men}). We generally observed this pattern across sentences, possibly because the verb structure is more important for the inference task. \section{Conclusion} This work outlines structured attention networks, which incorporate graphical models to generalize simple attention, and describes the technical machinery and computational techniques for backpropagating through models of this form. We implement two classes of structured attention layers: a linear-chain CRF (for neural machine translation and question answering) and a more complicated first-order dependency parser (for tree transduction and natural language inference). Experiments show that this method can learn interesting structural properties and improve on top of standard models. Structured attention could also be a way of learning latent labelers or parsers through attention on other tasks. It should be noted that the additional complexity in computing the attention distribution increases run-time---for example, structured attention was approximately $5 \times$ slower to train than simple attention for the neural machine translation experiments, even though both attention layers have the same asymptotic run-time (i.e. $O(n)$). Embedding \textit{differentiable inference} (and more generally, \textit{differentiable algorithms}) into deep models is an exciting area of research. While we have focused on models that admit (tractable) exact inference, similar techniques can be used to embed approximate inference methods. Many optimization algorithms (e.g. gradient descent, LBFGS) are also differentiable \citep{domke2012generic,Maclaurin2015}, and have been used as output layers for structured prediction in energy-based models \citep{Belanger2016,wang2016nips}. Incorporating them as internal neural network layers is an interesting avenue for future work.
\subsubsection*{Acknowledgments} We thank Tao Lei, Ankur Parikh, Tim Vieira, Matt Gormley, Andr{\'e} Martins, Jason Eisner, Yoav Goldberg, and the anonymous reviewers for helpful comments, discussion, notes, and code. We additionally thank Yasumasa Miyamoto for verifying Japanese-English translations. \bibliographystyle{iclr2017_conference} \newpage \appendix \section*{APPENDICES} \section{Model Details}\label{app:model} \subsection{Syntactic Attention}\label{app:parsing} The syntactic attention layer (for tree transduction and natural language inference) is similar to the first-order graph-based dependency parser of \cite{Kipperwasser2016}. Given an input sentence $[x_1, \dots, x_n]$ and the corresponding word vectors $[\xvec_1, \dots, \xvec_n]$, we use a bidirectional LSTM to get the hidden states for each time step $i \in [1, \dots, n]$, \begin{align*} \hvec_i^{\text{fwd}} = \lstm(\xvec_i, \hvec_{i-1}^{\text{fwd}}) & & \hvec_i^{\text{bwd}} = \lstm(\xvec_i, \hvec_{i+1}^{\text{bwd}}) & & \hvec_i = [\hvec_i^{\text{fwd}} ; \hvec_i^{\text{bwd}}] \end{align*} where the forward and backward LSTMs have their own parameters. The score for $x_i \rightarrow x_j$ (i.e. $x_i$ is the parent of $x_j$) is given by an MLP \begin{equation*} \theta_{ij} = \tanh( \svec^\top\tanh(\Wvec_1 \hvec_i + \Wvec_2 \hvec_j + \bvec)) \end{equation*} These scores are used as input to the inside-outside algorithm (see Appendix~\ref{app:io}) to obtain the probability of each word's parent $p(z_{ij} = 1 \given x)$, which is used to obtain the soft-parent $\cvec_j$ for each word $x_j$. In the non-structured case we simply have $p(z_{ij} = 1 \given x) = \softmax(\theta_{ij})$. \subsection{Tree Transduction}\label{app:tree} Let $[x_1, \dots, x_n],[y_1, \dots, y_m]$ be the sequence of source/target symbols, with the associated embeddings $[\xvec_1, \dots, \xvec_n], [\yvec_1, \dots, \yvec_m]$ with $\xvec_i, \yvec_j \in \reals^l$. In the simplest baseline model we take the source representation to be the matrix of the symbol embeddings. The decoder is a one-layer LSTM which produces the hidden states $\hvec_j' = \lstm(\yvec_j, \hvec_{j-1}')$, with $\hvec_j' \in \reals^l$. The hidden states are combined with the input representation via a bilinear map $\Wvec \in \reals^{l \times l}$ to produce the attention distribution used to obtain the vector $\mvec_i$, which is combined with the decoder hidden state as follows, \begin{align*} \alpha_i = \frac{\exp \xvec_i \Wvec \hvec_j'}{\sum_{k=1}^n \exp \xvec_k \Wvec \hvec_j'} & & \mvec_i = \sum_{i=1}^n \alpha_i \xvec_i & & \hat{\hvec}_j = \tanh (\Uvec [\mvec_i ; \hvec_j'] ) \end{align*} Here we have $\Wvec \in \reals^{l \times l}$ and $\Uvec \in \reals^{2l \times l}$. Finally, $\hat{\hvec}_j$ is used to obtain a distribution over the next symbol $y_{j+1}$, \begin{equation*} p(y_{j+1} \given x_1, \dots, x_n, y_1, \dots, y_j) = \softmax(\Vvec \hat{\hvec}_j + \bvec) \end{equation*} For the structured/simple models, the $i$-th source representations are, respectively, \begin{align*} \hat{\xvec}_i = \left[\xvec_i ; \sum_{k=1}^n p(z_{ki} = 1 \given x ) \, \xvec_k\right] & &\hat{\xvec}_i = \left[\xvec_i ; \sum_{k=1}^n \softmax (\theta_{ki})\, \xvec_k\right] \end{align*} where $\theta_{ij}$ comes from the bidirectional LSTM described in~\ref{app:parsing}.
Then $\alpha_i$ and $\mvec_i$ change accordingly, \begin{align*} \alpha_i = \frac{\exp \hat{\xvec}_i \Wvec \hvec_j'}{\sum_{k=1}^n \exp \hat{\xvec}_k \Wvec \hvec_j'} & & \mvec_i = \sum_{i=1}^n \alpha_i \hat{\xvec}_i \end{align*} Note that in this case we have $\Wvec \in \reals^{2l \times l}$ and $\Uvec \in \reals^{3l \times l}$. We use $l = 50$ in all our experiments. The forward/backward LSTMs for the parsing layer are also $50$-dimensional. Symbol embeddings are shared between the encoder and the parsing LSTMs. Additional training details include: batch size of $20$; training for $13$ epochs with a learning rate of $1.0$, which starts decaying by half after epoch $9$ (or the epoch at which performance does not improve on validation, whichever comes first); parameter initialization over a uniform distribution $U[-0.1, 0.1]$; gradient normalization at $1$ (i.e. renormalize the gradients to have norm $1$ if the $l_2$ norm exceeds $1$). Decoding is done with beam search (beam size $ = 5$). \subsection{Neural Machine Translation}\label{app:nmt} The baseline NMT system is from \cite{Luong2015}. Let $[x_1, \dots, x_n],[y_1, \dots, y_m]$ be the source/target sentence, with the associated word embeddings $[\xvec_1, \dots, \xvec_n], [\yvec_1, \dots, \yvec_m]$. The encoder is an LSTM over the source sentence, which produces the hidden states $[\hvec_1, \dots, \hvec_n]$ where \begin{equation*} \hvec_i = \lstm(\xvec_i, \hvec_{i-1}) \end{equation*} and $\hvec_i \in \reals^l$. The decoder is another LSTM which produces the hidden states $\hvec_j' \in \reals^l$. In the simple attention case with categorical attention, the hidden states are combined with the input representation via a bilinear map $\Wvec \in \reals^{l \times l}$ to produce the attention distribution, which is used to obtain the context vector at the $j$-th time step, \begin{align*} \theta_i = \hvec_i \Wvec \hvec_j' & & \cvec_j = \sum_{i=1}^n \softmax(\theta_i)\hvec_i \end{align*} The Bernoulli attention network has the same $\theta_i$ but instead uses a $\sigmoid$ to obtain the weights of the linear combination, i.e., \begin{align*} \cvec_j = \sum_{i=1}^n \sigmoid(\theta_i) \hvec_i \end{align*} Finally, the structured attention model uses a bilinear map to parameterize one of the unary potentials \[ \theta_i(k)= \begin{cases} \hvec_i \Wvec \hvec_j',& k = 1 \\ 0, &k = 0 \end{cases} \] \begin{align*} \theta_{i,i+1}(z_i, z_{i+1}) &= \theta_i(z_i) + \theta_{i+1}(z_{i+1}) + \mathbf{b}_{z_i, z_{i+1}} \end{align*} where $\bvec$ are the pairwise potentials. These potentials are used as inputs to the forward-backward algorithm to obtain the marginals $p(z_i = 1 \given x, q)$, which are further normalized to obtain the context vector \begin{align*} \cvec_j = \sum_{i=1}^n \frac{p(z_i=1 \given x, q)}{\gamma} \hvec_i & & \gamma = \frac{1}{\lambda} \sum_{i=1}^n p(z_i =1 \given x, q) \end{align*} We use $\lambda = 2$ and also add an $l_2$ penalty of $0.005$ on the pairwise potentials $\bvec$. The context vector is then combined with the decoder hidden state \begin{align*} \hat{\hvec}_j = \tanh (\Uvec[\cvec_j ; \hvec_j']) \end{align*} and $\hat{\hvec}_j$ is used to obtain the distribution over the next target word $y_{j+1}$ \begin{align*} p(y_{j+1} \given x_1, \dots, x_n, y_1, \dots, y_j) = \softmax(\Vvec \hat{\hvec}_j + \bvec) \end{align*} The encoder/decoder LSTMs have $2$ layers and $500$ hidden units (i.e. $l = 500$).
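For concreteness, the following NumPy sketch (our own illustration, not the released implementation; all function and variable names are ours) computes the three context vectors above from the encoder states, a decoder state and the bilinear map, taking the CRF marginals as given:
\begin{verbatim}
# Illustrative NumPy sketch of the three attention variants above.
# This is not the released implementation; crf_marginals are assumed to
# come from a separate forward-backward routine.
import numpy as np

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def contexts(H, h_dec, W, crf_marginals=None, lam=2.0):
    """H: (n, l) encoder states; h_dec: (l,) decoder state; W: (l, l)."""
    theta = H @ W @ h_dec                   # bilinear scores theta_i
    c_simple = softmax(theta) @ H           # categorical attention
    c_sigmoid = sigmoid(theta) @ H          # Bernoulli attention
    c_struct = None
    if crf_marginals is not None:           # p(z_i = 1 | x, q) from the CRF
        gamma = crf_marginals.sum() / lam   # normalization used in the text
        c_struct = (crf_marginals / gamma) @ H
    return c_simple, c_sigmoid, c_struct
\end{verbatim}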
Additional training details include: batch size of $128$; training for $30$ epochs with a learning rate of $1.0$, which starts decaying by half after the first epoch at which performance does not improve on validation; dropout with probability $0.3$; parameter initialization over a uniform distribution $U[-0.1, 0.1]$; gradient normalization at $1$. We generate target translations with beam search (beam size $= 5$), and evaluate with \texttt{multi-bleu.perl} from Moses.\footnote{ \url{https://github.com/moses-smt/mosesdecoder/blob/master/scripts/generic/multi-bleu.perl}} \subsection{Question Answering}\label{app:qa} Our baseline model (MemN2N) is implemented following the same architecture as described in \cite{Sukhbaatar2015}. In particular, let $x = [x_1, \dots, x_n]$ represent the sequence of $n$ facts with the associated embeddings $[\mathbf{x}_1, \dots, \xvec_n]$ and let $\qvec$ be the embedding of the query $q$. The embeddings are obtained by simply adding the word embeddings in each sentence or query. The full model with $K$ hops is as follows: \begin{align*} &p(z_k = i \given x, q) = \softmax( (\mathbf{x}_i^k)^\top \mathbf{q}^k ) \\ &\mathbf{c}^k = \sum_{i=1}^n p(z_k = i \given x, q) \mathbf{o}_i^k \\ &\mathbf{q}^{k + 1} = \mathbf{q}^k + \mathbf{c}^k \\ &p(y \given x, q) = \softmax(\Wvec (\mathbf{q}^K + \mathbf{c}^K)) \end{align*} where $p(y \given x, q)$ is the distribution over the answer vocabulary. At each layer, $\{\mathbf{x}_i^k\}$ and $\{\mathbf{o}_i^k\}$ are computed using embedding matrices $\mathbf{X}^k$ and $\mathbf{O}^k$. We use the \emph{adjacent weight tying scheme} from the paper so that $\mathbf{X}^{k+1} = \mathbf{O}^k, \mathbf{W}^T = \mathbf{O}^K$. $\mathbf{X}^1$ is also used to compute the query embedding at the first hop. For $k=1$ we have $\xvec_i^k = \xvec_i, \qvec^k = \qvec, \cvec^k = \mathbf{0}$. For both the Unary and the Binary CRF models, the same input fact and query representations are computed (i.e. same embedding matrices with weight tying scheme). For the unary model, the potentials are parameterized as \[ \theta_{k}(i) = (\xvec_i^k)^\top \mathbf{q}^k \] and for the binary model we compute pairwise potentials as \[ \theta_{k,k+1}(i, j) = (\mathbf{x}_i^k)^\top\qvec^k + (\mathbf{x}_i^k)^\top \xvec_j^{k + 1} + (\mathbf{x}_j^{k + 1})^\top \mathbf{q}^{k + 1} \] The $\qvec^k$'s are updated simply with a linear mapping, i.e. \[ \mathbf{q}^{k+1} = \mathbf{Q} \mathbf{q}^k \] In the case of the Binary CRF, to discourage the model from selecting the same fact again we additionally set $\theta_{k,k+1}(i,i) = -\infty$ for all $i \in \{1, \dots, n\}$. Given these potentials, we compute the marginals $p(z_k = i, z_{k+1} = j \given x, q)$ using the forward-backward algorithm, which is then used to compute the context vector: \begin{align*} \mathbf{c} = \sum_{z_1,\ldots,z_K} p(z_1,\ldots,z_K \given x,q) f(x,z) & & f(x,z) = \sum_{k=1}^K f_k(x, z_k) & & f_k(x,z_k) = \mathbf{o}_{z_k}^k \end{align*} Note that if $f(x,z)$ factors over the components of $z$ (as is the case above) then computing $\cvec$ only requires evaluating the marginals $p(z_k \given x,q)$. Finally, given the context vector the prediction is made in a similar fashion to MemN2N: \begin{align*} p(y \given x, q) = \softmax(\Wvec (\mathbf{q}^K + \mathbf{c})) \end{align*} Other training setup is similar to \cite{Sukhbaatar2015}: we use stochastic gradient descent with learning rate $0.01$, which is divided by $2$ every $25$ epochs until $100$ epochs are reached. 
Capacity of the memory is limited to $25$ sentences. The embedding vectors are of size $20$ and gradients are renormalized if the norm exceeds $40$. All models implement \emph{position encoding}, \emph{temporal encoding}, and \emph{linear start} from the original paper. For linear start, the $\softmax(\cdot)$ function in the attention layer is removed at the beginning and re-inserted after $20$ epochs for MemN2N, while for the CRF models we apply a $\log(\softmax(\cdot))$ layer on the $\qvec^k$ after $20$ epochs. Each model is trained separately for each task. \subsection{Natural Language Inference}\label{app:nli} Our baseline model/setup is essentially the same as that of \cite{Parikh2016}. Let $[x_1, \dots, x_n], [y_1, \dots, y_m]$ be the premise/hypothesis, with the corresponding input representations $[\xvec_1, \dots, \xvec_n], [\yvec_1, \dots, \yvec_m]$. The input representations are obtained by a linear transformation of the $300$-dimensional pretrained GloVe embeddings \citep{Pennington2014} after normalizing the GloVe embeddings to have unit norm.\footnote{We use the GloVe embeddings pretrained over the $840$ billion word Common Crawl, publicly available at \url{http://nlp.stanford.edu/projects/glove/}} The pretrained embeddings remain fixed but the linear layer (which is also $300$-dimensional) is trained. Words not in the pretrained vocabulary are hashed to one of $100$ Gaussian embeddings with mean $0$ and standard deviation $1$. We concatenate each input representation with a convex combination of the other sentence's input representations (essentially performing \textit{inter-sentence} attention), where the weights are determined through a dot product followed by a softmax, \begin{align*} e_{ij} = f(\xvec_i)^\top f(\yvec_j) & & \bar{\xvec}_{i} = \left[\xvec_i ; \sum_{j=1}^m \frac{\exp e_{ij}}{\sum_{k=1}^m \exp e_{ik}} \yvec_{j}\right] & & \bar{\yvec}_{j} = \left[\yvec_j ; \sum_{i=1}^n \frac{\exp e_{ij}}{\sum_{k=1}^n \exp e_{kj}} \xvec_{i}\right] \end{align*} Here $f(\cdot)$ is an MLP. The new representations are fed through another MLP $g(\cdot)$, summed, combined with the final MLP $h(\cdot)$ and fed through a softmax layer to obtain a distribution over the labels $l$, \begin{align*} \bar{\xvec} &= \sum_{i=1}^n g(\bar{\xvec}_{i}) \hspace{20mm} \bar{\yvec} = \sum_{j=1}^m g(\bar{\yvec}_{j}) \\ p(l \given x_1, &\dots, x_n, y_1, \dots, y_m)= \softmax(\Vvec h([\bar{\xvec}; \bar{\yvec}]) + \bvec) \end{align*} All the MLPs have $2$-layers, $300$ $\relu$ units, and dropout probability of $0.2$. For structured/simple models, we first employ the bidirectional parsing LSTM (see \ref{app:parsing}) to obtain the scores $\theta_{ij}$. In the structured case each word representation is simply concatenated with its soft-parent \begin{align*} \hat{\xvec}_i = \left[\xvec_i ; \sum_{k=1}^n p(z_{ki} = 1 \given x ) \xvec_k\right] % & & \hat{\yvec}_j = [\yvec_j ; \sum_{l=1}^m p(z'_{lj} = 1 \given y) \yvec_l] \end{align*} and $\hat{\xvec}_i$ (and analogously $\hat{\yvec}_j$) is used as the input to the above model. In the simple case (which closely corresponds to the \emph{intra-sentence} attention model of \cite{Parikh2016}), we have \begin{align*} \hat{\xvec}_i = \left[\xvec_i ; \sum_{k=1}^n \frac{\exp \theta_{ki}}{\sum_{l=1}^n \exp \theta_{li}} \xvec_k \right] \end{align*} The word embeddings for the parsing LSTMs are also initialized with GloVe, and the parsing layer is shared between the two sentences. The forward/backward LSTMs for the parsing layer are $100$-dimensional. 
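As an illustration of the structured and simple input representations $\hat{\xvec}_i$ above, the following NumPy sketch (ours; the parent marginals are assumed to come from the inside-outside routine of Appendix~\ref{app:io}) builds the concatenated word representations:
\begin{verbatim}
# Toy NumPy sketch of the intra-sentence representations above (not the
# original implementation).  P[k, i] = p(z_{ki} = 1 | x) is the probability
# that word k heads word i, as produced by inside-outside.
import numpy as np

def structured_concat(X, P):
    """X: (n, d) word representations; P: (n, n) parent marginals.
    Returns (n, 2d) rows [x_i ; sum_k P[k, i] x_k]."""
    soft_parents = P.T @ X                  # row i is sum_k P[k, i] * x_k
    return np.concatenate([X, soft_parents], axis=1)

def simple_concat(X, theta):
    """Simple variant: column-wise softmax over the raw scores theta[k, i]."""
    e = np.exp(theta - theta.max(axis=0, keepdims=True))
    A = e / e.sum(axis=0, keepdims=True)    # A[k, i] = softmax_k(theta_{ki})
    return np.concatenate([X, A.T @ X], axis=1)
\end{verbatim}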
Additional training details include: batch size of $32$; training for $100$ epochs with Adagrad \citep{Duchi2011} where the global learning rate is $0.05$ and the sum of squared gradients is initialized to $0.1$; parameter initialization over a Gaussian distribution with mean $0$ and standard deviation $0.01$; gradient normalization at $5$. In the pretrained scenario, pretraining is done with Adam \citep{Kingma2015} with learning rate equal to $0.01$, and $\beta_1 = 0.9$, $\beta_2 = 0.999$. \section{Forward/Backward through the Inside-Outside Algorithm}\label{app:io} Figure~\ref{fig:io-fprop} shows the procedure for obtaining the parsing marginals from the input potentials. This corresponds to running the inside-outside version of Eisner's algorithm \citep{Eisner1996}. The intermediate data structures used during the dynamic programming algorithm are the (log) inside tables $\alpha$, and the (log) outside tables $\beta$. Both $\alpha, \beta$ are of size $n \times n \times 2 \times 2$, where $n$ is the sentence length. The first two dimensions encode the start/end index of the span (i.e. subtree). The third dimension encodes whether the root of the subtree is the left ($L$) or right ($R$) index of the span. The fourth dimension indicates if the span is complete ($1$) or incomplete ($0$). We can calculate the marginal distribution of each word's parent (for all words) in $O(n^3)$ time using this algorithm. The backward pass through the inside-outside algorithm is slightly more involved, but still takes $O(n^3)$ time. Figure~\ref{fig:io-bprop} illustrates the backward procedure, which receives the gradient of the loss $\mcL$ with respect to the marginals, $\nabla^\mcL_p$, and computes the gradient of the loss with respect to the potentials $\nabla^\mcL_\theta$. The computations must be performed in the signed log-space semifield to handle the log of negative values. See Section~\ref{sec:e2e} and Table~\ref{tab:dlog} for more details. \end{document}
Structured Attention Networks
1702.00887
Table 3: Translation performance as measured by BLEU (higher is better) on character-to-word and word-to-word Japanese-English translation for the three different models.
[ "[EMPTY]", "Simple", "Sigmoid", "Structured" ]
[ [ "Char", "12.6", "13.1", "14.6" ], [ "Word", "14.1", "13.8", "14.3" ] ]
Sigmoid attention outperforms simple (softmax) attention on the character-to-word task, potentially because it is able to learn many-to-one alignments. On the word-to-word task, the opposite is true, with simple attention outperforming sigmoid attention. Structured attention outperforms both models on both tasks, although improvements on the word-to-word task are modest and unlikely to be statistically significant.
\documentclass{article} % For LaTeX2e \usepackage[font=small,labelfont=bf]{caption} \usepackage[noend]{algpseudocode} \usetikzlibrary{matrix,arrows,backgrounds,calc,patterns,positioning,fit,shapes} \usepackage[titletoc,title]{appendix} \DeclareMathOperator*{\argmax}{argmax} \DeclareMathOperator*{\argmin}{argmin} \DeclareMathOperator{\softmax}{softmax} \DeclareMathOperator{\logadd}{logadd} \DeclareMathOperator{\sign}{sign} \DeclareMathOperator{\signexp}{signexp} \DeclareMathOperator{\sigmoid}{sigmoid} \DeclareMathOperator{\softparent}{soft-parent} \DeclareMathOperator{\parent}{parent} \DeclareMathOperator{\head}{head} \DeclareMathOperator{\softhead}{soft-head} \DeclareMathOperator{\simf}{sim} \DeclareMathOperator{\relu}{ReLU} \DeclareMathOperator{\lstm}{LSTM} \DeclareMathOperator{\rnn}{RNN} \DeclareMathOperator{\mlp}{MLP} \newcommand{\oplusgets}{\gets_{\oplus}} \newcommand{\pgrad}{\nabla_{p}^\mathcal{L}} \newcommand{\alphagrad}{\nabla_{\alpha}^\mathcal{L}} \newcommand{\betagrad}{\nabla_{\beta}^\mathcal{L}} \newcommand{\thetagrad}{\log \nabla_{\theta}^\mathcal{L}} \newcommand{\xvec}{\mathbf{x}} \newcommand{\yvec}{\mathbf{y}} \newcommand{\cvec}{\mathbf{c}} \newcommand{\mvec}{\mathbf{m}} \newcommand{\zvec}{\mathbf{z}} \newcommand{\qvec}{\mathbf{q}} \newcommand{\svec}{\mathbf{s}} \newcommand{\tvec}{\mathbf{t}} \newcommand{\mcL}{\mathcal{L}} \newcommand{\mcT}{\mathcal{T}} \newcommand{\mcY}{\mathcal{Y}} \newcommand{\mcV}{\mathcal{V}} \newcommand{\mcC}{\mathcal{C}} \newcommand{\mcA}{\mathcal{A}} \newcommand{\mcZ}{\mathcal{Z}} \newcommand{\mcX}{\mathcal{X}} \newcommand{\context}{\mathbf{y}_{\mathrm{c}}} \newcommand{\embcontext}{\mathbf{\tilde{y}}_{\mathrm{c}}} \newcommand{\inpcontext}{\mathbf{\tilde{x}}} \newcommand{\start}{\mathbf{\tilde{y}}_{\mathrm{c0}}} \newcommand{\End}{\mathrm{\texttt{</s>}}} \newcommand{\Uvec}{\mathbf{U}} \newcommand{\Evec}{\mathbf{E}} \newcommand{\E}{\mathbb{E}} \newcommand{\Gvec}{\mathbf{G}} \newcommand{\Fvec}{\mathbf{F}} \newcommand{\Pvec}{\mathbf{P}} \newcommand{\pvec}{\mathbf{p}} \newcommand{\Vvec}{\mathbf{V}} \newcommand{\Wvec}{\mathbf{W}} \newcommand{\hvec}{\mathbf{h}} \newcommand{\wvec}{\mathbf{w}} \newcommand{\uvec}{\mathbf{u}} \newcommand{\vvec}{\mathbf{v}} \newcommand{\bvec}{\mathbf{b}} \newcommand{\reals}{\mathbb{R}} \newcommand{\ind}{\mathbbm{1}} \newcommand\given{\,|\,} \title{Structured Attention Networks} \author{Yoon Kim\thanks{Equal contribution.} \hspace{5mm} Carl Denton$^*$ \hspace{5mm} Luong Hoang \hspace{5mm} Alexander M. Rush \\ \texttt{\small \{yoonkim@seas,carldenton@college,lhoang@g,srush@seas\}.harvard.edu}\\ School of Engineering and Applied Sciences\\ Harvard University\\ Cambridge, MA 02138, USA \\ } \newcommand{\fix}{\marginpar{FIX}} \newcommand{\new}{\marginpar{NEW}} \iclrfinalcopy % Uncomment for camera-ready version \begin{document} \maketitle \begin{abstract} Attention networks have proven to be an effective approach for embedding categorical inference within a deep neural network. However, for many tasks we may want to model richer structural dependencies without abandoning end-to-end training. In this work, we experiment with incorporating richer structural distributions, encoded using graphical models, within deep networks. We show that these structured attention networks are simple extensions of the basic attention procedure, and that they allow for extending attention beyond the standard soft-selection approach, such as attending to partial segmentations or to subtrees. 
We experiment with two different classes of structured attention networks: a linear-chain conditional random field and a graph-based parsing model, and describe how these models can be practically implemented as neural network layers. Experiments show that this approach is effective for incorporating structural biases, and structured attention networks outperform baseline attention models on a variety of synthetic and real tasks: tree transduction, neural machine translation, question answering, and natural language inference. We further find that models trained in this way learn interesting unsupervised hidden representations that generalize simple attention. \end{abstract} \section{Introduction} Attention networks are now a standard part of the deep learning toolkit, contributing to impressive results in neural machine translation \citep{Bahdanau2015,Luong2015}, image captioning \citep{Xu2015}, speech recognition \citep{Chorowski2015,Chan2015}, question answering \citep{Hermann2015,Sukhbaatar2015}, and algorithm-learning \citep{Graves2014,Vinyals2015c}, among many other applications (see \cite{Cho2015} for a comprehensive review). This approach alleviates the bottleneck of compressing a source into a fixed-dimensional vector by equipping a model with variable-length memory \citep{Weston2014,Graves2014,Graves2016}, thereby providing random access into the source as needed. Attention is implemented as a hidden layer which computes a categorical distribution (or hierarchy of categorical distributions) to make a soft-selection over source elements. Noting the empirical effectiveness of attention networks, we also observe that the standard attention-based architecture does not directly model any \textit{structural dependencies} that may exist among the source elements, and instead relies completely on the hidden layers of the network. While one might argue that these structural dependencies can be learned implicitly by a deep model with enough data, in practice, it may be useful to provide a structural bias. Modeling structural dependencies at the final, \textit{output} layer has been shown to be important in many deep learning applications, most notably in seminal work on graph transformers \citep{LeCun1998}, key work on NLP \citep{Collobert2011}, and in many other areas \citep[\textit{inter alia}]{Peng2009,Do2010,Jaderberg2014b,Chen2015b,Durrett2015,Lample2016}. In this work, we consider applications which may require structural dependencies at the attention layer, and develop \textit{internal} structured layers for modeling these directly. This approach generalizes categorical soft-selection attention layers by specifying possible structural dependencies in a soft manner. Key applications will be the development of an attention function that segments the source input into subsequences and one that takes into account the latent recursive structure (i.e. parse tree) of a source sentence. Our approach views the attention mechanism as a graphical model over a set of latent variables. The standard attention network can be seen as an expectation of an annotation function with respect to a single latent variable whose categorical distribution is parameterized to be a function of the source. In the general case we can specify a graphical model over multiple latent variables whose edges encode the desired structure. Computing forward attention requires performing inference to obtain the expectation of the annotation function, i.e. the \textit{context vector}. 
This expectation is computed over an exponentially-sized set of structures (through the machinery of graphical models/structured prediction), hence the name \textit{structured attention} network. Notably, each step of this process (including inference) is differentiable, so the model can be trained end-to-end without having to resort to deep policy gradient methods \citep{schulman2015gradient}. The differentiability of inference algorithms over graphical models has previously been noted by various researchers \citep{Li2009,Domke2011,Stoyanov2011,Stoyanov2012,Gormley2015}, primarily outside the area of deep learning. For example, \citet{Gormley2015} treat an entire graphical model as a differentiable circuit and backpropagate risk through variational inference (loopy belief propagation) for minimum risk training of dependency parsers. Our contribution is to combine these ideas to produce structured \textit{internal} attention layers within deep networks, noting that these approaches allow us to use the resulting marginals to create new features, as long as we do so in a differentiable way. We focus on two classes of structured attention: linear-chain conditional random fields (CRFs) \citep{Lafferty2001} and first-order graph-based dependency parsers \citep{Eisner1996}. The initial work of \cite{Bahdanau2015} was particularly interesting in the context of machine translation, as the model was able to implicitly learn an \textit{alignment model as a hidden layer}, effectively embedding inference into a neural network. In a similar vein, under our framework the model has the capacity to learn a \textit{segmenter as a hidden layer} or a \textit{parser as a hidden layer}, without ever having to see a segmented sentence or a parse tree. Our experiments apply this approach to a difficult synthetic reordering task, as well as to machine translation, question answering, and natural language inference. We find that models trained with structured attention outperform standard attention models. Analysis of learned representations further reveals that interesting structures emerge as an internal layer of the model. All code is available at \url{http://github.com/harvardnlp/struct-attn}. \section{Background: Attention Networks} A standard neural network consists of a series of non-linear transformation layers, where each layer produces a fixed-dimensional hidden representation. For tasks with large input spaces, this paradigm makes it hard to control the interaction between components. For example, in machine translation, the source consists of an entire sentence, and the output is a prediction for each word in the translated sentence. Utilizing a standard network leads to an information bottleneck, where one hidden layer must encode the entire source sentence. Attention provides an alternative approach.\footnote{Another line of work involves marginalizing over latent variables (e.g. latent alignments) for sequence-to-sequence transduction \citep{Kong2016,Lu2016,Yu2016,Yu2017}.} An attention network maintains a set of hidden representations that scale with the size of the source. The model uses an internal inference step to perform a soft-selection over these representations. This method allows the model to maintain a variable-length memory and has been shown to be crucially important for scaling systems for many tasks.
Formally, let $x = [x_1, \dots, x_n]$ represent a sequence of inputs, let $q$ be a query, and let $z$ be a categorical latent variable with sample space $\{1, \ldots, n\}$ that encodes the desired selection among these inputs. Our aim is to produce a \textit{context} $c$ based on the sequence and the query. To do so, we assume access to an \textit{attention distribution} $z \sim p(z \given x, q)$, where we condition $p$ on the inputs $x$ and a query $q$. The \textit{context} over a sequence is defined as an expectation, $c = \E_{z \sim p(z \given x, q)} [f(x, z)]$ where $f(x, z)$ is an \textit{annotation function}. Attention of this form can be applied over any type of input; however, we will primarily be concerned with ``deep'' networks, where both the annotation function and attention distribution are parameterized with neural networks, and the context produced is a vector fed to a downstream network. For example, consider the case of attention-based neural machine translation \citep{Bahdanau2015}. Here the sequence of inputs $[\mathbf{x}_1, \ldots, \mathbf{x}_n]$ are the hidden states of a recurrent neural network (RNN), running over the words in the source sentence, $\mathbf{q}$ is the RNN hidden state of the target decoder (i.e. vector representation of the query $q$), and $z$ represents the source position to be attended to for translation. The attention distribution $p$ is simply $p(z = i \given x, q) = \softmax(\theta_i)$ where $\theta \in \reals^n$ is a parameterized potential typically based on a neural network, e.g. $\theta_i = \mlp([\mathbf{x}_i; \qvec])$. The annotation function is defined to simply return the selected hidden state, $f(\mathbf{x}, z) = \mathbf{x}_z$. The context vector can then be computed using a simple sum, \begin{equation}\label{vanilla-attn} \mathbf{c} = \E_{z \sim p(z \given x, q)} [f( x, z)] = \sum_{i=1}^n p(z = i \given x, q) \mathbf{x}_i \end{equation} Other tasks such as question answering use attention in a similar manner, for instance by replacing source $[x_1, \ldots, x_n]$ with a set of potential facts and $q$ with a representation of the question. In summary, we interpret the attention mechanism as taking the expectation of an annotation function $f(x,z)$ with respect to a latent variable $z \sim p$, where $p$ is parameterized to be a function of $x$ and $q$. \section{Structured Attention} Attention networks simulate selection from a set using a soft model. In this work we consider generalizing selection to types of attention, such as selecting chunks, segmenting inputs, or even attending to latent subtrees. One interpretation of this attention is as using soft-selection that considers all possible structures over the input, of which there may be exponentially many possibilities. Of course, this expectation can no longer be computed using a simple sum, and we need to incorporate the machinery of inference directly into our neural network. Define a structured attention model as being an attention model where $z$ is now a vector of discrete latent variables $[z_1, \ldots, z_m]$ and the attention distribution $p(z \given x, q)$ is defined as a \textit{conditional random field} (CRF), specifying the independence structure of the $z$ variables. Formally, we assume an undirected graph structure with $m$ vertices. The CRF is parameterized with clique (log-)potentials $\theta_C(z_{C}) \in \reals$, where $z_C$ indicates the subset of $z$ given by clique $C$.
Under this definition, the attention probability is defined as, $p(z \given x, q; \theta) = \softmax(\sum_C \theta_C(z_C))$, where for symmetry we use $\softmax$ in a general sense, i.e. $\softmax(g(z)) = \frac{1}{Z} \exp(g(z))$ where $Z = \sum_{z'} \exp(g(z'))$ is the implied partition function. In practice we use a neural CRF, where $\theta$ comes from a deep model over $x, q$. In structured attention, we also assume that the annotation function $f$ factors (at least) into clique annotation functions $f(x, z) = \sum_C f_C(x, z_C)$. Under standard conditions on the conditional independence structure, inference techniques from graphical models can be used to compute the forward-pass expectations and the context: \[c = \E_{z \sim p(z \given x, q)} [f(x, z)] = \sum_{C} \E_{z \sim p(z_C \given x, q)} [f_C(x, z_C)]\] \subsection{Example 1: Subsequence Selection} \label{sec:subselect} Suppose instead of soft-selecting a single input, we wanted to explicitly model the selection of contiguous subsequences. We could naively apply categorical attention over all subsequences, or hope the model learns a multi-modal distribution to combine neighboring words. Structured attention provides an alternate approach. Concretely, let $m =n$, define $z$ to be a random vector $z = [z_1, \dots, z_n]$ with $z_i \in \{0, 1\}$, and define our annotation function to be, $f(x,z) = \sum_{i=1}^n f_{i} (x,z_{i})$ where $f_{i} (x,z_i) = \ind \{ z_i = 1\} \xvec_i$. The explicit expectation is then, \begin{equation}\label{struct-attn} \E_{z_1, \dots, z_n }[f(x,z)] = \sum_{i=1}^n p(z_i = 1 \given x, q) \xvec_i \end{equation} Equation (\ref{struct-attn}) is similar to equation (\ref{vanilla-attn})---both are a linear combination of the input representations where the scalar is between $[0,1]$ and represents how much attention should be focused on each input. However, (2) is fundamentally different in two ways: (i) it allows for multiple inputs (or no inputs) to be selected for a given query; (ii) we can incorporate structural dependencies across the $z_i$'s. For instance, we can model the distribution over $z$ with a linear-chain CRF with pairwise edges, \begin{align}\label{linear-chain} p(z_1, \dots, z_n \given x, q) = \softmax \left( \sum_{i=1}^{n-1} \theta_{i,i+1}(z_i, z_{i+1}) \right) \end{align} where $\theta_{k,l}$ is the pairwise potential for $z_i = k$ and $z_{i+1} = l$. This model is shown in Figure~\ref{fig:seq}c. Compare this model to the standard attention in Figure~\ref{fig:seq}a, or to a simple Bernoulli (sigmoid) selection method, $p(z_i = 1 \given x, q) = \sigmoid(\theta_{i}) $, shown in Figure~\ref{fig:seq}b. All three of these methods can use potentials from the same neural network or RNN that takes $x$ and $q$ as inputs. In the case of the linear-chain CRF in~(\ref{linear-chain}), the marginal distribution $p(z_i = 1 \given x)$ can be calculated efficiently in linear-time for all $i$ using message-passing, i.e. the forward-backward algorithm. These marginals allow us to calculate (\ref{struct-attn}), and in doing so we implicitly sum over an exponentially-sized set of structures (i.e. all binary sequences of length $n$) through dynamic programming. We refer to this type of attention layer as a \emph{segmentation attention} layer. Note that the forward-backward algorithm is being used as parameterized \textit{pooling} (as opposed to output computation), and can be thought of as generalizing the standard attention softmax. 
Crucially, this generalization from vector softmax to forward-backward is just a series of differentiable steps,\footnote{As are other dynamic programming algorithms for inference in graphical models, such as (loopy and non-loopy) belief propagation.} and we can compute gradients of its output (marginals) with respect to its input (potentials). This will allow the structured attention model to be trained end-to-end as part of a deep model. \subsection{Example 2: Syntactic Tree Selection} This same approach can be used for more involved structural dependencies. One popular structure for natural language tasks is a dependency tree, which enforces a structural bias on the recursive dependencies common in many languages. In particular, a dependency tree enforces that each word in a source sentence is assigned exactly one parent word (\textit{head word}), and that these assignments do not cross (projective structure). Employing this bias encourages the system to make a soft-selection based on learned syntactic dependencies, without requiring linguistic annotations or a pipelined decision. A dependency parser can be partially formalized as a graphical model with the following cliques \citep{Smith2008}: latent variables $z_{ij} \in \{0,1\}$ for all $i \ne j$, which indicate that the $i$-th word is the parent of the $j$-th word (i.e. $x_i \rightarrow x_j$); and a special global constraint that rules out configurations of $z_{ij}$'s that violate parsing constraints (e.g. one head, projectivity). The parameters to the graph-based CRF dependency parser are the potentials $\theta_{ij}$, which reflect the score of selecting $x_i$ as the parent of $x_j$. The probability of a parse tree $z$ given the sentence $x = [x_1, \ldots, x_n]$ is, \begin{equation} p(z \given x, q)= \softmax \left(\ind\{z\ \text{is valid}\} \sum_{i \neq j} \ind\{z_{ij} = 1\} \theta_{ij} \right) \end{equation} where $z$ is represented as a vector of $z_{ij}$'s for all $i \ne j$. It is possible to calculate the marginal probability of each edge $p(z_{ij} = 1\given x, q)$ for all $i, j$ in $O(n^3)$ time using the inside-outside algorithm \citep{Baker1979} on the data structures of \citet{Eisner1996}. The parsing constraints ensure that each word has exactly one head (i.e. $\sum_{i=1}^n z_{ij} = 1$). Therefore, if we want to utilize the \emph{soft-head} selection of a position $j$, the context vector is defined as: \begin{align*} f_j(x, z) = \sum_{i=1}^n \ind\{z_{ij} = 1\} \xvec_i & & \cvec_j = \E_z [f_j(x, z)] = \sum_{i=1}^n p(z_{ij} = 1 \given x, q) \xvec_i \end{align*} Note that in this case the annotation function has the subscript $j$ to produce a context vector for each word in the sentence. Similar types of attention can be applied for other tree properties (e.g. soft-children). We refer to this type of attention layer as a \emph{syntactic attention} layer. \subsection{End-to-End Training}\label{sec:e2e} Graphical models of this form have been widely used as the final layer of deep models. Our contribution is to argue that these networks can be added within deep networks in place of simple attention layers. The whole model can then be trained end-to-end. The main complication in utilizing this approach within the network itself is the need to backpropagate the gradients through an inference algorithm as part of the structured attention network. Past work has demonstrated the techniques necessary for this approach (see \citet{Stoyanov2011}), but to our knowledge it is very rarely employed.
Consider the case of the simple linear-chain CRF layer from equation (\ref{linear-chain}). Figure~\ref{fig:fb} (left) shows the standard forward-backward algorithm for computing the marginals $p(z_i = 1\given x, q; \theta)$. If we treat the forward-backward algorithm as a neural network layer, its input are the potentials $\theta$, and its output after the forward pass are these marginals.\footnote{Confusingly, ``forward'' in this case is different than in the \textit{forward}-backward algorithm, as the marginals themselves are the output. However the two uses of the term are actually quite related. The forward-backward algorithm can be interpreted as a forward and backpropagation pass on the log partition function. See \citet{Eisner2016} for further details (appropriately titled ``Inside-Outside and Forward-Backward Algorithms Are Just Backprop''). As such our full approach can be seen as computing second-order information. This interpretation is central to \citet{Li2009}.} To backpropagate a loss through this layer we need to compute the gradient of the loss $\mcL$ with respect to $\theta$, $\nabla_{\theta}^\mcL$, as a function of the gradient of the loss with respect to the marginals, $\nabla_{p}^\mcL$.\footnote{In general we use $\nabla^a_b$ to denote the Jacobian of $a$ with respect to $b$.} As the forward-backward algorithm consists of differentiable steps, this function can be derived using reverse-mode automatic differentiation of the forward-backward algorithm itself. Note that this reverse-mode algorithm conveniently has a parallel structure to the forward version, and can also be implemented using dynamic programming. \begin{wraptable}{r}{0.54\textwidth} \small \centering \begin{tabular}{cc|cc|cc} \toprule & & \multicolumn{2}{c|}{$\oplus$} & \multicolumn{2}{c}{$\otimes$} \\ $s_a$ & $s_b$ & $ l_{a+b} $ & $s_{a+b}$ & $ l_{a\cdot b}$ & $s_{a \cdot b}$\\ \midrule $+$ & $+$ & $l_a+\log (1 + d)$& $+$ & $l_a+l_b$ &$+$ \\ $+$ & $-$ & $l_a+\log (1 - d)$& $+$ & $l_a+l_b$ &$-$ \\ $-$ & $+$ & $l_a+\log (1 - d)$& $-$ & $l_a+l_b$ &$-$ \\ $-$ & $-$ & $l_a+\log (1 + d)$& $-$ & $l_a+l_b$ &$+$ \\ \bottomrule \end{tabular} \caption{\label{tab:dlog} \small Signed log-space semifield (from \cite{Li2009}). Each real number $a$ is represented as a pair $( l_a, s_a )$ where $l_a = \log |a|$ and $s_a = \sign(a)$. Therefore $a = s_a \exp(l_a)$. For the above we let $d = \exp(l_b- l_a)$ and assume $|a| > |b|$. } \end{wraptable} However, in practice, one cannot simply use current off-the-shelf tools for this task. For one, efficiency is quite important for these models and so the benefits of hand-optimizing the reverse-mode implementation still outweighs simplicity of automatic differentiation. Secondly, numerical precision becomes a major issue for structured attention networks. For computing the forward-pass and the marginals, it is important to use the standard log-space semifield over $\mathbb{R}\cup\{\pm \infty\}$ with binary operations $(\oplus = \logadd, \otimes = +)$ to avoid underflow of probabilities. For computing the backward-pass, we need to remain in log-space, but also handle log of negative values (since $\pgrad$ could be negative). This requires extending to the \textit{signed} log-space semifield over $\left[\mathbb{R}\cup\{\pm \infty\}\right] \times \{+, -\}$ with special $+$/$-$ operations. Table~\ref{tab:dlog}, based on \cite{Li2009}, demonstrates how to handle this issue, and Figure~\ref{fig:fb} (right) describes backpropagation through the forward-backward algorithm. 
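As a concrete illustration of Table~\ref{tab:dlog}, the following small Python sketch (ours, purely illustrative) implements the signed log-space encoding and its $\oplus$ and $\otimes$ operations:
\begin{verbatim}
# Illustrative Python sketch (ours) of the signed log-space semifield
# described in the text: a real number a is stored as (log|a|, sign(a)).
import math

def enc(a):
    if a == 0.0:
        return (-math.inf, 1.0)
    return (math.log(abs(a)), 1.0 if a > 0 else -1.0)

def dec(x):
    return x[1] * math.exp(x[0])

def otimes(x, y):                     # multiplication: add logs, multiply signs
    return (x[0] + y[0], x[1] * y[1])

def oplus(x, y):                      # addition, following the table
    if x[0] < y[0]:                   # ensure |a| >= |b|
        x, y = y, x
    (la, sa), (lb, sb) = x, y
    if la == -math.inf:               # both operands are zero
        return (-math.inf, 1.0)
    d = math.exp(lb - la)
    if sa == sb:
        return (la + math.log1p(d), sa)
    if d == 1.0:                      # exact cancellation
        return (-math.inf, 1.0)
    return (la + math.log1p(-d), sa)

# sanity check: (-2.5) * 4.0 + 3.0 == -7.0
assert abs(dec(oplus(otimes(enc(-2.5), enc(4.0)), enc(3.0))) + 7.0) < 1e-9
\end{verbatim}
The swap at the start of \texttt{oplus} enforces the $|a| > |b|$ convention assumed in the table.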
For dependency parsing, the forward pass can be computed using the inside-outside implementation of Eisner's algorithm \citep{Eisner1996}. Similarly, the backpropagation parallels the inside-outside structure. Forward/backward pass through the inside-outside algorithm is described in Appendix~\ref{app:io}. \section{Experiments} We experiment with three instantiations of structured attention networks on four different tasks: (a) a simple, synthetic tree manipulation task using the syntactic attention layer, (b) machine translation with segmentation attention (i.e. two-state linear-chain CRF), (c) question answering using an $n$-state linear-chain CRF for multi-step inference over $n$ facts, and (d) natural language inference with syntactic tree attention. These experiments are not intended to boost the state-of-the-art for these tasks but to test whether these methods can be trained effectively in an end-to-end fashion, can yield improvements over standard selection-based attention, and can learn plausible latent structures. All model architectures, hyperparameters, and training details are further described in Appendix~\ref{app:model}. \subsection{Tree Transduction} The first set of experiments look at a tree-transduction task. These experiments use synthetic data to explore a failure case of soft-selection attention models. The task is to learn to convert a random formula given in prefix notation to one in infix notation, e.g., \begin{small} \begin{align*} (\,\,\,*\,\,\,(\,\,\,+\,\,\,(\,\,\,+\,\,\,15\,\,\,7\,\,\,)\,\,\,1\,\,\,8\,\,\,)\,\,\,(\,\,\,+\,\,\,19\,\,\,0\,\,\,11\,\,\,)\,\,\,) \,\, \Rightarrow (\,\,(\,\,15\,\,+\,\,7\,\,\,)\,\,+\,\,1\,\,+\,\,8\,\,\,)\,\,*\,\,(\,\,\,19\,\,+\,\,0\,\,+\,\,11\,\,\,) \end{align*} \end{small} The alphabet consists of symbols $\{(, ),+,*\}$, numbers between $0$ and $20$, and a special root symbol $\$$. This task is used as a preliminary task to see if the model is able to learn the implicit tree structure on the source side. The model itself is an encoder-decoder model, where the encoder is defined below and the decoder is an LSTM. See Appendix~\ref{app:tree} for the full model. Training uses $15$K prefix-infix pairs where the maximum nesting depth is set to be between $2$-$4$ (the above example has depth $3$), with $5$K pairs in each depth bucket. The number of expressions in each parenthesis is limited to be at most $4$. Test uses $1$K unseen sequences with depth between $2$-$6$ (note specifically deeper than train), with $200$ sequences for each depth. The performance is measured as the average proportion of correct target tokens produced until the first failure (as in \cite{Grefenstette2015}). For experiments we try using different forms of \textit{self}-attention over embedding-only encoders. Let $\mathbf{x}_j$ be an embedding for each source symbol; our three variants of the source representation $\hat{\xvec}_j$ are: (a) \textit{no atten}, just symbol embeddings by themselves, i.e. $\hat{\xvec}_j = \mathbf{x}_j$; (b) \textit{simple} attention, symbol embeddings and soft-pairing for each symbol, i.e. $ \hat{\xvec}_j = [\mathbf{x}_j; \mathbf{c}_j]$ where $ \mathbf{c}_j = \sum_{i=1}^n \softmax( \theta_{ij}) \mathbf{x}_i$ is calculated using soft-selection; (c) \textit{structured} attention, symbol embeddings and soft-parent, i.e. $\hat{\xvec}_j = [\mathbf{x}_j; \mathbf{c}_j]$ where $ \mathbf{c}_j = \sum_{i=1}^n p(z_{ij} = 1 \given x) \mathbf{x}_i $ is calculated using parsing marginals, obtained from the syntactic attention layer. 
None of these models use an explicit query value---the potentials come from running a bidirectional LSTM over the source, producing hidden vectors $\hvec_i$, and then computing \[\theta_{ij} = \tanh(\mathbf{s}^\top \tanh(\mathbf{W}_1 \mathbf{h}_i + \mathbf{W}_2 \mathbf{h}_j + \mathbf{b}))\] \noindent where $\mathbf{s}, \mathbf{b}, \mathbf{W}_1, \mathbf{W}_2$ are parameters (see Appendix~\ref{app:parsing}). \begin{wraptable}{l}{0.43\textwidth}\label{tree-perf} \small \begin{tabular}{c ccc} \toprule Depth & No Atten & Simple & Structured \\ \midrule $2$ & $7.6$ & $87.4$ & $99.2$ \\ $3$ & $4.1$ & $49.6$ & $87.0$ \\ $4$ & $2.8$ & $23.3$ & $64.5$ \\ $5$ & $2.1$ & $15.0$ & $30.8$ \\ $6$ & $1.5$ & $8.5$ & $18.2$ \\ \bottomrule \end{tabular} \caption{\label{tree-perf} \small Performance (average length to failure \%) of models on the tree-transduction task.} \end{wraptable} The source representation $[\hat{\xvec}_1, \dots, \hat{\xvec}_n]$ are attended over using the standard attention mechanism at each decoding step by an LSTM decoder.\footnote{Thus there are two attention mechanisms at work under this setup. First, structured attention over the source only to obtain soft-parents for each symbol (i.e. self-attention). Second, standard softmax alignment attention over the source representations during decoding.} Additionally, symbol embedding parameters are shared between the parsing LSTM and the source encoder. \paragraph{Results} Table~\ref{tree-perf} has the results for the task. Note that this task is fairly difficult as the encoder is quite simple. The baseline model (unsurprisingly) performs poorly as it has no information about the source ordering. The simple attention model performs better, but is significantly outperformed by the structured model with a tree structure bias. We hypothesize that the model is partially reconstructing the arithmetic tree. Figure~\ref{tree-viz} shows the attention distribution for the simple/structured models on the same source sequence, which indicates that the structured model is able to learn boundaries (i.e. parentheses). \subsection{Neural Machine Translation} Our second set of experiments use a full neural machine translation model utilizing attention over subsequences. Here both the encoder/decoder are LSTMs, and we replace standard simple attention with a segmentation attention layer. We experiment with two settings: translating directly from unsegmented Japanese characters to English words (effectively using structured attention to perform soft word segmentation), and translating from segmented Japanese words to English words (which can be interpreted as doing \emph{phrase-based} neural machine translation). Japanese word segmentation is done using the KyTea toolkit \citep{Neubig2011}. The data comes from the Workshop on Asian Translation (WAT) \citep{wat2016}. We randomly pick $500$K sentences from the original training set (of $3$M sentences) where the Japanese sentence was at most $50$ characters and the English sentence was at most $50$ words. We apply the same length filter on the provided validation/test sets for evaluation. The vocabulary consists of all tokens that occurred at least $10$ times in the training corpus. 
The segmentation attention layer is a two-state CRF where the unary potentials at the $j$-th decoder step are parameterized as \[ \theta_i(k)= \begin{cases} \hvec_i \Wvec \hvec_j',& k = 1 \\ 0, &k = 0 \end{cases} \] Here $[\hvec_1, \dots, \hvec_n]$ are the encoder hidden states and $\mathbf{h}_j'$ is the $j$-th decoder hidden state (i.e. the query vector). The pairwise potentials are parameterized linearly with $\mathbf{b}$, i.e., all together \[ \theta_{i,i+1}(z_i, z_{i+1}) = \theta_i(z_i) + \theta_{i+1}(z_{i+1}) + \mathbf{b}_{z_i, z_{i+1}} \] Therefore the segmentation attention layer requires just $4$ additional parameters. Appendix~\ref{app:nmt} describes the full model architecture. We experiment with three attention configurations: (a) standard {\it simple} attention, i.e. $\cvec_j = \sum_{i=1}^n \softmax(\theta_i) \hvec_i $; (b) \textit{sigmoid} attention: multiple selection with Bernoulli random variables, i.e. $\cvec_j = \sum_{i=1}^n \sigmoid(\theta_i) \hvec_i$; (c) \textit{structured} attention, encoded with normalized CRF marginals, \begin{align*} \cvec_j = \sum_{i=1}^n \frac{p(z_i=1 \given x, q)}{\gamma} \hvec_i & & \gamma = \frac{1}{\lambda} \sum_{i=1}^n p(z_i =1 \given x, q) \end{align*} The normalization term $\gamma$ is not ideal but we found it to be helpful for stable training.\footnote{With standard expectation (i.e. $\cvec_j = \sum_{i=1}^n p(z_i=1 \given x, q) \hvec_i$) we empirically observed the marginals to quickly saturate. We tried various strategies to overcome this, such as putting an $l_2$ penalty on the unary potentials and initializing with a pretrained sigmoid attention model, but simply normalizing the marginals proved to be the most effective. However, this changes the interpretation of the context vector as the expectation of an annotation function in this case.} $\lambda$ is a hyperparameter (we use $\lambda = 2$) and we further add an $l_2$ penalty of $0.005$ on the pairwise potentials $\bvec$. These values were found via grid search on the validation set. \begin{wraptable}{l}{0.43\textwidth}\label{nmt-perf} \small \begin{tabular}{c ccc} \toprule & Simple & Sigmoid & Structured \\ \midrule \textsc{Char} & $12.6$ & $13.1$ & $14.6$ \\ \textsc{Word} & $14.1$ & $13.8$ & $14.3$ \\ \bottomrule \end{tabular} \caption{\label{nmt-perf}\small Translation performance as measured by BLEU (higher is better) on character-to-word and word-to-word Japanese-English translation for the three different models.} \end{wraptable} \paragraph{Results} Results for the translation task on the test set are given in Table~\ref{nmt-perf}. Sigmoid attention outperforms simple (softmax) attention on the character-to-word task, potentially because it is able to learn many-to-one alignments. On the word-to-word task, the opposite is true, with simple attention outperforming sigmoid attention. Structured attention outperforms both models on both tasks, although improvements on the word-to-word task are modest and unlikely to be statistically significant. For further analysis, Figure~\ref{fig:vis3} shows a visualization of the different attention mechanisms on the character-to-word setup. The simple model generally focuses attention heavily on a single character. In contrast, the sigmoid and structured models are able to spread their attention distribution over contiguous subsequences. The structured attention learns additional parameters (i.e. $\bvec$) to smooth out this type of attention.
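As a concrete reference for the segmentation attention layer, the following NumPy sketch (ours, not the released implementation; all names are illustrative) runs the forward-backward algorithm over the edge potentials $\theta_{i,i+1}$ defined above and forms the normalized context vector:
\begin{verbatim}
# Illustrative NumPy sketch (ours) of the segmentation attention layer:
# a two-state linear-chain CRF whose node marginals p(z_i = 1 | x, q)
# weight the encoder states, followed by the gamma-normalization.
import numpy as np

def lse(a, axis):
    m = a.max(axis=axis, keepdims=True)
    return (m + np.log(np.exp(a - m).sum(axis=axis, keepdims=True))).squeeze(axis)

def segmentation_marginals(theta, b):
    """theta: (n,) scores theta_i(1) (with theta_i(0) = 0); b: (2, 2) pairwise."""
    n = theta.shape[0]
    unary = np.stack([np.zeros(n), theta], axis=1)           # (n, 2)
    # edge potentials theta_{i,i+1}(z_i, z_{i+1}) as defined above
    edge = unary[:-1, :, None] + unary[1:, None, :] + b      # (n-1, 2, 2)
    alpha = np.zeros((n, 2))
    beta = np.zeros((n, 2))
    for i in range(n - 1):                                   # forward pass
        alpha[i + 1] = lse(alpha[i][:, None] + edge[i], axis=0)
    for i in range(n - 2, -1, -1):                           # backward pass
        beta[i] = lse(edge[i] + beta[i + 1][None, :], axis=1)
    log_Z = lse(alpha[-1], axis=0)
    return np.exp(alpha[:, 1] + beta[:, 1] - log_Z)          # p(z_i = 1 | x, q)

def structured_context(H, theta, b, lam=2.0):
    """H: (n, l) encoder states.  Returns the normalized context vector."""
    p = segmentation_marginals(theta, b)
    gamma = p.sum() / lam
    return (p / gamma) @ H
\end{verbatim}
With this normalization the rescaled weights always sum to $\lambda$, regardless of how saturated the raw marginals are.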
\subsection{Question Answering} Our third experiment is on question answering (QA) with the linear-chain CRF attention layer for inference over multiple facts. We use the bAbI dataset \citep{Weston2015}, where the input is a set of sentences/facts paired with a question, and the answer is a single token. For many of the tasks the model has to attend to multiple supporting facts to arrive at the correct answer (see Figure~\ref{fig:vis4} for an example), and existing approaches use multiple `hops' to greedily attend to different facts. We experiment with employing structured attention to perform inference in a non-greedy way. As the ground truth supporting facts are given in the dataset, we are able to assess the model's inference accuracy. The baseline (simple) attention model is the End-To-End Memory Network \citep{Sukhbaatar2015} (MemN2N), which we briefly describe here. See Appendix~\ref{app:qa} for full model details. Let $\xvec_1, \dots, \xvec_n$ be the input embedding vectors for the $n$ sentences/facts and let $\mathbf{q}$ be the query embedding. In MemN2N, $z_k$ is the random variable for the sentence to select at the $k$-th inference step (i.e. $k$-th hop), and thus $z_k \in \{1, \dots, n\}$. The probability distribution over $z_k$ is given by $p(z_k = i \given x, q) = \softmax((\xvec_i^k)^\top\qvec^k)$, and the context vector is given by $\cvec^k = \sum_{i=1}^n p(z_k = i \given x, q) \mathbf{o}_i^k$, where $\xvec_i^k, \mathbf{o}_i^k$ are the input and output embedding for the $i$-th sentence at the $k$-th hop, respectively. The $k$-th context vector is used to modify the query $\qvec^{k+1} = \qvec^k + \cvec^k$, and this process repeats for $k = 1, \dots, K$ (for $k=1$ we have $\xvec_i^k = \xvec_i, \qvec^k = \qvec, \cvec^k = \mathbf{0}$). The $K$-th context and query vectors are used to obtain the final answer. The attention mechanism for a $K$-hop MemN2N network can therefore be interpreted as a greedy selection of a length-$K$ sequence of facts (i.e. $z_1, \dots, z_K$). For structured attention, we use an $n$-state, $K$-step linear-chain CRF.\footnote{Note that this differs from the segmentation attention for the neural machine translation experiments described above, which was a $K$-state (with $K =2$), $n$-step linear-chain CRF.} We experiment with two different settings: (a) a unary CRF model with node potentials $$\theta_k(i) = (\xvec_i^k)^\top \mathbf{q}^k$$ and (b) a binary CRF model with pairwise potentials $$\theta_{k,k+1}(i, j) = (\mathbf{x}_i^k)^\top\qvec^k + (\mathbf{x}_i^k)^\top \xvec_j^{k + 1} + (\mathbf{x}_j^{k + 1})^\top \mathbf{q}^{k + 1}$$ The binary CRF model is designed to test the model's ability to perform sequential reasoning. For both (a) and (b), a \emph{single} context vector is computed: $\mathbf{c} = \sum_{z_1,\ldots,z_K} p(z_1,\ldots,z_K \given x,q) f(x,z)$ (unlike MemN2N which computes $K$ context vectors). Evaluating $\mathbf{c}$ requires summing over all $n^K$ possible sequences of length $K$, which may not be practical for large values of $K$. However, if $f(x,z)$ factors over the components of $z$ (e.g. $f(x,z)= \sum_{k=1}^K f_k(x,z_k)$) then one can rewrite the above sum in terms of marginals: $\mathbf{c} = \sum_{k=1}^K \sum_{i=1}^n p(z_{k} = i \given x,q) f_{k}(x,z_{k})$. In our experiments, we use $f_k(x,z_k) = \mathbf{o}_{z_k}^k$. All three models are described in further detail in Appendix~\ref{app:qa}. \paragraph{Results} We use the version of the dataset with $1$K questions for each task. 
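To make the marginal-based computation of $\mathbf{c}$ above concrete, the following NumPy sketch (ours, not the paper's implementation; names are illustrative) computes the node marginals of the $K$-step, $n$-state chain from its pairwise potentials and takes the expectation of the output embeddings:
\begin{verbatim}
# Illustrative NumPy sketch (ours) of the Binary CRF attention: node
# marginals of an n-state, K-step chain weight the output embeddings
# o_i^k to give a single context vector c.
import numpy as np

def lse(a, axis):
    m = a.max(axis=axis, keepdims=True)
    return (m + np.log(np.exp(a - m).sum(axis=axis, keepdims=True))).squeeze(axis)

def chain_context(edge_pot, O):
    """edge_pot: (K-1, n, n) potentials theta_{k,k+1}(i, j);
    O: (K, n, d) output embeddings o_i^k.  Returns c of shape (d,)."""
    K, n = O.shape[0], O.shape[1]
    alpha = np.zeros((K, n))
    beta = np.zeros((K, n))
    for k in range(K - 1):                       # forward pass over hops
        alpha[k + 1] = lse(alpha[k][:, None] + edge_pot[k], axis=0)
    for k in range(K - 2, -1, -1):               # backward pass
        beta[k] = lse(edge_pot[k] + beta[k + 1][None, :], axis=1)
    log_Z = lse(alpha[-1], axis=0)
    marg = np.exp(alpha + beta - log_Z)          # marg[k, i] = p(z_k = i | x, q)
    return np.einsum('ki,kid->d', marg, O)
\end{verbatim}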
Since all models reduce to the same network for tasks with $1$ supporting fact, they are excluded from our experiments. The number of hops (i.e. $K$) is task-dependent, and the number of memories (i.e. $n$) is limited to be at most $25$ (note that many questions have fewer than $25$ facts---e.g. the example in Figure~\ref{fig:vis4} has $9$ facts). Due to high variance in model performance, we train $20$ models with different initializations for each task and report the test accuracy of the model that performed the best on a $10\%$ held-out validation set (as is typically done for bAbI tasks). Results of the three different models are shown in Table~\ref{tab:results}. For correct answer selection (Ans $\%$), we find that MemN2N and the Binary CRF model perform similarly while the Unary CRF model does worse, indicating the importance of including pairwise potentials. We also assess each model's ability to attend to the correct supporting facts in Table~\ref{tab:results} (Fact $\%$). Since ground truth supporting facts are provided for each query, we can check the sequence accuracy of supporting facts for each model (i.e. the rate of selecting the exact correct sequence of facts) by taking the highest probability sequence $\hat{z} = \argmax p(z_1, \dots, z_K \given x, q)$ from the model and checking against the ground truth. Overall the Binary CRF is able to recover supporting facts better than MemN2N. This improvement is significant and can be up to two-fold, as seen for tasks $2$, $11$, $13$ \& $17$. However, we observed that on many tasks it is sufficient to select only the last (or first) fact correctly to predict the answer, and thus higher sequence selection accuracy does not necessarily imply better answer accuracy (and vice versa). For example, all three models get $100 \%$ answer accuracy on task $15$ but have different supporting fact accuracies. Finally, in Figure~\ref{fig:vis4} we visualize the output edge marginals produced by the Binary CRF model for a single question in task $16$. In this instance, the model is uncertain but ultimately able to select the right sequence of facts $5 \rightarrow 6 \rightarrow 8$. \subsection{Natural Language Inference} The final experiment looks at the task of natural language inference (NLI) with the syntactic attention layer. In NLI, the model is given two sentences (hypothesis/premise) and has to predict their relationship: entailment, contradiction, or neutral. For this task, we use the Stanford NLI dataset \citep{Bowman2015} and base our approach on the decomposable attention model of \cite{Parikh2016}. This model takes in the matrix of word embeddings as the input for each sentence and performs \textit{inter-sentence} attention to predict the answer. Appendix~\ref{app:nli} describes the full model. As in the transduction task, we focus on modifying the input representation to take into account soft parents via self-attention (i.e. \textit{intra-sentence} attention). In addition to the three baselines described for tree transduction (No Attention, Simple, Structured), we also explore two additional settings: (d) \textit{hard} pipeline parent selection, i.e.
$\hat{\mathbf{x}}_j = [\mathbf{x}_j; \mathbf{x}_{\head(j)}]$, where $\head(j)$ is the index of $x_j$'s parent\footnote{The parents are obtained from running the dependency parser of \cite{Andor2016}, available at \\ \url{https://github.com/tensorflow/models/tree/master/syntaxnet}}; (e) \textit{pretrained} structured attention: structured attention where the parsing layer is pretrained for one epoch on a parsed dataset (which was enough for convergence). \paragraph{Results} Results of our models are shown in Table~\ref{tab:main}. Simple attention improves upon the no attention model, and this is consistent with improvements observed by \cite{Parikh2016} with their intra-sentence attention model. The pipelined model with hard parents also slightly improves upon the baseline. Structured attention outperforms both models, though surprisingly, pretraining the syntactic attention layer on the parse trees performs worse than training it from scratch---it is possible that the pretrained attention is too strict for this task. We also obtain the hard parse for an example sentence by running the Viterbi algorithm on the syntactic attention layer with the non-pretrained model: \begin{center} \includegraphics[scale=0.8]{tikz1.pdf} \end{center} Despite being trained without ever being exposed to an explicit parse tree, the syntactic attention layer learns an almost plausible dependency structure. In the above example it is able to correctly identify the main verb \texttt{fighting}, but makes mistakes on determiners (e.g. head of \texttt{The} should be \texttt{men}). We generally observed this pattern across sentences, possibly because the verb structure is more important for the inference task. \section{Conclusion} This work outlines structured attention networks, which incorporate graphical models to generalize simple attention, and describes the technical machinery and computational techniques for backpropagating through models of this form. We implement two classes of structured attention layers: a linear-chain CRF (for neural machine translation and question answering) and a more complicated first-order dependency parser (for tree transduction and natural language inference). Experiments show that this method can learn interesting structural properties and improve on top of standard models. Structured attention could also be a way of learning latent labelers or parsers through attention on other tasks. It should be noted that the additional complexity in computing the attention distribution increases run-time---for example, structured attention was approximately $5 \times$ slower to train than simple attention for the neural machine translation experiments, even though both attention layers have the same asymptotic run-time (i.e. $O(n)$). Embedding \textit{differentiable inference} (and more generally, \textit{differentiable algorithms}) into deep models is an exciting area of research. While we have focused on models that admit (tractable) exact inference, similar technique can be used to embed approximate inference methods. Many optimization algorithms (e.g. gradient descent, LBFGS) are also differentiable \citep{domke2012generic,Maclaurin2015}, and have been used as output layers for structured prediction in energy-based models \citep{Belanger2016,wang2016nips}. Incorporating them as internal neural network layers is an interesting avenue for future work. 
\subsubsection*{Acknowledgments} We thank Tao Lei, Ankur Parikh, Tim Vieira, Matt Gormley, Andr{\'e} Martins, Jason Eisner, Yoav Goldberg, and the anonymous reviewers for helpful comments, discussion, notes, and code. We additionally thank Yasumasa Miyamoto for verifying Japanese-English translations. \bibliographystyle{iclr2017_conference} \newpage \appendix \section*{APPENDICES} \section{Model Details}\label{app:model} \subsection{Syntactic Attention}\label{app:parsing} The syntactic attention layer (for tree transduction and natural language inference) is similar to the first-order graph-based dependency parser of \cite{Kipperwasser2016}. Given an input sentence $[x_1, \dots, x_n]$ and the corresponding word vectors $[\xvec_1, \dots, \xvec_n]$, we use a bidirectional LSTM to get the hidden states for each time step $i \in [1, \dots, n]$, \begin{align*} \hvec_i^{\text{fwd}} = \lstm(\xvec_i, \hvec_{i-1}^{\text{fwd}}) & & \hvec_i^{\text{bwd}} = \lstm(\xvec_i, \hvec_{i+1}^{\text{bwd}}) & & \hvec_i = [\hvec_i^{\text{fwd}} ; \hvec_i^{\text{bwd}}] \end{align*} where the forward and backward LSTMs have their own parameters. The score for $x_i \rightarrow x_j$ (i.e. $x_i$ is the parent of $x_j$) is given by an MLP \begin{equation*} \theta_{ij} = \tanh( \svec^\top\tanh(\Wvec_1 \hvec_i + \Wvec_2 \hvec_j + \bvec)) \end{equation*} These scores are used as input to the inside-outside algorithm (see Appendix~\ref{app:io}) to obtain the probability of each word's parent $p(z_{ij} = 1 \given x)$, which is used to obtain the soft-parent $\cvec_j$ for each word $x_j$. In the non-structured case we simply have $p(z_{ij} = 1 \given x) = \softmax(\theta_{ij})$. \subsection{Tree Transduction}\label{app:tree} Let $[x_1, \dots, x_n],[y_1, \dots, y_m]$ be the sequence of source/target symbols, with the associated embeddings $[\xvec_1, \dots, \xvec_n], [\yvec_1, \dots, \yvec_m]$ with $\xvec_i, \yvec_j \in \reals^l$. In the simplest baseline model we take the source representation to be the matrix of the symbol embeddings. The decoder is a one-layer LSTM which produces the hidden states $\hvec_j' = \lstm(\yvec_j, \hvec_{j-1}')$, with $\hvec_j' \in \reals^l$. The hidden states are combined with the input representation via a bilinear map $\Wvec \in \reals^{l \times l}$ to produce the attention distribution used to obtain the vector $\mvec_i$, which is combined with the decoder hidden state as follows, \begin{align*} \alpha_i = \frac{\exp \xvec_i \Wvec \hvec_j'}{\sum_{k=1}^n \exp \xvec_k \Wvec \hvec_j'} & & \mvec_i = \sum_{i=1}^n \alpha_i \xvec_i & & \hat{\hvec}_j = \tanh (\Uvec [\mvec_i ; \hvec_j'] ) \end{align*} Here we have $\Wvec \in \reals^{l \times l}$ and $\Uvec \in \reals^{2l \times l}$. Finally, $\hat{\hvec}_j$ is used to obtain a distribution over the next symbol $y_{j+1}$, \begin{equation*} p(y_{j+1} \given x_1, \dots, x_n, y_1, \dots, y_j) = \softmax(\Vvec \hat{\hvec}_j + \bvec) \end{equation*} For the structured/simple models, the $i$-th source representation is, respectively, \begin{align*} \hat{\xvec}_i = \left[\xvec_i ; \sum_{k=1}^n p(z_{ki} = 1 \given x ) \, \xvec_k\right] & &\hat{\xvec}_i = \left[\xvec_i ; \sum_{k=1}^n \softmax (\theta_{ki})\, \xvec_k\right] \end{align*} where $\theta_{ij}$ comes from the bidirectional LSTM described in~\ref{app:parsing} (a short sketch of this scorer is given below).
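As an illustration, the pairwise scorer of the parsing layer can be sketched in NumPy as follows (illustrative only; the function name and shapes are ours, and the bidirectional LSTM states are assumed to be precomputed):
\begin{verbatim}
# Illustrative NumPy sketch of the edge scorer
#   theta_ij = tanh(s^T tanh(W1 h_i + W2 h_j + b))
# given precomputed bidirectional LSTM states H of shape (n, h).
import numpy as np

def edge_scores(H, W1, W2, b, s):
    """H: (n, h) hidden states; W1, W2: (m, h); b, s: (m,).
    Returns an (n, n) matrix whose (i, j) entry scores the arc x_i -> x_j
    (i.e. x_i is the parent of x_j)."""
    left = H @ W1.T            # (n, m): W1 h_i for every i
    right = H @ W2.T           # (n, m): W2 h_j for every j
    # broadcast over all (i, j) pairs, apply the inner tanh, project with s
    hidden = np.tanh(left[:, None, :] + right[None, :, :] + b)   # (n, n, m)
    return np.tanh(hidden @ s)                                   # (n, n)

n, h, m = 6, 100, 50
H = np.random.randn(n, h)
theta = edge_scores(H, np.random.randn(m, h), np.random.randn(m, h),
                    np.random.randn(m), np.random.randn(m))
\end{verbatim}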
Then $\alpha_i$ and $\mvec_i$ changed accordingly, \begin{align*} \alpha_i = \frac{\exp \hat{\xvec}_i \Wvec \hvec_j'}{\sum_{k=1}^n \exp \hat{\xvec}_k \Wvec \hvec_j'} & & \mvec_i = \sum_{i=1}^n \alpha_i \hat{\xvec}_i \end{align*} Note that in this case we have $\Wvec \in \reals^{2l \times l}$ and $\Uvec \in \reals^{3l \times l}$. We use $l = 50$ in all our experiments. The forward/backward LSTMs for the parsing LSTM are also $50$-dimensional. Symbol embeddings are shared between the encoder and the parsing LSTMs. Additional training details include: batch size of $20$; training for $13$ epochs with a learning rate of $1.0$, which starts decaying by half after epoch $9$ (or the epoch at which performance does not improve on validation, whichever comes first); parameter initialization over a uniform distribution $U[-0.1, 0.1]$; gradient normalization at $1$ (i.e. renormalize the gradients to have norm $1$ if the $l_2$ norm exceeds $1$). Decoding is done with beam search (beam size $ = 5$). \subsection{Neural Machine Translation}\label{app:nmt} The baseline NMT system is from \cite{Luong2015}. Let $[x_1, \dots, x_n],[y_1, \dots, y_m]$ be the source/target sentence, with the associated word embeddings $[\xvec_1, \dots, \xvec_n], [\yvec_1, \dots, \yvec_m]$. The encoder is an LSTM over the source sentence, which produces the hidden states $[\hvec_1, \dots, \hvec_n]$ where \begin{equation*} \hvec_i = \lstm(\xvec_i, \hvec_{i-1}) \end{equation*} and $\hvec_i \in \reals^l$. The decoder is another LSTM which produces the hidden states $\hvec_j' \in \reals^l$. In the simple attention case with categorical attention, the hidden states are combined with the input representation via a bilinear map $\Wvec \in \reals^{l \times l}$ and this distribution is used to obtain the context vector at the $j$-th time step, \begin{align*} \theta_i = \hvec_i \Wvec \hvec_j' & & \cvec_j = \sum_{i=1}^n \softmax(\theta_i)\hvec_i \end{align*} The Bernoulli attention network has the same $\theta_i$ but instead uses a $\sigmoid$ to obtain the weights of the linear combination, i.e., \begin{align*} \cvec_j = \sum_{i=1}^n \sigmoid(\theta_i) \hvec_i \end{align*} And finally, the structured attention model uses a bilinear map to parameterize one of the unary potentials \[ \theta_i(k)= \begin{cases} \hvec_i \Wvec \hvec_j',& k = 1 \\ 0, &k = 0 \end{cases} \] \begin{align*} \theta_{i,i+1}(z_i, z_{i+1}) &= \theta_i(z_i) + \theta_{i+1}(z_{i+1}) + \mathbf{b}_{z_i, z_{i+1}} \end{align*} where $\bvec$ are the pairwise potentials. These potentials are used as inputs to the forward-backward algorithm to obtain the marginals $p(z_i = 1 \given x, q)$, which are further normalized to obtain the context vector \begin{align*} \cvec_j = \sum_{i=1}^n \frac{p(z_i=1 \given x, q)}{\gamma} \hvec_i & & \gamma = \frac{1}{\lambda} \sum_i^n p(z_i =1 \given x, q) \end{align*} We use $\lambda = 2$ and also add an $l_2$ penalty of $0.005$ on the pairwise potentials $\bvec$. The context vector is then combined with the decoder hidden state \begin{align*} \hat{\hvec}_j = \tanh (\Uvec[\cvec_j ; \hvec_j']) \end{align*} and $\hat{\hvec}_j$ is used to obtain the distribution over the next target word $y_{j+1}$ \begin{align*} p(y_{j+1} \given x_1, \dots, x_n, y_1, \dots y_j) = \softmax(\Vvec \hat{\hvec}_j + \bvec) \end{align*} The encoder/decoder LSTMs have $2$ layers and $500$ hidden units (i.e. $l = 500$). 
Additional training details include: batch size of $128$; training for $30$ epochs with a learning rate of $1.0$, which starts decaying by half after the first epoch at which performance does not improve on validation; dropout with probability $0.3$; parameter initialization over a uniform distribution $U[-0.1, 0.1]$; gradient normalization at $1$. We generate target translations with beam search (beam size $= 5$), and evaluate with \texttt{multi-bleu.perl} from Moses.\footnote{ \url{https://github.com/moses-smt/mosesdecoder/blob/master/scripts/generic/multi-bleu.perl}} \subsection{Question Answering}\label{app:qa} Our baseline model (MemN2N) is implemented following the same architecture as described in \cite{Sukhbaatar2015}. In particular, let $x = [x_1, \dots, x_n]$ represent the sequence of $n$ facts with the associated embeddings $[\mathbf{x}_1, \dots, \xvec_n]$ and let $\qvec$ be the embedding of the query $q$. The embeddings are obtained by simply adding the word embeddings in each sentence or query. The full model with $K$ hops is as follows: \begin{align*} &p(z_k = i \given x, q) = \softmax( (\mathbf{x}_i^k)^\top \mathbf{q}^k ) \\ &\mathbf{c}^k = \sum_{i=1}^n p(z_k = i \given x, q) \mathbf{o}_i^k \\ &\mathbf{q}^{k + 1} = \mathbf{q}^k + \mathbf{c}^k \\ &p(y \given x, q) = \softmax(\Wvec (\mathbf{q}^K + \mathbf{c}^K)) \end{align*} where $p(y \given x, q)$ is the distribution over the answer vocabulary. At each layer, $\{\mathbf{x}_i^k\}$ and $\{\mathbf{o}_i^k\}$ are computed using embedding matrices $\mathbf{X}^k$ and $\mathbf{O}^k$. We use the \emph{adjacent weight tying scheme} from the paper so that $\mathbf{X}^{k+1} = \mathbf{O}^k, \mathbf{W}^T = \mathbf{O}^K$. $\mathbf{X}^1$ is also used to compute the query embedding at the first hop. For $k=1$ we have $\xvec_i^k = \xvec_i, \qvec^k = \qvec, \cvec^k = \mathbf{0}$. For both the Unary and the Binary CRF models, the same input fact and query representations are computed (i.e. same embedding matrices with weight tying scheme). For the unary model, the potentials are parameterized as \[ \theta_{k}(i) = (\xvec_i^k)^\top \mathbf{q}^k \] and for the binary model we compute pairwise potentials as \[ \theta_{k,k+1}(i, j) = (\mathbf{x}_i^k)^\top\qvec^k + (\mathbf{x}_i^k)^\top \xvec_j^{k + 1} + (\mathbf{x}_j^{k + 1})^\top \mathbf{q}^{k + 1} \] The $\qvec^k$'s are updated simply with a linear mapping, i.e. \[ \mathbf{q}^{k+1} = \mathbf{Q} \mathbf{q}^k \] In the case of the Binary CRF, to discourage the model from selecting the same fact again we additionally set $\theta_{k,k+1}(i,i) = -\infty$ for all $i \in \{1, \dots, n\}$. Given these potentials, we compute the marginals $p(z_k = i, z_{k+1} = j \given x, q)$ using the forward-backward algorithm, which is then used to compute the context vector: \begin{align*} \mathbf{c} = \sum_{z_1,\ldots,z_K} p(z_1,\ldots,z_K \given x,q) f(x,z) & & f(x,z) = \sum_{k=1}^K f_k(x, z_k) & & f_k(x,z_k) = \mathbf{o}_{z_k}^k \end{align*} Note that if $f(x,z)$ factors over the components of $z$ (as is the case above) then computing $\cvec$ only requires evaluating the marginals $p(z_k \given x,q)$. Finally, given the context vector the prediction is made in a similar fashion to MemN2N: \begin{align*} p(y \given x, q) = \softmax(\Wvec (\mathbf{q}^K + \mathbf{c})) \end{align*} Other training setup is similar to \cite{Sukhbaatar2015}: we use stochastic gradient descent with learning rate $0.01$, which is divided by $2$ every $25$ epochs until $100$ epochs are reached. 
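For concreteness, the construction of the Binary CRF potentials and the query update above can be sketched as follows (a simplified NumPy sketch, not the released code; the array names and shapes are ours):
\begin{verbatim}
# Hedged NumPy sketch of the Binary CRF pairwise potentials
#   theta_{k,k+1}(i, j) = x_i^k . q^k + x_i^k . x_j^{k+1} + x_j^{k+1} . q^{k+1}
# with theta_{k,k+1}(i, i) = -inf to discourage re-selecting the same fact,
# and the query update q^{k+1} = Q q^k.
import numpy as np

def binary_crf_potentials(X, q0, Q):
    """X: (K+1, n, d) hop-specific fact embeddings x_i^k.
    q0: (d,) first-hop query embedding;  Q: (d, d) update matrix.
    Returns a (K, n, n) array of pairwise log-potentials."""
    K = X.shape[0] - 1
    qs = [q0]
    for _ in range(K):
        qs.append(Q @ qs[-1])                   # q^{k+1} = Q q^k
    theta = np.empty((K, X.shape[1], X.shape[1]))
    for k in range(K):
        unary_i = X[k] @ qs[k]                  # (n,)   x_i^k . q^k
        unary_j = X[k + 1] @ qs[k + 1]          # (n,)   x_j^{k+1} . q^{k+1}
        pair = X[k] @ X[k + 1].T                # (n, n) x_i^k . x_j^{k+1}
        theta[k] = unary_i[:, None] + pair + unary_j[None, :]
        np.fill_diagonal(theta[k], -np.inf)     # forbid z_{k+1} = z_k
    return theta
\end{verbatim}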
Capacity of the memory is limited to $25$ sentences. The embedding vectors are of size $20$ and gradients are renormalized if the norm exceeds $40$. All models implement \emph{position encoding}, \emph{temporal encoding}, and \emph{linear start} from the original paper. For linear start, the $\softmax(\cdot)$ function in the attention layer is removed at the beginning and re-inserted after $20$ epochs for MemN2N, while for the CRF models we apply a $\log(\softmax(\cdot))$ layer on the $\qvec^k$ after $20$ epochs. Each model is trained separately for each task. \subsection{Natural Language Inference}\label{app:nli} Our baseline model/setup is essentially the same as that of \cite{Parikh2016}. Let $[x_1, \dots, x_n], [y_1, \dots, y_m]$ be the premise/hypothesis, with the corresponding input representations $[\xvec_1, \dots, \xvec_n], [\yvec_1, \dots, \yvec_m]$. The input representations are obtained by a linear transformation of the $300$-dimensional pretrained GloVe embeddings \citep{Pennington2014} after normalizing the GloVe embeddings to have unit norm.\footnote{We use the GloVe embeddings pretrained over the $840$ billion word Common Crawl, publicly available at \url{http://nlp.stanford.edu/projects/glove/}} The pretrained embeddings remain fixed but the linear layer (which is also $300$-dimensional) is trained. Words not in the pretrained vocabulary are hashed to one of $100$ Gaussian embeddings with mean $0$ and standard deviation $1$. We concatenate each input representation with a convex combination of the other sentence's input representations (essentially performing \textit{inter-sentence} attention), where the weights are determined through a dot product followed by a softmax, \begin{align*} e_{ij} = f(\xvec_i)^\top f(\yvec_j) & & \bar{\xvec}_{i} = \left[\xvec_i ; \sum_{j=1}^m \frac{\exp e_{ij}}{\sum_{k=1}^m \exp e_{ik}} \yvec_{j}\right] & & \bar{\yvec}_{j} = \left[\yvec_j ; \sum_{i=1}^n \frac{\exp e_{ij}}{\sum_{k=1}^n \exp e_{kj}} \xvec_{i}\right] \end{align*} Here $f(\cdot)$ is an MLP. The new representations are fed through another MLP $g(\cdot)$, summed, combined with the final MLP $h(\cdot)$ and fed through a softmax layer to obtain a distribution over the labels $l$, \begin{align*} \bar{\xvec} &= \sum_{i=1}^n g(\bar{\xvec}_{i}) \hspace{20mm} \bar{\yvec} = \sum_{j=1}^m g(\bar{\yvec}_{j}) \\ p(l \given x_1, &\dots, x_n, y_1, \dots, y_m)= \softmax(\Vvec h([\bar{\xvec}; \bar{\yvec}]) + \bvec) \end{align*} All the MLPs have $2$-layers, $300$ $\relu$ units, and dropout probability of $0.2$. For structured/simple models, we first employ the bidirectional parsing LSTM (see \ref{app:parsing}) to obtain the scores $\theta_{ij}$. In the structured case each word representation is simply concatenated with its soft-parent \begin{align*} \hat{\xvec}_i = \left[\xvec_i ; \sum_{k=1}^n p(z_{ki} = 1 \given x ) \xvec_k\right] % & & \hat{\yvec}_j = [\yvec_j ; \sum_{l=1}^m p(z'_{lj} = 1 \given y) \yvec_l] \end{align*} and $\hat{\xvec}_i$ (and analogously $\hat{\yvec}_j$) is used as the input to the above model. In the simple case (which closely corresponds to the \emph{intra-sentence} attention model of \cite{Parikh2016}), we have \begin{align*} \hat{\xvec}_i = \left[\xvec_i ; \sum_{k=1}^n \frac{\exp \theta_{ki}}{\sum_{l=1}^n \exp \theta_{li}} \xvec_k \right] \end{align*} The word embeddings for the parsing LSTMs are also initialized with GloVe, and the parsing layer is shared between the two sentences. The forward/backward LSTMs for the parsing layer are $100$-dimensional. 
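For concreteness, the inter-sentence attention step above can be sketched as follows (a simplified NumPy sketch that omits dropout and the downstream MLPs $g$ and $h$; the function names are ours and $f$ is assumed to be given as a callable):
\begin{verbatim}
# Simplified NumPy sketch of inter-sentence attention:
#   e_ij = f(x_i)^T f(y_j), then concatenate each word with the softly
#   aligned words of the other sentence.
import numpy as np

def softmax(a, axis):
    a = a - a.max(axis=axis, keepdims=True)
    e = np.exp(a)
    return e / e.sum(axis=axis, keepdims=True)

def inter_sentence_attention(X, Y, f):
    """X: (n, d) premise vectors, Y: (m, d) hypothesis vectors, f: MLP applied
    row-wise.  Returns x_bar of shape (n, 2d) and y_bar of shape (m, 2d)."""
    E = f(X) @ f(Y).T                                             # (n, m) scores e_ij
    X_bar = np.concatenate([X, softmax(E, axis=1) @ Y], axis=1)   # attend over y_j
    Y_bar = np.concatenate([Y, softmax(E, axis=0).T @ X], axis=1) # attend over x_i
    return X_bar, Y_bar

n, m, d = 7, 9, 300
X, Y = np.random.randn(n, d), np.random.randn(m, d)
X_bar, Y_bar = inter_sentence_attention(X, Y, lambda Z: np.maximum(Z, 0.0))
\end{verbatim}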
Additional training details include: batch size of $32$; training for $100$ epochs with Adagrad \citep{Duchi2011} where the global learning rate is $0.05$ and the sum of squared gradients is initialized to $0.1$; parameter initialization over a Gaussian distribution with mean $0$ and standard deviation $0.01$; gradient normalization at $5$. In the pretrained scenario, pretraining is done with Adam \citep{Kingma2015} with learning rate equal to $0.01$, and $\beta_1 = 0.9$, $\beta_2 = 0.999$. \section{Forward/Backward through the Inside-Outside Algorithm}\label{app:io} Figure~\ref{fig:io-fprop} shows the procedure for obtaining the parsing marginals from the input potentials. This corresponds to running the inside-outside version of Eisner's algorithm \citep{Eisner1996}. The intermediate data structures used during the dynamic programming algorithm are the (log) inside tables $\alpha$, and the (log) outside tables $\beta$. Both $\alpha, \beta$ are of size $n \times n \times 2 \times 2$, where $n$ is the sentence length. The first two dimensions encode the start/end index of the span (i.e. subtree). The third dimension encodes whether the root of the subtree is the left ($L$) or right ($R$) index of the span. The fourth dimension indicates if the span is complete ($1$) or incomplete ($0$). We can calculate the marginal distribution of each word's parent (for all words) in $O(n^3)$ time using this algorithm. The backward pass through the inside-outside algorithm is slightly more involved, but still takes $O(n^3)$ time. Figure~\ref{fig:io-bprop} illustrates the backward procedure, which receives the gradient of the loss $\mcL$ with respect to the marginals, $\nabla^\mcL_p$, and computes the gradient of the loss with respect to the potentials $\nabla^\mcL_\theta$. The computations must be performed in the signed log-space semifield to handle the logarithm of negative values. See Section~\ref{sec:e2e} and Table~\ref{tab:dlog} for more details. \end{document}
Structured Attention Networks
1702.00887
Table 4: Answer accuracy (Ans %) and supporting fact selection accuracy (Fact %) of the three QA models on the 1K bAbI dataset. K indicates the number of hops/inference steps used for each task. Tasks 7 and 8 both contain a variable number of facts and are hence excluded from the fact accuracy measurement. Supporting fact selection accuracy is calculated by taking the average of the 10 best runs (out of 20) for each task.
[ "Task", "[ITALIC] K", "MemN2N Ans %", "MemN2N Fact %", "Binary CRF Ans %", "Binary CRF Fact %", "Unary CRF Ans %", "Unary CRF Fact %" ]
[ [ "Task 02 - Two Supporting Facts", "2", "87.3", "46.8", "84.7", "81.8", "43.5", "22.3" ], [ "Task 03 - Three Supporting Facts", "3", "52.6", "1.4", "40.5", "0.1", "28.2", "0.0" ], [ "Task 07 - Counting", "3", "83.2", "−", "83.5", "−", "79.3", "−" ], [ "Task 08 - Lists Sets", "3", "94.1", "−", "93.3", "−", "87.1", "−" ], [ "Task 11 - Indefinite Knowledge", "2", "97.8", "38.2", "97.7", "80.8", "88.6", "0.0" ], [ "Task 13 - Compound Coreference", "2", "95.6", "14.8", "97.0", "36.4", "94.4", "9.3" ], [ "Task 14 - Time Reasoning", "2", "99.9", "77.6", "99.7", "98.2", "90.5", "30.2" ], [ "Task 15 - Basic Deduction", "2", "100.0", "59.3", "100.0", "89.5", "100.0", "51.4" ], [ "Task 16 - Basic Induction", "3", "97.1", "91.0", "97.9", "85.6", "98.0", "41.4" ], [ "Task 17 - Positional Reasoning", "2", "61.1", "23.9", "60.6", "49.6", "59.7", "10.5" ], [ "Task 18 - Size Reasoning", "2", "86.4", "3.3", "92.2", "3.9", "92.0", "1.4" ], [ "Task 19 - Path Finding", "2", "21.3", "10.2", "24.4", "11.5", "24.3", "7.8" ], [ "Average", "−", "81.4", "39.6", "81.0", "53.7", "73.8", "17.4" ] ]
For correct answer selection (Ans %), we find that MemN2N and the Binary CRF model perform similarly while the Unary CRF model does worse, indicating the importance of including pairwise potentials. Since ground truth supporting facts are provided for each query, we can check the sequence accuracy of supporting facts for each model (i.e. the rate of selecting the exact correct sequence of facts) by taking the highest probability sequence ^z = argmax p(z1,…,zK|x,q) from the model and checking against the ground truth. Overall, the Binary CRF is able to recover supporting facts better than MemN2N. This improvement is significant and can be up to two-fold as seen for tasks 2, 11, 13 & 17. However, we observed that on many tasks it is sufficient to select only the last (or first) fact correctly to predict the answer, and thus higher sequence selection accuracy does not necessarily imply better answer accuracy (and vice versa). For example, all three models get 100% answer accuracy on task 15 but have different supporting fact accuracies.
\documentclass{article} % For LaTeX2e \usepackage[font=small,labelfont=bf]{caption} \usepackage[noend]{algpseudocode} \usetikzlibrary{matrix,arrows,backgrounds,calc,patterns,positioning,fit,shapes} \usepackage[titletoc,title]{appendix} \DeclareMathOperator*{\argmax}{argmax} \DeclareMathOperator*{\argmin}{argmin} \DeclareMathOperator{\softmax}{softmax} \DeclareMathOperator{\logadd}{logadd} \DeclareMathOperator{\sign}{sign} \DeclareMathOperator{\signexp}{signexp} \DeclareMathOperator{\sigmoid}{sigmoid} \DeclareMathOperator{\softparent}{soft-parent} \DeclareMathOperator{\parent}{parent} \DeclareMathOperator{\head}{head} \DeclareMathOperator{\softhead}{soft-head} \DeclareMathOperator{\simf}{sim} \DeclareMathOperator{\relu}{ReLU} \DeclareMathOperator{\lstm}{LSTM} \DeclareMathOperator{\rnn}{RNN} \DeclareMathOperator{\mlp}{MLP} \newcommand{\oplusgets}{\gets_{\oplus}} \newcommand{\pgrad}{\nabla_{p}^\mathcal{L}} \newcommand{\alphagrad}{\nabla_{\alpha}^\mathcal{L}} \newcommand{\betagrad}{\nabla_{\beta}^\mathcal{L}} \newcommand{\thetagrad}{\log \nabla_{\theta}^\mathcal{L}} \newcommand{\xvec}{\mathbf{x}} \newcommand{\yvec}{\mathbf{y}} \newcommand{\cvec}{\mathbf{c}} \newcommand{\mvec}{\mathbf{m}} \newcommand{\zvec}{\mathbf{z}} \newcommand{\qvec}{\mathbf{q}} \newcommand{\svec}{\mathbf{s}} \newcommand{\tvec}{\mathbf{t}} \newcommand{\mcL}{\mathcal{L}} \newcommand{\mcT}{\mathcal{T}} \newcommand{\mcY}{\mathcal{Y}} \newcommand{\mcV}{\mathcal{V}} \newcommand{\mcC}{\mathcal{C}} \newcommand{\mcA}{\mathcal{A}} \newcommand{\mcZ}{\mathcal{Z}} \newcommand{\mcX}{\mathcal{X}} \newcommand{\context}{\mathbf{y}_{\mathrm{c}}} \newcommand{\embcontext}{\mathbf{\tilde{y}}_{\mathrm{c}}} \newcommand{\inpcontext}{\mathbf{\tilde{x}}} \newcommand{\start}{\mathbf{\tilde{y}}_{\mathrm{c0}}} \newcommand{\End}{\mathrm{\texttt{</s>}}} \newcommand{\Uvec}{\mathbf{U}} \newcommand{\Evec}{\mathbf{E}} \newcommand{\E}{\mathbb{E}} \newcommand{\Gvec}{\mathbf{G}} \newcommand{\Fvec}{\mathbf{F}} \newcommand{\Pvec}{\mathbf{P}} \newcommand{\pvec}{\mathbf{p}} \newcommand{\Vvec}{\mathbf{V}} \newcommand{\Wvec}{\mathbf{W}} \newcommand{\hvec}{\mathbf{h}} \newcommand{\wvec}{\mathbf{w}} \newcommand{\uvec}{\mathbf{u}} \newcommand{\vvec}{\mathbf{v}} \newcommand{\bvec}{\mathbf{b}} \newcommand{\reals}{\mathbb{R}} \newcommand{\ind}{\mathbbm{1}} \newcommand\given{\,|\,} \title{Structured Attention Networks} \author{Yoon Kim\thanks{Equal contribution.} \hspace{5mm} Carl Denton$^*$ \hspace{5mm} Luong Hoang \hspace{5mm} Alexander M. Rush \\ \texttt{\small \{yoonkim@seas,carldenton@college,lhoang@g,srush@seas\}.harvard.edu}\\ School of Engineering and Applied Sciences\\ Harvard University\\ Cambridge, MA 02138, USA \\ } \newcommand{\fix}{\marginpar{FIX}} \newcommand{\new}{\marginpar{NEW}} \iclrfinalcopy % Uncomment for camera-ready version \begin{document} \maketitle \begin{abstract} Attention networks have proven to be an effective approach for embedding categorical inference within a deep neural network. However, for many tasks we may want to model richer structural dependencies without abandoning end-to-end training. In this work, we experiment with incorporating richer structural distributions, encoded using graphical models, within deep networks. We show that these structured attention networks are simple extensions of the basic attention procedure, and that they allow for extending attention beyond the standard soft-selection approach, such as attending to partial segmentations or to subtrees. 
We experiment with two different classes of structured attention networks: a linear-chain conditional random field and a graph-based parsing model, and describe how these models can be practically implemented as neural network layers. Experiments show that this approach is effective for incorporating structural biases, and structured attention networks outperform baseline attention models on a variety of synthetic and real tasks: tree transduction, neural machine translation, question answering, and natural language inference. We further find that models trained in this way learn interesting unsupervised hidden representations that generalize simple attention. \end{abstract} \section{Introduction} Attention networks are now a standard part of the deep learning toolkit, contributing to impressive results in neural machine translation \citep{Bahdanau2015,Luong2015}, image captioning \citep{Xu2015}, speech recognition \citep{Chorowski2015,Chan2015}, question answering \citep{Hermann2015,Sukhbaatar2015}, and algorithm-learning \citep{Graves2014,Vinyals2015c}, among many other applications (see \cite{Cho2015} for a comprehensive review). This approach alleviates the bottleneck of compressing a source into a fixed-dimensional vector by equipping a model with variable-length memory \citep{Weston2014,Graves2014,Graves2016}, thereby providing random access into the source as needed. Attention is implemented as a hidden layer which computes a categorical distribution (or hierarchy of categorical distributions) to make a soft-selection over source elements. Noting the empirical effectiveness of attention networks, we also observe that the standard attention-based architecture does not directly model any \textit{structural dependencies} that may exist among the source elements, and instead relies completely on the hidden layers of the network. While one might argue that these structural dependencies can be learned implicitly by a deep model with enough data, in practice, it may be useful to provide a structural bias. Modeling structural dependencies at the final, \textit{output} layer has been shown to be important in many deep learning applications, most notably in seminal work on graph transformers \citep{LeCun1998}, key work on NLP \citep{Collobert2011}, and in many other areas \citep[\textit{inter alia}]{Peng2009,Do2010,Jaderberg2014b,Chen2015b,Durrett2015,Lample2016}. In this work, we consider applications which may require structural dependencies at the attention layer, and develop \textit{internal} structured layers for modeling these directly. This approach generalizes categorical soft-selection attention layers by specifying possible structural dependencies in a soft manner. Key applications will be the development of an attention function that segments the source input into subsequences and one that takes into account the latent recursive structure (i.e. parse tree) of a source sentence. Our approach views the attention mechanism as a graphical model over a set of latent variables. The standard attention network can be seen as an expectation of an annotation function with respect to a single latent variable whose categorical distribution is parameterized to be a function of the source. In the general case we can specify a graphical model over multiple latent variables whose edges encode the desired structure. Computing forward attention requires performing inference to obtain the expectation of the annotation function, i.e. the \textit{context vector}. 
This expectation is computed over an exponentially-sized set of structures (through the machinery of graphical models/structured prediction), hence the name \textit{structured attention} network. Notably, each step of this process (including inference) is differentiable, so the model can be trained end-to-end without having to resort to deep policy gradient methods \citep{schulman2015gradient}. The differentiability of inference algorithms over graphical models has previously been noted by various researchers \citep{Li2009,Domke2011,Stoyanov2011,Stoyanov2012,Gormley2015}, primarily outside the area of deep learning. For example, \citet{Gormley2015} treat an entire graphical model as a differentiable circuit and backpropagate risk through variational inference (loopy belief propagation) for minimum risk training of dependency parsers. Our contribution is to combine these ideas to produce structured \textit{internal} attention layers within deep networks, noting that these approaches allow us to use the resulting marginals to create new features, as long as we do so in a differentiable way. We focus on two classes of structured attention: linear-chain conditional random fields (CRFs) \citep{Lafferty2001} and first-order graph-based dependency parsers \citep{Eisner1996}. The initial work of \cite{Bahdanau2015} was particularly interesting in the context of machine translation, as the model was able to implicitly learn an \textit{alignment model as a hidden layer}, effectively embedding inference into a neural network. In a similar vein, under our framework the model has the capacity to learn a \textit{segmenter as a hidden layer} or a \textit{parser as a hidden layer}, without ever having to see a segmented sentence or a parse tree. Our experiments apply this approach to a difficult synthetic reordering task, as well as to machine translation, question answering, and natural language inference. We find that models trained with structured attention outperform standard attention models. Analysis of learned representations further reveals that interesting structures emerge as an internal layer of the model. All code is available at \url{http://github.com/harvardnlp/struct-attn}. \section{Background: Attention Networks} A standard neural network consists of a series of non-linear transformation layers, where each layer produces a fixed-dimensional hidden representation. For tasks with large input spaces, this paradigm makes it hard to control the interaction between components. For example, in machine translation, the source consists of an entire sentence, and the output is a prediction for each word in the translated sentence. Utilizing a standard network leads to an information bottleneck, where one hidden layer must encode the entire source sentence. Attention provides an alternative approach.\footnote{Another line of work involves marginalizing over latent variables (e.g. latent alignments) for sequence-to-sequence transduction \citep{Kong2016,Lu2016,Yu2016,Yu2017}.} An attention network maintains a set of hidden representations that scale with the size of the source. The model uses an internal inference step to perform a soft-selection over these representations. This method allows the model to maintain a variable-length memory and has been shown to be crucially important for scaling systems for many tasks.
Formally, let $x = [x_1, \dots, x_n]$ represent a sequence of inputs, let $q$ be a query, and let $z$ be a categorical latent variable with sample space $\{1, \ldots, n\}$ that encodes the desired selection among these inputs. Our aim is to produce a \textit{context} $c$ based on the sequence and the query. To do so, we assume access to an \textit{attention distribution} $z \sim p(z \given x, q)$, where we condition $p$ on the inputs $x$ and a query $q$. The \textit{context} over a sequence is defined as expectation, $c = \E_{z \sim p(z \given x, q)} [f(x, z)]$ where $f(x, z)$ is an \textit{annotation function}. Attention of this form can be applied over any type of input, however, we will primarily be concerned with ``deep'' networks, where both the annotation function and attention distribution are parameterized with neural networks, and the context produced is a vector fed to a downstream network. For example, consider the case of attention-based neural machine translation \citep{Bahdanau2015}. Here the sequence of inputs $[\mathbf{x}_1, \ldots, \mathbf{x}_n]$ are the hidden states of a recurrent neural network (RNN), running over the words in the source sentence, $\mathbf{q}$ is the RNN hidden state of the target decoder (i.e. vector representation of the query $q$), and $z$ represents the source position to be attended to for translation. The attention distribution $p$ is simply $p(z = i \given x, q) = \softmax(\theta_i)$ where $\theta \in \reals^n$ is a parameterized potential typically based on a neural network, e.g. $\theta_i = \mlp([\mathbf{x}_i; \qvec])$. The annotation function is defined to simply return the selected hidden state, $f(\mathbf{x}, z) = \mathbf{x}_z$. The context vector can then be computed using a simple sum, \begin{equation}\label{vanilla-attn} \mathbf{c} = \E_{z \sim p(z \given x, q)} [f( x, z)] = \sum_{i=1}^n p(z = i \given x, q) \mathbf{x}_i \end{equation} Other tasks such as question answering use attention in a similar manner, for instance by replacing source $[x_1, \ldots, x_n]$ with a set of potential facts and $q$ with a representation of the question. In summary we interpret the attention mechanism as taking the expectation of an annotation function $f(x,z)$ with respect to a latent variable $z \sim p$, where $p$ is parameterized to be function of $x$ and $q$. \section{Structured Attention} Attention networks simulate selection from a set using a soft model. In this work we consider generalizing selection to types of attention, such as selecting chunks, segmenting inputs, or even attending to latent subtrees. One interpretation of this attention is as using soft-selection that considers all possible structures over the input, of which there may be exponentially many possibilities. Of course, this expectation can no longer be computed using a simple sum, and we need to incorporate the machinery of inference directly into our neural network. Define a structured attention model as being an attention model where $z$ is now a vector of discrete latent variables $[z_1, \ldots, z_m]$ and the attention distribution is $p(z \given x, q)$ is defined as a \textit{conditional random field} (CRF), specifying the independence structure of the $z$ variables. Formally, we assume an undirected graph structure with $m$ vertices. The CRF is parameterized with clique (log-)potentials $\theta_C(z_{C}) \in \reals$, where the $z_C$ indicates the subset of $z$ given by clique $C$. 
Under this definition, the attention probability is defined as, $p(z \given x, q; \theta) = \softmax(\sum_C \theta_C(z_C))$, where for symmetry we use $\softmax$ in a general sense, i.e. $\softmax(g(z)) = \frac{1}{Z} \exp(g(z))$ where $Z = \sum_{z'} \exp(g(z'))$ is the implied partition function. In practice we use a neural CRF, where $\theta$ comes from a deep model over $x, q$. In structured attention, we also assume that the annotation function $f$ factors (at least) into clique annotation functions $f(x, z) = \sum_C f_C(x, z_C)$. Under standard conditions on the conditional independence structure, inference techniques from graphical models can be used to compute the forward-pass expectations and the context: \[c = \E_{z \sim p(z \given x, q)} [f(x, z)] = \sum_{C} \E_{z \sim p(z_C \given x, q)} [f_C(x, z_C)]\] \subsection{Example 1: Subsequence Selection} \label{sec:subselect} Suppose instead of soft-selecting a single input, we wanted to explicitly model the selection of contiguous subsequences. We could naively apply categorical attention over all subsequences, or hope the model learns a multi-modal distribution to combine neighboring words. Structured attention provides an alternate approach. Concretely, let $m =n$, define $z$ to be a random vector $z = [z_1, \dots, z_n]$ with $z_i \in \{0, 1\}$, and define our annotation function to be, $f(x,z) = \sum_{i=1}^n f_{i} (x,z_{i})$ where $f_{i} (x,z_i) = \ind \{ z_i = 1\} \xvec_i$. The explicit expectation is then, \begin{equation}\label{struct-attn} \E_{z_1, \dots, z_n }[f(x,z)] = \sum_{i=1}^n p(z_i = 1 \given x, q) \xvec_i \end{equation} Equation (\ref{struct-attn}) is similar to equation (\ref{vanilla-attn})---both are a linear combination of the input representations where the scalar is between $[0,1]$ and represents how much attention should be focused on each input. However, (2) is fundamentally different in two ways: (i) it allows for multiple inputs (or no inputs) to be selected for a given query; (ii) we can incorporate structural dependencies across the $z_i$'s. For instance, we can model the distribution over $z$ with a linear-chain CRF with pairwise edges, \begin{align}\label{linear-chain} p(z_1, \dots, z_n \given x, q) = \softmax \left( \sum_{i=1}^{n-1} \theta_{i,i+1}(z_i, z_{i+1}) \right) \end{align} where $\theta_{k,l}$ is the pairwise potential for $z_i = k$ and $z_{i+1} = l$. This model is shown in Figure~\ref{fig:seq}c. Compare this model to the standard attention in Figure~\ref{fig:seq}a, or to a simple Bernoulli (sigmoid) selection method, $p(z_i = 1 \given x, q) = \sigmoid(\theta_{i}) $, shown in Figure~\ref{fig:seq}b. All three of these methods can use potentials from the same neural network or RNN that takes $x$ and $q$ as inputs. In the case of the linear-chain CRF in~(\ref{linear-chain}), the marginal distribution $p(z_i = 1 \given x)$ can be calculated efficiently in linear-time for all $i$ using message-passing, i.e. the forward-backward algorithm. These marginals allow us to calculate (\ref{struct-attn}), and in doing so we implicitly sum over an exponentially-sized set of structures (i.e. all binary sequences of length $n$) through dynamic programming. We refer to this type of attention layer as a \emph{segmentation attention} layer. Note that the forward-backward algorithm is being used as parameterized \textit{pooling} (as opposed to output computation), and can be thought of as generalizing the standard attention softmax. 
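To make this forward-pass computation concrete, the following NumPy sketch computes the segmentation-attention marginals from the pairwise potentials of equation~(\ref{linear-chain}) via forward-backward in log space (an illustrative sketch only, not the released implementation; the backward pass is discussed in Section~\ref{sec:e2e}):
\begin{verbatim}
# Minimal NumPy sketch of segmentation attention: forward-backward over the
# pairwise log-potentials theta[i, a, b] = theta_{i,i+1}(z_i = a, z_{i+1} = b)
# of the two-state linear-chain CRF, returning p(z_i = 1 | x, q) for all i.
import numpy as np
from scipy.special import logsumexp

def segmentation_marginals(theta):
    """theta: (n-1, 2, 2) pairwise log-potentials.  Returns (n,) marginals."""
    n = theta.shape[0] + 1
    alpha = np.zeros((n, 2))              # alpha[i, a]: log-sum over z_1..z_{i-1}
    beta = np.zeros((n, 2))               # beta[i, a]:  log-sum over z_{i+1}..z_n
    for i in range(1, n):
        alpha[i] = logsumexp(alpha[i - 1][:, None] + theta[i - 1], axis=0)
    for i in range(n - 2, -1, -1):
        beta[i] = logsumexp(theta[i] + beta[i + 1][None, :], axis=1)
    log_Z = logsumexp(alpha[-1])
    return np.exp(alpha[:, 1] + beta[:, 1] - log_Z)   # p(z_i = 1 | x, q)

n = 5
theta = np.random.randn(n - 1, 2, 2)      # in practice produced by a neural net
p = segmentation_marginals(theta)         # each entry lies in [0, 1]
\end{verbatim}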
Crucially, this generalization from vector softmax to forward-backward is just a series of differentiable steps,\footnote{As are other dynamic programming algorithms for inference in graphical models, such as (loopy and non-loopy) belief propagation.} and we can compute gradients of its output (marginals) with respect to its input (potentials). This will allow the structured attention model to be trained end-to-end as part of a deep model. \subsection{Example 2: Syntactic Tree Selection} This same approach can be used for more involved structural dependencies. One popular structure for natural language tasks is a dependency tree, which enforces a structural bias on the recursive dependencies common in many languages. In particular, a dependency tree enforces that each word in a source sentence is assigned exactly one parent word (\textit{head word}), and that these assignments do not cross (projective structure). Employing this bias encourages the system to make a soft-selection based on learned syntactic dependencies, without requiring linguistic annotations or a pipelined decision. A dependency parser can be partially formalized as a graphical model with the following cliques \citep{Smith2008}: latent variables $z_{ij} \in \{0,1\}$ for all $i \ne j$, which indicate that the $i$-th word is the parent of the $j$-th word (i.e. $x_i \rightarrow x_j$); and a special global constraint that rules out configurations of $z_{ij}$'s that violate parsing constraints (e.g. one head, projectivity). The parameters of the graph-based CRF dependency parser are the potentials $\theta_{ij}$, which reflect the score of selecting $x_i$ as the parent of $x_j$. The probability of a parse tree $z$ given the sentence $x = [x_1, \ldots, x_n]$ is, \begin{equation} p(z \given x, q)= \softmax \left(\ind\{z\ \text{is valid}\} \sum_{i \neq j} \ind\{z_{ij} = 1\} \theta_{ij} \right) \end{equation} where $z$ is represented as a vector of $z_{ij}$'s for all $i \ne j$. It is possible to calculate the marginal probability of each edge $p(z_{ij} = 1\given x, q)$ for all $i, j$ in $O(n^3)$ time using the inside-outside algorithm \citep{Baker1979} on the data structures of \citet{Eisner1996}. The parsing constraints ensure that each word has exactly one head (i.e. $\sum_{i=1}^n z_{ij} = 1$). Therefore, if we want to utilize the \emph{soft-head} selection of a position $j$, the context vector is defined as: \begin{align*} f_j(x, z) = \sum_{i=1}^n \ind\{z_{ij} = 1\} \xvec_i & & \cvec_j = \E_z [f_j(x, z)] = \sum_{i=1}^n p(z_{ij} = 1 \given x, q) \xvec_i \end{align*} Note that in this case the annotation function has the subscript $j$ to produce a context vector for each word in the sentence. Similar types of attention can be applied for other tree properties (e.g. soft-children). We refer to this type of attention layer as a \emph{syntactic attention} layer. \subsection{End-to-End Training}\label{sec:e2e} Graphical models of this form have been widely used as the final layer of deep models. Our contribution is to argue that these networks can be added within deep networks in place of simple attention layers. The whole model can then be trained end-to-end. The main complication in utilizing this approach within the network itself is the need to backpropagate the gradients through an inference algorithm as part of the structured attention network. Past work has demonstrated the techniques necessary for this approach (see \citet{Stoyanov2011}), but to our knowledge it is very rarely employed.
Consider the case of the simple linear-chain CRF layer from equation (\ref{linear-chain}). Figure~\ref{fig:fb} (left) shows the standard forward-backward algorithm for computing the marginals $p(z_i = 1\given x, q; \theta)$. If we treat the forward-backward algorithm as a neural network layer, its input are the potentials $\theta$, and its output after the forward pass are these marginals.\footnote{Confusingly, ``forward'' in this case is different than in the \textit{forward}-backward algorithm, as the marginals themselves are the output. However the two uses of the term are actually quite related. The forward-backward algorithm can be interpreted as a forward and backpropagation pass on the log partition function. See \citet{Eisner2016} for further details (appropriately titled ``Inside-Outside and Forward-Backward Algorithms Are Just Backprop''). As such our full approach can be seen as computing second-order information. This interpretation is central to \citet{Li2009}.} To backpropagate a loss through this layer we need to compute the gradient of the loss $\mcL$ with respect to $\theta$, $\nabla_{\theta}^\mcL$, as a function of the gradient of the loss with respect to the marginals, $\nabla_{p}^\mcL$.\footnote{In general we use $\nabla^a_b$ to denote the Jacobian of $a$ with respect to $b$.} As the forward-backward algorithm consists of differentiable steps, this function can be derived using reverse-mode automatic differentiation of the forward-backward algorithm itself. Note that this reverse-mode algorithm conveniently has a parallel structure to the forward version, and can also be implemented using dynamic programming. \begin{wraptable}{r}{0.54\textwidth} \small \centering \begin{tabular}{cc|cc|cc} \toprule & & \multicolumn{2}{c|}{$\oplus$} & \multicolumn{2}{c}{$\otimes$} \\ $s_a$ & $s_b$ & $ l_{a+b} $ & $s_{a+b}$ & $ l_{a\cdot b}$ & $s_{a \cdot b}$\\ \midrule $+$ & $+$ & $l_a+\log (1 + d)$& $+$ & $l_a+l_b$ &$+$ \\ $+$ & $-$ & $l_a+\log (1 - d)$& $+$ & $l_a+l_b$ &$-$ \\ $-$ & $+$ & $l_a+\log (1 - d)$& $-$ & $l_a+l_b$ &$-$ \\ $-$ & $-$ & $l_a+\log (1 + d)$& $-$ & $l_a+l_b$ &$+$ \\ \bottomrule \end{tabular} \caption{\label{tab:dlog} \small Signed log-space semifield (from \cite{Li2009}). Each real number $a$ is represented as a pair $( l_a, s_a )$ where $l_a = \log |a|$ and $s_a = \sign(a)$. Therefore $a = s_a \exp(l_a)$. For the above we let $d = \exp(l_b- l_a)$ and assume $|a| > |b|$. } \end{wraptable} However, in practice, one cannot simply use current off-the-shelf tools for this task. For one, efficiency is quite important for these models and so the benefits of hand-optimizing the reverse-mode implementation still outweighs simplicity of automatic differentiation. Secondly, numerical precision becomes a major issue for structured attention networks. For computing the forward-pass and the marginals, it is important to use the standard log-space semifield over $\mathbb{R}\cup\{\pm \infty\}$ with binary operations $(\oplus = \logadd, \otimes = +)$ to avoid underflow of probabilities. For computing the backward-pass, we need to remain in log-space, but also handle log of negative values (since $\pgrad$ could be negative). This requires extending to the \textit{signed} log-space semifield over $\left[\mathbb{R}\cup\{\pm \infty\}\right] \times \{+, -\}$ with special $+$/$-$ operations. Table~\ref{tab:dlog}, based on \cite{Li2009}, demonstrates how to handle this issue, and Figure~\ref{fig:fb} (right) describes backpropagation through the forward-backward algorithm. 
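For concreteness, the two semifield operations of Table~\ref{tab:dlog} can be sketched as follows (an illustrative NumPy sketch; the function names are ours):
\begin{verbatim}
# Hedged NumPy sketch of the signed log-space semifield: each real a is stored
# as (log|a|, sign(a)), so sums and products of possibly negative gradient
# quantities can be computed without leaving log space.
import numpy as np

def signed_log_add(la, sa, lb, sb):
    """(la, sa) (+) (lb, sb): returns (log|a+b|, sign(a+b))."""
    if lb > la:                              # ensure |a| >= |b|
        la, sa, lb, sb = lb, sb, la, sa
    d = np.exp(lb - la)                      # d = |b| / |a| <= 1
    if sa == sb:
        return la + np.log1p(d), sa          # same signs: log(|a| + |b|)
    return la + np.log1p(-d), sa             # opposite signs: log(|a| - |b|)

def signed_log_mul(la, sa, lb, sb):
    """(la, sa) (x) (lb, sb): returns (log|a*b|, sign(a*b))."""
    return la + lb, sa * sb

# example: (-0.3) + 0.5 represented in signed log space
la, sa = np.log(0.3), -1.0
lb, sb = np.log(0.5), +1.0
lc, sc = signed_log_add(la, sa, lb, sb)      # recovers log(0.2) with sign +1
\end{verbatim}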
For dependency parsing, the forward pass can be computed using the inside-outside implementation of Eisner's algorithm \citep{Eisner1996}. Similarly, the backpropagation parallels the inside-outside structure. Forward/backward pass through the inside-outside algorithm is described in Appendix~\ref{app:io}. \section{Experiments} We experiment with three instantiations of structured attention networks on four different tasks: (a) a simple, synthetic tree manipulation task using the syntactic attention layer, (b) machine translation with segmentation attention (i.e. two-state linear-chain CRF), (c) question answering using an $n$-state linear-chain CRF for multi-step inference over $n$ facts, and (d) natural language inference with syntactic tree attention. These experiments are not intended to boost the state-of-the-art for these tasks but to test whether these methods can be trained effectively in an end-to-end fashion, can yield improvements over standard selection-based attention, and can learn plausible latent structures. All model architectures, hyperparameters, and training details are further described in Appendix~\ref{app:model}. \subsection{Tree Transduction} The first set of experiments look at a tree-transduction task. These experiments use synthetic data to explore a failure case of soft-selection attention models. The task is to learn to convert a random formula given in prefix notation to one in infix notation, e.g., \begin{small} \begin{align*} (\,\,\,*\,\,\,(\,\,\,+\,\,\,(\,\,\,+\,\,\,15\,\,\,7\,\,\,)\,\,\,1\,\,\,8\,\,\,)\,\,\,(\,\,\,+\,\,\,19\,\,\,0\,\,\,11\,\,\,)\,\,\,) \,\, \Rightarrow (\,\,(\,\,15\,\,+\,\,7\,\,\,)\,\,+\,\,1\,\,+\,\,8\,\,\,)\,\,*\,\,(\,\,\,19\,\,+\,\,0\,\,+\,\,11\,\,\,) \end{align*} \end{small} The alphabet consists of symbols $\{(, ),+,*\}$, numbers between $0$ and $20$, and a special root symbol $\$$. This task is used as a preliminary task to see if the model is able to learn the implicit tree structure on the source side. The model itself is an encoder-decoder model, where the encoder is defined below and the decoder is an LSTM. See Appendix~\ref{app:tree} for the full model. Training uses $15$K prefix-infix pairs where the maximum nesting depth is set to be between $2$-$4$ (the above example has depth $3$), with $5$K pairs in each depth bucket. The number of expressions in each parenthesis is limited to be at most $4$. Test uses $1$K unseen sequences with depth between $2$-$6$ (note specifically deeper than train), with $200$ sequences for each depth. The performance is measured as the average proportion of correct target tokens produced until the first failure (as in \cite{Grefenstette2015}). For experiments we try using different forms of \textit{self}-attention over embedding-only encoders. Let $\mathbf{x}_j$ be an embedding for each source symbol; our three variants of the source representation $\hat{\xvec}_j$ are: (a) \textit{no atten}, just symbol embeddings by themselves, i.e. $\hat{\xvec}_j = \mathbf{x}_j$; (b) \textit{simple} attention, symbol embeddings and soft-pairing for each symbol, i.e. $ \hat{\xvec}_j = [\mathbf{x}_j; \mathbf{c}_j]$ where $ \mathbf{c}_j = \sum_{i=1}^n \softmax( \theta_{ij}) \mathbf{x}_i$ is calculated using soft-selection; (c) \textit{structured} attention, symbol embeddings and soft-parent, i.e. $\hat{\xvec}_j = [\mathbf{x}_j; \mathbf{c}_j]$ where $ \mathbf{c}_j = \sum_{i=1}^n p(z_{ij} = 1 \given x) \mathbf{x}_i $ is calculated using parsing marginals, obtained from the syntactic attention layer. 
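For concreteness, the three source representations can be sketched as follows (an illustrative NumPy sketch, not the released code; the scores $\theta$ and the parsing marginals are assumed to be precomputed by the parsing layer, whose parameterization is given next):
\begin{verbatim}
# Illustrative NumPy sketch of the three encoders compared above, given symbol
# embeddings X (n, l), pairwise scores theta (n, n), and parsing marginals
# P (n, n) with P[i, j] = p(z_ij = 1 | x).
import numpy as np

def softmax_cols(theta):
    e = np.exp(theta - theta.max(axis=0, keepdims=True))
    return e / e.sum(axis=0, keepdims=True)          # normalize over parents i

def source_representations(X, theta, P, mode):
    if mode == "none":                   # (a) symbol embeddings only
        return X
    if mode == "simple":                 # (b) soft-pairing via softmax over i
        return np.concatenate([X, softmax_cols(theta).T @ X], axis=1)
    if mode == "structured":             # (c) soft-parent via parse marginals
        return np.concatenate([X, P.T @ X], axis=1)
    raise ValueError(mode)
\end{verbatim}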
None of these models use an explicit query value---the potentials come from running a bidirectional LSTM over the source, producing hidden vectors $\hvec_i$, and then computing \[\theta_{ij} = \tanh(\mathbf{s}^\top \tanh(\mathbf{W}_1 \mathbf{h}_i + \mathbf{W}_2 \mathbf{h}_j + \mathbf{b}))\] \noindent where $\mathbf{s}, \mathbf{b}, \mathbf{W}_1, \mathbf{W}_2$ are parameters (see Appendix~\ref{app:parsing}). \begin{wraptable}{l}{0.43\textwidth}\label{tree-perf} \small \begin{tabular}{c ccc} \toprule Depth & No Atten & Simple & Structured \\ \midrule $2$ & $7.6$ & $87.4$ & $99.2$ \\ $3$ & $4.1$ & $49.6$ & $87.0$ \\ $4$ & $2.8$ & $23.3$ & $64.5$ \\ $5$ & $2.1$ & $15.0$ & $30.8$ \\ $6$ & $1.5$ & $8.5$ & $18.2$ \\ \bottomrule \end{tabular} \caption{\label{tree-perf} \small Performance (average length to failure \%) of models on the tree-transduction task.} \end{wraptable} The source representation $[\hat{\xvec}_1, \dots, \hat{\xvec}_n]$ are attended over using the standard attention mechanism at each decoding step by an LSTM decoder.\footnote{Thus there are two attention mechanisms at work under this setup. First, structured attention over the source only to obtain soft-parents for each symbol (i.e. self-attention). Second, standard softmax alignment attention over the source representations during decoding.} Additionally, symbol embedding parameters are shared between the parsing LSTM and the source encoder. \paragraph{Results} Table~\ref{tree-perf} has the results for the task. Note that this task is fairly difficult as the encoder is quite simple. The baseline model (unsurprisingly) performs poorly as it has no information about the source ordering. The simple attention model performs better, but is significantly outperformed by the structured model with a tree structure bias. We hypothesize that the model is partially reconstructing the arithmetic tree. Figure~\ref{tree-viz} shows the attention distribution for the simple/structured models on the same source sequence, which indicates that the structured model is able to learn boundaries (i.e. parentheses). \subsection{Neural Machine Translation} Our second set of experiments use a full neural machine translation model utilizing attention over subsequences. Here both the encoder/decoder are LSTMs, and we replace standard simple attention with a segmentation attention layer. We experiment with two settings: translating directly from unsegmented Japanese characters to English words (effectively using structured attention to perform soft word segmentation), and translating from segmented Japanese words to English words (which can be interpreted as doing \emph{phrase-based} neural machine translation). Japanese word segmentation is done using the KyTea toolkit \citep{Neubig2011}. The data comes from the Workshop on Asian Translation (WAT) \citep{wat2016}. We randomly pick $500$K sentences from the original training set (of $3$M sentences) where the Japanese sentence was at most $50$ characters and the English sentence was at most $50$ words. We apply the same length filter on the provided validation/test sets for evaluation. The vocabulary consists of all tokens that occurred at least $10$ times in the training corpus. 
The segmentation attention layer is a two-state CRF where the unary potentials at the $j$-th decoder step are parameterized as \[ \theta_i(k)= \begin{cases} \hvec_i \Wvec \hvec_j,& k = 1 \\ 0, &k = 0 \end{cases} \] Here $[\hvec_1, \dots, \hvec_n]$ are the encoder hidden states and $\mathbf{h}_j'$ is the $j$-th decoder hidden state (i.e. the query vector). The pairwise potentials are parameterized linearly with $\mathbf{b}$, i.e. all together \[ \theta_{i,i+1}(z_i, z_{i+1}) = \theta_i(z_i) + \theta_{i+1}(z_{i+1}) + \mathbf{b}_{z_i, z_{i+1}} \] Therefore the segmentation attention layer requires just $4$ additional parameters. Appendix~\ref{app:nmt} describes the full model architecture. We experiment with three attention configurations: (a) standard {\it simple} attention, i.e. $\cvec_j = \sum_{i=1}^n \softmax(\theta_i) \hvec_i $; (b) \textit{sigmoid} attention: multiple selection with Bernoulli random variables, i.e. $\cvec_j = \sum_{i=1}^n \sigmoid(\theta_i) \hvec_i$; (c) \textit{structured} attention, encoded with normalized CRF marginals, \begin{align*} \cvec_j = \sum_{i=1}^n \frac{p(z_i=1 \given x, q)}{\gamma} \hvec_i & & \gamma = \frac{1}{\lambda} \sum_{i=1}^n p(z_i =1 \given x, q) \end{align*} The normalization term $\gamma$ is not ideal but we found it to be helpful for stable training.\footnote{With standard expectation (i.e. $\cvec_j = \sum_{i=1}^n p(z_i=1 \given x, q) \hvec_i$) we empirically observed the marginals to quickly saturate. We tried various strategies to overcome this, such as putting an $l_2$ penalty on the unary potentials and initializing with a pretrained sigmoid attention model, but simply normalizing the marginals proved to be the most effective. However, this changes the interpretation of the context vector as the expectation of an annotation function in this case.} $\lambda$ is a hyperparameter (we use $\lambda = 2$) and we further add an $l_2$ penalty of $0.005$ on the pairwise potentials $\bvec$. These values were found via grid search on the validation set. \begin{wraptable}{l}{0.43\textwidth}\label{nmt-perf} \small \begin{tabular}{c ccc} \toprule & Simple & Sigmoid & Structured \\ \midrule \textsc{Char} & $12.6$ & $13.1$ & $14.6$ \\ \textsc{Word} & $14.1$ & $13.8$ & $14.3$ \\ \bottomrule \end{tabular} \caption{\label{nmt-perf}\small Translation performance as measured by BLEU (higher is better) on character-to-word and word-to-word Japanese-English translation for the three different models.} \end{wraptable} \paragraph{Results} Results for the translation task on the test set are given in Table~\ref{nmt-perf}. Sigmoid attention outperforms simple (softmax) attention on the character-to-word task, potentially because it is able to learn many-to-one alignments. On the word-to-word task, the opposite is true, with simple attention outperforming sigmoid attention. Structured attention outperforms both models on both tasks, although improvements on the word-to-word task are modest and unlikely to be statistically significant. For further analysis, Figure~\ref{fig:vis3} shows a visualization of the different attention mechanisms on the character-to-word setup. The simple model generally focuses attention heavily on a single character. In contrast, the sigmoid and structured models are able to spread their attention distribution on contiguous subsequences. The structured attention learns additional parameters (i.e. $\bvec$) to smooth out this type of attention. 
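For concreteness, the normalized-marginal context vector used by the structured model can be sketched as follows (an illustrative NumPy sketch; the marginals are assumed to come from a forward-backward routine as in Section~\ref{sec:subselect}):
\begin{verbatim}
# Hedged NumPy sketch of the structured attention context at decoder step j:
# the CRF marginals p(z_i = 1 | x, q) are renormalized by gamma before mixing
# the encoder states; lambda = 2 as in our experiments.
import numpy as np

def structured_context(marginals, H, lam=2.0):
    """marginals: (n,) values p(z_i = 1 | x, q); H: (n, l) encoder states."""
    gamma = marginals.sum() / lam
    return (marginals / gamma) @ H        # c_j, shape (l,)

n, l = 10, 500
p = np.random.rand(n)                     # stand-in marginals in [0, 1]
H = np.random.randn(n, l)
c_j = structured_context(p, H)
\end{verbatim}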
\subsection{Question Answering} Our third experiment is on question answering (QA) with the linear-chain CRF attention layer for inference over multiple facts. We use the bAbI dataset \citep{Weston2015}, where the input is a set of sentences/facts paired with a question, and the answer is a single token. For many of the tasks the model has to attend to multiple supporting facts to arrive at the correct answer (see Figure~\ref{fig:vis4} for an example), and existing approaches use multiple `hops' to greedily attend to different facts. We experiment with employing structured attention to perform inference in a non-greedy way. As the ground truth supporting facts are given in the dataset, we are able to assess the model's inference accuracy. The baseline (simple) attention model is the End-To-End Memory Network \citep{Sukhbaatar2015} (MemN2N), which we briefly describe here. See Appendix~\ref{app:qa} for full model details. Let $\xvec_1, \dots, \xvec_n$ be the input embedding vectors for the $n$ sentences/facts and let $\mathbf{q}$ be the query embedding. In MemN2N, $z_k$ is the random variable for the sentence to select at the $k$-th inference step (i.e. $k$-th hop), and thus $z_k \in \{1, \dots, n\}$. The probability distribution over $z_k$ is given by $p(z_k = i \given x, q) = \softmax((\xvec_i^k)^\top\qvec^k)$, and the context vector is given by $\cvec^k = \sum_{i=1}^n p(z_k = i \given x, q) \mathbf{o}_i^k$, where $\xvec_i^k, \mathbf{o}_i^k$ are the input and output embedding for the $i$-th sentence at the $k$-th hop, respectively. The $k$-th context vector is used to modify the query $\qvec^{k+1} = \qvec^k + \cvec^k$, and this process repeats for $k = 1, \dots, K$ (for $k=1$ we have $\xvec_i^k = \xvec_i, \qvec^k = \qvec, \cvec^k = \mathbf{0}$). The $K$-th context and query vectors are used to obtain the final answer. The attention mechanism for a $K$-hop MemN2N network can therefore be interpreted as a greedy selection of a length-$K$ sequence of facts (i.e. $z_1, \dots, z_K$). For structured attention, we use an $n$-state, $K$-step linear-chain CRF.\footnote{Note that this differs from the segmentation attention for the neural machine translation experiments described above, which was a $K$-state (with $K =2$), $n$-step linear-chain CRF.} We experiment with two different settings: (a) a unary CRF model with node potentials $$\theta_k(i) = (\xvec_i^k)^\top \mathbf{q}^k$$ and (b) a binary CRF model with pairwise potentials $$\theta_{k,k+1}(i, j) = (\mathbf{x}_i^k)^\top\qvec^k + (\mathbf{x}_i^k)^\top \xvec_j^{k + 1} + (\mathbf{x}_j^{k + 1})^\top \mathbf{q}^{k + 1}$$ The binary CRF model is designed to test the model's ability to perform sequential reasoning. For both (a) and (b), a \emph{single} context vector is computed: $\mathbf{c} = \sum_{z_1,\ldots,z_K} p(z_1,\ldots,z_K \given x,q) f(x,z)$ (unlike MemN2N which computes $K$ context vectors). Evaluating $\mathbf{c}$ requires summing over all $n^K$ possible sequences of length $K$, which may not be practical for large values of $K$. However, if $f(x,z)$ factors over the components of $z$ (e.g. $f(x,z)= \sum_{k=1}^K f_k(x,z_k)$) then one can rewrite the above sum in terms of marginals: $\mathbf{c} = \sum_{k=1}^K \sum_{i=1}^n p(z_{k} = i \given x,q) f_{k}(x,z_{k})$. In our experiments, we use $f_k(x,z_k) = \mathbf{o}_{z_k}^k$. All three models are described in further detail in Appendix~\ref{app:qa}. \paragraph{Results} We use the version of the dataset with $1$K questions for each task. 
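As an aside, the highest-scoring fact sequence $\hat{z}$ used for the supporting-fact evaluation below can be recovered with standard max-product (Viterbi) decoding over the $K$-step chain; the following is an illustrative NumPy sketch (not the released code), assuming the pairwise potentials have already been computed as in Appendix~\ref{app:qa}:
\begin{verbatim}
# Minimal NumPy sketch of Viterbi decoding over the K-step, n-state chain,
# i.e. recovering z_hat = argmax p(z_1, ..., z_K | x, q).
import numpy as np

def viterbi_facts(theta):
    """theta: (K-1, n, n) pairwise log-potentials theta_{k,k+1}(i, j).
    Returns the highest-scoring fact sequence as a list of K indices."""
    K = theta.shape[0] + 1
    n = theta.shape[1]
    score = np.zeros(n)                   # best score of a prefix ending in state i
    backptr = np.zeros((K - 1, n), dtype=int)
    for k in range(K - 1):
        cand = score[:, None] + theta[k]  # (n, n): prefix score + transition to j
        backptr[k] = cand.argmax(axis=0)
        score = cand.max(axis=0)
    z = [int(score.argmax())]
    for k in range(K - 2, -1, -1):        # follow back-pointers
        z.append(int(backptr[k, z[-1]]))
    return z[::-1]

z_hat = viterbi_facts(np.random.randn(2, 25, 25))   # e.g. K = 3 hops, n = 25 facts
\end{verbatim}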
Since all models reduce to the same network for tasks with $1$ supporting fact, they are excluded from our experiments. The number of hops (i.e. $K$) is task-dependent, and the number of memories (i.e. $n$) is limited to be at most $25$ (note that many questions have fewer than $25$ facts---e.g. the example in Figure~\ref{fig:vis4} has $9$ facts). Due to high variance in model performance, we train $20$ models with different initializations for each task and report the test accuracy of the model that performed the best on a $10\%$ held-out validation set (as is typically done for bAbI tasks). Results of the three different models are shown in Table~\ref{tab:results}. For correct answer selection (Ans $\%$), we find that MemN2N and the Binary CRF model perform similarly while the Unary CRF model does worse, indicating the importance of including pairwise potentials. We also assess each model's ability to attend to the correct supporting facts in Table~\ref{tab:results} (Fact $\%$). Since ground truth supporting facts are provided for each query, we can check the sequence accuracy of supporting facts for each model (i.e. the rate of selecting the exact correct sequence of facts) by taking the highest probability sequence $\hat{z} = \argmax p(z_1, \dots, z_K \given x, q)$ from the model and checking against the ground truth. Overall the Binary CRF is able to recover supporting facts better than MemN2N. This improvement is significant and can be up to two-fold as seen for tasks $2$, $11$, $13$ \& $17$. However, we observed that on many tasks it is sufficient to select only the last (or first) fact correctly to predict the answer, and thus higher sequence selection accuracy does not necessarily imply better answer accuracy (and vice versa). For example, all three models get $100 \%$ answer accuracy on task $15$ but have different supporting fact accuracies. Finally, in Figure~\ref{fig:vis4} we visualize the output edge marginals produced by the Binary CRF model for a single question in task $16$. In this instance, the model is uncertain but ultimately able to select the right sequence of facts $5 \rightarrow 6 \rightarrow 8$. \subsection{Natural Language Inference} The final experiment looks at the task of natural language inference (NLI) with the syntactic attention layer. In NLI, the model is given two sentences (hypothesis/premise) and has to predict their relationship: entailment, contradiction, or neutral. For this task, we use the Stanford NLI dataset \citep{Bowman2015} and base our approach on the decomposable attention model of \cite{Parikh2016}. This model takes in the matrix of word embeddings as the input for each sentence and performs \textit{inter-sentence} attention to predict the answer. Appendix~\ref{app:nli} describes the full model. As in the transduction task, we focus on modifying the input representation to take into account soft parents via self-attention (i.e. \textit{intra-sentence} attention). In addition to the three baselines described for tree transduction (No Attention, Simple, Structured), we also explore two additional settings: (d) \textit{hard} pipeline parent selection, i.e. 
$\hat{\mathbf{x}}_j = [\mathbf{x}_j; \mathbf{x}_{\head(j)}]$, where $\head(j)$ is the index of $x_j$'s parent\footnote{The parents are obtained from running the dependency parser of \cite{Andor2016}, available at \\ \url{https://github.com/tensorflow/models/tree/master/syntaxnet}}; (e) \textit{pretrained} structured attention: structured attention where the parsing layer is pretrained for one epoch on a parsed dataset (which was enough for convergence). \paragraph{Results} Results of our models are shown in Table~\ref{tab:main}. Simple attention improves upon the no-attention model, and this is consistent with improvements observed by \cite{Parikh2016} with their intra-sentence attention model. The pipelined model with hard parents also slightly improves upon the baseline. Structured attention outperforms both models, though surprisingly, pretraining the syntactic attention layer on the parse trees performs worse than training it from scratch---it is possible that the pretrained attention is too strict for this task. We also obtain the hard parse for an example sentence by running the Viterbi algorithm on the syntactic attention layer with the non-pretrained model: \begin{center} \includegraphics[scale=0.8]{tikz1.pdf} \end{center} Despite being trained without ever being exposed to an explicit parse tree, the syntactic attention layer learns an almost plausible dependency structure. In the above example it is able to correctly identify the main verb \texttt{fighting}, but makes mistakes on determiners (e.g. head of \texttt{The} should be \texttt{men}). We generally observed this pattern across sentences, possibly because the verb structure is more important for the inference task. \section{Conclusion} This work outlines structured attention networks, which incorporate graphical models to generalize simple attention, and describes the technical machinery and computational techniques for backpropagating through models of this form. We implement two classes of structured attention layers: a linear-chain CRF (for neural machine translation and question answering) and a more complicated first-order dependency parser (for tree transduction and natural language inference). Experiments show that this method can learn interesting structural properties and improve upon standard models. Structured attention could also be a way of learning latent labelers or parsers through attention on other tasks. It should be noted that the additional complexity in computing the attention distribution increases run-time---for example, structured attention was approximately $5 \times$ slower to train than simple attention for the neural machine translation experiments, even though both attention layers have the same asymptotic run-time (i.e. $O(n)$). Embedding \textit{differentiable inference} (and more generally, \textit{differentiable algorithms}) into deep models is an exciting area of research. While we have focused on models that admit (tractable) exact inference, similar techniques can be used to embed approximate inference methods. Many optimization algorithms (e.g. gradient descent, LBFGS) are also differentiable \citep{domke2012generic,Maclaurin2015}, and have been used as output layers for structured prediction in energy-based models \citep{Belanger2016,wang2016nips}. Incorporating them as internal neural network layers is an interesting avenue for future work. 
\subsubsection*{Acknowledgments} We thank Tao Lei, Ankur Parikh, Tim Vieira, Matt Gormley, Andr{\'e} Martins, Jason Eisner, Yoav Goldberg, and the anonymous reviewers for helpful comments, discussion, notes, and code. We additionally thank Yasumasa Miyamoto for verifying Japanese-English translations. \bibliographystyle{iclr2017_conference} \newpage \appendix \section*{APPENDICES} \section{Model Details}\label{app:model} \subsection{Syntactic Attention}\label{app:parsing} The syntactic attention layer (for tree transduction and natural language inference) is similar to the first-order graph-based dependency parser of \cite{Kipperwasser2016}. Given an input sentence $[x_1, \dots, x_n]$ and the corresponding word vectors $[\xvec_1, \dots, \xvec_n]$, we use a bidirectional LSTM to get the hidden states for each time step $i \in [1, \dots, n]$, \begin{align*} \hvec_i^{\text{fwd}} = \lstm(\xvec_i, \hvec_{i-1}^{\text{fwd}}) & & \hvec_i^{\text{bwd}} = \lstm(\xvec_i, \hvec_{i+1}^{\text{bwd}}) & & \hvec_i = [\hvec_i^{\text{fwd}} ; \hvec_i^{\text{bwd}}] \end{align*} where the forward and backward LSTMs have their own parameters. The score for $x_i \rightarrow x_j$ (i.e. $x_i$ is the parent of $x_j$) is given by an MLP \begin{equation*} \theta_{ij} = \tanh( \svec^\top\tanh(\Wvec_1 \hvec_i + \Wvec_2 \hvec_j + \bvec)) \end{equation*} These scores are used as input to the inside-outside algorithm (see Appendix~\ref{app:io}) to obtain the probability of each word's parent $p(z_{ij} = 1 \given x)$, which is used to obtain the soft-parent $\cvec_j$ for each word $x_j$. In the non-structured case we simply have $p(z_{ij} = 1 \given x) = \softmax(\theta_{ij})$. \subsection{Tree Transduction}\label{app:tree} Let $[x_1, \dots, x_n],[y_1, \dots, y_m]$ be the sequences of source/target symbols, with the associated embeddings $[\xvec_1, \dots, \xvec_n], [\yvec_1, \dots, \yvec_m]$ with $\xvec_i, \yvec_j \in \reals^l$. In the simplest baseline model we take the source representation to be the matrix of the symbol embeddings. The decoder is a one-layer LSTM which produces the hidden states $\hvec_j' = \lstm(\yvec_j, \hvec_{j-1}')$, with $\hvec_j' \in \reals^l$. The hidden states are combined with the input representation via a bilinear map $\Wvec \in \reals^{l \times l}$ to produce the attention distribution used to obtain the vector $\mvec_j$, which is combined with the decoder hidden state as follows, \begin{align*} \alpha_i = \frac{\exp \xvec_i \Wvec \hvec_j'}{\sum_{k=1}^n \exp \xvec_k \Wvec \hvec_j'} & & \mvec_j = \sum_{i=1}^n \alpha_i \xvec_i & & \hat{\hvec}_j = \tanh (\Uvec [\mvec_j ; \hvec_j'] ) \end{align*} Here we have $\Wvec \in \reals^{l \times l}$ and $\Uvec \in \reals^{2l \times l}$. Finally, $\hat{\hvec}_j$ is used to obtain a distribution over the next symbol $y_{j+1}$, \begin{equation*} p(y_{j+1} \given x_1, \dots, x_n, y_1, \dots, y_j) = \softmax(\Vvec \hat{\hvec}_j + \bvec) \end{equation*} For the structured/simple models, the $i$-th source representation is, respectively, \begin{align*} \hat{\xvec}_i = \left[\xvec_i ; \sum_{k=1}^n p(z_{ki} = 1 \given x ) \, \xvec_k\right] & &\hat{\xvec}_i = \left[\xvec_i ; \sum_{k=1}^n \softmax (\theta_{ki})\, \xvec_k\right] \end{align*} where $\theta_{ij}$ comes from the bidirectional LSTM described in~\ref{app:parsing}; a small illustrative sketch of this scoring function follows. 
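The following small \texttt{numpy} sketch (ours, with made-up dimensions; random vectors stand in for the bidirectional LSTM states $\hvec_i$) illustrates the edge-scoring function $\theta_{ij}$ above and the simple, non-structured soft-parent obtained by a column-wise softmax.
\begin{verbatim}
# Sketch of theta_ij = tanh(s^T tanh(W1 h_i + W2 h_j + b)) (assumed sizes).
import numpy as np

rng = np.random.default_rng(2)
n, h, m = 6, 8, 16                     # sentence length, biLSTM size, MLP size
H = rng.normal(size=(n, h))            # stand-ins for [h_1, ..., h_n]
W1, W2 = rng.normal(size=(m, h)), rng.normal(size=(m, h))
b, s = rng.normal(size=m), rng.normal(size=m)

def edge_scores(H):
    """theta[i, j]: score for word i being the parent (head) of word j."""
    left = H @ W1.T                    # rows: W1 h_i
    right = H @ W2.T                   # rows: W2 h_j
    hidden = np.tanh(left[:, None, :] + right[None, :, :] + b)
    return np.tanh(hidden @ s)         # (n, n) matrix of theta_ij

theta = edge_scores(H)
# non-structured case: p(z_ij = 1 | x) via a softmax over candidate parents i
soft_parent = np.exp(theta) / np.exp(theta).sum(axis=0, keepdims=True)
print(theta.shape, soft_parent[:, 0].sum())   # (6, 6), each column sums to 1
\end{verbatim}
In the structured case the same scores $\theta_{ij}$ are instead passed to the inside-outside algorithm of Appendix~\ref{app:io} to obtain the parent marginals.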
With these source representations, the definitions of $\alpha_i$ and $\mvec_j$ change accordingly, \begin{align*} \alpha_i = \frac{\exp \hat{\xvec}_i \Wvec \hvec_j'}{\sum_{k=1}^n \exp \hat{\xvec}_k \Wvec \hvec_j'} & & \mvec_j = \sum_{i=1}^n \alpha_i \hat{\xvec}_i \end{align*} Note that in this case we have $\Wvec \in \reals^{2l \times l}$ and $\Uvec \in \reals^{3l \times l}$. We use $l = 50$ in all our experiments. The forward/backward LSTMs for the parsing layer are also $50$-dimensional. Symbol embeddings are shared between the encoder and the parsing LSTMs. Additional training details include: batch size of $20$; training for $13$ epochs with a learning rate of $1.0$, which starts decaying by half after epoch $9$ (or the epoch at which performance does not improve on validation, whichever comes first); parameter initialization over a uniform distribution $U[-0.1, 0.1]$; gradient normalization at $1$ (i.e. renormalize the gradients to have norm $1$ if the $l_2$ norm exceeds $1$). Decoding is done with beam search (beam size $ = 5$). \subsection{Neural Machine Translation}\label{app:nmt} The baseline NMT system is from \cite{Luong2015}. Let $[x_1, \dots, x_n],[y_1, \dots, y_m]$ be the source/target sentences, with the associated word embeddings $[\xvec_1, \dots, \xvec_n], [\yvec_1, \dots, \yvec_m]$. The encoder is an LSTM over the source sentence, which produces the hidden states $[\hvec_1, \dots, \hvec_n]$ where \begin{equation*} \hvec_i = \lstm(\xvec_i, \hvec_{i-1}) \end{equation*} and $\hvec_i \in \reals^l$. The decoder is another LSTM which produces the hidden states $\hvec_j' \in \reals^l$. In the simple case with categorical attention, the hidden states are combined with the input representation via a bilinear map $\Wvec \in \reals^{l \times l}$ to produce an attention distribution, which is used to obtain the context vector at the $j$-th time step, \begin{align*} \theta_i = \hvec_i \Wvec \hvec_j' & & \cvec_j = \sum_{i=1}^n \softmax(\theta_i)\hvec_i \end{align*} The Bernoulli attention network has the same $\theta_i$ but instead uses a $\sigmoid$ to obtain the weights of the linear combination, i.e., \begin{align*} \cvec_j = \sum_{i=1}^n \sigmoid(\theta_i) \hvec_i \end{align*} And finally, the structured attention model uses a bilinear map to parameterize one of the unary potentials \[ \theta_i(k)= \begin{cases} \hvec_i \Wvec \hvec_j',& k = 1 \\ 0, &k = 0 \end{cases} \] \begin{align*} \theta_{i,i+1}(z_i, z_{i+1}) &= \theta_i(z_i) + \theta_{i+1}(z_{i+1}) + \mathbf{b}_{z_i, z_{i+1}} \end{align*} where $\bvec$ are the pairwise potentials. These potentials are used as inputs to the forward-backward algorithm to obtain the marginals $p(z_i = 1 \given x, q)$, which are further normalized to obtain the context vector \begin{align*} \cvec_j = \sum_{i=1}^n \frac{p(z_i=1 \given x, q)}{\gamma} \hvec_i & & \gamma = \frac{1}{\lambda} \sum_{i=1}^n p(z_i =1 \given x, q) \end{align*} We use $\lambda = 2$ and also add an $l_2$ penalty of $0.005$ on the pairwise potentials $\bvec$. The context vector is then combined with the decoder hidden state \begin{align*} \hat{\hvec}_j = \tanh (\Uvec[\cvec_j ; \hvec_j']) \end{align*} and $\hat{\hvec}_j$ is used to obtain the distribution over the next target word $y_{j+1}$ \begin{align*} p(y_{j+1} \given x_1, \dots, x_n, y_1, \dots, y_j) = \softmax(\Vvec \hat{\hvec}_j + \bvec) \end{align*} The encoder/decoder LSTMs have $2$ layers and $500$ hidden units (i.e. $l = 500$). 
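To make the above concrete, here is a toy \texttt{numpy} sketch of a single decoder attention step (our own illustration; the dimension conventions and the random stand-ins for the LSTM states are assumptions): a bilinear score, either softmax or sigmoid weighting, the combination $\hat{\hvec}_j = \tanh(\Uvec[\cvec_j ; \hvec_j'])$, and the output softmax. The structured variant would instead plug the normalized CRF marginals into \texttt{weights}.
\begin{verbatim}
# One decoder attention step with softmax vs. sigmoid weighting (toy sizes).
import numpy as np

rng = np.random.default_rng(3)
n, l, vocab = 7, 10, 20
enc = rng.normal(size=(n, l))                  # encoder states h_1..h_n
dec = rng.normal(size=l)                       # decoder state h'_j
W = rng.normal(size=(l, l))
U = rng.normal(size=(l, 2 * l))                # maps [c_j ; h'_j] down to l dims
V, b_out = rng.normal(size=(vocab, l)), rng.normal(size=vocab)

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

theta = enc @ W @ dec                          # theta_i = h_i W h'_j
weights = {"simple": softmax(theta),           # categorical attention
           "sigmoid": 1.0 / (1.0 + np.exp(-theta))}   # Bernoulli attention

for name, w in weights.items():
    c = w @ enc                                # context vector c_j
    h_hat = np.tanh(U @ np.concatenate([c, dec]))
    p_next = softmax(V @ h_hat + b_out)        # distribution over y_{j+1}
    print(name, p_next.argmax(), round(float(p_next.max()), 3))
\end{verbatim}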
Additional training details include: batch size of $128$; training for $30$ epochs with a learning rate of $1.0$, which starts decaying by half after the first epoch at which performance does not improve on validation; dropout with probability $0.3$; parameter initialization over a uniform distribution $U[-0.1, 0.1]$; gradient normalization at $1$. We generate target translations with beam search (beam size $= 5$), and evaluate with \texttt{multi-bleu.perl} from Moses.\footnote{ \url{https://github.com/moses-smt/mosesdecoder/blob/master/scripts/generic/multi-bleu.perl}} \subsection{Question Answering}\label{app:qa} Our baseline model (MemN2N) is implemented following the same architecture as described in \cite{Sukhbaatar2015}. In particular, let $x = [x_1, \dots, x_n]$ represent the sequence of $n$ facts with the associated embeddings $[\mathbf{x}_1, \dots, \xvec_n]$ and let $\qvec$ be the embedding of the query $q$. The embeddings are obtained by simply adding the word embeddings in each sentence or query. The full model with $K$ hops is as follows: \begin{align*} &p(z_k = i \given x, q) = \softmax( (\mathbf{x}_i^k)^\top \mathbf{q}^k ) \\ &\mathbf{c}^k = \sum_{i=1}^n p(z_k = i \given x, q) \mathbf{o}_i^k \\ &\mathbf{q}^{k + 1} = \mathbf{q}^k + \mathbf{c}^k \\ &p(y \given x, q) = \softmax(\Wvec (\mathbf{q}^K + \mathbf{c}^K)) \end{align*} where $p(y \given x, q)$ is the distribution over the answer vocabulary. At each layer, $\{\mathbf{x}_i^k\}$ and $\{\mathbf{o}_i^k\}$ are computed using embedding matrices $\mathbf{X}^k$ and $\mathbf{O}^k$. We use the \emph{adjacent weight tying scheme} from the paper so that $\mathbf{X}^{k+1} = \mathbf{O}^k, \mathbf{W}^T = \mathbf{O}^K$. $\mathbf{X}^1$ is also used to compute the query embedding at the first hop. For $k=1$ we have $\xvec_i^k = \xvec_i, \qvec^k = \qvec, \cvec^k = \mathbf{0}$. For both the Unary and the Binary CRF models, the same input fact and query representations are computed (i.e. same embedding matrices with weight tying scheme). For the unary model, the potentials are parameterized as \[ \theta_{k}(i) = (\xvec_i^k)^\top \mathbf{q}^k \] and for the binary model we compute pairwise potentials as \[ \theta_{k,k+1}(i, j) = (\mathbf{x}_i^k)^\top\qvec^k + (\mathbf{x}_i^k)^\top \xvec_j^{k + 1} + (\mathbf{x}_j^{k + 1})^\top \mathbf{q}^{k + 1} \] The $\qvec^k$'s are updated simply with a linear mapping, i.e. \[ \mathbf{q}^{k+1} = \mathbf{Q} \mathbf{q}^k \] In the case of the Binary CRF, to discourage the model from selecting the same fact again we additionally set $\theta_{k,k+1}(i,i) = -\infty$ for all $i \in \{1, \dots, n\}$. Given these potentials, we compute the marginals $p(z_k = i, z_{k+1} = j \given x, q)$ using the forward-backward algorithm, which is then used to compute the context vector: \begin{align*} \mathbf{c} = \sum_{z_1,\ldots,z_K} p(z_1,\ldots,z_K \given x,q) f(x,z) & & f(x,z) = \sum_{k=1}^K f_k(x, z_k) & & f_k(x,z_k) = \mathbf{o}_{z_k}^k \end{align*} Note that if $f(x,z)$ factors over the components of $z$ (as is the case above) then computing $\cvec$ only requires evaluating the marginals $p(z_k \given x,q)$. Finally, given the context vector the prediction is made in a similar fashion to MemN2N: \begin{align*} p(y \given x, q) = \softmax(\Wvec (\mathbf{q}^K + \mathbf{c})) \end{align*} Other training setup is similar to \cite{Sukhbaatar2015}: we use stochastic gradient descent with learning rate $0.01$, which is divided by $2$ every $25$ epochs until $100$ epochs are reached. 
Capacity of the memory is limited to $25$ sentences. The embedding vectors are of size $20$ and gradients are renormalized if the norm exceeds $40$. All models implement \emph{position encoding}, \emph{temporal encoding}, and \emph{linear start} from the original paper. For linear start, the $\softmax(\cdot)$ function in the attention layer is removed at the beginning and re-inserted after $20$ epochs for MemN2N, while for the CRF models we apply a $\log(\softmax(\cdot))$ layer on the $\qvec^k$ after $20$ epochs. Each model is trained separately for each task. \subsection{Natural Language Inference}\label{app:nli} Our baseline model/setup is essentially the same as that of \cite{Parikh2016}. Let $[x_1, \dots, x_n], [y_1, \dots, y_m]$ be the premise/hypothesis, with the corresponding input representations $[\xvec_1, \dots, \xvec_n], [\yvec_1, \dots, \yvec_m]$. The input representations are obtained by a linear transformation of the $300$-dimensional pretrained GloVe embeddings \citep{Pennington2014} after normalizing the GloVe embeddings to have unit norm.\footnote{We use the GloVe embeddings pretrained over the $840$ billion word Common Crawl, publicly available at \url{http://nlp.stanford.edu/projects/glove/}} The pretrained embeddings remain fixed but the linear layer (which is also $300$-dimensional) is trained. Words not in the pretrained vocabulary are hashed to one of $100$ Gaussian embeddings with mean $0$ and standard deviation $1$. We concatenate each input representation with a convex combination of the other sentence's input representations (essentially performing \textit{inter-sentence} attention), where the weights are determined through a dot product followed by a softmax, \begin{align*} e_{ij} = f(\xvec_i)^\top f(\yvec_j) & & \bar{\xvec}_{i} = \left[\xvec_i ; \sum_{j=1}^m \frac{\exp e_{ij}}{\sum_{k=1}^m \exp e_{ik}} \yvec_{j}\right] & & \bar{\yvec}_{j} = \left[\yvec_j ; \sum_{i=1}^n \frac{\exp e_{ij}}{\sum_{k=1}^n \exp e_{kj}} \xvec_{i}\right] \end{align*} Here $f(\cdot)$ is an MLP. The new representations are fed through another MLP $g(\cdot)$, summed, combined with the final MLP $h(\cdot)$ and fed through a softmax layer to obtain a distribution over the labels $l$, \begin{align*} \bar{\xvec} &= \sum_{i=1}^n g(\bar{\xvec}_{i}) \hspace{20mm} \bar{\yvec} = \sum_{j=1}^m g(\bar{\yvec}_{j}) \\ p(l \given x_1, &\dots, x_n, y_1, \dots, y_m)= \softmax(\Vvec h([\bar{\xvec}; \bar{\yvec}]) + \bvec) \end{align*} All the MLPs have $2$-layers, $300$ $\relu$ units, and dropout probability of $0.2$. For structured/simple models, we first employ the bidirectional parsing LSTM (see \ref{app:parsing}) to obtain the scores $\theta_{ij}$. In the structured case each word representation is simply concatenated with its soft-parent \begin{align*} \hat{\xvec}_i = \left[\xvec_i ; \sum_{k=1}^n p(z_{ki} = 1 \given x ) \xvec_k\right] % & & \hat{\yvec}_j = [\yvec_j ; \sum_{l=1}^m p(z'_{lj} = 1 \given y) \yvec_l] \end{align*} and $\hat{\xvec}_i$ (and analogously $\hat{\yvec}_j$) is used as the input to the above model. In the simple case (which closely corresponds to the \emph{intra-sentence} attention model of \cite{Parikh2016}), we have \begin{align*} \hat{\xvec}_i = \left[\xvec_i ; \sum_{k=1}^n \frac{\exp \theta_{ki}}{\sum_{l=1}^n \exp \theta_{li}} \xvec_k \right] \end{align*} The word embeddings for the parsing LSTMs are also initialized with GloVe, and the parsing layer is shared between the two sentences. The forward/backward LSTMs for the parsing layer are $100$-dimensional. 
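A minimal \texttt{numpy} sketch of the soft-parent input representation used here (ours, with made-up sizes; a random score matrix stands in for the $\theta_{ki}$ produced by the parsing LSTM): each word embedding is concatenated with the expectation of its parent's embedding.
\begin{verbatim}
# Soft-parent concatenation: x_hat_i = [x_i ; sum_k P[k, i] * x_k] (toy sizes).
import numpy as np

rng = np.random.default_rng(4)
n, d = 6, 300                          # sentence length, embedding size
X = rng.normal(size=(n, d))            # word embeddings x_1..x_n
scores = rng.normal(size=(n, n))       # stand-in for theta_ki

# simple variant: column-wise softmax over candidate parents k for each word i
P = np.exp(scores) / np.exp(scores).sum(axis=0, keepdims=True)
# the structured variant would instead use inside-outside marginals p(z_ki=1|x)

soft_parents = P.T @ X                 # row i: sum_k P[k, i] * x_k
X_hat = np.concatenate([X, soft_parents], axis=1)
print(X_hat.shape)                     # (6, 600): input to the decomposable model
\end{verbatim}
The same construction is applied to the hypothesis, with the parsing layer shared between the two sentences as described above.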
Additional training details include: batch size of $32$; training for $100$ epochs with Adagrad \citep{Duchi2011} where the global learning rate is $0.05$ and the sum of squared gradients is initialized to $0.1$; parameter initialization over a Gaussian distribution with mean $0$ and standard deviation $0.01$; gradient normalization at $5$. In the pretrained scenario, pretraining is done with Adam \citep{Kingma2015} with a learning rate of $0.01$, $\beta_1 = 0.9$, and $\beta_2 = 0.999$. \section{Forward/Backward through the Inside-Outside Algorithm}\label{app:io} Figure~\ref{fig:io-fprop} shows the procedure for obtaining the parsing marginals from the input potentials. This corresponds to running the inside-outside version of Eisner's algorithm \citep{Eisner1996}. The intermediate data structures used during the dynamic programming algorithm are the (log) inside tables $\alpha$ and the (log) outside tables $\beta$. Both $\alpha, \beta$ are of size $n \times n \times 2 \times 2$, where $n$ is the sentence length. The first two dimensions encode the start/end index of the span (i.e. subtree). The third dimension encodes whether the root of the subtree is the left ($L$) or right ($R$) index of the span. The fourth dimension indicates if the span is complete ($1$) or incomplete ($0$). We can calculate the marginal distribution of each word's parent (for all words) in $O(n^3)$ time using this algorithm. The backward pass through the inside-outside algorithm is slightly more involved, but still takes $O(n^3)$ time. Figure~\ref{fig:io-bprop} illustrates the backward procedure, which receives the gradient of the loss $\mcL$ with respect to the marginals, $\nabla^\mcL_p$, and computes the gradient of the loss with respect to the potentials $\nabla^\mcL_\theta$. The computations must be performed in the signed log-space semifield to handle logarithms of negative values. See section~\ref{sec:e2e} and Table~\ref{tab:dlog} for more details. \end{document}
Structured Attention Networks
1702.00887
Table 5: Results of our models (bottom) and others (top) on the Stanford NLI test set. Our baseline model has the same architecture as Parikh et al. (2016) but the performance is slightly different due to different settings (e.g. we train for 100 epochs with a batch size of 32 while Parikh et al. (2016) train for 400 epochs with a batch size of 4 using asynchronous SGD.)
[ "Model", "Accuracy %" ]
[ [ "Handcrafted features (Bowman et al., 2015 )", "78.2" ], [ "LSTM encoders (Bowman et al., 2015 )", "80.6" ], [ "Tree-Based CNN (Mou et al., 2016 )", "82.1" ], [ "Stack-Augmented Parser-Interpreter Neural Net (Bowman et al., 2016 )", "83.2" ], [ "LSTM with word-by-word attention (Rocktäschel et al., 2016 )", "83.5" ], [ "Matching LSTMs (Wang & Jiang, 2016 )", "86.1" ], [ "Decomposable attention over word embeddings (Parikh et al., 2016 )", "86.3" ], [ "Decomposable attention + intra-sentence attention (Parikh et al., 2016 )", "86.8" ], [ "Attention over constituency tree nodes (Zhao et al., 2016 )", "87.2" ], [ "Neural Tree Indexers (Munkhdalai & Yu, 2016 )", "87.3" ], [ "Enhanced BiLSTM Inference Model (Chen et al., 2016 )", "87.7" ], [ "Enhanced BiLSTM Inference Model + ensemble (Chen et al., 2016 )", "88.3" ], [ "No Attention", "85.8" ], [ "No Attention + Hard parent", "86.1" ], [ "Simple Attention", "86.2" ], [ "Structured Attention", "86.8" ], [ "Pretrained Structured Attention", "86.5" ] ]
Simple attention improves upon the no attention model, and this is consistent with improvements observed by Parikh et al. The pipelined model with hard parents also slightly improves upon the baseline. Structured attention outperforms both models, though surprisingly, pretraining the syntactic attention layer on the parse trees performs worse than training it from scratch—it is possible that the pretrained attention is too strict for this task.
\documentclass{article} % For LaTeX2e \usepackage[font=small,labelfont=bf]{caption} \usepackage[noend]{algpseudocode} \usetikzlibrary{matrix,arrows,backgrounds,calc,patterns,positioning,fit,shapes} \usepackage[titletoc,title]{appendix} \DeclareMathOperator*{\argmax}{argmax} \DeclareMathOperator*{\argmin}{argmin} \DeclareMathOperator{\softmax}{softmax} \DeclareMathOperator{\logadd}{logadd} \DeclareMathOperator{\sign}{sign} \DeclareMathOperator{\signexp}{signexp} \DeclareMathOperator{\sigmoid}{sigmoid} \DeclareMathOperator{\softparent}{soft-parent} \DeclareMathOperator{\parent}{parent} \DeclareMathOperator{\head}{head} \DeclareMathOperator{\softhead}{soft-head} \DeclareMathOperator{\simf}{sim} \DeclareMathOperator{\relu}{ReLU} \DeclareMathOperator{\lstm}{LSTM} \DeclareMathOperator{\rnn}{RNN} \DeclareMathOperator{\mlp}{MLP} \newcommand{\oplusgets}{\gets_{\oplus}} \newcommand{\pgrad}{\nabla_{p}^\mathcal{L}} \newcommand{\alphagrad}{\nabla_{\alpha}^\mathcal{L}} \newcommand{\betagrad}{\nabla_{\beta}^\mathcal{L}} \newcommand{\thetagrad}{\log \nabla_{\theta}^\mathcal{L}} \newcommand{\xvec}{\mathbf{x}} \newcommand{\yvec}{\mathbf{y}} \newcommand{\cvec}{\mathbf{c}} \newcommand{\mvec}{\mathbf{m}} \newcommand{\zvec}{\mathbf{z}} \newcommand{\qvec}{\mathbf{q}} \newcommand{\svec}{\mathbf{s}} \newcommand{\tvec}{\mathbf{t}} \newcommand{\mcL}{\mathcal{L}} \newcommand{\mcT}{\mathcal{T}} \newcommand{\mcY}{\mathcal{Y}} \newcommand{\mcV}{\mathcal{V}} \newcommand{\mcC}{\mathcal{C}} \newcommand{\mcA}{\mathcal{A}} \newcommand{\mcZ}{\mathcal{Z}} \newcommand{\mcX}{\mathcal{X}} \newcommand{\context}{\mathbf{y}_{\mathrm{c}}} \newcommand{\embcontext}{\mathbf{\tilde{y}}_{\mathrm{c}}} \newcommand{\inpcontext}{\mathbf{\tilde{x}}} \newcommand{\start}{\mathbf{\tilde{y}}_{\mathrm{c0}}} \newcommand{\End}{\mathrm{\texttt{</s>}}} \newcommand{\Uvec}{\mathbf{U}} \newcommand{\Evec}{\mathbf{E}} \newcommand{\E}{\mathbb{E}} \newcommand{\Gvec}{\mathbf{G}} \newcommand{\Fvec}{\mathbf{F}} \newcommand{\Pvec}{\mathbf{P}} \newcommand{\pvec}{\mathbf{p}} \newcommand{\Vvec}{\mathbf{V}} \newcommand{\Wvec}{\mathbf{W}} \newcommand{\hvec}{\mathbf{h}} \newcommand{\wvec}{\mathbf{w}} \newcommand{\uvec}{\mathbf{u}} \newcommand{\vvec}{\mathbf{v}} \newcommand{\bvec}{\mathbf{b}} \newcommand{\reals}{\mathbb{R}} \newcommand{\ind}{\mathbbm{1}} \newcommand\given{\,|\,} \title{Structured Attention Networks} \author{Yoon Kim\thanks{Equal contribution.} \hspace{5mm} Carl Denton$^*$ \hspace{5mm} Luong Hoang \hspace{5mm} Alexander M. Rush \\ \texttt{\small \{yoonkim@seas,carldenton@college,lhoang@g,srush@seas\}.harvard.edu}\\ School of Engineering and Applied Sciences\\ Harvard University\\ Cambridge, MA 02138, USA \\ } \newcommand{\fix}{\marginpar{FIX}} \newcommand{\new}{\marginpar{NEW}} \iclrfinalcopy % Uncomment for camera-ready version \begin{document} \maketitle \begin{abstract} Attention networks have proven to be an effective approach for embedding categorical inference within a deep neural network. However, for many tasks we may want to model richer structural dependencies without abandoning end-to-end training. In this work, we experiment with incorporating richer structural distributions, encoded using graphical models, within deep networks. We show that these structured attention networks are simple extensions of the basic attention procedure, and that they allow for extending attention beyond the standard soft-selection approach, such as attending to partial segmentations or to subtrees. 
We experiment with two different classes of structured attention networks: a linear-chain conditional random field and a graph-based parsing model, and describe how these models can be practically implemented as neural network layers. Experiments show that this approach is effective for incorporating structural biases, and structured attention networks outperform baseline attention models on a variety of synthetic and real tasks: tree transduction, neural machine translation, question answering, and natural language inference. We further find that models trained in this way learn interesting unsupervised hidden representations that generalize simple attention. \end{abstract} \section{Introduction} Attention networks are now a standard part of the deep learning toolkit, contributing to impressive results in neural machine translation \citep{Bahdanau2015,Luong2015}, image captioning \citep{Xu2015}, speech recognition \citep{Chorowski2015,Chan2015}, question answering \citep{Hermann2015,Sukhbaatar2015}, and algorithm-learning \citep{Graves2014,Vinyals2015c}, among many other applications (see \cite{Cho2015} for a comprehensive review). This approach alleviates the bottleneck of compressing a source into a fixed-dimensional vector by equipping a model with variable-length memory \citep{Weston2014,Graves2014,Graves2016}, thereby providing random access into the source as needed. Attention is implemented as a hidden layer which computes a categorical distribution (or hierarchy of categorical distributions) to make a soft-selection over source elements. Noting the empirical effectiveness of attention networks, we also observe that the standard attention-based architecture does not directly model any \textit{structural dependencies} that may exist among the source elements, and instead relies completely on the hidden layers of the network. While one might argue that these structural dependencies can be learned implicitly by a deep model with enough data, in practice, it may be useful to provide a structural bias. Modeling structural dependencies at the final, \textit{output} layer has been shown to be important in many deep learning applications, most notably in seminal work on graph transformers \citep{LeCun1998}, key work on NLP \citep{Collobert2011}, and in many other areas \citep[\textit{inter alia}]{Peng2009,Do2010,Jaderberg2014b,Chen2015b,Durrett2015,Lample2016}. In this work, we consider applications which may require structural dependencies at the attention layer, and develop \textit{internal} structured layers for modeling these directly. This approach generalizes categorical soft-selection attention layers by specifying possible structural dependencies in a soft manner. Key applications will be the development of an attention function that segments the source input into subsequences and one that takes into account the latent recursive structure (i.e. parse tree) of a source sentence. Our approach views the attention mechanism as a graphical model over a set of latent variables. The standard attention network can be seen as an expectation of an annotation function with respect to a single latent variable whose categorical distribution is parameterized to be a function of the source. In the general case we can specify a graphical model over multiple latent variables whose edges encode the desired structure. Computing forward attention requires performing inference to obtain the expectation of the annotation function, i.e. the \textit{context vector}. 
This expectation is computed over an exponentially-sized set of structures (through the machinery of graphical models/structured prediction), hence the name \textit{structured attention} network. Notably, each step of this process (including inference) is differentiable, so the model can be trained end-to-end without having to resort to deep policy gradient methods \citep{schulman2015gradient}. The differentiability of inference algorithms over graphical models has previously been noted by various researchers \citep{Li2009,Domke2011,Stoyanov2011,Stoyanov2012,Gormley2015}, primarily outside the area of deep learning. For example, \citet{Gormley2015} treat an entire graphical model as a differentiable circuit and backpropagate risk through variational inference (loopy belief propagation) for minimum risk training of dependency parsers. Our contribution is to combine these ideas to produce structured \textit{internal} attention layers within deep networks, noting that these approaches allow us to use the resulting marginals to create new features, as long as we do so in a differentiable way. We focus on two classes of structured attention: linear-chain conditional random fields (CRFs) \citep{Lafferty2001} and first-order graph-based dependency parsers \citep{Eisner1996}. The initial work of \cite{Bahdanau2015} was particularly interesting in the context of machine translation, as the model was able to implicitly learn an \textit{alignment model as a hidden layer}, effectively embedding inference into a neural network. In a similar vein, under our framework the model has the capacity to learn a \textit{segmenter as a hidden layer} or a \textit{parser as a hidden layer}, without ever having to see a segmented sentence or a parse tree. Our experiments apply this approach to a difficult synthetic reordering task, as well as to machine translation, question answering, and natural language inference. We find that models trained with structured attention outperform standard attention models. Analysis of the learned representations further reveals that interesting structures emerge as an internal layer of the model. All code is available at \url{http://github.com/harvardnlp/struct-attn}. \section{Background: Attention Networks} A standard neural network consists of a series of non-linear transformation layers, where each layer produces a fixed-dimensional hidden representation. For tasks with large input spaces, this paradigm makes it hard to control the interaction between components. For example, in machine translation, the source consists of an entire sentence, and the output is a prediction for each word in the translated sentence. Utilizing a standard network leads to an information bottleneck, where one hidden layer must encode the entire source sentence. Attention provides an alternative approach.\footnote{Another line of work involves marginalizing over latent variables (e.g. latent alignments) for sequence-to-sequence transduction \citep{Kong2016,Lu2016,Yu2016,Yu2017}.} An attention network maintains a set of hidden representations that scale with the size of the source. The model uses an internal inference step to perform a soft-selection over these representations. This method allows the model to maintain a variable-length memory and has been shown to be crucially important for scaling systems for many tasks. 
Formally, let $x = [x_1, \dots, x_n]$ represent a sequence of inputs, let $q$ be a query, and let $z$ be a categorical latent variable with sample space $\{1, \ldots, n\}$ that encodes the desired selection among these inputs. Our aim is to produce a \textit{context} $c$ based on the sequence and the query. To do so, we assume access to an \textit{attention distribution} $z \sim p(z \given x, q)$, where we condition $p$ on the inputs $x$ and a query $q$. The \textit{context} over a sequence is defined as an expectation, $c = \E_{z \sim p(z \given x, q)} [f(x, z)]$ where $f(x, z)$ is an \textit{annotation function}. Attention of this form can be applied over any type of input; however, we will primarily be concerned with ``deep'' networks, where both the annotation function and attention distribution are parameterized with neural networks, and the context produced is a vector fed to a downstream network. For example, consider the case of attention-based neural machine translation \citep{Bahdanau2015}. Here the sequence of inputs $[\mathbf{x}_1, \ldots, \mathbf{x}_n]$ are the hidden states of a recurrent neural network (RNN), running over the words in the source sentence, $\mathbf{q}$ is the RNN hidden state of the target decoder (i.e. the vector representation of the query $q$), and $z$ represents the source position to be attended to for translation. The attention distribution $p$ is simply $p(z = i \given x, q) = \softmax(\theta_i)$ where $\theta \in \reals^n$ is a parameterized potential typically based on a neural network, e.g. $\theta_i = \mlp([\mathbf{x}_i; \qvec])$. The annotation function is defined to simply return the selected hidden state, $f(\mathbf{x}, z) = \mathbf{x}_z$. The context vector can then be computed using a simple sum, \begin{equation}\label{vanilla-attn} \mathbf{c} = \E_{z \sim p(z \given x, q)} [f( x, z)] = \sum_{i=1}^n p(z = i \given x, q) \mathbf{x}_i \end{equation} Other tasks such as question answering use attention in a similar manner, for instance by replacing source $[x_1, \ldots, x_n]$ with a set of potential facts and $q$ with a representation of the question. In summary, we interpret the attention mechanism as taking the expectation of an annotation function $f(x,z)$ with respect to a latent variable $z \sim p$, where $p$ is parameterized to be a function of $x$ and $q$. \section{Structured Attention} Attention networks simulate selection from a set using a soft model. In this work we consider generalizing selection to types of attention, such as selecting chunks, segmenting inputs, or even attending to latent subtrees. One interpretation of this attention is as using soft-selection that considers all possible structures over the input, of which there may be exponentially many possibilities. Of course, this expectation can no longer be computed using a simple sum, and we need to incorporate the machinery of inference directly into our neural network. Define a structured attention model as an attention model where $z$ is now a vector of discrete latent variables $[z_1, \ldots, z_m]$ and the attention distribution $p(z \given x, q)$ is defined as a \textit{conditional random field} (CRF), specifying the independence structure of the $z$ variables. Formally, we assume an undirected graph structure with $m$ vertices. The CRF is parameterized with clique (log-)potentials $\theta_C(z_{C}) \in \reals$, where $z_C$ indicates the subset of $z$ given by clique $C$. 
Under this definition, the attention probability is defined as, $p(z \given x, q; \theta) = \softmax(\sum_C \theta_C(z_C))$, where for symmetry we use $\softmax$ in a general sense, i.e. $\softmax(g(z)) = \frac{1}{Z} \exp(g(z))$ where $Z = \sum_{z'} \exp(g(z'))$ is the implied partition function. In practice we use a neural CRF, where $\theta$ comes from a deep model over $x, q$. In structured attention, we also assume that the annotation function $f$ factors (at least) into clique annotation functions $f(x, z) = \sum_C f_C(x, z_C)$. Under standard conditions on the conditional independence structure, inference techniques from graphical models can be used to compute the forward-pass expectations and the context: \[c = \E_{z \sim p(z \given x, q)} [f(x, z)] = \sum_{C} \E_{z \sim p(z_C \given x, q)} [f_C(x, z_C)]\] \subsection{Example 1: Subsequence Selection} \label{sec:subselect} Suppose instead of soft-selecting a single input, we wanted to explicitly model the selection of contiguous subsequences. We could naively apply categorical attention over all subsequences, or hope the model learns a multi-modal distribution to combine neighboring words. Structured attention provides an alternate approach. Concretely, let $m =n$, define $z$ to be a random vector $z = [z_1, \dots, z_n]$ with $z_i \in \{0, 1\}$, and define our annotation function to be, $f(x,z) = \sum_{i=1}^n f_{i} (x,z_{i})$ where $f_{i} (x,z_i) = \ind \{ z_i = 1\} \xvec_i$. The explicit expectation is then, \begin{equation}\label{struct-attn} \E_{z_1, \dots, z_n }[f(x,z)] = \sum_{i=1}^n p(z_i = 1 \given x, q) \xvec_i \end{equation} Equation (\ref{struct-attn}) is similar to equation (\ref{vanilla-attn})---both are a linear combination of the input representations where the scalar is between $[0,1]$ and represents how much attention should be focused on each input. However, (2) is fundamentally different in two ways: (i) it allows for multiple inputs (or no inputs) to be selected for a given query; (ii) we can incorporate structural dependencies across the $z_i$'s. For instance, we can model the distribution over $z$ with a linear-chain CRF with pairwise edges, \begin{align}\label{linear-chain} p(z_1, \dots, z_n \given x, q) = \softmax \left( \sum_{i=1}^{n-1} \theta_{i,i+1}(z_i, z_{i+1}) \right) \end{align} where $\theta_{k,l}$ is the pairwise potential for $z_i = k$ and $z_{i+1} = l$. This model is shown in Figure~\ref{fig:seq}c. Compare this model to the standard attention in Figure~\ref{fig:seq}a, or to a simple Bernoulli (sigmoid) selection method, $p(z_i = 1 \given x, q) = \sigmoid(\theta_{i}) $, shown in Figure~\ref{fig:seq}b. All three of these methods can use potentials from the same neural network or RNN that takes $x$ and $q$ as inputs. In the case of the linear-chain CRF in~(\ref{linear-chain}), the marginal distribution $p(z_i = 1 \given x)$ can be calculated efficiently in linear-time for all $i$ using message-passing, i.e. the forward-backward algorithm. These marginals allow us to calculate (\ref{struct-attn}), and in doing so we implicitly sum over an exponentially-sized set of structures (i.e. all binary sequences of length $n$) through dynamic programming. We refer to this type of attention layer as a \emph{segmentation attention} layer. Note that the forward-backward algorithm is being used as parameterized \textit{pooling} (as opposed to output computation), and can be thought of as generalizing the standard attention softmax. 
Crucially, this generalization from vector softmax to forward-backward is just a series of differentiable steps,\footnote{As are other dynamic programming algorithms for inference in graphical models, such as (loopy and non-loopy) belief propagation.} and we can compute gradients of its output (marginals) with respect to its input (potentials). This will allow the structured attention model to be trained end-to-end as part of a deep model. \subsection{Example 2: Syntactic Tree Selection } This same approach can be used for more involved structural dependencies. One popular structure for natural language tasks is a dependency tree, which enforces a structural bias on the recursive dependencies common in many languages. In particular, a dependency tree enforces that each word in a source sentence is assigned exactly one parent word (\textit{head word}), and that these assignments do not cross (projective structure). Employing this bias encourages the system to make a soft-selection based on learned syntactic dependencies, without requiring linguistic annotations or a pipelined decision. A dependency parser can be partially formalized as a graphical model with the following cliques \citep{Smith2008}: latent variables $z_{ij} \in \{0,1\}$ for all $i \ne j$, which indicate that the $i$-th word is the parent of the $j$-th word (i.e. $x_i \rightarrow x_j$); and a special global constraint that rules out configurations of $z_{ij}$'s that violate parsing constraints (e.g. one head, projectivity). The parameters of the graph-based CRF dependency parser are the potentials $\theta_{ij}$, which reflect the score of selecting $x_i$ as the parent of $x_j$. The probability of a parse tree $z$ given the sentence $x = [x_1, \ldots, x_n]$ is, \begin{equation} p(z \given x, q)= \softmax \left(\ind\{z\ \text{is valid}\} \sum_{i \neq j} \ind\{z_{ij} = 1\} \theta_{ij} \right) \end{equation} where $z$ is represented as a vector of $z_{ij}$'s for all $i \ne j$. It is possible to calculate the marginal probability of each edge $p(z_{ij} = 1\given x, q)$ for all $i, j$ in $O(n^3)$ time using the inside-outside algorithm \citep{Baker1979} on the data structures of \citet{Eisner1996}. The parsing constraints ensure that each word has exactly one head (i.e. $\sum_{i=1}^n z_{ij} = 1$). Therefore, if we want to utilize the \emph{soft-head} selection of a position $j$, the context vector is defined as: \begin{align*} f_j(x, z) = \sum_{i=1}^n \ind\{z_{ij} = 1\} \xvec_i & & \cvec_j = \E_z [f_j(x, z)] = \sum_{i=1}^n p(z_{ij} = 1 \given x, q) \xvec_i \end{align*} Note that in this case the annotation function has the subscript $j$ to produce a context vector for each word in the sentence. Similar types of attention can be applied for other tree properties (e.g. soft-children). We refer to this type of attention layer as a \emph{syntactic attention} layer. \subsection{End-to-End Training}\label{sec:e2e} Graphical models of this form have been widely used as the final layer of deep models. Our contribution is to argue that these networks can be added within deep networks in place of simple attention layers. The whole model can then be trained end-to-end. The main complication in utilizing this approach within the network itself is the need to backpropagate the gradients through an inference algorithm as part of the structured attention network. Past work has demonstrated the techniques necessary for this approach (see \citet{Stoyanov2011}), but to our knowledge it is very rarely employed. 
Consider the case of the simple linear-chain CRF layer from equation (\ref{linear-chain}). Figure~\ref{fig:fb} (left) shows the standard forward-backward algorithm for computing the marginals $p(z_i = 1\given x, q; \theta)$. If we treat the forward-backward algorithm as a neural network layer, its input is the set of potentials $\theta$, and its output after the forward pass is the set of marginals.\footnote{Confusingly, ``forward'' in this case is different from its use in the \textit{forward}-backward algorithm, as the marginals themselves are the output. However, the two uses of the term are actually quite related. The forward-backward algorithm can be interpreted as a forward and backpropagation pass on the log partition function. See \citet{Eisner2016} for further details (appropriately titled ``Inside-Outside and Forward-Backward Algorithms Are Just Backprop''). As such, our full approach can be seen as computing second-order information. This interpretation is central to \citet{Li2009}.} To backpropagate a loss through this layer, we need to compute the gradient of the loss $\mcL$ with respect to $\theta$, $\nabla_{\theta}^\mcL$, as a function of the gradient of the loss with respect to the marginals, $\nabla_{p}^\mcL$.\footnote{In general we use $\nabla^a_b$ to denote the Jacobian of $a$ with respect to $b$.} As the forward-backward algorithm consists of differentiable steps, this function can be derived using reverse-mode automatic differentiation of the forward-backward algorithm itself. Note that this reverse-mode algorithm conveniently has a parallel structure to the forward version, and can also be implemented using dynamic programming. \begin{wraptable}{r}{0.54\textwidth} \small \centering \begin{tabular}{cc|cc|cc} \toprule & & \multicolumn{2}{c|}{$\oplus$} & \multicolumn{2}{c}{$\otimes$} \\ $s_a$ & $s_b$ & $ l_{a+b} $ & $s_{a+b}$ & $ l_{a\cdot b}$ & $s_{a \cdot b}$\\ \midrule $+$ & $+$ & $l_a+\log (1 + d)$& $+$ & $l_a+l_b$ &$+$ \\ $+$ & $-$ & $l_a+\log (1 - d)$& $+$ & $l_a+l_b$ &$-$ \\ $-$ & $+$ & $l_a+\log (1 - d)$& $-$ & $l_a+l_b$ &$-$ \\ $-$ & $-$ & $l_a+\log (1 + d)$& $-$ & $l_a+l_b$ &$+$ \\ \bottomrule \end{tabular} \caption{\label{tab:dlog} \small Signed log-space semifield (from \cite{Li2009}). Each real number $a$ is represented as a pair $( l_a, s_a )$ where $l_a = \log |a|$ and $s_a = \sign(a)$. Therefore $a = s_a \exp(l_a)$. For the above we let $d = \exp(l_b- l_a)$ and assume $|a| > |b|$. } \end{wraptable} However, in practice, one cannot simply use current off-the-shelf tools for this task. For one, efficiency is quite important for these models and so the benefits of hand-optimizing the reverse-mode implementation still outweigh the simplicity of automatic differentiation. Secondly, numerical precision becomes a major issue for structured attention networks. For computing the forward pass and the marginals, it is important to use the standard log-space semifield over $\mathbb{R}\cup\{\pm \infty\}$ with binary operations $(\oplus = \logadd, \otimes = +)$ to avoid underflow of probabilities. For computing the backward pass, we need to remain in log-space, but also handle logarithms of negative values (since $\pgrad$ could be negative). This requires extending to the \textit{signed} log-space semifield over $\left[\mathbb{R}\cup\{\pm \infty\}\right] \times \{+, -\}$ with special $+$/$-$ operations. Table~\ref{tab:dlog}, based on \cite{Li2009}, demonstrates how to handle this issue, and Figure~\ref{fig:fb} (right) describes backpropagation through the forward-backward algorithm. 
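To make the signed arithmetic concrete, the short Python sketch below (our own illustration, not part of the released implementation) stores each real number as a $(\log|a|, \sign(a))$ pair and implements the $\oplus$ and $\otimes$ rules of Table~\ref{tab:dlog}.
\begin{verbatim}
# Signed log-space semifield: each real a is stored as (log|a|, sign(a)).
import math

def slog(a):
    return (-math.inf, 1.0) if a == 0 else (math.log(abs(a)), math.copysign(1.0, a))

def sexp(la, sa):                       # decode back to an ordinary float
    return sa * math.exp(la)

def smul(a, b):                         # "times": (l_a + l_b, s_a * s_b)
    return a[0] + b[0], a[1] * b[1]

def sadd(a, b):                         # "plus": signed logadd, with |a| >= |b| after swap
    if a[0] < b[0]:
        a, b = b, a
    if b[0] == -math.inf:
        return a
    d = math.exp(b[0] - a[0])           # d = exp(l_b - l_a) <= 1
    if a[1] == b[1]:
        return a[0] + math.log1p(d), a[1]
    return a[0] + math.log1p(-d), a[1]  # opposite signs: magnitude shrinks

x, y = -2.5, 0.75
print(sexp(*sadd(slog(x), slog(y))), x + y)   # both -1.75 (up to rounding)
print(sexp(*smul(slog(x), slog(y))), x * y)   # both -1.875
\end{verbatim}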
For dependency parsing, the forward pass can be computed using the inside-outside implementation of Eisner's algorithm \citep{Eisner1996}. Similarly, the backpropagation parallels the inside-outside structure. Forward/backward pass through the inside-outside algorithm is described in Appendix~\ref{app:io}. \section{Experiments} We experiment with three instantiations of structured attention networks on four different tasks: (a) a simple, synthetic tree manipulation task using the syntactic attention layer, (b) machine translation with segmentation attention (i.e. two-state linear-chain CRF), (c) question answering using an $n$-state linear-chain CRF for multi-step inference over $n$ facts, and (d) natural language inference with syntactic tree attention. These experiments are not intended to boost the state-of-the-art for these tasks but to test whether these methods can be trained effectively in an end-to-end fashion, can yield improvements over standard selection-based attention, and can learn plausible latent structures. All model architectures, hyperparameters, and training details are further described in Appendix~\ref{app:model}. \subsection{Tree Transduction} The first set of experiments look at a tree-transduction task. These experiments use synthetic data to explore a failure case of soft-selection attention models. The task is to learn to convert a random formula given in prefix notation to one in infix notation, e.g., \begin{small} \begin{align*} (\,\,\,*\,\,\,(\,\,\,+\,\,\,(\,\,\,+\,\,\,15\,\,\,7\,\,\,)\,\,\,1\,\,\,8\,\,\,)\,\,\,(\,\,\,+\,\,\,19\,\,\,0\,\,\,11\,\,\,)\,\,\,) \,\, \Rightarrow (\,\,(\,\,15\,\,+\,\,7\,\,\,)\,\,+\,\,1\,\,+\,\,8\,\,\,)\,\,*\,\,(\,\,\,19\,\,+\,\,0\,\,+\,\,11\,\,\,) \end{align*} \end{small} The alphabet consists of symbols $\{(, ),+,*\}$, numbers between $0$ and $20$, and a special root symbol $\$$. This task is used as a preliminary task to see if the model is able to learn the implicit tree structure on the source side. The model itself is an encoder-decoder model, where the encoder is defined below and the decoder is an LSTM. See Appendix~\ref{app:tree} for the full model. Training uses $15$K prefix-infix pairs where the maximum nesting depth is set to be between $2$-$4$ (the above example has depth $3$), with $5$K pairs in each depth bucket. The number of expressions in each parenthesis is limited to be at most $4$. Test uses $1$K unseen sequences with depth between $2$-$6$ (note specifically deeper than train), with $200$ sequences for each depth. The performance is measured as the average proportion of correct target tokens produced until the first failure (as in \cite{Grefenstette2015}). For experiments we try using different forms of \textit{self}-attention over embedding-only encoders. Let $\mathbf{x}_j$ be an embedding for each source symbol; our three variants of the source representation $\hat{\xvec}_j$ are: (a) \textit{no atten}, just symbol embeddings by themselves, i.e. $\hat{\xvec}_j = \mathbf{x}_j$; (b) \textit{simple} attention, symbol embeddings and soft-pairing for each symbol, i.e. $ \hat{\xvec}_j = [\mathbf{x}_j; \mathbf{c}_j]$ where $ \mathbf{c}_j = \sum_{i=1}^n \softmax( \theta_{ij}) \mathbf{x}_i$ is calculated using soft-selection; (c) \textit{structured} attention, symbol embeddings and soft-parent, i.e. $\hat{\xvec}_j = [\mathbf{x}_j; \mathbf{c}_j]$ where $ \mathbf{c}_j = \sum_{i=1}^n p(z_{ij} = 1 \given x) \mathbf{x}_i $ is calculated using parsing marginals, obtained from the syntactic attention layer. 
None of these models use an explicit query value---the potentials come from running a bidirectional LSTM over the source, producing hidden vectors $\hvec_i$, and then computing \[\theta_{ij} = \tanh(\mathbf{s}^\top \tanh(\mathbf{W}_1 \mathbf{h}_i + \mathbf{W}_2 \mathbf{h}_j + \mathbf{b}))\] \noindent where $\mathbf{s}, \mathbf{b}, \mathbf{W}_1, \mathbf{W}_2$ are parameters (see Appendix~\ref{app:parsing}). \begin{wraptable}{l}{0.43\textwidth}\label{tree-perf} \small \begin{tabular}{c ccc} \toprule Depth & No Atten & Simple & Structured \\ \midrule $2$ & $7.6$ & $87.4$ & $99.2$ \\ $3$ & $4.1$ & $49.6$ & $87.0$ \\ $4$ & $2.8$ & $23.3$ & $64.5$ \\ $5$ & $2.1$ & $15.0$ & $30.8$ \\ $6$ & $1.5$ & $8.5$ & $18.2$ \\ \bottomrule \end{tabular} \caption{\label{tree-perf} \small Performance (average length to failure \%) of models on the tree-transduction task.} \end{wraptable} The source representations $[\hat{\xvec}_1, \dots, \hat{\xvec}_n]$ are attended over using the standard attention mechanism at each decoding step by an LSTM decoder.\footnote{Thus there are two attention mechanisms at work under this setup. First, structured attention over the source only to obtain soft-parents for each symbol (i.e. self-attention). Second, standard softmax alignment attention over the source representations during decoding.} Additionally, symbol embedding parameters are shared between the parsing LSTM and the source encoder. \paragraph{Results} Table~\ref{tree-perf} shows the results for the task. Note that this task is fairly difficult as the encoder is quite simple. The baseline model (unsurprisingly) performs poorly as it has no information about the source ordering. The simple attention model performs better, but is significantly outperformed by the structured model with a tree structure bias. We hypothesize that the model is partially reconstructing the arithmetic tree. Figure~\ref{tree-viz} shows the attention distribution for the simple/structured models on the same source sequence, which indicates that the structured model is able to learn boundaries (i.e. parentheses). \subsection{Neural Machine Translation} Our second set of experiments use a full neural machine translation model utilizing attention over subsequences. Here both the encoder and decoder are LSTMs, and we replace standard simple attention with a segmentation attention layer. We experiment with two settings: translating directly from unsegmented Japanese characters to English words (effectively using structured attention to perform soft word segmentation), and translating from segmented Japanese words to English words (which can be interpreted as doing \emph{phrase-based} neural machine translation). Japanese word segmentation is done using the KyTea toolkit \citep{Neubig2011}. The data comes from the Workshop on Asian Translation (WAT) \citep{wat2016}. We randomly pick $500$K sentences from the original training set (of $3$M sentences) where the Japanese sentence was at most $50$ characters and the English sentence was at most $50$ words. We apply the same length filter on the provided validation/test sets for evaluation. The vocabulary consists of all tokens that occurred at least $10$ times in the training corpus. 
The segmentation attention layer is a two-state CRF where the unary potentials at the $j$-th decoder step are parameterized as \[ \theta_i(k)= \begin{cases} \hvec_i \Wvec \hvec_j',& k = 1 \\ 0, &k = 0 \end{cases} \] Here $[\hvec_1, \dots, \hvec_n]$ are the encoder hidden states and $\mathbf{h}_j'$ is the $j$-th decoder hidden state (i.e. the query vector). The pairwise potentials are parameterized linearly with $\mathbf{b}$, i.e. all together \[ \theta_{i,i+1}(z_i, z_{i+1}) = \theta_i(z_i) + \theta_{i+1}(z_{i+1}) + \mathbf{b}_{z_i, z_{i+1}} \] Therefore the segmentation attention layer requires just $4$ additional parameters. Appendix~\ref{app:nmt} describes the full model architecture. We experiment with three attention configurations: (a) standard {\it simple} attention, i.e. $\cvec_j = \sum_{i=1}^n \softmax(\theta_i) \hvec_i $; (b) \textit{sigmoid} attention: multiple selection with Bernoulli random variables, i.e. $\cvec_j = \sum_{i=1}^n \sigmoid(\theta_i) \hvec_i$; (c) \textit{structured} attention, encoded with normalized CRF marginals, \begin{align*} \cvec_j = \sum_{i=1}^n \frac{p(z_i=1 \given x, q)}{\gamma} \hvec_i & & \gamma = \frac{1}{\lambda} \sum_{i=1}^n p(z_i =1 \given x, q) \end{align*} The normalization term $\gamma$ is not ideal but we found it to be helpful for stable training.\footnote{With the standard expectation (i.e. $\cvec_j = \sum_{i=1}^n p(z_i=1 \given x, q) \hvec_i$) we empirically observed the marginals to quickly saturate. We tried various strategies to overcome this, such as putting an $l_2$ penalty on the unary potentials and initializing with a pretrained sigmoid attention model, but simply normalizing the marginals proved to be the most effective. However, in this case the context vector can no longer be interpreted as the expectation of an annotation function.} $\lambda$ is a hyperparameter (we use $\lambda = 2$) and we further add an $l_2$ penalty of $0.005$ on the pairwise potentials $\bvec$. These values were found via grid search on the validation set. \begin{wraptable}{l}{0.43\textwidth}\label{nmt-perf} \small \begin{tabular}{c ccc} \toprule & Simple & Sigmoid & Structured \\ \midrule \textsc{Char} & $12.6$ & $13.1$ & $14.6$ \\ \textsc{Word} & $14.1$ & $13.8$ & $14.3$ \\ \bottomrule \end{tabular} \caption{\label{nmt-perf}\small Translation performance as measured by BLEU (higher is better) on character-to-word and word-to-word Japanese-English translation for the three different models.} \end{wraptable} \paragraph{Results} Results for the translation task on the test set are given in Table~\ref{nmt-perf}. Sigmoid attention outperforms simple (softmax) attention on the character-to-word task, potentially because it is able to learn many-to-one alignments. On the word-to-word task, the opposite is true, with simple attention outperforming sigmoid attention. Structured attention outperforms both models on both tasks, although improvements on the word-to-word task are modest and unlikely to be statistically significant. For further analysis, Figure~\ref{fig:vis3} shows a visualization of the different attention mechanisms on the character-to-word setup. The simple model generally focuses attention heavily on a single character. In contrast, the sigmoid and structured models are able to spread their attention distributions over contiguous subsequences. The structured attention model learns additional parameters (i.e. $\bvec$) to smooth out this type of attention. 
\subsection{Question Answering} Our third experiment is on question answering (QA) with the linear-chain CRF attention layer for inference over multiple facts. We use the bAbI dataset \citep{Weston2015}, where the input is a set of sentences/facts paired with a question, and the answer is a single token. For many of the tasks the model has to attend to multiple supporting facts to arrive at the correct answer (see Figure~\ref{fig:vis4} for an example), and existing approaches use multiple `hops' to greedily attend to different facts. We experiment with employing structured attention to perform inference in a non-greedy way. As the ground truth supporting facts are given in the dataset, we are able to assess the model's inference accuracy. The baseline (simple) attention model is the End-To-End Memory Network \citep{Sukhbaatar2015} (MemN2N), which we briefly describe here. See Appendix~\ref{app:qa} for full model details. Let $\xvec_1, \dots, \xvec_n$ be the input embedding vectors for the $n$ sentences/facts and let $\mathbf{q}$ be the query embedding. In MemN2N, $z_k$ is the random variable for the sentence to select at the $k$-th inference step (i.e. $k$-th hop), and thus $z_k \in \{1, \dots, n\}$. The probability distribution over $z_k$ is given by $p(z_k = i \given x, q) = \softmax((\xvec_i^k)^\top\qvec^k)$, and the context vector is given by $\cvec^k = \sum_{i=1}^n p(z_k = i \given x, q) \mathbf{o}_i^k$, where $\xvec_i^k, \mathbf{o}_i^k$ are the input and output embedding for the $i$-th sentence at the $k$-th hop, respectively. The $k$-th context vector is used to modify the query $\qvec^{k+1} = \qvec^k + \cvec^k$, and this process repeats for $k = 1, \dots, K$ (for $k=1$ we have $\xvec_i^k = \xvec_i, \qvec^k = \qvec, \cvec^k = \mathbf{0}$). The $K$-th context and query vectors are used to obtain the final answer. The attention mechanism for a $K$-hop MemN2N network can therefore be interpreted as a greedy selection of a length-$K$ sequence of facts (i.e. $z_1, \dots, z_K$). For structured attention, we use an $n$-state, $K$-step linear-chain CRF.\footnote{Note that this differs from the segmentation attention for the neural machine translation experiments described above, which was a $K$-state (with $K =2$), $n$-step linear-chain CRF.} We experiment with two different settings: (a) a unary CRF model with node potentials $$\theta_k(i) = (\xvec_i^k)^\top \mathbf{q}^k$$ and (b) a binary CRF model with pairwise potentials $$\theta_{k,k+1}(i, j) = (\mathbf{x}_i^k)^\top\qvec^k + (\mathbf{x}_i^k)^\top \xvec_j^{k + 1} + (\mathbf{x}_j^{k + 1})^\top \mathbf{q}^{k + 1}$$ The binary CRF model is designed to test the model's ability to perform sequential reasoning. For both (a) and (b), a \emph{single} context vector is computed: $\mathbf{c} = \sum_{z_1,\ldots,z_K} p(z_1,\ldots,z_K \given x,q) f(x,z)$ (unlike MemN2N which computes $K$ context vectors). Evaluating $\mathbf{c}$ requires summing over all $n^K$ possible sequences of length $K$, which may not be practical for large values of $K$. However, if $f(x,z)$ factors over the components of $z$ (e.g. $f(x,z)= \sum_{k=1}^K f_k(x,z_k)$) then one can rewrite the above sum in terms of marginals: $\mathbf{c} = \sum_{k=1}^K \sum_{i=1}^n p(z_{k} = i \given x,q) f_{k}(x,z_{k})$. In our experiments, we use $f_k(x,z_k) = \mathbf{o}_{z_k}^k$. All three models are described in further detail in Appendix~\ref{app:qa}. \paragraph{Results} We use the version of the dataset with $1$K questions for each task. 
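For reference, the greedy hop-by-hop attention of MemN2N described above can be sketched as follows. This is a hypothetical NumPy illustration with our own names; the embedding construction and the adjacent weight tying are omitted.
\begin{verbatim}
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def memn2n_answer(X, O, q, W, K):
    """X[k], O[k]: (n, d) input/output fact embeddings at hop k;
    q: (d,) query embedding; W: (V, d) output projection."""
    for k in range(K):
        p = softmax(X[k] @ q)   # p(z_k = i | x, q): greedy attention at hop k
        c = p @ O[k]            # k-th context vector
        q = q + c               # query update q^{k+1} = q^k + c^k
    return softmax(W @ q)       # equals softmax(W (q^K + c^K))
\end{verbatim}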
Since all models reduce to the same network for tasks with $1$ supporting fact, those tasks are excluded from our experiments. The number of hops (i.e. $K$) is task-dependent, and the number of memories (i.e. $n$) is limited to be at most $25$ (note that many questions have fewer than $25$ facts---e.g. the example in Figure~\ref{fig:vis4} has $9$ facts). Due to high variance in model performance, we train $20$ models with different initializations for each task and report the test accuracy of the model that performed the best on a $10\%$ held-out validation set (as is typically done for bAbI tasks). Results of the three different models are shown in Table~\ref{tab:results}. For correct answer selection (Ans $\%$), we find that MemN2N and the Binary CRF model perform similarly while the Unary CRF model does worse, indicating the importance of including pairwise potentials. We also assess each model's ability to attend to the correct supporting facts in Table~\ref{tab:results} (Fact $\%$). Since ground truth supporting facts are provided for each query, we can check the sequence accuracy of supporting facts for each model (i.e. the rate of selecting the exact correct sequence of facts) by taking the highest probability sequence $\hat{z} = \argmax p(z_1, \dots, z_K \given x, q)$ from the model and checking against the ground truth. Overall, the Binary CRF is able to recover supporting facts better than MemN2N. This improvement is significant and can be up to two-fold as seen for tasks $2$, $11$, $13$, and $17$. However, we observed that on many tasks it is sufficient to select only the last (or first) fact correctly to predict the answer, and thus higher sequence selection accuracy does not necessarily imply better answer accuracy (and vice versa). For example, all three models get $100 \%$ answer accuracy on task $15$ but have different supporting fact accuracies. Finally, in Figure~\ref{fig:vis4} we visualize the output edge marginals produced by the Binary CRF model for a single question in task $16$. In this instance, the model is uncertain but ultimately able to select the right sequence of facts $5 \rightarrow 6 \rightarrow 8$. \subsection{Natural Language Inference} The final experiment looks at the task of natural language inference (NLI) with the syntactic attention layer. In NLI, the model is given two sentences (hypothesis/premise) and has to predict their relationship: entailment, contradiction, or neutral. For this task, we use the Stanford NLI dataset \citep{Bowman2015} and base our approach on the decomposable attention model of \cite{Parikh2016}. This model takes in the matrix of word embeddings as the input for each sentence and performs \textit{inter-sentence} attention to predict the answer. Appendix~\ref{app:nli} describes the full model. As in the transduction task, we focus on modifying the input representation to take into account soft parents via self-attention (i.e. \textit{intra-sentence} attention). In addition to the three baselines described for tree transduction (No Attention, Simple, Structured), we also explore two additional settings: (d) \textit{hard} pipeline parent selection, i.e.
$\hat{\mathbf{x}}_j = [\mathbf{x}_j; \mathbf{x}_{\head(j)}]$, where $\head(j)$ is the index of $x_j$'s parent\footnote{The parents are obtained from running the dependency parser of \cite{Andor2016}, available at \\ \url{https://github.com/tensorflow/models/tree/master/syntaxnet}}; (e) \textit{pretrained} structured attention: structured attention where the parsing layer is pretrained for one epoch on a parsed dataset (which was enough for convergence). \paragraph{Results} Results of our models are shown in Table~\ref{tab:main}. Simple attention improves upon the no-attention model, and this is consistent with improvements observed by \cite{Parikh2016} with their intra-sentence attention model. The pipelined model with hard parents also slightly improves upon the baseline. Structured attention outperforms both models, though surprisingly, pretraining the syntactic attention layer on the parse trees performs worse than training it from scratch---it is possible that the pretrained attention is too strict for this task. We also obtain the hard parse for an example sentence by running the Viterbi algorithm on the syntactic attention layer with the non-pretrained model: \begin{center} \includegraphics[scale=0.8]{tikz1.pdf} \end{center} Despite being trained without ever being exposed to an explicit parse tree, the syntactic attention layer learns an almost plausible dependency structure. In the above example, it is able to correctly identify the main verb \texttt{fighting}, but makes mistakes on determiners (e.g. head of \texttt{The} should be \texttt{men}). We generally observed this pattern across sentences, possibly because the verb structure is more important for the inference task. \section{Conclusion} This work outlines structured attention networks, which incorporate graphical models to generalize simple attention, and describes the technical machinery and computational techniques for backpropagating through models of this form. We implement two classes of structured attention layers: a linear-chain CRF (for neural machine translation and question answering) and a more complicated first-order dependency parser (for tree transduction and natural language inference). Experiments show that this method can learn interesting structural properties and improve on top of standard models. Structured attention could also be a way of learning latent labelers or parsers through attention on other tasks. It should be noted that the additional complexity in computing the attention distribution increases run-time---for example, structured attention was approximately $5 \times$ slower to train than simple attention for the neural machine translation experiments, even though both attention layers have the same asymptotic run-time (i.e. $O(n)$). Embedding \textit{differentiable inference} (and more generally, \textit{differentiable algorithms}) into deep models is an exciting area of research. While we have focused on models that admit (tractable) exact inference, similar techniques can be used to embed approximate inference methods. Many optimization algorithms (e.g. gradient descent, LBFGS) are also differentiable \citep{domke2012generic,Maclaurin2015}, and have been used as output layers for structured prediction in energy-based models \citep{Belanger2016,wang2016nips}. Incorporating them as internal neural network layers is an interesting avenue for future work.
\subsubsection*{Acknowledgments} We thank Tao Lei, Ankur Parikh, Tim Vieira, Matt Gormley, Andr{\'e} Martins, Jason Eisner, Yoav Goldberg, and the anonymous reviewers for helpful comments, discussion, notes, and code. We additionally thank Yasumasa Miyamoto for verifying Japanese-English translations. \bibliographystyle{iclr2017_conference} \newpage \appendix \section*{APPENDICES} \section{Model Details}\label{app:model} \subsection{Syntactic Attention}\label{app:parsing} The syntactic attention layer (for tree transduction and natural language inference) is similar to the first-order graph-based dependency parser of \cite{Kipperwasser2016}. Given an input sentence $[x_1, \dots, x_n]$ and the corresponding word vectors $[\xvec_1, \dots, \xvec_n]$, we use a bidirectional LSTM to get the hidden states for each time step $i \in [1, \dots, n]$, \begin{align*} \hvec_i^{\text{fwd}} = \lstm(\xvec_i, \hvec_{i-1}^{\text{fwd}}) & & \hvec_i^{\text{bwd}} = \lstm(\xvec_i, \hvec_{i+1}^{\text{bwd}}) & & \hvec_i = [\hvec_i^{\text{fwd}} ; \hvec_i^{\text{bwd}}] \end{align*} where the forward and backward LSTMs have their own parameters. The score for $x_i \rightarrow x_j$ (i.e. $x_i$ is the parent of $x_j$) is given by an MLP \begin{equation*} \theta_{ij} = \tanh( \svec^\top\tanh(\Wvec_1 \hvec_i + \Wvec_2 \hvec_j + \bvec)) \end{equation*} These scores are used as input to the inside-outside algorithm (see Appendix~\ref{app:io}) to obtain the probability of each word's parent $p(z_{ij} = 1 \given x)$, which is used to obtain the soft-parent $\cvec_j$ for each word $x_j$. In the non-structured case we simply have $p(z_{ij} = 1 \given x) = \softmax(\theta_{ij})$. \subsection{Tree Transduction}\label{app:tree} Let $[x_1, \dots, x_n],[y_1, \dots, y_m]$ be the sequence of source/target symbols, with the associated embeddings $[\xvec_1, \dots, \xvec_n], [\yvec_1, \dots, \yvec_m]$ with $\xvec_i, \yvec_j \in \reals^l$. In the simplest baseline model we take the source representation to be the matrix of the symbol embeddings. The decoder is a one-layer LSTM which produces the hidden states $\hvec_j' = \lstm(\yvec_j, \hvec_{j-1}')$, with $\hvec_j' \in \reals^l$. The hidden states are combined with the input representation via a bilinear map $\Wvec \in \reals^{l \times l}$ to produce the attention distribution used to obtain the vector $\mvec_i$, which is combined with the decoder hidden state as follows, \begin{align*} \alpha_i = \frac{\exp \xvec_i \Wvec \hvec_j'}{\sum_{k=1}^n \exp \xvec_k \Wvec \hvec_j'} & & \mvec_i = \sum_{i=1}^n \alpha_i \xvec_i & & \hat{\hvec}_j = \tanh (\Uvec [\mvec_i ; \hvec_j'] ) \end{align*} Here we have $\Wvec \in \reals^{l \times l}$ and $\Uvec \in \reals^{2l \times l}$. Finally, $\hat{\hvec}_j$ is used to obtain a distribution over the next symbol $y_{j+1}$, \begin{equation*} p(y_{j+1} \given x_1, \dots, x_n, y_1, \dots, y_j) = \softmax(\Vvec \hat{\hvec}_j + \bvec) \end{equation*} For the structured/simple models, the $i$-th source representations are, respectively, \begin{align*} \hat{\xvec}_i = \left[\xvec_i ; \sum_{k=1}^n p(z_{ki} = 1 \given x ) \, \xvec_k\right] & &\hat{\xvec}_i = \left[\xvec_i ; \sum_{k=1}^n \softmax (\theta_{ki})\, \xvec_k\right] \end{align*} where $\theta_{ij}$ comes from the bidirectional LSTM described in~\ref{app:parsing}.
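As a concrete illustration of the parent scores $\theta_{ij}$ and the augmented source representations just defined, the following is a minimal NumPy sketch (our own, not the authors' code). Here \texttt{H} denotes the bidirectional LSTM states, \texttt{X} the symbol embeddings, and \texttt{P} the parent marginals $p(z_{ki} = 1 \given x)$ produced by the inside-outside algorithm, which is not reimplemented here.
\begin{verbatim}
import numpy as np

def parent_scores(H, W1, W2, s, b):
    """theta[i, j]: score that symbol i is the parent of symbol j.
    H: (n, 2h) BiLSTM states; W1, W2: (2h, m); s, b: (m,)."""
    A, B = H @ W1, H @ W2
    return np.tanh(np.tanh(A[:, None, :] + B[None, :, :] + b) @ s)  # (n, n)

def augment(X, theta=None, P=None):
    """x_hat_i = [x_i ; soft parent of x_i].  X: (n, l) embeddings.
    Structured case: P[k, i] = p(z_ki = 1 | x) from inside-outside.
    Simple case: pass theta; the parent weights are a softmax over k."""
    if P is None:
        e = np.exp(theta - theta.max(axis=0, keepdims=True))
        P = e / e.sum(axis=0, keepdims=True)
    return np.concatenate([X, P.T @ X], axis=1)       # (n, 2l)
\end{verbatim}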
Then $\alpha_i$ and $\mvec_i$ are changed accordingly, \begin{align*} \alpha_i = \frac{\exp \hat{\xvec}_i \Wvec \hvec_j'}{\sum_{k=1}^n \exp \hat{\xvec}_k \Wvec \hvec_j'} & & \mvec_i = \sum_{i=1}^n \alpha_i \hat{\xvec}_i \end{align*} Note that in this case we have $\Wvec \in \reals^{2l \times l}$ and $\Uvec \in \reals^{3l \times l}$. We use $l = 50$ in all our experiments. The forward/backward LSTMs for the parsing LSTM are also $50$-dimensional. Symbol embeddings are shared between the encoder and the parsing LSTMs. Additional training details include: batch size of $20$; training for $13$ epochs with a learning rate of $1.0$, which starts decaying by half after epoch $9$ (or the epoch at which performance does not improve on validation, whichever comes first); parameter initialization over a uniform distribution $U[-0.1, 0.1]$; gradient normalization at $1$ (i.e. renormalize the gradients to have norm $1$ if the $l_2$ norm exceeds $1$). Decoding is done with beam search (beam size $ = 5$). \subsection{Neural Machine Translation}\label{app:nmt} The baseline NMT system is from \cite{Luong2015}. Let $[x_1, \dots, x_n],[y_1, \dots, y_m]$ be the source/target sentence, with the associated word embeddings $[\xvec_1, \dots, \xvec_n], [\yvec_1, \dots, \yvec_m]$. The encoder is an LSTM over the source sentence, which produces the hidden states $[\hvec_1, \dots, \hvec_n]$ where \begin{equation*} \hvec_i = \lstm(\xvec_i, \hvec_{i-1}) \end{equation*} and $\hvec_i \in \reals^l$. The decoder is another LSTM which produces the hidden states $\hvec_j' \in \reals^l$. In the simple attention case with categorical attention, the hidden states are combined with the input representation via a bilinear map $\Wvec \in \reals^{l \times l}$ and the resulting softmax distribution is used to obtain the context vector at the $j$-th time step, \begin{align*} \theta_i = \hvec_i \Wvec \hvec_j' & & \cvec_j = \sum_{i=1}^n \softmax(\theta_i)\hvec_i \end{align*} The Bernoulli attention network has the same $\theta_i$ but instead uses a $\sigmoid$ to obtain the weights of the linear combination, i.e., \begin{align*} \cvec_j = \sum_{i=1}^n \sigmoid(\theta_i) \hvec_i \end{align*} Finally, the structured attention model uses a bilinear map to parameterize one of the unary potentials \[ \theta_i(k)= \begin{cases} \hvec_i \Wvec \hvec_j',& k = 1 \\ 0, &k = 0 \end{cases} \] \begin{align*} \theta_{i,i+1}(z_i, z_{i+1}) &= \theta_i(z_i) + \theta_{i+1}(z_{i+1}) + \mathbf{b}_{z_i, z_{i+1}} \end{align*} where $\bvec$ are the pairwise potentials. These potentials are used as inputs to the forward-backward algorithm to obtain the marginals $p(z_i = 1 \given x, q)$, which are further normalized to obtain the context vector \begin{align*} \cvec_j = \sum_{i=1}^n \frac{p(z_i=1 \given x, q)}{\gamma} \hvec_i & & \gamma = \frac{1}{\lambda} \sum_{i=1}^n p(z_i = 1 \given x, q) \end{align*} We use $\lambda = 2$ and also add an $l_2$ penalty of $0.005$ on the pairwise potentials $\bvec$. The context vector is then combined with the decoder hidden state \begin{align*} \hat{\hvec}_j = \tanh (\Uvec[\cvec_j ; \hvec_j']) \end{align*} and $\hat{\hvec}_j$ is used to obtain the distribution over the next target word $y_{j+1}$ \begin{align*} p(y_{j+1} \given x_1, \dots, x_n, y_1, \dots, y_j) = \softmax(\Vvec \hat{\hvec}_j + \bvec) \end{align*} The encoder/decoder LSTMs have $2$ layers and $500$ hidden units (i.e. $l = 500$).
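The three attention variants and the decoder combination step above can be summarized in the following hypothetical NumPy sketch; the names are ours, and for the structured case the marginals are assumed to come from a forward-backward routine such as the one sketched earlier.
\begin{verbatim}
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def context(H, theta, mode, p=None, lam=2.0):
    """H: (n, l) encoder states; theta: (n,) potentials for this decoder
    step; p: (n,) CRF marginals p(z_i = 1 | x, q) for the structured case."""
    if mode == "simple":
        w = softmax(theta)
    elif mode == "sigmoid":
        w = sigmoid(theta)
    else:                                   # structured: normalized marginals
        w = p / (p.sum() / lam)
    return w @ H

def decode_step(H, h_dec, W, U, V, b_out, mode="simple", p=None):
    theta = H @ W @ h_dec                   # theta_i = h_i W h'_j
    c = context(H, theta, mode, p)
    h_hat = np.tanh(U @ np.concatenate([c, h_dec]))   # tanh(U [c_j ; h'_j])
    return softmax(V @ h_hat + b_out)       # distribution over the next word
\end{verbatim}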
Additional training details include: batch size of $128$; training for $30$ epochs with a learning rate of $1.0$, which starts decaying by half after the first epoch at which performance does not improve on validation; dropout with probability $0.3$; parameter initialization over a uniform distribution $U[-0.1, 0.1]$; gradient normalization at $1$. We generate target translations with beam search (beam size $= 5$), and evaluate with \texttt{multi-bleu.perl} from Moses.\footnote{ \url{https://github.com/moses-smt/mosesdecoder/blob/master/scripts/generic/multi-bleu.perl}} \subsection{Question Answering}\label{app:qa} Our baseline model (MemN2N) is implemented following the same architecture as described in \cite{Sukhbaatar2015}. In particular, let $x = [x_1, \dots, x_n]$ represent the sequence of $n$ facts with the associated embeddings $[\mathbf{x}_1, \dots, \xvec_n]$ and let $\qvec$ be the embedding of the query $q$. The embeddings are obtained by simply adding the word embeddings in each sentence or query. The full model with $K$ hops is as follows: \begin{align*} &p(z_k = i \given x, q) = \softmax( (\mathbf{x}_i^k)^\top \mathbf{q}^k ) \\ &\mathbf{c}^k = \sum_{i=1}^n p(z_k = i \given x, q) \mathbf{o}_i^k \\ &\mathbf{q}^{k + 1} = \mathbf{q}^k + \mathbf{c}^k \\ &p(y \given x, q) = \softmax(\Wvec (\mathbf{q}^K + \mathbf{c}^K)) \end{align*} where $p(y \given x, q)$ is the distribution over the answer vocabulary. At each layer, $\{\mathbf{x}_i^k\}$ and $\{\mathbf{o}_i^k\}$ are computed using embedding matrices $\mathbf{X}^k$ and $\mathbf{O}^k$. We use the \emph{adjacent weight tying scheme} from the paper so that $\mathbf{X}^{k+1} = \mathbf{O}^k, \mathbf{W}^T = \mathbf{O}^K$. $\mathbf{X}^1$ is also used to compute the query embedding at the first hop. For $k=1$ we have $\xvec_i^k = \xvec_i, \qvec^k = \qvec, \cvec^k = \mathbf{0}$. For both the Unary and the Binary CRF models, the same input fact and query representations are computed (i.e. same embedding matrices with weight tying scheme). For the unary model, the potentials are parameterized as \[ \theta_{k}(i) = (\xvec_i^k)^\top \mathbf{q}^k \] and for the binary model we compute pairwise potentials as \[ \theta_{k,k+1}(i, j) = (\mathbf{x}_i^k)^\top\qvec^k + (\mathbf{x}_i^k)^\top \xvec_j^{k + 1} + (\mathbf{x}_j^{k + 1})^\top \mathbf{q}^{k + 1} \] The $\qvec^k$'s are updated simply with a linear mapping, i.e. \[ \mathbf{q}^{k+1} = \mathbf{Q} \mathbf{q}^k \] In the case of the Binary CRF, to discourage the model from selecting the same fact again we additionally set $\theta_{k,k+1}(i,i) = -\infty$ for all $i \in \{1, \dots, n\}$. Given these potentials, we compute the marginals $p(z_k = i, z_{k+1} = j \given x, q)$ using the forward-backward algorithm, which is then used to compute the context vector: \begin{align*} \mathbf{c} = \sum_{z_1,\ldots,z_K} p(z_1,\ldots,z_K \given x,q) f(x,z) & & f(x,z) = \sum_{k=1}^K f_k(x, z_k) & & f_k(x,z_k) = \mathbf{o}_{z_k}^k \end{align*} Note that if $f(x,z)$ factors over the components of $z$ (as is the case above) then computing $\cvec$ only requires evaluating the marginals $p(z_k \given x,q)$. Finally, given the context vector the prediction is made in a similar fashion to MemN2N: \begin{align*} p(y \given x, q) = \softmax(\Wvec (\mathbf{q}^K + \mathbf{c})) \end{align*} Other training setup is similar to \cite{Sukhbaatar2015}: we use stochastic gradient descent with learning rate $0.01$, which is divided by $2$ every $25$ epochs until $100$ epochs are reached. 
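To summarize the Binary CRF attention described in this appendix, the following is a minimal NumPy sketch (our own illustration) of building the pairwise potentials, running the forward-backward algorithm over the $K$-step, $n$-state chain, and computing the single context vector $\mathbf{c}$. The query updates and the answer layer are omitted, and a large negative constant stands in for $-\infty$.
\begin{verbatim}
import numpy as np

def logsumexp(a, axis):
    m = a.max(axis=axis, keepdims=True)
    return np.squeeze(m, axis) + np.log(np.exp(a - m).sum(axis=axis))

def binary_crf_context(X, O, q, K):
    """X[k], O[k]: (n, d) input/output embeddings at hop k; q[k]: (d,) query."""
    n = X[0].shape[0]
    phi = []                                 # phi[k][i, j] = theta_{k,k+1}(i, j)
    for k in range(K - 1):
        pot = (X[k] @ q[k])[:, None] + X[k] @ X[k + 1].T \
              + (X[k + 1] @ q[k + 1])[None, :]
        np.fill_diagonal(pot, -1e9)          # forbid selecting the same fact twice
        phi.append(pot)
    alpha = [np.zeros(n)]                    # forward log-messages
    for k in range(K - 1):
        alpha.append(logsumexp(alpha[k][:, None] + phi[k], axis=0))
    beta = [np.zeros(n) for _ in range(K)]   # backward log-messages
    for k in range(K - 2, -1, -1):
        beta[k] = logsumexp(phi[k] + beta[k + 1][None, :], axis=1)
    log_z = logsumexp(alpha[-1], axis=0)
    c = np.zeros(O[0].shape[1])
    for k in range(K):
        marg = np.exp(alpha[k] + beta[k] - log_z)   # p(z_k = i | x, q)
        c += marg @ O[k]                            # sum_k sum_i p(z_k = i) o_i^k
    return c
\end{verbatim}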
The capacity of the memory is limited to $25$ sentences. The embedding vectors are of size $20$ and gradients are renormalized if the norm exceeds $40$. All models implement \emph{position encoding}, \emph{temporal encoding}, and \emph{linear start} from the original paper. For linear start, the $\softmax(\cdot)$ function in the attention layer is removed at the beginning and re-inserted after $20$ epochs for MemN2N, while for the CRF models we apply a $\log(\softmax(\cdot))$ layer on the $\qvec^k$ after $20$ epochs. Each model is trained separately for each task. \subsection{Natural Language Inference}\label{app:nli} Our baseline model/setup is essentially the same as that of \cite{Parikh2016}. Let $[x_1, \dots, x_n], [y_1, \dots, y_m]$ be the premise/hypothesis, with the corresponding input representations $[\xvec_1, \dots, \xvec_n], [\yvec_1, \dots, \yvec_m]$. The input representations are obtained by a linear transformation of the $300$-dimensional pretrained GloVe embeddings \citep{Pennington2014} after normalizing the GloVe embeddings to have unit norm.\footnote{We use the GloVe embeddings pretrained over the $840$ billion word Common Crawl, publicly available at \url{http://nlp.stanford.edu/projects/glove/}} The pretrained embeddings remain fixed but the linear layer (which is also $300$-dimensional) is trained. Words not in the pretrained vocabulary are hashed to one of $100$ Gaussian embeddings with mean $0$ and standard deviation $1$. We concatenate each input representation with a convex combination of the other sentence's input representations (essentially performing \textit{inter-sentence} attention), where the weights are determined through a dot product followed by a softmax, \begin{align*} e_{ij} = f(\xvec_i)^\top f(\yvec_j) & & \bar{\xvec}_{i} = \left[\xvec_i ; \sum_{j=1}^m \frac{\exp e_{ij}}{\sum_{k=1}^m \exp e_{ik}} \yvec_{j}\right] & & \bar{\yvec}_{j} = \left[\yvec_j ; \sum_{i=1}^n \frac{\exp e_{ij}}{\sum_{k=1}^n \exp e_{kj}} \xvec_{i}\right] \end{align*} Here $f(\cdot)$ is an MLP. The new representations are fed through another MLP $g(\cdot)$, summed, passed through the final MLP $h(\cdot)$, and fed through a softmax layer to obtain a distribution over the labels $l$, \begin{align*} \bar{\xvec} &= \sum_{i=1}^n g(\bar{\xvec}_{i}) \hspace{20mm} \bar{\yvec} = \sum_{j=1}^m g(\bar{\yvec}_{j}) \\ p(l \given x_1, &\dots, x_n, y_1, \dots, y_m)= \softmax(\Vvec h([\bar{\xvec}; \bar{\yvec}]) + \bvec) \end{align*} All the MLPs have $2$ layers, $300$ $\relu$ units, and dropout probability of $0.2$. For structured/simple models, we first employ the bidirectional parsing LSTM (see \ref{app:parsing}) to obtain the scores $\theta_{ij}$. In the structured case, each word representation is simply concatenated with its soft-parent \begin{align*} \hat{\xvec}_i = \left[\xvec_i ; \sum_{k=1}^n p(z_{ki} = 1 \given x ) \xvec_k\right] \end{align*} and $\hat{\xvec}_i$ (and analogously $\hat{\yvec}_j$) is used as the input to the above model. In the simple case (which closely corresponds to the \emph{intra-sentence} attention model of \cite{Parikh2016}), we have \begin{align*} \hat{\xvec}_i = \left[\xvec_i ; \sum_{k=1}^n \frac{\exp \theta_{ki}}{\sum_{l=1}^n \exp \theta_{li}} \xvec_k \right] \end{align*} The word embeddings for the parsing LSTMs are also initialized with GloVe, and the parsing layer is shared between the two sentences. The forward/backward LSTMs for the parsing layer are $100$-dimensional.
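The inter-sentence attention step above can be sketched as follows (a hypothetical NumPy illustration with our own names); for brevity the comparison MLP $f(\cdot)$ is replaced by the identity, and the downstream MLPs $g(\cdot)$, $h(\cdot)$ and the label softmax are omitted.
\begin{verbatim}
import numpy as np

def row_softmax(E, axis):
    M = E.max(axis=axis, keepdims=True)
    P = np.exp(E - M)
    return P / P.sum(axis=axis, keepdims=True)

def inter_attention(X, Y):
    """X: (n, d) premise and Y: (m, d) hypothesis representations."""
    E = X @ Y.T                                           # e_ij (with f = identity)
    X_bar = np.concatenate([X, row_softmax(E, axis=1) @ Y], axis=1)    # attend over j
    Y_bar = np.concatenate([Y, row_softmax(E, axis=0).T @ X], axis=1)  # attend over i
    return X_bar, Y_bar
\end{verbatim}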
Additional training details include: batch size of $32$; training for $100$ epochs with Adagrad \citep{Duchi2011} where the global learning rate is $0.05$ and the sum of squared gradients is initialized to $0.1$; parameter initialization over a Gaussian distribution with mean $0$ and standard deviation $0.01$; gradient normalization at $5$. In the pretrained scenario, pretraining is done with Adam \citep{Kingma2015} with learning rate equal to $0.01$, and $\beta_1 = 0.9$, $\beta_2 = 0.999$. \section{Forward/Backward through the Inside-Outside Algorithm}\label{app:io} Figure~\ref{fig:io-fprop} shows the procedure for obtaining the parsing marginals from the input potentials. This corresponds to running the inside-outside version of Eisner's algorithm \citep{Eisner1996}. The intermediate data structures used during the dynamic programming algorithm are the (log) inside tables $\alpha$, and the (log) outside tables $\beta$. Both $\alpha, \beta$ are of size $n \times n \times 2 \times 2$, where $n$ is the sentence length. The first two dimensions encode the start/end index of the span (i.e. subtree). The third dimension encodes whether the root of the subtree is the left ($L$) or right ($R$) index of the span. The fourth dimension indicates if the span is complete ($1$) or incomplete ($0$). We can calculate the marginal distribution of each word's parent (for all words) in $O(n^3)$ using this algorithm. The backward pass through the inside-outside algorithm is slightly more involved, but still takes $O(n^3)$ time. Figure~\ref{fig:io-bprop} illustrates the backward procedure, which receives the gradient of the loss $\mcL$ with respect to the marginals, $\nabla^\mcL_p$, and computes the gradient of the loss with respect to the potentials $\nabla^\mcL_\theta$. The computations must be performed in the signed log-space semifield to handle logs of negative values. See Section~\ref{sec:e2e} and Table~\ref{tab:dlog} for more details. \end{document}
Evaluating Creative Language Generation: The Case of Rap Lyric Ghostwriting
1612.03205
Table 3: The results of the automated evaluation. The bold indicates the system with a lower similarity at the target rhyme density.
[ "[BOLD] Artist", "[BOLD] Avg Rhyme Density", "[BOLD] Baseline [BOLD] Similarity", "[BOLD] Baseline [BOLD] N-gram", "[BOLD] LSTM [BOLD] Similarity", "[BOLD] LSTM [BOLD] iteration" ]
[ [ "Tupac", "0.302467", "0.023823", "−1.57121", "0.065263", "−3167.99" ], [ "Aesop Rock", "0.348548", "0.745454", "7.199791", "0.460148", "12469.99" ], [ "DMX", "0.340826", "0.663113", "6.16003", "0.431234", "8271.148" ], [ "Drake", "0.340947", "0.586454", "5.268742", "0.51892", "9948.641" ], [ "Eminem", "0.325489", "0.337271", "2.761968", "0.301677", "8854.896" ], [ "Fabolous", "0.359554", "1.353241", "13.88348", "0.569096", "14971.56" ], [ "GZA", "0.280337", "0.519516", "4.059898", "0.616107", "14938.72" ], [ "Jay Z", "0.365125", "0.498746", "4.866775", "0.46294", "15146.64" ], [ "Lil’ Wayne", "0.362307", "0.618853", "5.894365", "0.405918", "9248.569" ], [ "Notorious B.I.G.", "0.383004", "0.701206", "7.320694", "0.427865", "3722.83" ], [ "Sage Francis", "0.414946", "0.763904", "7.726321", "0.240524", "−187.459" ], [ "[BOLD] Average", "-", "0.619234", "-", "0.409062", "-" ] ]
For each artist, we calculate their average rhyme density across all verses. We then use this value to determine at which iteration this rhyme density is achieved during generation (using the regression line for rhyme density). Next, we use the maximum similarity regression line to determine the maximum similarity score at that iteration. Low maximum similarity score indicates that we have maintained stylistic similarity while producing new, previously unseen lyrics. The reason is that in the beginning of training (in the LSTM’s case) and at a low n-gram length (for the baseline model), the models actually achieved a rhyme density that exceeded the artist’s average rhyme density. As a result, the rhyme density regression line hits the average rhyme density on a negative iteration.
\documentclass[11pt,letterpaper]{article} \usepackage[letterpaper]{geometry} \makeatletter \newcommand{\@BIBLABEL}{\@emptybiblabel} \newcommand{\@emptybiblabel}[1]{} \makeatother \usepackage[hidelinks]{hyperref} \DeclareMathOperator*{\argmax}{arg\,max} \setlength\titlebox{6.5cm} % Expanding the titlebox \graphicspath{ {images/} } \title{Evaluating Creative Language Generation: The Case of Rap Lyric Ghostwriting} \author{Peter Potash, Alexey Romanov, Anna Rumshisky \\ University of Massachusetts Lowell \\ Department of Computer Science \\ {\tt \{ppotash,aromanov,arum\}@cs.uml.edu} \\} \date{} \begin{document} \maketitle \begin{abstract} Language generation tasks that seek to mimic human ability to use language creatively are difficult to evaluate, since one must consider creativity, style, and other non-trivial aspects of the generated text. The goal of this paper is to develop evaluation methods for one such task, ghostwriting of rap lyrics, and to provide an explicit, quantifiable foundation for the goals and future directions of this task. Ghostwriting must produce text that is similar in style to the emulated artist, yet distinct in content. We develop a novel evaluation methodology that addresses several complementary aspects of this task, and illustrate how such evaluation can be used to meaningfully analyze system performance. We provide a corpus of lyrics for 13 rap artists, annotated for stylistic similarity, which allows us to assess the feasibility of manual evaluation for generated verse. \end{abstract} \section{Introduction} Language generation tasks are often among the most difficult to evaluate. Evaluating machine translation, image captioning, summarization, and other similar tasks is typically done via comparison with existing human-generated ``references''. However, human beings also use language creatively, and for the language generation tasks that seek to mimic this ability, determining how accurately the generated text represents its target is insufficient, as one also needs to evaluate creativity and style. We believe that one of the reasons such tasks receive little attention is the lack of sound evaluation methodology, without which no task is well-defined, and no progress can be made. The goal of this paper is to develop an evaluation methodology for one such task, ghostwriting, or more specifically, ghostwriting of rap lyrics. Ghostwriting is ubiquitous in politics, literature, and music. As such, it introduces a distinction between the performer/presenter of text, lyrics, etc, and the creator of text/lyrics. The goal of ghostwriting is to present something in a style that is believable enough to be credited to the performer. In the domain of rap specifically, rappers sometimes function as ghostwriters early on before embarking on their own public careers, and there are even businesses that provide written lyrics as a service\footnote{\url{http://www.rap-rebirth.com/},\\ \url{http://www.precisionwrittens.com/rap-ghostwriters-for-hire/}}. The goal of automatic ghostwriting is therefore to create a system that can take as input a given artist's work and generate \textbf{similar} yet \textbf{unique} lyrics. 
Our objective in this work is to provide a quantifiable direction and foundation for the task of rap lyric generation and similar tasks through (1) developing an evaluation methodology for such models, and (2) illustrating how such evaluation can be used to analyze system performance, including advantages and limitations of a specific language model developed for this task. As an illustration case, we use the ghostwriter model previously proposed in exploratory work by Potash et al. \shortcite{potashghostwriter}, which uses a recurrent neural network (RNN) with Long Short-Term Memory (LSTM) for rap lyric generation. The following are the main contributions of this paper. We present a comprehensive manual evaluation methodology of the generated verses along three key aspects: fluency, coherence, and style matching. We introduce an improvement to the semi-automatic methodology used by Potash et al. \shortcite{potashghostwriter} that automatically penalizes repetitive text, which removes the need for manual intervention and enables a large-scale analysis. Finally, we build a corpus of lyrics for 13 rap artists, each with his own unique style, and conduct a comprehensive evaluation of the LSTM model performance using the new evaluation methodology. The corpus includes style matching annotation for select verses in the dataset, which can form a gold standard for future work on automatic representation of similarity between artists' styles. The resulting rap lyric dataset is publicly available from the authors' website. Additionally, we believe that the annotation method we propose for manual style evaluation can be used for other similar generation tasks. One example is ``Deep Art'' work in the computer vision community that seeks to apply the style of a particular painting to other images \cite{gatys2015neural,li2016combining}. One of the drawbacks of such work is a lack of systematic evaluation. For example,~\newcite{li2016combining} compared the results of their model with previous work by manual inspection during an informal user study. The presence of a systematic formal evaluation process would lead to a clearer comparison between models and facilitate progress in this area of research. With this in mind, we make the interface used for style evaluation in this work available for public use. Our evaluation results highlight the truly multi-faceted nature of the ghostwriting task. While having a single measure of success is clearly desirable, our analysis shows the need for complementary metrics that evaluate different components of the overall task. Indeed, despite the fact that our test-case LSTM model outperforms a baseline model across numerous artists based on automated evaluation, the full set of evaluation metrics is able to showcase the LSTM model's strengths and weaknesses. The coherence evaluation demonstrates the difficulty of incorporating large amounts of training data into the LSTM model, which intuitively would be desirable to create a flexible ghostwriting model. The style matching experiments suggest that the LSTM is effective at capturing an artist's general style. However, this may indicate that it tends to form `average' verses, which are then more likely to be matched with existing verses from an artist rather than another random verse from the same artist. Overall, the evaluation methodology we present provides an explicit, quantifiable foundation for the ghostwriting task, allowing for a deeper understanding of the task's goals and future research directions.
\section{Related Work} In the past few years there has been a significant amount of work dedicated to the evaluation of natural language generation~\cite{hastie2014comparative}, dealing with different aspects of evaluation methodology. However, most of this work focuses on simple tasks, such as referring expression generation. For example, Belz and Kow \shortcite{belz2011discrete} investigated the impact of continuous and discrete scales for generated weather descriptions, as well as simple image descriptions that typically consist of a few words (e.g., ``\verb|the small blue fan|''). Previous work that explores text generation for artistic purposes, such as poetry and lyrics, generally uses either automated or manual evaluation. In terms of manual evaluation, Barbieri et al. \shortcite{barbieri2012markov} had a set of annotators evaluate generated lyrics along two separate dimensions: grammar and semantic relatedness to the song title. The annotators rated the dimensions with scores 1-3. A similar strategy was used by Gerv\'{a}s \shortcite{gervas2000wasp}, where the author had annotators evaluate generated verses with regard to syntactic correctness and overall aesthetic value, providing scores in the range 1-5. Wu et al. \shortcite{wu2013learning} had annotators determine the effectiveness of various systems based on fluency as well as rhyming. Some heuristic-based automated approaches have also been used. For example, Oliveira et al. \shortcite{oliveira2014adapting} use a simple automatic heuristic that awards lines for ending in a termination previously used in the generated stanza. Malmi et al. \shortcite{malmi2015dopelearning} evaluate their generated lyrics based on the verses' rhyme density, on the assumption that a higher rhyme density means better lyrics. Note that none of the works cited above provide a comprehensive evaluation methodology, but rather focus on certain specific aspects of generated verses, such as rhyme density or syntactic correctness. Moreover, the methodology for generating lyrics, proposed by the various authors, influences the evaluation process. For instance,~\newcite{barbieri2012markov} did not evaluate the presence of rhymes because the model was constrained to produce only rhyming verses. Furthermore, none of the aforementioned works implement models that generate complete verses at the token level (including verse structure), which is the goal of the models we aim to evaluate. In contrast to previous approaches that evaluate whole verses, our evaluation methodology uses a more fine-grained, line-by-line scheme, which makes it easier for human annotators, as they no longer need to evaluate the whole verse at once. In addition, despite the fact that each line is annotated using a discrete scale, our methodology produces a continuous numeric score for the whole verse, enabling better comparison.
\section{Dataset} For our evaluation experiments, we selected the following list of artists in four different categories: \begin{itemize} \item Three top-selling rap artists according to Wikipedia\footnote{\url{http://en.wikipedia.org/wiki/List_of_best-selling_music_artists}}: Eminem, Jay Z, Tupac \item Artists with the largest vocabulary according to Pop Chart Lab\footnote{\url{http://popchartlab.com/products/the-hip-hop-flow-chart}}: Aesop Rock, GZA, Sage Francis \item Artists with the smallest vocabulary according to Pop Chart Lab: DMX, Drake \item Best-classified artists from Hirjee and Brown \shortcite{hirjee2010rhyme} using rhyme detection features: Fabolous, Notorious B.I.G., Lil' Wayne \end{itemize} We collected all available songs from the above artists from the site \textit{The Original Hip-Hop (Rap) Lyrics Archive - OHHLA.com - Hip-Hop Since 1992}\footnote{\url{http://www.ohhla.com/}}. We removed the metadata, line repetition markup, and chorus lines, and tokenized the lyrics using the NLTK library~\cite{BirdKleinLoper09}. Since the preprocessing was done heuristically, the resulting dataset may still contain some text that is not actual verse, but rather dialogue or chorus lines. We therefore filter out all verses that are shorter than 20 tokens. Statistics of our dataset are shown in Table~\ref{table:dataset-stat}. \section{Evaluation Methodology} We believe that adequate evaluation for the ghostwriting task requires both manual and automatic approaches. The automated evaluation methodology enables large-scale analysis of the generated verse. However, given the nature of the task, the automated evaluation is not able to assess certain critical aspects of fluency and style, such as the vocabulary, tone, and themes preferred by a particular artist. In this section, we present a manual methodology for evaluating these aspects of the generated verse, as well as an improvement to the automatic methodology proposed by Potash et al. \shortcite{potashghostwriter}. \subsection{Manual Evaluation} We have designed two annotation tasks for manual evaluation. The first task is to determine how fluent and coherent the generated verses are. The second task is to evaluate manually how well the generated verses match the style of the target artist. \paragraph*{Fluency/Coherence Evaluation} \label{sec:Fluency/Coherence Evaluation} Given a generated verse, we ask annotators to determine the fluency and coherence of the lyrics. Even though our evaluation is for systems that produce entire verses, we follow the work of Wu \shortcite{wuevaluating} and annotate fluency, as well as coherence, at the line level. To assess fluency, we ask to what extent a given line can be considered a valid English utterance. Since a language model may produce highly disjointed verses as it progresses through the training process, we offer the annotator three options for grading fluency: strongly fluent, weakly fluent, and not fluent. If a line is disjointed, i.e., it is only fluent in specific segments of the line, the annotators are instructed to mark it as weakly fluent. The grade of not fluent is reserved for highly incoherent text. To assess coherence, we ask the annotator how well a given line matches the preceding line. That is, how believable is it that these two lines would follow each other in a rap verse. We offer the annotators the same choices as in the fluency evaluation: strongly coherent, weakly coherent, and not coherent.
During the training process, a language model may output the same line repeatedly. We account for this in our coherence evaluation by defining the consecutive repetition of a line as not coherent. This is important to define because the line on its own may be strongly fluent; however, a coherent verse cannot consist of a single fluent line repeated indefinitely. \paragraph*{Style Matching} The goal of the style matching annotation is to determine how well a given verse captures the style of the target artist. In this annotation task, a user is presented with an evaluation verse and asked to compare it against four other verses. The goal is to pick the verse that is written in a similar style. One of the four choices is always a verse from the same artist that was used to generate the verse being evaluated. The other three verses are chosen from the remaining artists in our dataset. Each verse is evaluated in this manner four times, each time against different verses, so that it has the chance to get matched with a verse from each of the remaining twelve artists. The generated verse is considered stylistically consistent if the annotators tend to select the verse that belongs to the target artist. To evaluate the difficulty of this task, we also perform style matching annotation for authentic verse, in which the evaluated verse is not generated, but rather is an actual existing verse from the target artist.\footnote{We have made the annotation interface available at \url{https://github.com/placeholder}.} \subsection{Automated Evaluation} \label{subsection:auto_eval} The automated evaluation we describe below attempts to capture computationally the dual aspects of ``unique yet similar'' in a manner originally proposed by \newcite{potashghostwriter}. \paragraph*{Uniqueness of Generated Lyrics} \label{sec:eval-methods-sim} We use a modified tf-idf representation for verses, and calculate cosine similarity between generated verses and the verses from the training data to determine novelty (or lack thereof). \iffalse In order to evaluate the novelty of generated lyrics, we compare the similarity of the generated lyrics to the lyrics in our training set. We used an algorithm proposed by \cite{mahedero2005natural} for calculating the similarity between produced lyrics and all verses from the same artist. This algorithm is based on the well-known Inverse Document Frequency and cosine distance between documents. First, we build the Term-Document Matrix with weights for each token in each song: \begin{equation} \label{eq:sim_vectors} w_{ij} = f_{ij} \log\big( \frac{N}{n_{j}} \big) \end{equation} where $N$ is the total number of documents (verses, in our case), $n_{j}$ is the number of verses that contains term $j$ and $f_{ij}$ is the frequency of term $j$ in the $ith$ verse. Using this matrix, we can calculate the cosine distance between verses and use it as measure of similarity. Verses that have a similar distribution of word usage will have a high cosine similarity. \fi In order to more directly penalize generated verses that are primarily the reproduction of a single verse from the training set, we calculate the maximum similarity score across all training verses. That is, we penalize generated verses that reproduce text from a single training verse, which in turn rewards generated verses that draw from numerous training verses.
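As an illustration, the maximum similarity score can be computed along the following lines; this is a hypothetical sketch using a plain scikit-learn tf-idf representation rather than the modified representation described above, and the function and variable names are ours.
\begin{verbatim}
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def max_similarity(generated_verse, training_verses):
    vectorizer = TfidfVectorizer()
    train_matrix = vectorizer.fit_transform(training_verses)  # one row per verse
    generated_vec = vectorizer.transform([generated_verse])
    sims = cosine_similarity(generated_vec, train_matrix)     # shape (1, num_verses)
    # a high maximum indicates the verse largely reproduces one training verse
    return sims.max()
\end{verbatim}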
\paragraph*{Stylistic Similarity via Rhyme Density of Lyrics} We use the rhyme density method proposed by \newcite{hirjee2010using} to evaluate how well the generated verse models an artist's style. The point of an effective system is not to produce arbitrary rhymes: it is to produce rhyme types and rhyme frequencies similar to those of the target artist. We note that the ghostwriter methodology we implement trains exclusively on the verses of a given artist, causing the vocabulary of the generated verse to be closed with respect to the training data. In this case, assessing how similar the generated vocabulary is to the target artist is not important. Instead, we focus on rhyme density, which is defined as the number of rhymed syllables divided by the total number of syllables \cite{hirjee2010using}. Certain artists distinguish themselves by having more complicated rhyme schemes, such as the use of internal\footnote{e.g. ``New York City gritty committee pity the fool'' and ``How I made it you salivated over my calibrated''} or polysyllabic rhymes\footnote{e.g. ``But it was your op to shop stolen art/Catch a swollen heart form not rolling smart''.}. Rhyme density is able to capture this in a single metric, since the tool we use is able to detect these various forms of rhymes. Moreover, as was shown in Wu \shortcite{wuevaluating}, the inter-annotator agreement (IAA) for manual rhyme detection is low (the highest IAA was only 0.283 using a two-scale annotation scheme), which is expected due to the subjective nature of the task. Therefore, an objective automatic methodology is desirable. Since this tool is trained on a distinct corpus of lyrics, it can provide a ``uniform'' experience and give an impartial and objective score. However, the rhyme detection method is not designed to deal with highly repetitive text, which the LSTM model often produces in the early stages of training. Since the same phoneme is repeated (because the same word is repeated), the rhyme detection tool generates a false positive. \newcite{potashghostwriter} deal with this by manually inspecting the rhyme densities of verses generated in the early stages of training to determine if a generated verse should be kept for the evaluation procedure. This approach is suitable for their work since they processed only one artist, but it is clearly not scalable, and therefore not applicable to our case. In order to fully automate this method, we propose to handle highly repetitive text without assigning it a high rhyme density by weighting the rhyme density of a given verse by its entropy. More specifically, for a given verse, we calculate entropy at the token level and divide by the total number of tokens in that verse. Verses with highly repetitive text will have a low entropy, which results in down-weighting the rhyme density of verses that produce false positive rhymes due to their repetitive text. To evaluate our method, we applied it to the artist used by Potash et al. \shortcite{potashghostwriter} and obtained exactly the same average rhyme density without any manual inspection of the generated verses; this despite the presence of false positive rhymes automatically detected in the beginning of training. \paragraph*{Merging Uniqueness and Similarity} Since ghostwriting is a balancing act of the two opposing forces of textual uniqueness and stylistic similarity, we want a low correlation between rhyme density (stylistic similarity) and maximum verse similarity (lack of textual uniqueness).
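Returning briefly to the entropy weighting above, one possible reading is sketched below; the exact weighting formula is not spelled out in the text, so the multiplication by the per-token entropy is an assumption, and the rhyme density itself is assumed to come from the external rhyme detection tool.
\begin{verbatim}
import math
from collections import Counter

def weighted_rhyme_density(tokens, rhyme_density):
    """rhyme_density: score returned by the external rhyme detection tool."""
    counts = Counter(tokens)
    total = len(tokens)
    entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
    # highly repetitive verses have low entropy, so spurious rhyme densities
    # caused by repeated words are down-weighted
    return rhyme_density * (entropy / total)
\end{verbatim}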
However, our goal is not to have a high rhyme density, but rather to have a rhyme density similar to the target artist, while simultaneously keeping the maximum similarity score low. As the model overfits the training data, both the value of maximum similarity and the rhyme density will increase, until the model generates the original verse directly. Therefore, our goal is to evaluate the value of the maximum similarity at the point where the rhyme density has the value of the target artist. In order to accomplish this, we follow \newcite{potashghostwriter} and plot the values of rhyme density and maximum similarity obtained at different points during model training. We use regression lines for these points to identify the value of the maximum similarity line at the point where the rhyme density line has the value of the target artist. We give more detail below. \section{Lyric Generation Experiments} \label{sec:lyric-generation-experimnents} \iffalse \subsection{Baseline} To compare with the results of an LSTM model \cite{potashghostwriter}, we followed the work of \cite{barbieri2012markov} and created a Markov model for lyric generation. Since the goal of our work is to make an unsupervised system, we do not use any constraints or templates to produce the lyrics. Thus, our baseline simplifies to a basic n-gram model. Given a history of $w_{k+n-1}$,...,$w_{k}$, the system generates a new token $t$ as follows: \begin{equation} \label{eq:ngram_generation} \begin{split} & P(w_{k+n}=t|w_{k+n-1},...,w_{k}) = \\ &\frac{|w_{k}...w_{k+n-1}t|}{|w_{k}...w_{k+n-1}\bullet|} \end{split} \end{equation} where $|w_{k}...w_{k+n-1}t|$ is the number of times the context $w_{k+n-1}$,...,$w_{1}$ is followed by $t$ in the training data, and $|w_{k}...w_{k+n-1}\bullet|$ is the number of times the context appears followed by any token. There is the possibility, particularly when $n$ is large, that the context has never been encountered in the training data. When this occurs, we back off to a smaller n-gram model: \begin{equation} \label{eq:ngram_backoff} \begin{split} &P(w_{k+n}=t|w_{k+n-2},...,w_{k}) =\\ &\frac{|w_{k}...w_{k+n-2}\bullet t|}{|w_{k}...w_{k+n-2}\bullet\bullet|} \end{split} \end{equation} The model may have to backoff multiple times before it encounters context it has seen in the training data. Once we back off to the point where we compute $P(w_{n+k}=t|w_{k})$ we are guaranteed to have at least one non-zero probability, because $w_{k}$ must have appeared in the vocabulary for it to have been generated previously (see section 4.3 on model initialization). Because of this, we do not need to implement smoothing into the model. Note that the model to which we backoff is not necessarily a lower order n-gram model, but rather a lower order skip-gram model. Also, given the initial context of just the ``$<$startVerse$>$'' token, the model initializes as a unigram model, then becomes a bigram model, and so on until there is enough context to use the full n-gram model. \fi The main generative model we use in our evaluation experiments is an LSTM. Similar to \newcite{potashghostwriter}, we use an n-gram model as a baseline system for automated evaluation. %The n-gram model backs off to a lower order skip-gram model, with the goal mirroring the LSTM's ability to capture long-range dependencies. We refer the reader to the original work for a detailed description. After every 100 iterations of training\footnote{Training is done in batches with two verses per batch.} the LSTM model generates a verse. 
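These generated verses supply the (iteration, rhyme density) and (iteration, maximum similarity) points for the regression procedure of Section~\ref{subsection:auto_eval}, which can be sketched as follows (a hypothetical NumPy illustration with our own function and variable names, not the actual evaluation code):
\begin{verbatim}
import numpy as np

def similarity_at_target_density(iters, rhyme_densities, max_sims, target_density):
    """Fit lines to the generated data points and read off the maximum
    similarity at the iteration where the fitted rhyme density equals the
    artist's average (this iteration can be negative, cf. the results table)."""
    rd_slope, rd_intercept = np.polyfit(iters, rhyme_densities, deg=1)
    sim_slope, sim_intercept = np.polyfit(iters, max_sims, deg=1)
    target_iter = (target_density - rd_intercept) / rd_slope
    return target_iter, sim_slope * target_iter + sim_intercept
\end{verbatim}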
For the baseline model, we generate five verses at values 1-9 for $n$. We see a correspondence between higher $n$ and higher iteration: as both increase, the models become more `fit' to the training data. For the baseline model, we use the verses generated at different n-gram lengths ($n\in \{1,...,9\}$) to obtain the values for regression. At every value of $n$, we take the average rhyme density and maximum similarity score of the five verses that we generate to create a single data point for rhyme density and maximum similarity score, respectively. To enable comparison, we also create nine data points from the verses generated by the LSTM as follows. A separate model for each artist is trained for a minimum of 16,400 iterations. We take the verses generated every 2,000 iterations, from 0 to 16,000 iterations, giving us nine points. The averages for each point are obtained by using the verses generated in iterations $\pm x, x \in \{100,200,300,400\}$ for each interval of 2,000. \section{Results} We present the results of our evaluation experiments using both manual and automated evaluations. \subsection{Fluency/Coherence} In order to fairly compare the fluency/coherence of verses across artists, we use the verses generated by each artist's model at 16,000 iterations. We apply the fluency/coherence annotation methodology from Section \ref{sec:Fluency/Coherence Evaluation}. Each line is annotated by two annotators. Annotation results are shown in Figure~\ref{fig:fluency_comparasion} and Figure~\ref{fig:coherence_comparasion}. For each annotated verse, we report the percentage of lines annotated as strongly fluent, weakly fluent, and not fluent, as well as the corresponding percentages for coherence. We convert the raw annotation results into a single score for each verse by treating the labels ``strongly fluent'', ``weakly fluent'', and ``not fluent'' as numeric values 1, 0.5, and 0, respectively. Treating each annotation on a given line separately, we calculate the average numeric rating for a given verse: \begin{equation} \mbox{\textit{Fluency}} = \frac{\#\mbox{\textit{sf}} + 0.5 \#\mbox{\textit{wf}}}{\#a} \end{equation} where $\#\mbox{\textit{sf}}$ is the number of times any line is labeled strongly fluent, $\#\mbox{\textit{wf}}$ is the number of times any line is labeled weakly fluent, and $\#a$ is the total annotations provided for a verse, which is equal to the number of lines $\times$ 2. \textit{Coherence} is calculated in a similar manner. Raw inter-annotator agreement (IAA) for fluency annotation was 0.67. For coherence annotation, the IAA was 0.43. We believe coherence has a lower agreement because it is more semantic, as opposed to syntactic, in nature, causing it to be more subjective. Note that while the agreement is relatively low, it is expected, given the subjective nature of the task. For example,~\newcite{wuevaluating} report similar agreement values for the fluency annotation they perform. \subsection{Style Matching} We performed style-matching annotation for the verses generated at iterations 16,000--16,400 for each artist. For the experiment with authentic verses, we randomly chose five verses from each artist, with a verse length of at least 40 tokens. Each page was annotated twice, by native English-speaking rap fans. The results of our style matching annotations are shown in Table \ref{tbl:style_matches}. We present two different views of the results. 
First, each annotation for a page is considered separately and we calculate: \begin{equation} Match \% = \frac{\#m}{\#a} \end{equation} where $\#m$ is the number of times, on a given page, the chosen verse actually came from the target artist, and $\#a$ is the total number of annotations done. For a given artist, five verses were evaluated, each verse appeared on four separate pages, and each page is annotated twice, so $\#a$ is equal to 40. Since in each case (i.e., page) the classes are different, we cannot use Fleiss's kappa directly. Raw agreement for style annotation, which corresponds to the percentage of times annotators picked the same verse (whether or not they are correct), is shown in the column `Raw agreement \%' in Table \ref{tbl:style_matches}. We also report annotators' joint ability to guess the target artist correctly, which we compute as follows: \begin{equation} Match_A\% = \frac{\#m_A}{\#s_A} \end{equation} where $\#s_A$ is the number of times the annotators agreed on a verse on the same page, and $\#m_A$ is the number of times that the agreed-upon verse is from the target artist. \subsubsection{Artist Confusion} The results of style-matching annotation also provide us with an interesting insight into the similarity between two artists' styles. This is captured by the \textit{confusion} between two artists during the annotation of the pages with authentic verses, which is computed as follows: \begin{equation} Confusion(a,b) = \frac{\#c(a,b)+\#c(b,a)}{\#p(a,b)+\#p(b,a)} \end{equation} \noindent where $\#p(a,b)$ is the number of times a verse from artist $a$ is presented for evaluation and a verse from artist $b$ is shown as one of four choices; $\#c(a,b)$ is the number of times the verse from artist $b$ was chosen as the matching verse. The resulting confusion matrix is presented in Figure \ref{fig:confusions_baseline}. We intend for this data to provide a gold standard for future experiments that would attempt to encode the similarity of artists' styles. \subsection{Automated Evaluation} \label{sec:automated_evaluation} The results of our automated evaluation are shown in Table \ref{tbl:regression_results}. For each artist, we calculate their average rhyme density across all verses. We then use this value to determine at which iteration this rhyme density is achieved during generation (using the regression line for rhyme density). Next, we use the maximum similarity regression line to determine the maximum similarity score at that iteration. A low maximum similarity score indicates that we have maintained stylistic similarity while producing new, previously unseen lyrics. Note the presence of negative numbers in Table~\ref{tbl:regression_results}. The reason is that in the beginning of training (in the LSTM's case) and at a low n-gram length (for the baseline model), the models actually achieved a rhyme density that exceeded the artist's average rhyme density. As a result, the rhyme density regression line hits the average rhyme density on a negative iteration. \section{Discussion} In order to better understand the interaction between the four metrics we have introduced in this paper, we examined correlation coefficients between different measures of quality for generated verse (see Table \ref{tbl:coherence_fluency_sim_math_correlation}). The lack of strong correlation supports the notion that different aspects of verse quality should be addressed separately. Moreover, the metrics are in fact complementary.
Even the measures of \textit{fluency} and \textit{coherence}, despite sharing a similar goal, have a relatively low correlation of 0.4. Such low correlations emphasize our contribution, since other works~\cite{barbieri2012markov,wuevaluating,malmi2015dopelearning} do not provide a comprehensive evaluation methodology, and evaluate just one or two particular aspects. For example,~\newcite{wuevaluating} evaluated only fluency and rhyming, and~\newcite{barbieri2012markov} evaluated only syntactic correctness and semantic relatedness to the title, whereas we present complementary approaches for evaluating different aspects of the generated verses. Interestingly, the number of verses a rapper has in our dataset has a strong negative correlation with coherence score (cf. Table~\ref{tbl:coherency_fluency_corr}). This can be explained by the following consideration: at iteration 16,000, the model for an artist with a smaller number of verses has seen the same verses more times than a model trained on a larger number of verses. Therefore, it is easier for the former to produce more coherent lyrics since it saw more of the same patterns. As a result, models trained on a larger number of verses have a lower coherence score. For example, Lil' Wayne has the most verses in our data, and correspondingly, the model for his verses has the worst coherence score. Note that the fluency score does not have this negative correlation with the number of verses. Based on our evaluation, 16,000 iterations is enough to learn a language model for the given artist that produces fluent lines. However, these lines will not necessarily form a coherent verse if the artist has a large number of verses. As can be seen from Table~\ref{tbl:style_matches}, the $Match\%$ score suggests that the LSTM-generated verses are able to capture the style of the artist as well as the original verses. Furthermore, $Match_A\%$ is significantly higher for the LSTM model, which means that the annotators agreed on matching verses more frequently. We believe this means that the LSTM model, trained on all verses from a given artist, is able to capture the artist's ``average'' style, whereas authentic verses represent a random selection that is less likely, statistically speaking, to be similar to another random verse. Note that, as we expect, there is a strong correlation between the number of tokens in the artist's data and the frequency of agreed-upon correct style matches (cf. Table \ref{tbl:coherency_fluency_corr}). Since verses vary in length, this correlation is not observed for the number of verses. Finally, the lack of strong correlation with vocabulary richness suggests that token uniqueness is not as important as the sheer volume. One aspect of the generated verse we have not discussed so far is the structure of the generated verse. For example, the length of the generated verses should be evaluated, since the models we examined do generate line breaks and also decide when to end the verse. Table \ref{tbl:max_len_epoch} shows the longest verse generated for each artist, and also the point at which it was achieved during the training. We note that although 10 of the 11 models are able to generate long verses (up to a full standard deviation above the average verse length for that author), it takes a substantial amount of time, and the correlation between the average verse length for a given artist and the verse length achieved by the model is weak (0.258). 
This suggests that modeling the specific verse structure, including length, is one aspect that requires improvement. Lastly, we note that the fully automated methodology we propose is able to replicate the results of the previously available semi-automatic method for the rapper Fabolous, which was the only artist evaluated by \newcite{potashghostwriter}. Furthermore, the results of automated evaluation for the 11 artists confirm that the LSTM model generalizes better than the baseline model. \section{Conclusions and Future Work} In this paper, we have presented a comprehensive evaluation methodology for the task of ghostwriting rap lyrics, which captures complementary aspects of this task and its goals. We developed a manual evaluation method that assesses several key properties of generated verse, and created a data set of authentic verse, manually annotated for style matching. A previously proposed semi-automatic evaluation method has now been fully automated, and shown to replicate results of the original method. We have shown how the proposed evaluation methodology can be used to evaluate an LSTM-based ghostwriter model. We believe our evaluation experiments also clearly demonstrate that complementary evaluation methods are required to capture different aspects of the ghostwriting task. Lastly, our evaluation provides key insights into future directions for generative models. For example, the automated evaluation shows how the LSTM's inability to integrate new vocabulary makes it difficult to achieve truly desirable similarity scores; future generative models can draw on the work of \newcite{graves2013generating} and \newcite{bowman2015generating} in an attempt to leverage other artists' lyrics. \iffalse \section*{Acknowledgments} Do not number the acknowledgment section. Do not include this section when submitting your paper for review. \fi \bibliographystyle{acl2012} \end{document}
Evaluating Creative Language Generation: The Case of Rap Lyric Ghostwriting
1612.03205
Table 5: The maximum lengths of generated verses and % of training completed on which the verse is generated
[ "[BOLD] Artist", "[BOLD] Max Len", "[BOLD] % of training completed" ]
[ [ "Tupac", "454", "69.7" ], [ "Aesop Rock", "450", "91.0" ], [ "DMX", "361", "64.9" ], [ "Drake", "146", "82.3" ], [ "Eminem", "452", "90.8" ], [ "Fabolous", "278", "47.3" ], [ "GZA", "433", "81.1" ], [ "Jay Z", "449", "98.5" ], [ "Lil’ Wayne", "253", "92.7" ], [ "Nototious B.I.G.", "253", "83.0" ], [ "Sage Francis", "280", "53.9" ], [ "[BOLD] Average", "-", "77.8" ] ]
One aspect of the generated verse we have not discussed so far is the structure of the generated verse. For example, the length of the generated verses should be evaluated, since the models we examined do generate line breaks and also decide when to end the verse. We note that although 10 of the 11 models are able to generate long verses (up to a full standard deviation above the average verse length for that author), it takes a substantial amount of time, and the correlation between the average verse length for a given artist and the verse length achieved by the model is weak (0.258). This suggests that modeling the specific verse structure, including length, is one aspect that requires improvement.
\documentclass[11pt,letterpaper]{article} \usepackage[letterpaper]{geometry} \makeatletter \newcommand{\@BIBLABEL}{\@emptybiblabel} \newcommand{\@emptybiblabel}[1]{} \makeatother \usepackage[hidelinks]{hyperref} \DeclareMathOperator*{\argmax}{arg\,max} \setlength\titlebox{6.5cm} % Expanding the titlebox \graphicspath{ {images/} } \title{Evaluating Creative Language Generation: The Case of Rap Lyric Ghostwriting} \author{Peter Potash, Alexey Romanov, Anna Rumshisky \\ University of Massachusetts Lowell \\ Department of Computer Science \\ {\tt \{ppotash,aromanov,arum\}@cs.uml.edu} \\} \date{} \begin{document} \maketitle \begin{abstract} Language generation tasks that seek to mimic human ability to use language creatively are difficult to evaluate, since one must consider creativity, style, and other non-trivial aspects of the generated text. The goal of this paper is to develop evaluation methods for one such task, ghostwriting of rap lyrics, and to provide an explicit, quantifiable foundation for the goals and future directions of this task. Ghostwriting must produce text that is similar in style to the emulated artist, yet distinct in content. We develop a novel evaluation methodology that addresses several complementary aspects of this task, and illustrate how such evaluation can be used to meaningfully analyze system performance. We provide a corpus of lyrics for 13 rap artists, annotated for stylistic similarity, which allows us to assess the feasibility of manual evaluation for generated verse. \end{abstract} \section{Introduction} Language generation tasks are often among the most difficult to evaluate. Evaluating machine translation, image captioning, summarization, and other similar tasks is typically done via comparison with existing human-generated ``references''. However, human beings also use language creatively, and for the language generation tasks that seek to mimic this ability, determining how accurately the generated text represents its target is insufficient, as one also needs to evaluate creativity and style. We believe that one of the reasons such tasks receive little attention is the lack of sound evaluation methodology, without which no task is well-defined, and no progress can be made. The goal of this paper is to develop an evaluation methodology for one such task, ghostwriting, or more specifically, ghostwriting of rap lyrics. Ghostwriting is ubiquitous in politics, literature, and music. As such, it introduces a distinction between the performer/presenter of text, lyrics, etc, and the creator of text/lyrics. The goal of ghostwriting is to present something in a style that is believable enough to be credited to the performer. In the domain of rap specifically, rappers sometimes function as ghostwriters early on before embarking on their own public careers, and there are even businesses that provide written lyrics as a service\footnote{\url{http://www.rap-rebirth.com/},\\ \url{http://www.precisionwrittens.com/rap-ghostwriters-for-hire/}}. The goal of automatic ghostwriting is therefore to create a system that can take as input a given artist's work and generate \textbf{similar} yet \textbf{unique} lyrics. 
Our objective in this work is to provide a quantifiable direction and foundation for the task of rap lyric generation and similar tasks through (1) developing an evaluation methodology for such models, and (2) illustrating how such evaluation can be used to analyze system performance, including advantages and limitations of a specific language model developed for this task. As an illustration case, we use the ghostwriter model previously proposed in exploratory work by Potash et al. \shortcite{potashghostwriter}, which uses a recurrent neural network (RNN) with Long Short-Term Memory (LSTM) for rap lyric generation. The following are the main contributions of this paper. We present a comprehensive manual evaluation methodology of the generated verses along three key aspects: fluency, coherence, and style matching. We introduce an improvement to the semi-automatic methodology used by Potash et al. \shortcite{potashghostwriter} that automatically penalizes repetitive text, which removes the need for manual intervention and enables a large-scale analysis. Finally, we build a corpus of lyrics for 13 rap artists, each with his own unique style, and conduct a comprehensive evaluation of the LSTM model performance using the new evaluation methodology. The corpus includes style matching annotation for select verses in the dataset, which can form a gold standard for future work on automatic representation of similarity between artists' styles. The resulting rap lyric dataset is publicly available from the authors' website. Additionally, we believe that the annotation method we propose for manual style evaluation can be used for other similar generation tasks. One example is ``Deep Art'' work in the computer vision community that seeks to apply the style of a particular painting to other images \cite{gatys2015neural,li2016combining}. One of the drawbacks of such work is a lack of systematic evaluation. For example,~\newcite{li2016combining} compared the results of the model with previous work by doing a manual inspection during an informal user study. The presence of a systematic formal evaluation process would lead to a clearer comparison between models and facilitate progress in this area of research. With this in mind, we make the interface used for style evaluation in this work available for public use. Our evaluation results highlight the truly multi-faceted nature of the ghostwriting task. While having a single measure of success is clearly desirable, our analysis shows the need for complementary metrics that evaluate different components of the overall task. Indeed, despite the fact that our test-case LSTM model outperforms a baseline model across numerous artists based on automated evaluation, the full set of evaluation metrics is able to showcase the LSTM model's strengths and weaknesses. The coherence evaluation demonstrates the difficulty of incorporating large amounts of training data into the LSTM model, which intuitively would be desirable to create a flexible ghostwriting model. The style matching experiments suggest that the LSTM is effective at capturing an artist's general style. However, this may indicate that it tends to form `average' verses, which are then more likely to be matched with existing verses from an artist rather than another random verse from the same artist. Overall, the evaluation methodology we present provides an explicit, quantifiable foundation for the ghostwriting task, allowing for a deeper understanding of the task's goals and future research directions. 
\section{Related Work} In the past few years there has been a significant amount of work dedicated to the evaluation of natural language generation~\cite{hastie2014comparative}, dealing with different aspects of evaluation methodology. However, most of this work focuses on simple tasks, such as referring expression generation. For example, Belz and Kow \shortcite{belz2011discrete} investigated the impact of continuous and discrete scales for generated weather descriptions, as well as simple image descriptions that typically consist of a few words (e.g., ``\verb|the small blue fan|''). Previous work that explores text generation for artistic purposes, such as poetry and lyrics, generally uses either automated or manual evaluation. In terms of manual evaluation, Barbieri et al. \shortcite{barbieri2012markov} have a set of annotators evaluate generated lyrics along two separate dimensions: grammar and semantic relatedness to song title. The annotators rated the dimensions with scores 1-3. A similar strategy was used by Gerv\'{a}s \shortcite{gervas2000wasp}, where the author had annotators evaluate generated verses with regard to syntactic correctness and overall aesthetic value, providing scores in the range 1-5. Wu et al. \shortcite{wu2013learning} had annotators determine the effectiveness of various systems based on fluency as well as rhyming. Some heuristic-based automated approaches have also been used. For example, Oliveira et al. \shortcite{oliveira2014adapting} use a simple automatic heuristic that awards lines for ending in a termination previously used in the generated stanza. Malmi et al. \shortcite{malmi2015dopelearning} evaluate their generated lyrics based on the verses' rhyme density, on the assumption that a higher rhyme density means better lyrics. Note that none of the work cited above provides a comprehensive evaluation methodology, but rather focuses on certain specific aspects of generated verses, such as rhyme density or syntactic correctness. Moreover, the methodology for generating lyrics, proposed by the various authors, influences the evaluation process. For instance,~\newcite{barbieri2012markov} did not evaluate the presence of rhymes because the model was constrained to produce only rhyming verses. Furthermore, none of the aforementioned works implement models that generate complete verses at the token level (including verse structure), which is the goal of the models we aim to evaluate. In contrast to previous approaches that evaluate whole verses, our evaluation methodology uses a more fine-grained, line-by-line scheme, which makes it easier for human annotators, as they no longer need to evaluate the whole verse at once. In addition, despite the fact that each line is annotated using a discrete scale, our methodology produces a continuous numeric score for the whole verse, enabling better comparison. 
\section{Dataset} For our evaluation experiments, we selected the following list of artists in four different categories: \begin{itemize} \item Three top-selling rap artists according to Wikipedia\footnote{\url{http://en.wikipedia.org/wiki/List_of_best-selling_music_artists}}: Eminem, Jay Z, Tupac \item Artists with the largest vocabulary according to Pop Chart Lab\footnote{\url{http://popchartlab.com/products/the-hip-hop-flow-chart}}: Aesop Rock, GZA, Sage Francis \item Artists with the smallest vocabulary according to Pop Chart Lab: DMX, Drake \item Best classified artists from Hirjee and Brown \shortcite{hirjee2010rhyme} using rhyme detection features: Fabolous, Notorious B.I.G., Lil' Wayne \end{itemize} We collected all available songs from the above artists from the site \textit{The Original Hip-Hop (Rap) Lyrics Archive - OHHLA.com - Hip-Hop Since 1992}\footnote{\url{http://www.ohhla.com/}}. We removed the metadata, line repetition markup, and chorus lines, and tokenized the lyrics using the NLTK library~\cite{BirdKleinLoper09}. Since the preprocessing was done heuristically, the resulting dataset may still contain some text that is not actual verse, but rather dialogue or chorus lines. We therefore filter out all verses that are shorter than 20 tokens. Statistics of our dataset are shown in Table~\ref{table:dataset-stat}. \section{Evaluation Methodology} We believe that adequate evaluation for the ghostwriting task requires both manual and automatic approaches. The automated evaluation methodology enables large-scale analysis of the generated verse. However, given the nature of the task, the automated evaluation is not able to assess certain critical aspects of fluency and style, such as the vocabulary, tone, and themes preferred by a particular artist. In this section, we present a manual methodology for evaluating these aspects of the generated verse, as well as an improvement to the automatic methodology proposed by Potash et al. \shortcite{potashghostwriter}. \subsection{Manual Evaluation} We have designed two annotation tasks for manual evaluation. The first task is to determine how fluent and coherent the generated verses are. The second task is to evaluate manually how well the generated verses match the style of the target artist. \paragraph*{Fluency/Coherence Evaluation} \label{sec:Fluency/Coherence Evaluation} Given a generated verse, we ask annotators to determine the fluency and coherence of the lyrics. Even though our evaluation is for systems that produce entire verses, we follow the work of Wu \shortcite{wuevaluating} and annotate fluency, as well as coherence, at the line level. To assess fluency, we ask to what extent a given line can be considered a valid English utterance. Since a language model may produce highly disjointed verses as it progresses through the training process, we offer the annotator three options for grading fluency: strongly fluent, weakly fluent, and not fluent. If a line is disjointed, i.e., it is only fluent in specific segments of the line, the annotators are instructed to mark it as weakly fluent. The grade of not fluent is reserved for highly incoherent text. To assess coherence, we ask the annotator how well a given line matches the preceding line. That is, how believable is it that these two lines would follow each other in a rap verse. We offer the annotators the same choices as in the fluency evaluation: strongly coherent, weakly coherent, and not coherent. 
During the training process, a language model may output the same line repeatedly. We account for this in our coherence evaluation by defining the consecutive repetition of a line as not coherent. This is important to define because the line on its own may be strongly fluent, however, a coherent verse cannot consist of a single fluent line repeated indefinitely. \paragraph*{Style Matching} The goal of the style matching annotation is to determine how well a given verse captures the style of the target artist. In this annotation task, a user is presented with an evaluation verse and asked to compare it against four other verses. The goal is to pick the verse that is written in a similar style. One of the four choices is always a verse from the same artist that was used to generate the verse being evaluated. The other three verses are chosen from the remaining artists in our dataset. Each verse is evaluated in this manner four times, each time against different verses, so that it has the chance to get matched with a verse from each of the remaining twelve artists. The generated verse is considered stylistically consistent if the annotators tend to select the verse that belongs to the target artist. To evaluate the difficulty of this task, we also perform style matching annotation for authentic verse, in which the evaluated verse is not generated, but rather is an actual existing verse from the target artist. \footnote{We have made the annotation interface available on (\url{https://github.com/placeholder}).} \subsection{Automated Evaluation} \label{subsection:auto_eval} The automated evaluation we describe below attempts to capture computationally the dual aspects of ``unique yet similar'' in a manner originally proposed by \newcite{potashghostwriter}. \paragraph*{Uniqueness of Generated Lyrics} \label{sec:eval-methods-sim} We use a modified tf-idf representation for verses, and calculate cosine similarity between generated verses and the verses from the training data to determine novelty (or lack thereof). \iffalse In order to evaluate the novelty of generated lyrics, we compare the similarity of the generated lyrics to the lyrics in our training set. We used an algorithm proposed by \cite{mahedero2005natural} for calculating the similarity between produced lyrics and all verses from the same artist. This algorithm is based on the well-known Inverse Document Frequency and cosine distance between documents. First, we build the Term-Document Matrix with weights for each token in each song: \begin{equation} \label{eq:sim_vectors} w_{ij} = f_{ij} \log\big( \frac{N}{n_{j}} \big) \end{equation} where $N$ is the total number of documents (verses, in our case), $n_{j}$ is the number of verses that contains term $j$ and $f_{ij}$ is the frequency of term $j$ in the $ith$ verse. Using this matrix, we can calculate the cosine distance between verses and use it as measure of similarity. Verses that have a similar distribution of word usage will have a high cosine similarity. \fi In order to more directly penalize generated verses that are primarily the reproduction of a single verse from the training set, we calculate the maximum similarity score across all training verses. That is, we do not want generated verses that contain text from a single training verse, which in turn rewards generated verses that draw from numerous training verses. 
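To make the uniqueness metric concrete, the following is a minimal sketch of how a maximum similarity score of this kind can be computed. It is our own illustration rather than the authors' implementation: it uses scikit-learn's tf-idf weighting, which differs in minor details from the weighting formula described above, and all function and variable names are ours.
\begin{verbatim}
# Sketch of the "maximum similarity" uniqueness metric: represent verses
# as tf-idf vectors and take the highest cosine similarity between a
# generated verse and any single training verse. Illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def max_similarity(generated_verse, training_verses):
    """Maximum cosine similarity between the generated verse and any
    individual verse from the training data (higher = less novel)."""
    vectorizer = TfidfVectorizer()
    # Fit vocabulary and idf weights on the training verses only, then
    # project the generated verse into the same vector space.
    train_matrix = vectorizer.fit_transform(training_verses)
    gen_vector = vectorizer.transform([generated_verse])
    return float(cosine_similarity(gen_vector, train_matrix)[0].max())

training = ["cash rules everything around me",
            "dollar dollar bill come and get your money"]
print(max_similarity("everything around me is cash", training))
\end{verbatim}
Because the score is a maximum over individual training verses rather than an average, a generated verse that copies one training verse wholesale is penalized even if it is dissimilar to the rest of the training data.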
\paragraph*{Stylistic Similarity via Rhyme Density of Lyrics} We use the rhyme density method proposed by \newcite{hirjee2010using} to evaluate how well the generated verse models an artist's style. The point of an effective system is not to produce arbitrary rhymes: it is to produce rhyme types and rhyme frequency similar to the target artist. We note that the ghostwriter methodology we implement trains exclusively on the verses of a given artist, causing the vocabulary of the generated verse to be closed with respect to the training data. In this case, assessing how similar the generated vocabulary is to the target artist is not important. Instead, we focus on rhyme density, which is defined as the number of rhymed syllables divided by the total number of syllables \cite{hirjee2010using}. Certain artists distinguish themselves by having more complicated rhyme schemes, such as the use of internal\footnote{e.g. ``New York City gritty committee pity the fool'' and ``How I made it you salivated over my calibrated''} or polysyllabic rhymes\footnote{e.g. ``But it was your op to shop stolen art/Catch a swollen heart form not rolling smart''.}. Rhyme density is able to capture this in a single metric, since the tool we use is able to detect these various forms of rhymes. Moreover, as was shown in Wu \shortcite{wuevaluating}, the inter-annotator agreement (IAA) for manual rhyme detection is low (the highest IAA was only 0.283 using a two-scale annotation scheme), which is expected due to the subjective nature of the task. Therefore, an objective automatic methodology is desirable. Since this tool is trained on a distinct corpus of lyrics, it can provide a ``uniform'' experience and give an impartial and objective score. However, the rhyme detection method is not designed to deal with highly repetitive text, which the LSTM model often produces in the early stages of training. Since the same phoneme is repeated (because the same word is repeated), the rhyme detection tool generates a false positive. \newcite{potashghostwriter} deal with this by manually inspecting the rhyme densities of verses generated in the early stages of training to determine if a generated verse should be kept for the evaluation procedure. This approach is suitable for their work since they processed only one artist, but it is clearly not scalable, and therefore not applicable to our case. In order to fully automate this method, we propose to handle highly repetitive text without assigning it a high rhyme density by weighting the rhyme density of a given verse by its entropy. More specifically, for a given verse, we calculate entropy at the token level and divide by the total number of tokens in that verse. Verses with highly repetitive text will have a low entropy, which results in down-weighting the rhyme density of verses that produce false positive rhymes due to their repetitive text (a short sketch of this weighting is given below). To evaluate our method, we applied it to the artist used by Potash et al. \shortcite{potashghostwriter} and obtained exactly the same average rhyme density without any manual inspection of the generated verses; this despite the presence of false positive rhymes automatically detected in the beginning of training. \paragraph*{Merging Uniqueness and Similarity} Since ghostwriting is a balancing act of the two opposing forces of textual uniqueness and stylistic similarity, we want a low correlation between rhyme density (stylistic similarity) and maximum verse similarity (lack of textual uniqueness). 
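The entropy weighting referred to above can be sketched as follows. This is our own illustration: the raw rhyme density is assumed to come from the external rhyme detection tool, and the function names are ours.
\begin{verbatim}
# Sketch of the entropy weighting used to down-weight the rhyme density
# of highly repetitive verses. The raw rhyme density itself is assumed
# to come from the external rhyme detection tool.
import math
from collections import Counter

def normalized_token_entropy(tokens):
    """Token-level entropy of a verse divided by its number of tokens."""
    counts = Counter(tokens)
    total = len(tokens)
    entropy = -sum((c / total) * math.log2(c / total)
                   for c in counts.values())
    return entropy / total

def weighted_rhyme_density(raw_rhyme_density, tokens):
    """Down-weight the rhyme density of repetitive verses."""
    return raw_rhyme_density * normalized_token_entropy(tokens)

# A verse that repeats one token gets weight 0; a varied verse does not.
print(normalized_token_entropy("money money money money money".split()))
print(normalized_token_entropy("started from the bottom now we here".split()))
\end{verbatim}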
However, our goal is not to have a high rhyme density, but rather to have a rhyme density similar to the target artist, while simultaneously keeping the maximum similarity score low. As the model overfits the training data, both the value of maximum similarity and the rhyme density will increase, until the model generates the original verse directly. Therefore, our goal is to evaluate the value of the maximum similarity at the point where the rhyme density has the value of the target artist. In order to accomplish this, we follow \newcite{potashghostwriter} and plot the values of rhyme density and maximum similarity obtained at different points during model training. We use regression lines for these points to identify the value of the maximum similarity line at the point where the rhyme density line has the value of the target artist. We give more detail below. \section{Lyric Generation Experiments} \label{sec:lyric-generation-experimnents} \iffalse \subsection{Baseline} To compare with the results of an LSTM model \cite{potashghostwriter}, we followed the work of \cite{barbieri2012markov} and created a Markov model for lyric generation. Since the goal of our work is to make an unsupervised system, we do not use any constraints or templates to produce the lyrics. Thus, our baseline simplifies to a basic n-gram model. Given a history of $w_{k+n-1}$,...,$w_{k}$, the system generates a new token $t$ as follows: \begin{equation} \label{eq:ngram_generation} \begin{split} & P(w_{k+n}=t|w_{k+n-1},...,w_{k}) = \\ &\frac{|w_{k}...w_{k+n-1}t|}{|w_{k}...w_{k+n-1}\bullet|} \end{split} \end{equation} where $|w_{k}...w_{k+n-1}t|$ is the number of times the context $w_{k+n-1}$,...,$w_{k}$ is followed by $t$ in the training data, and $|w_{k}...w_{k+n-1}\bullet|$ is the number of times the context appears followed by any token. There is the possibility, particularly when $n$ is large, that the context has never been encountered in the training data. When this occurs, we back off to a smaller n-gram model: \begin{equation} \label{eq:ngram_backoff} \begin{split} &P(w_{k+n}=t|w_{k+n-2},...,w_{k}) =\\ &\frac{|w_{k}...w_{k+n-2}\bullet t|}{|w_{k}...w_{k+n-2}\bullet\bullet|} \end{split} \end{equation} The model may have to back off multiple times before it encounters context it has seen in the training data. Once we back off to the point where we compute $P(w_{n+k}=t|w_{k})$ we are guaranteed to have at least one non-zero probability, because $w_{k}$ must have appeared in the vocabulary for it to have been generated previously (see section 4.3 on model initialization). Because of this, we do not need to implement smoothing into the model. Note that the model to which we back off is not necessarily a lower order n-gram model, but rather a lower order skip-gram model. Also, given the initial context of just the ``$<$startVerse$>$'' token, the model initializes as a unigram model, then becomes a bigram model, and so on until there is enough context to use the full n-gram model. \fi The main generative model we use in our evaluation experiments is an LSTM. Similar to \newcite{potashghostwriter}, we use an n-gram model as a baseline system for automated evaluation. The n-gram model backs off to a lower order skip-gram model, with the goal of mirroring the LSTM's ability to capture long-range dependencies. We refer the reader to the original work for a detailed description. After every 100 iterations of training\footnote{Training is done in batches with two verses per batch.} the LSTM model generates a verse. 
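The regression step described in the evaluation methodology above can be sketched as follows; the procedure for producing the per-iteration data points is given in the next paragraph. This is a rough illustration under our own naming, not the authors' code: it fits the two regression lines with numpy and reads off the maximum similarity at the iteration where the fitted rhyme density reaches the artist's average.
\begin{verbatim}
# Sketch of the regression-based evaluation: fit linear regressions of
# rhyme density and maximum similarity against training iteration, find
# the iteration at which the fitted rhyme density reaches the artist's
# average, and read off the fitted maximum similarity at that iteration.
import numpy as np

def automated_score(iterations, rhyme_densities, max_similarities,
                    artist_avg_rhyme_density):
    rd_slope, rd_intercept = np.polyfit(iterations, rhyme_densities, 1)
    sim_slope, sim_intercept = np.polyfit(iterations, max_similarities, 1)
    # Iteration where the rhyme density line hits the artist average;
    # this can be negative if early verses already exceed the average.
    target_iteration = (artist_avg_rhyme_density - rd_intercept) / rd_slope
    # Maximum similarity predicted by its regression line at that point.
    return sim_slope * target_iteration + sim_intercept

# Toy example with nine data points (iterations 0, 2,000, ..., 16,000).
iterations = np.arange(0, 18000, 2000)
rhyme_densities = np.linspace(0.20, 0.40, 9)
max_similarities = np.linspace(0.10, 0.80, 9)
print(automated_score(iterations, rhyme_densities, max_similarities, 0.35))
\end{verbatim}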
For the baseline model, we generate five verses at each n-gram length $n\in \{1,...,9\}$. We see a correspondence between higher $n$ and higher iteration: as both increase, the models become more `fit' to the training data. We use the verses generated at these different n-gram lengths to obtain the values for regression. At every value of $n$, we take the average rhyme density and maximum similarity score of the five verses that we generate to create a single data point for rhyme density and maximum similarity score, respectively. To enable comparison, we also create nine data points from the verses generated by the LSTM as follows. A separate model for each artist is trained for a minimum of 16,400 iterations. We take the verses generated every 2,000 iterations, from 0 to 16,000 iterations, giving us nine points. The averages for each point are obtained by using the verses generated in iterations $\pm x, x \in \{100,200,300,400\}$ for each interval of 2,000. \section{Results} We present the results of our evaluation experiments using both manual and automated evaluations. \subsection{Fluency/Coherence} In order to fairly compare the fluency/coherence of verses across artists, we use the verses generated by each artist's model at 16,000 iterations. We apply the fluency/coherence annotation methodology from Section \ref{sec:Fluency/Coherence Evaluation}. Each line is annotated by two annotators. Annotation results are shown in Figure~\ref{fig:fluency_comparasion} and Figure~\ref{fig:coherence_comparasion}. For each annotated verse, we report the percentage of lines annotated as strongly fluent, weakly fluent, and not fluent, as well as the corresponding percentages for coherence. We convert the raw annotation results into a single score for each verse by treating the labels ``strongly fluent'', ``weakly fluent'', and ``not fluent'' as numeric values 1, 0.5, and 0, respectively. Treating each annotation on a given line separately, we calculate the average numeric rating for a given verse: \begin{equation} \mbox{\textit{Fluency}} = \frac{\#\mbox{\textit{sf}} + 0.5 \#\mbox{\textit{wf}}}{\#a} \end{equation} where $\#\mbox{\textit{sf}}$ is the number of times any line is labeled strongly fluent, $\#\mbox{\textit{wf}}$ is the number of times any line is labeled weakly fluent, and $\#a$ is the total number of annotations provided for a verse, which is equal to the number of lines $\times$ 2. \textit{Coherence} is calculated in a similar manner. Raw inter-annotator agreement (IAA) for fluency annotation was 0.67. For coherence annotation, the IAA was 0.43. We believe coherence has a lower agreement because it is more semantic, as opposed to syntactic, in nature, causing it to be more subjective. While the agreement is relatively low, this is expected given the subjective nature of the task; for example,~\newcite{wuevaluating} report similar agreement values for the fluency annotation they perform. \subsection{Style Matching} We performed style-matching annotation for the verses generated at iterations 16,000--16,400 for each artist. For the experiment with authentic verses, we randomly chose five verses from each artist, with a verse length of at least 40 tokens. Each page was annotated twice, by native English-speaking rap fans. The results of our style matching annotations are shown in Table \ref{tbl:style_matches}. We present two different views of the results. 
First, each annotation for a page is considered separately and we calculate: \begin{equation} Match \% = \frac{\#m}{\#a} \end{equation} where $\#m$ is the number of times, on a given page, the chosen verse actually came from the target artist, and $\#a$ is the total number of annotations done. For a given artist, five verses were evaluated, each verse appeared on four separate pages, and each page is annotated twice, so $\#a$ is equal to 40. Since in each case (i.e., page) the classes are different, we cannot use Fleiss's kappa directly. Raw agreement for style annotation, which corresponds to the percentage of times annotators picked the same verse (whether or not they are correct), is shown in the column `Raw agreement \%' in Table \ref{tbl:style_matches}. We also report annotators' joint ability to guess the target artist correctly, which we compute as follows: \begin{equation} Match_A\% = \frac{\#m_A}{\#s_A} \end{equation} where $\#s_A$ is the number of times the annotators agreed on a verse on the same page, and $\#m_A$ is the number of times that the agreed upon verse is from the target artist. \subsubsection{Artist Confusion} The results of the style-matching annotation also provide us with an interesting insight into the similarity between two artists' styles. This is captured by the \textit{confusion} between two artists during the annotation of the pages with authentic verses, which is computed as follows: \begin{equation} Confusion(a,b) = \frac{\#c(a,b)+\#c(b,a)}{\#p(a,b)+\#p(b,a)} \end{equation} \noindent where $\#p(a,b)$ is the number of times a verse from artist $a$ is presented for evaluation and a verse from artist $b$ is shown as one of four choices; $\#c(a,b)$ is the number of times the verse from artist $b$ was chosen as the matching verse. The resulting confusion matrix is presented in Figure \ref{fig:confusions_baseline}. We intend for this data to provide a gold standard for future experiments that would attempt to encode the similarity of artists' styles. \subsection{Automated Evaluation} \label{sec:automated_evaluation} The results of our automated evaluation are shown in Table \ref{tbl:regression_results}. For each artist, we calculate their average rhyme density across all verses. We then use this value to determine at which iteration this rhyme density is achieved during generation (using the regression line for rhyme density). Next, we use the maximum similarity regression line to determine the maximum similarity score at that iteration. A low maximum similarity score indicates that we have maintained stylistic similarity while producing new, previously unseen lyrics. Note the presence of negative numbers in Table~\ref{tbl:regression_results}. The reason is that in the beginning of training (in the LSTM's case) and at a low n-gram length (for the baseline model), the models actually achieved a rhyme density that exceeded the artist's average rhyme density. As a result, the rhyme density regression line hits the average rhyme density on a negative iteration. \section{Discussion} In order to better understand the interaction between the four metrics we have introduced in this paper, we examined correlation coefficients between different measures of quality for generated verse (see Table \ref{tbl:coherence_fluency_sim_math_correlation}). The lack of strong correlation between different measures of quality for generated verse supports the notion that different aspects of verse quality should be addressed separately. Moreover, the metrics are in fact complementary. 
Even the measures of \textit{fluency} and \textit{coherence}, despite sharing a similar goal, have a relatively low correlation of 0.4. Such low correlations emphasize our contribution, since other works~\cite{barbieri2012markov,wuevaluating,malmi2015dopelearning} do not provide a comprehensive evaluation methodology, and evaluate just one or two particular aspects. For example,~\newcite{wuevaluating} evaluated only fluency and rhyming, and~\newcite{barbieri2012markov} evaluated only syntactic correctness and semantic relatedness to the title, whereas we present complementary approaches for evaluating different aspects of the generated verses. Interestingly, the number of verses a rapper has in our dataset has a strong negative correlation with coherence score (cf. Table~\ref{tbl:coherency_fluency_corr}). This can be explained by the following consideration: at iteration 16,000, the model for an artist with a smaller number of verses has seen the same verses more times than a model trained on a larger number of verses. Therefore, it is easier for the former to produce more coherent lyrics since it saw more of the same patterns. As a result, models trained on a larger number of verses have a lower coherence score. For example, Lil' Wayne has the most verses in our data, and correspondingly, the model for his verses has the worst coherence score. Note that the fluency score does not have this negative correlation with the number of verses. Based on our evaluation, 16,000 iterations is enough to learn a language model for the given artist that produces fluent lines. However, these lines will not necessarily form a coherent verse if the artist has a large number of verses. As can be seen from Table~\ref{tbl:style_matches}, the $Match\%$ score suggests that the LSTM-generated verses are able to capture the style of the artist as well as the original verses. Furthermore, $Match_A\%$ is significantly higher for the LSTM model, which means that the annotators agreed on matching verses more frequently. We believe this means that the LSTM model, trained on all verses from a given artist, is able to capture the artist's ``average'' style, whereas authentic verses represent a random selection that is less likely, statistically speaking, to be similar to another random verse. Note that, as we expect, there is a strong correlation between the number of tokens in the artist's data and the frequency of agreed-upon correct style matches (cf. Table \ref{tbl:coherency_fluency_corr}). Since verses vary in length, this correlation is not observed for the number of verses. Finally, the lack of strong correlation with vocabulary richness suggests that token uniqueness is not as important as the sheer volume. One aspect of the generated verse we have not discussed so far is the structure of the generated verse. For example, the length of the generated verses should be evaluated, since the models we examined do generate line breaks and also decide when to end the verse. Table \ref{tbl:max_len_epoch} shows the longest verse generated for each artist, and also the point at which it was achieved during the training. We note that although 10 of the 11 models are able to generate long verses (up to a full standard deviation above the average verse length for that author), it takes a substantial amount of time, and the correlation between the average verse length for a given artist and the verse length achieved by the model is weak (0.258). 
This suggests that modeling the specific verse structure, including length, is one aspect that requires improvement. Lastly, we note that the fully automated methodology we propose is able to replicate the results of the previously available semi-automatic method for the rapper Fabolous, which was the only artist evaluated by \newcite{potashghostwriter}. Furthermore, the results of automated evaluation for the 11 artists confirm that the LSTM model generalizes better than the baseline model. \section{Conclusions and Future Work} In this paper, we have presented a comprehensive evaluation methodology for the task of ghostwriting rap lyrics, which captures complementary aspects of this task and its goals. We developed a manual evaluation method that assesses several key properties of generated verse, and created a data set of authentic verse, manually annotated for style matching. A previously proposed semi-automatic evaluation method has now been fully automated, and shown to replicate results of the original method. We have shown how the proposed evaluation methodology can be used to evaluate an LSTM-based ghostwriter model. We believe our evaluation experiments also clearly demonstrate that complementary evaluation methods are required to capture different aspects of the ghostwriting task. Lastly, our evaluation provides key insights into future directions for generative models. For example, the automated evaluation shows how the LSTM's inability to integrate new vocabulary makes it difficult to achieve truly desirable similarity scores; future generative models can draw on the work of \newcite{graves2013generating} and \newcite{bowman2015generating} in an attempt to leverage other artists' lyrics. \iffalse \section*{Acknowledgments} Do not number the acknowledgment section. Do not include this section when submitting your paper for review. \fi \bibliographystyle{acl2012} \end{document}
Robust Incremental Neural Semantic Graph Parsing
1704.07092
Table 4: EDS parsing test set results.
[ "Model", "AE RNN", "ACE" ]
[ [ "EDM", "85.48", "89.58" ], [ "EDM [ITALIC] P", "88.14", "91.82" ], [ "EDM [ITALIC] A", "82.20", "86.92" ], [ "Smatch", "86.50", "93.52" ] ]
The EDS parsing task is slightly simpler than DMRS, due to the absence of rich argument labels and additional graph edges that allow the recovery of full MRS. We see that for ACE the accuracies are very similar, while for our model EDS parsing is more accurate on the EDM metrics. We hypothesize that most of the extra information in DMRS can be obtained through the ERG, to which ACE has access but our model doesn’t.
\section{Meaning Representations} We define a common framework for semantic graphs in which we can place both MRS-based graph representations (DMRS and EDS) and AMR. Sentence meaning is represented with rooted, labelled, connected, directed graphs~\cite{KuhlmannO16}. An example graph is visualized in Figure~\ref{fig:eds-graph}. Node labels are referred to as \emph{predicates} (\emph{concepts} in AMR) and edge labels as \emph{arguments} (AMR \emph{relations}). In addition, \emph{constants}, a special type of node modifiers, are used to denote the string values of named entities and numbers (including date and time expressions). Every node is aligned to a token or a continuous span of tokens in the sentence the graph corresponds to. Minimal Recursion Semantics (MRS) is a framework for computational semantics that can be used for parsing or generation~\cite{CopenstakeFPS05}. Instances and eventualities are represented with logical variables. Predicates take arguments with labels from a small, fixed set of roles. Arguments are either logical variables or handles, designated formalism-internal variables. Handle equality constraints support scope underspecification; multiple scope-resolved logical representations can be derived from one MRS structure. A predicate corresponds to its intrinsic argument and is aligned to a character span of the (untokenized) input sentence. Predicates representing named entities or numbers are parameterized by strings. Quantification is expressed through predicates that bind instance variables, rather than through logical operators such as $\exists$ or $\forall$. MRS was designed to be integrated with feature-based grammars such as Head-driven Phrase Structure Grammar (HPSG)~\cite{PollardS94} or Lexical Functional Grammar (LFG)~\cite{KaplanB82}. MRS has been implemented in the English Resource Grammar (ERG)~\cite{Flickinger00}, a broad-coverage high-precision HPSG grammar. \newcite{OepenL06} proposed Elementary Dependency Structure (EDS), a conversion of MRS to variable-free dependency graphs which drops scope underspecification. \newcite{Copenstake09} extended this conversion to avoid information loss, primarily through richer edge labels. The resulting representation, Dependency MRS (DMRS), can be converted back to the original MRS, or used directly in MRS-based applications~\cite{CopenstakeEa16}. We are interested in the empirical performance of parsers for both of these representations: while EDS is more interpretable as an independent semantic graph representation, DMRS can be related back to underspecified logical forms. A bilexical simplification of EDS has previously been used for semantic dependency parsing~\cite{OepenEa14,OepenEa15}. Figure~\ref{fig:eds-graph} illustrates an EDS graph. MRS makes an explicit distinction between surface and abstract predicates (by convention surface predicates are prefixed by an underscore). Surface predicates consist of a lemma followed by a coarse part-of-speech tag and an optional sense label. Predicates absent from the ERG lexicon are represented by their surface forms and POS tags. We convert the character-level predicate spans given by MRS to token-level spans for parsing purposes, but the representation does not require gold tokenization. Surface predicates usually align with the span of the token(s) they represent, while abstract predicates can span longer segments. 
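To make the shared graph framework concrete, the following is a small illustrative data structure for such graphs: labelled nodes aligned to token spans, labelled directed edges, optional constants, and a designated top node. It is our own toy sketch, not the representation used in the parser's code, and the example sentence and predicates are only indicative.
\begin{verbatim}
# Toy data structure for the shared graph framework: labelled nodes
# (predicates/concepts) aligned to token spans, labelled directed edges
# (arguments/relations), optional constants, and a designated top node.
from dataclasses import dataclass, field
from typing import Dict, Optional, Tuple

@dataclass
class Node:
    predicate: str                  # e.g. "_want_v_1" (surface) or "named"
    span: Tuple[int, int]           # aligned token span (start, end)
    constant: Optional[str] = None  # e.g. the string value of a named entity

@dataclass
class SemanticGraph:
    nodes: Dict[str, Node] = field(default_factory=dict)
    # edges[(head_id, dependent_id)] = argument label, e.g. "ARG1"
    edges: Dict[Tuple[str, str], str] = field(default_factory=dict)
    top: Optional[str] = None

# A small, hand-built fragment for "Abrams wanted to meet Browne."
g = SemanticGraph()
g.nodes["x1"] = Node("named", (0, 1), constant="Abrams")
g.nodes["e2"] = Node("_want_v_1", (1, 2))
g.nodes["x3"] = Node("named", (4, 5), constant="Browne")
g.edges[("e2", "x1")] = "ARG1"
g.top = "e2"
\end{verbatim}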
In full MRS every predicate is annotated with a set of morphosyntactic features, encoding for example tense, aspect and number information; we do not currently model these features. AMR~\cite{BanarescuEa13} graphs can be represented in the same framework, despite a number of linguistic differences with MRS. Some information annotated explicitly in MRS is latent in AMR, including alignments and the distinction between surface (lexical) and abstract concepts. AMR predicates are based on PropBank~\cite{PalmerGK05}, annotated as lemmas plus sense labels, but they form only a subset of concepts. Other concepts are either English words or special keywords, corresponding to overt lexemes in some cases but not others. \section{Introduction} An important goal of Natural Language Understanding (NLU) is to parse sentences to structured, interpretable meaning representations that can be used for query execution, inference and reasoning. Recently, end-to-end models have outperformed traditional pipeline approaches, which predict syntactic or semantic structure as intermediate steps, on NLU tasks such as sentiment analysis and semantic relatedness~\cite{LeM14,KirosEa15}, question answering~\cite{HermannEa15} and textual entailment~\cite{RocktaschelEa15}. However, the linguistic structure used in applications has predominantly been shallow, restricted to bilexical dependencies or trees. In this paper we focus on robust parsing into linguistically deep representations. The main representation that we use is Minimal Recursion Semantics (MRS)~\cite{CopenstakeFMRS95,CopenstakeFPS05}, which serves as the semantic representation of the English Resource Grammar (ERG)~\cite{Flickinger00}. Existing parsers for full MRS (as opposed to bilexical semantic graphs derived from, but simplifying, MRS) are grammar-based, performing disambiguation with a maximum entropy model~\cite{ToutanovaMFO05,ZhangOC07}; this approach has high precision but incomplete coverage. Our main contribution is to develop a fast and robust parser for full MRS-based semantic graphs. We exploit the power of global conditioning enabled by deep learning to predict linguistically deep graphs incrementally. The model does not have access to the underlying ERG or syntactic structures from which the MRS analyses were originally derived. We develop parsers for two graph-based conversions of MRS, Elementary Dependency Structure (EDS)~\cite{OepenL06} and Dependency MRS (DMRS)~\cite{Copenstake09}, of which the latter is inter-convertible with MRS. Abstract Meaning Representation (AMR)~\cite{BanarescuEa13} is a graph-based semantic representation that shares the goals of MRS. Aside from differences in the choice of which linguistic phenomena are annotated, MRS is a compositional representation explicitly coupled with the syntactic structure of the sentence, while AMR does not assume compositionality or alignment with the sentence structure. Recently a number of AMR parsers have been developed~\cite{FlaniganTCDS14,WangXP15,ArtziLZ15,DamonteCS16}, but corpora are still under active development and low inter-annotator agreement places an upper bound of $83\%$ F1 on expected parser performance~\cite{BanarescuEa13}. We apply our model to AMR parsing by introducing structure that is present explicitly in MRS but not in AMR~\cite{BuysB17a}. Parsers based on RNNs have achieved state-of-the-art performance for dependency parsing~\cite{DyerBLMS15,KiperwasserG16} and constituency parsing~\cite{VinyalsEa15,DyerKBS16,CrossH16a}. 
One of the main reasons for the prevalence of bilexical dependencies and tree-based representations is that they can be parsed with efficient and well-understood algorithms. However, one of the key advantages of deep learning is the ability to make predictions conditioned on unbounded contexts encoded with RNNs; this enables us to predict more complex structures without increasing algorithmic complexity. In this paper we show how to perform linguistically deep parsing with RNNs. Our parser is based on a transition system for semantic graphs. However, instead of generating arcs over an ordered, fixed set of nodes (the words in the sentence), we generate the nodes and their alignments jointly with the transition actions. We use a graph-based variant of the arc-eager transition system. The sentence is encoded with a bidirectional RNN. The transition sequence, seen as a graph linearization, can be predicted with any encoder-decoder model, but we show that using hard attention, predicting the alignments with a pointer network and conditioning explicitly on stack-based features improves performance. In order to deal with data sparsity, candidate lemmas are predicted as a pre-processing step, so that the RNN decoder predicts unlexicalized node labels. We evaluate our parser on DMRS, EDS and AMR graphs. Our model architecture improves performance from $79.68\%$ to $84.16\%$ F1 over an attention-based encoder-decoder baseline. Although our parser is less accurate than a high-precision grammar-based parser on a test set of sentences parsable by that grammar, incremental prediction and GPU batch processing enable it to parse $529$ tokens per second, against $7$ tokens per second for the grammar-based parser. On AMR parsing our model obtains $60.11\%$ Smatch. \begin{abstract} Parsing sentences to linguistically-expressive semantic representations is a key goal of Natural Language Processing. Yet statistical parsing has focussed almost exclusively on bilexical dependencies or domain-specific logical forms. We propose a neural encoder-decoder transition-based parser which is the first full-coverage semantic graph parser for Minimal Recursion Semantics (MRS). The model architecture uses stack-based embedding features, predicting graphs jointly with unlexicalized predicates and their token alignments. Our parser is more accurate than attention-based baselines on MRS, and on an additional Abstract Meaning Representation (AMR) benchmark, and GPU batch processing makes it an order of magnitude faster than a high-precision grammar-based parser. Further, the $86.69\%$ Smatch score of our MRS parser is higher than the upper-bound on AMR parsing, making MRS an attractive choice as a semantic representation.\footnote{Code, models and data preparation scripts are available at \url{https://github.com/janmbuys/DeepDeepParser}.} \end{abstract} \section{Experiments} \label{sec:experiments} \subsection{Data} DeepBank~\cite{FlickingerZK12} is an HPSG and MRS annotation of the Penn Treebank Wall Street Journal (WSJ) corpus. It was developed following an approach known as dynamic treebanking~\cite{OepenFTM04} that couples treebank annotation with grammar development, in this case of the ERG. This approach has been shown to lead to high inter-annotator agreement: $0.94$ against $0.71$ for AMR~\cite{BenderFOPC15}. 
Parses are only provided for sentences for which the ERG has an analysis acceptable to the annotator -- this means that we cannot evaluate parsing accuracy for sentences which the ERG cannot parse (approximately $15\%$ of the original corpus). We use DeepBank version $1.1$, corresponding to ERG \texttt{1214}\footnote{\url{http://svn.delph-in.net/erg/tags/1214/}}, following the suggested split of sections $0$ to $19$ as training data, $20$ for development and $21$ for testing. The gold-annotated training data consists of 35,315 sentences. We use the LOGON environment\footnote{\url{http://moin.delph-in.net/LogonTop}} and the pyDelphin library\footnote{\url{https://github.com/delph-in/pydelphin}} to extract DMRS and EDS graphs. For AMR parsing we use LDC2015E86, the dataset released for the SemEval 2016 AMR parsing Shared Task~\cite{May16}. This data includes newswire, weblog and discussion forum text. The training set has 16,144 sentences. We obtain alignments using the rule-based JAMR aligner~\cite{FlaniganTCDS14}. \subsection{Evaluation} \newcite{DridanO11} proposed an evaluation metric called Elementary Dependency Matching (EDM) for MRS-based graphs. EDM computes the F1-score of tuples of predicates and arguments. A predicate tuple consists of the label and character span of a predicate, while an argument tuple consists of the character spans of the head and dependent nodes of the relation, together with the argument label. In order to tolerate subtle tokenization differences with respect to punctuation, we allow span pairs whose ends differ by one character to be matched. The Smatch metric~\cite{CaiK13}, proposed for evaluating AMR graphs, also measures graph overlap, but does not rely on sentence alignments to determine the correspondences between graph nodes. Smatch is instead computed by performing inference over graph alignments to estimate the maximum F1-score obtainable from a one-to-one matching between the predicted and gold graph nodes. \subsection{Model setup} Our parser is implemented in TensorFlow~\cite{AbadiEa15}. For training we use Adam~\cite{KingmaB14} with learning rate $0.01$ and batch-size $64$. Gradient norms are clipped to $5.0$~\cite{PascanuMB13}. We use single-layer LSTMs with dropout of $0.3$ (tuned on the development set) on input and output connections. We use encoder and decoder embeddings of size $256$, and POS and NE tag embeddings of size $32$. For DMRS and EDS graphs the hidden unit size is set to $256$; for AMR it is $128$. This configuration, found using grid search and heuristic search within the range of models that fit into a single GPU, gave the best performance on the development set under multiple graph linearizations. Encoder word embeddings are initialized (in the first 100 dimensions) with pre-trained order-sensitive embeddings~\cite{LingDBT15}. Singletons in the encoder input are replaced with an unknown word symbol with probability $0.5$ for each iteration. \subsection{MRS parsing results} We compare different linearizations and model architectures for parsing DMRS on the development data, showing that our approach is more accurate than baseline neural approaches. We report EDM scores, including scores for predicate (EDM$_P$) and argument (EDM$_A$) prediction. First we report results using standard attention-based encoder-decoders, with the alignments encoded as token strings in the linearization (Table~\ref{tab:dmrs-dev-delex}). 
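Before continuing with the linearization comparison, the following is a simplified sketch of EDM-style scoring as described in the evaluation subsection above, restricted to predicate tuples for brevity (argument tuples are matched analogously). It is our own illustration, not the official EDM implementation; the names and the greedy matching strategy are ours.
\begin{verbatim}
# Simplified sketch of EDM-style scoring, restricted to predicate tuples
# (label, character span); argument tuples are matched analogously.
# Span ends are allowed to differ by one character, as described above.

def span_match(a, b, tolerance=1):
    return abs(a[0] - b[0]) <= tolerance and abs(a[1] - b[1]) <= tolerance

def predicate_match(p, g):
    return p[0] == g[0] and span_match(p[1], g[1])

def f1(gold, predicted, match):
    """F1 over a greedy one-to-one matching of predicted to gold tuples."""
    used = set()
    true_pos = 0
    for p in predicted:
        for i, g in enumerate(gold):
            if i not in used and match(p, g):
                used.add(i)
                true_pos += 1
                break
    precision = true_pos / len(predicted) if predicted else 0.0
    recall = true_pos / len(gold) if gold else 0.0
    return (2 * precision * recall / (precision + recall)
            if precision + recall else 0.0)

gold_preds = [("pron", (0, 9)), ("_want_v_1", (10, 16))]
pred_preds = [("_want_v_1", (10, 17)), ("_meet_v_1", (20, 24))]
print(f1(gold_preds, pred_preds, predicate_match))  # 0.5
\end{verbatim}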
We compare the top-down (TD) and arc-eager (AE) linearizations, as well as the effect of delexicalizing the predicates (factorizing lemmas out of the linearization and predicting them separately). In both cases constants are predicted with a dictionary lookup based on the predicted spans. A special label is predicted for predicates not in the ERG lexicon -- the words and POS tags that make up those predicates are recovered through the alignments during post-processing. The arc-eager unlexicalized representation gives the best performance, even though the model has to learn to model the transition system stack through the recurrent hidden states without any supervision of the transition semantics. The unlexicalized models are more accurate, mostly due to their ability to generalize to sparse or unseen predicates occurring in the lexicon. For the arc-eager representation, the oracle EDM is $99\%$ for the lexicalized representation and $98.06\%$ for the delexicalized representation. The remaining errors are mostly due to discrepancies between the tokenization used by our system and the ERG tokenization. The unlexicalized models are also faster to train, as the decoder's output vocabulary is much smaller, reducing the expense of computing softmaxes over large vocabularies. Next we consider models with delexicalized linearizations that predict the alignments with pointer networks, contrasting soft and hard attention models (Table~\ref{tab:dmrs-dev-point}). The results show that the arc-eager models perform better than those based on the top-down representation. For the arc-eager model we use hard attention, due to the natural interpretation of the alignment prediction corresponding to the transition system. The stack-based architecture gives further improvements. When comparing the effect of different predicate orderings for the arc-eager model, we find that the monotone ordering performs $0.44$ EDM better than the in-order ordering, despite having to parse more non-planar dependencies. We also trained models that only predict predicates (in monotone order) together with their start spans. The unlexicalized hard attention model obtains $91.36\%$ F1 on predicates together with their start spans, compared to $88.22\%$ for lexicalized predicates and $91.65\%$ for the full parsing model. Table \ref{tab:dmrs-test} reports test set results for various evaluation metrics. Start EDM is calculated by requiring only the start of the alignment spans to match, not the ends. We compare the performance of our baseline and stack-based models against ACE, the ERG-based parser. Despite the promising performance of the model, a gap remains between the accuracy of our parser and ACE. One reason for this is that the test set sentences will arguably be easier for ACE to parse, as their choice was restricted by the same grammar that ACE uses. Comparing full EDM with Start EDM, which ignores end-span prediction, shows that our parser has relatively more difficulty than the grammar-based parser in predicting span ends. We also evaluate the speed of our model compared with ACE. For the unbatched version of our model, the stack-based parser parses $41.63$ tokens per second, while the batched implementation parses $529.42$ tokens per second using a batch size of $128$. In comparison, the setting of ACE for which we report accuracies parses $7.47$ tokens per second.
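To illustrate the delexicalized prediction scheme compared above, the following sketch shows how full surface predicates could be recovered from an unlexicalized decoder output, a predicted alignment and the pre-predicted candidate lemmas. The label format used here is a simplified assumption for illustration, not the exact format of our linearizations.
\begin{verbatim}
# Schematic recovery of surface predicates from delexicalized output
# (illustrative only; label formats are assumptions).

def recover_predicate(delex_label, alignment, candidate_lemmas, tokens, pos_tags):
    # Abstract predicates (e.g. "pron", "udef_q") are used as-is.
    if not delex_label.startswith("_LEMMA"):
        return delex_label
    suffix = delex_label[len("_LEMMA"):]           # e.g. "_v_1" or "_u_unknown"
    if suffix == "_u_unknown":
        # Out-of-lexicon predicate: fall back to the aligned word form and POS tag.
        return "_" + tokens[alignment] + "/" + pos_tags[alignment] + "_u_unknown"
    # Surface predicate: splice in the candidate lemma of the aligned token.
    return "_" + candidate_lemmas[alignment] + suffix

tokens = ["He", "wants", "to", "go"]
lemmas = ["he", "want", "to", "go"]
pos = ["PRP", "VBZ", "TO", "VB"]
print(recover_predicate("_LEMMA_v_1", 1, lemmas, tokens, pos))   # _want_v_1
print(recover_predicate("pron", 0, lemmas, tokens, pos))         # pron
\end{verbatim}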
By restricting the memory usage of ACE, which restricts its coverage, we see that ACE can parse $11.07$ tokens per second at $87.7\%$ coverage, and $15.11$ tokens per second at $77.8\%$ coverage. Finally we report results for parsing EDS (Table~\ref{tab:eds-test}). The EDS parsing task is slightly simpler than DMRS, due to the absence of rich argument labels and additional graph edges that allow the recovery of full MRS. We see that for ACE the accuracies are very similar, while for our model EDS parsing is more accurate on the EDM metrics. We hypothesize that most of the extra information in DMRS can be obtained through the ERG, to which ACE has access but our model does not. An EDS corpus which consists of about $95\%$ of the DeepBank data has also been released\footnote{\url{http://sdp.delph-in.net/osdp-12.tgz}}, with the goal of enabling comparison with other semantic graph parsing formalisms, including CCG dependencies and Prague Semantic Dependencies, on the same data set~\cite{KuhlmannO16}. On this corpus our model obtains $85.87$ EDM and $85.49$ Smatch. \subsection{AMR parsing} We apply the same approach to AMR parsing. Results on the development set are given in Table~\ref{tab:amr-dev}. The arc-eager-based models again give better performance, mainly due to improved concept prediction accuracy. However, concept prediction remains the most important weakness of the model; \newcite{DamonteCS16} reports that state-of-the-art AMR parsers score $83\%$ on concept prediction. We report test set results in Table~\ref{tab:amr-test}. Our best neural model outperforms the baseline JAMR parser~\cite{FlaniganTCDS14}, but still lags behind the performance of state-of-the-art AMR parsers such as CAMR~\cite{WangEa16} and AMR Eager~\cite{DamonteCS16}. These models make extensive use of external resources, including syntactic parsers and semantic role labellers. Our attention-based encoder-decoder model already outperforms previous sequence-to-sequence AMR parsers~\cite{BarzdinsG16,PengWGX17}, and the arc-eager model boosts accuracy further. Our model also outperforms a Synchronous Hyperedge Replacement Grammar model~\cite{PengG16}, which is comparable as it does not make extensive use of external resources. \documentclass[11pt,a4paper]{article} \usepackage[hyperref]{acl2017} \aclfinalcopy % Uncomment this line for the final submission \newcommand{\pb}[1]{\textcolor{red}{\bf\small [#1 --PB]}} \newcommand\BibTeX{B{\sc ib}\TeX} \title{Robust Incremental Neural Semantic Graph Parsing} \author{Jan Buys$^1$ and Phil Blunsom$^{1,2}$ \\ $^1$Department of Computer Science, University of Oxford \qquad $^2$DeepMind \\ \texttt{\{jan.buys,phil.blunsom\}@cs.ox.ac.uk} \\ } \date{} \begin{document} \maketitle \input{abstract} \input{introduction} \input{graphs} \input{parsing} \input{models} \input{related-work} \input{experiments} \section{Conclusion} In this paper we advance the state of parsing by employing deep learning techniques to parse sentences to linguistically expressive semantic representations that have not previously been parsed in an end-to-end fashion. We presented a robust, wide-coverage parser for MRS that is faster than existing parsers and amenable to batch processing. We believe that there are many future avenues to explore to further increase the accuracy of such parsers, including different training objectives, more structured architectures and semi-supervised learning. \section*{Acknowledgments} The first author thanks the Clarendon Fund and the Skye Foundation for their financial support.
We thank Stephan Oepen for feedback and help with data preparation, and members of the Oxford NLP group for valuable discussions. \bibliographystyle{acl} \end{document} \section{Encoder-Decoder Models} \label{sec:models} \subsection{Sentence encoder} The sentence $\mathbf{e}$ is encoded with a bidirectional RNN. We use a standard LSTM architecture without peephole connections~\cite{JozefowiczZS15}. For every token $e$ we embed its word, POS tag and named entity (NE) tag as vectors $x_w$, $x_t$ and $x_n$, respectively. The embeddings are concatenated and passed through a linear transformation \[ g(e) = W^{(x)} [x_w; x_t; x_n] + b^{x}, \] such that $g(e)$ has the same dimension as the LSTM. Each input position $i$ is represented by a hidden state $h_i$, which is the concatenation of its forward and backward LSTM state vectors. \subsection{Hard attention decoder} We model the alignment of graph nodes to sentence tokens, $\mathbf{a}$, as a random variable. For the arc-eager model, $a_j$ corresponds to the alignment of the node on the buffer after action $t_j$ is executed. The distribution of $t_j$ is over all transitions and predicates (corresponding to shift transitions), predicted with a single softmax. The parser output is predicted by an RNN decoder. Let $s_j$ be the decoder hidden state at output position $j$. We initialize $s_0$ with the final state of the backward encoder. The alignment is predicted with a pointer network~\cite{VinyalsFJ15}. The logits are computed with an MLP scoring the decoder hidden state against each of the encoder hidden states (for $i = 1, \ldots, I$), \[ u_j^i = w^T \tanh(W^{(1)} h_i + W^{(2)} s_j). \] The alignment distribution is then estimated by \[ p(a_j = i | \mathbf{a}_{1:j-1}, \mathbf{t}_{1:j-1}, \mathbf{e}) = \mathrm{softmax}(u_j^i). \] To predict the next transition $t_j$, the output vector is conditioned on the encoder state vector $h_{a_j}$, corresponding to the alignment: \begin{align*} o_j &= W^{(3)} s_j + W^{(4)} h_{a_j} \\ v_j &= R^{(d)} o_j + b^{(d)}, \end{align*} where $R^{(d)}$ and $b^{(d)}$ are the output representation matrix and bias vector, respectively. The transition distribution is then given by \[ p(t_j | \mathbf{a}_{1:j}, \mathbf{t}_{1:j-1}, \mathbf{e}) = \mathrm{softmax}(v_j). \] Let $e(t)$ be the embedding of decoder symbol $t$. The RNN state at the next time-step is computed as \begin{align*} d_{j+1} &= W^{(5)} e(t_{j}) + W^{(6)} h_{a_j} \\ s_{j+1} &= RNN(d_{j+1}, s_{j}). \end{align*} The end-of-span alignment $a_j^{(e)}$ for MRS-based graphs is predicted with another pointer network. The end alignment of a node is predicted only when that node is reduced from the stack, therefore this alignment is not observed at each time-step; it is also not fed back into the model. The hard attention approach, based on supervised alignments, can be contrasted to soft attention, which learns to attend over the input without supervision. The attention weights are computed as for hard attention, $\alpha_j^i = \mathrm{softmax}(u_j^i)$. However, instead of making a hard selection, a weighted average over the encoder vectors is computed as $q_j = \sum_{i=1}^{i=I} \alpha_j^i h_i$. This vector is used instead of $h_{a_j}$ for prediction and feeding to the next time-step. \subsection{Stack-based model} We extend the hard attention model to include features based on the transition system stack. These features are embeddings from the bidirectional RNN encoder, corresponding to the alignments of the nodes on the buffer and on top of the stack.
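Before describing the stack features in more detail, the hard-attention decoding step defined by the equations above can be summarised with a small NumPy sketch. This is our own simplified restatement, not the TensorFlow implementation: it computes the pointer-network logits, takes a hard alignment decision, and produces the transition distribution conditioned on the selected encoder state.
\begin{verbatim}
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def decode_step(H, s_j, params):
    # H: (I, d) encoder states; s_j: (d,) decoder state; params: dict of weights.
    W1, W2, w = params["W1"], params["W2"], params["w"]
    W3, W4, R, b = params["W3"], params["W4"], params["R"], params["b"]
    # Pointer-network logits u_j^i = w^T tanh(W1 h_i + W2 s_j).
    u = np.tanh(H @ W1.T + s_j @ W2.T) @ w          # shape (I,)
    align_dist = softmax(u)
    # Hard alignment: argmax at decoding time (the supervised alignment
    # would be used during training instead).
    a_j = int(np.argmax(align_dist))
    # Transition distribution conditioned on the aligned encoder state h_{a_j}.
    o_j = W3 @ s_j + W4 @ H[a_j]
    trans_dist = softmax(R @ o_j + b)
    return a_j, align_dist, trans_dist

# Toy dimensions with random weights (illustrative only).
rng = np.random.default_rng(0)
d, I, V = 8, 5, 12
shapes = {"W1": (d, d), "W2": (d, d), "w": (d,),
          "W3": (d, d), "W4": (d, d), "R": (V, d), "b": (V,)}
params = {name: rng.normal(size=shape) for name, shape in shapes.items()}
a_j, _, p_t = decode_step(rng.normal(size=(I, d)), rng.normal(size=d), params)
print(a_j, p_t.sum())   # chosen alignment index; distribution sums to 1
\end{verbatim}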
This approach is similar to the features proposed by \newcite{KiperwasserG16} and \newcite{CrossH16} for dependency parsing, although they do not use RNN decoders. To implement these features the layer that computes the output vector is extended to \[ o_j = W^{(3)} s_j + W^{(4)} h_{a_j} + W^{(7)} h_{\textrm{st}_0}, \] where $\texttt{st}_0$ is the sentence alignment index of the element on top of the stack. The input layer to the next RNN time-step is similarly extended to \[ d_{j+1} = W^{(5)} e(t_{j}) + W^{(6)} h_{\textrm{buf}} + W^{(8)} h_{\textrm{st}_0}, \] where \texttt{buf} is the buffer alignment after $t_j$ is executed. Our implementation of the stack-based model enables batch processing in static computation graphs, similar to \newcite{BowmanEa16}. We maintain a stack of alignment indexes for each element in the batch, which is updated inside the computation graph after each parsing action. This enables minibatch SGD during training as well as efficient batch decoding. We perform greedy decoding. For the stack-based model we ensure that if the stack is empty, the next transition predicted must be a shift. For the other models we ensure that the output is well-formed during post-processing by robustly skipping over out-of-place symbols or inserting missing ones. \section{Related Work} Prior work on MRS parsing predominantly predicts structures in the context of grammar-based parsing, where sentences are parsed to HPSG derivations consistent with the grammar, in this case the ERG~\cite{Flickinger00}. The nodes in the derivation trees are feature structures, from which MRS is extracted through unification. This approach fails to parse sentences for which no valid derivation is found. Maximum entropy models are used to score the derivations in order to find the most likely parse~\cite{ToutanovaMFO05}. This approach is implemented in the PET~\cite{Callmeier00} and ACE\footnote{\url{http://sweaglesw.org/linguistics/ace/}} parsers. There have also been some efforts to develop robust MRS parsers. One proposed approach learns a PCFG grammar to approximate the HPSG derivations~\cite{ZhangK11,ZhangEa14}. MRS is then extracted with robust unification to compose potentially incompatible feature structures, although that still fails for a small proportion of sentences. The model is trained on a large corpus of Wikipedia text parsed with the grammar-based parser. \newcite{Ytrestol12} proposed a transition-based approach to HPSG parsing that produces derivations from which both syntactic and semantic (MRS) parses can be extracted. The parser has an option not to be restricted by the ERG. However, neither of these approaches has results available that can be compared directly to our setup, or generally available implementations. Although AMR parsers produce graphs that are similar in structure to MRS-based graphs, most of them make assumptions that are invalid for MRS, and rely on extensive external AMR-specific resources. \newcite{FlaniganTCDS14} proposed a two-stage parser that first predicts concepts or subgraphs corresponding to sentence segments, and then parses these concepts into a graph structure. However, MRS has a large proportion of abstract nodes that cannot be predicted from short segments, and that interact closely with the graph structure. \newcite{WangXP15,WangXP15a} proposed a custom transition system for AMR parsing that converts dependency trees to AMR graphs, relying on assumptions about the relationship between the two.
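As an aside on the batched stack maintenance described above, the following minimal sketch (our own illustration in plain Python/NumPy, not the static TensorFlow graph used in the parser) keeps one stack of alignment indexes per batch element so that the stack-top encoder states $h_{\textrm{st}_0}$ can be gathered for a whole batch at once; the fallback to position 0 for an empty stack is an assumption made for the sketch.
\begin{verbatim}
import numpy as np

class BatchedAlignmentStacks:
    # One stack of sentence-alignment indexes per batch element.
    def __init__(self, batch_size):
        self.stacks = [[] for _ in range(batch_size)]

    def apply(self, actions, buffer_alignments):
        # actions: per-element transition names;
        # buffer_alignments: alignment of the node currently on the buffer,
        # pushed onto the stack when it is shifted.
        for b, action in enumerate(actions):
            if action == "shift":
                self.stacks[b].append(buffer_alignments[b])
            elif action == "reduce" and self.stacks[b]:
                self.stacks[b].pop()
            # arc transitions leave the stack unchanged

    def top_states(self, H):
        # H: (batch, I, d) encoder states; returns h_{st_0} per element,
        # falling back to position 0 when the stack is empty.
        idx = [s[-1] if s else 0 for s in self.stacks]
        return H[np.arange(len(idx)), idx]

# Usage sketch with a batch of two sentences of length 6 and state size 4.
H = np.random.randn(2, 6, 4)
stacks = BatchedAlignmentStacks(batch_size=2)
stacks.apply(["shift", "shift"], buffer_alignments=[0, 2])
stacks.apply(["right-arc", "reduce"], buffer_alignments=[1, 3])
h_st0 = stacks.top_states(H)
print(h_st0.shape)   # (2, 4)
\end{verbatim}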
\newcite{PustHKMM15} proposed a parser based on syntax-based machine translation (MT), while AMR has also been integrated into CCG Semantic Parsing~\cite{ArtziLZ15,MisraA16}. Recently \newcite{DamonteCS16} and \newcite{PengWGX17} proposed AMR parsers based on neural networks. \section{Incremental Graph Parsing} \label{sec:parsing} We parse sentences to their meaning representations by incrementally predicting semantic graphs together with their alignments. Let $\mathbf{e} = e_1, e_2, \ldots, e_I$ be a tokenized English sentence, $\mathbf{t} = t_1, t_2, \ldots, t_J$ a sequential representation of its graph derivation and $\mathbf{a} = a_1, a_2, \ldots, a_J$ an alignment sequence consisting of integers in the range $1, \ldots, I$. We model the conditional distribution $p(\mathbf{t}, \mathbf{a} | \mathbf{e})$ which decomposes as \[ \prod_{j=1}^{J} p(a_j | \mathbf{(a,t)}_{1:j-1}, \mathbf{e}) p(t_j | \mathbf{a}_{1:j}, \mathbf{t}_{1:j-1}, \mathbf{e}). \] We also predict the end-of-span alignments as a separate sequence $\mathbf{a^{(e)}}$. \subsection{Top-down linearization} We now consider how to linearize the semantic graphs, before defining the neural models to parameterize the parser in section~\ref{sec:models}. The first approach is to linearize a graph as the pre-order traversal of its spanning tree, starting at a designated root node (see Figure~\ref{fig:linear-eds}). Variants of this approach have been proposed for neural constituency parsing~\cite{VinyalsEa15}, logical form prediction~\cite{DongL16,JiaL16} and AMR parsing~\cite{BarzdinsG16,PengWGX17}. In the linearization, labels of edges whose direction is reversed in the spanning tree are marked by adding \texttt{-of}. Edges not included in the spanning tree, referred to as \emph{reentrancies}, are represented with special edges whose dependents are dummy nodes pointing back to the original nodes. Our (potentially lossy) representation encodes these edges by repeating the dependent node labels and alignments, which are recovered heuristically. The alignment does not influence the linearized node ordering. \subsection{Transition-based parsing} Figure~\ref{fig:eds-graph} shows that the semantic graphs we work with can also be interpreted as dependency graphs, as nodes are aligned to sentence tokens. Transition-based parsing~\cite{Nivre08} has been used extensively to predict dependency graphs incrementally. We apply a variant of the arc-eager transition system that has been proposed for graph (as opposed to tree) parsing~\cite{SagaeTsujii08,TitovHMM09,GomezN10} to derive a transition-based parser for deep semantic graphs. In dependency parsing the sentence tokens also act as nodes in the graph, but here we need to generate the nodes incrementally as the transition system proceeds, conditioning the generation on the given sentence. \newcite{DamonteCS16} proposed an arc-eager AMR parser, but their transition system is more narrowly restricted to AMR graphs. The transition system consists of a \emph{stack} of graph nodes being processed and a \emph{buffer}, holding a single node at a time. The main transition actions are \emph{shift}, \emph{reduce}, \emph{left-arc} and \emph{right-arc}. Figure~\ref{fig:transition-table} shows an example transition sequence together with the stack and buffer after each step. The shift transition moves the element on the buffer to the top of the stack, and generates a predicate and its alignment as the next node on the buffer.
Left-arc and right-arc actions add labeled arcs between the buffer and stack top (for DMRS a transition for undirected arcs is included), but do not change the state of the stack or buffer. Finally, reduce pops the top element from the stack, and predicts its end-of-span alignment (if included in the representation). To predict non-planar arcs, we add another transition, which we call \emph{cross-arc}, which first predicts the stack index of a node which is not on top of the stack, adding an arc between the head of the buffer and that node. Another special transition designates the buffer node as the root. To derive an oracle for this transition system, it is necessary to determine the order in which the nodes are generated. We consider two approaches. The first ordering is obtained by performing an in-order traversal of the spanning tree, where the node order is determined by the alignment. In the resulting linearization the only non-planar arcs are reentrancies. The second approach lets the ordering be monotone (non-decreasing) with respect to the alignments, while respecting the in-order ordering for nodes with the same alignment. In an arc-eager oracle, arcs are added greedily, while a reduce action can either be performed as soon as the stack top node has been connected to all its dependents, or delayed until it is required to allow the correct parse to be formed. In our model the oracle delays reduce, where possible, until the end alignment of the stack top node spans the node on the buffer. As the span end alignments often cover phrases that they head (e.g. for quantifiers), this gives a natural interpretation to predicting the span end together with the reduce action. \subsection{Delexicalization and lemma prediction} Each token in MRS annotations is aligned to at most one surface predicate. We decompose surface predicate prediction by predicting candidate lemmas for input tokens, and delexicalized predicates consisting only of sense labels. The full surface predicates are then recovered through the predicted alignments. We extract a dictionary mapping words to lemmas from the ERG lexicon. Candidate lemmas are predicted using this dictionary and, where no dictionary entry is available, with a lemmatizer. The same approach is applied to predict constants, along with additional normalizations such as mapping numbers to digit strings. We use the Stanford CoreNLP toolkit~\cite{ManningEa14} to tokenize and lemmatize sentences, and tag tokens with the Stanford Named Entity Recognizer~\cite{FinkelGM05}. The tokenization is customized to correspond closely to the ERG tokenization; hyphens are removed in a pre-processing step. For AMR we use automatic alignments and the graph topology to classify concepts as surface or abstract. The lexicon is restricted to PropBank~\cite{PalmerGK05} predicates; for other concepts we extract a lexicon from the training data.
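The graph arc-eager transition system described in this section can be summarised with a small executable sketch. This is our own simplification for illustration: the cross-arc and root transitions, undirected DMRS arcs and end-of-span prediction are omitted, and the arc directions follow the usual arc-eager convention (left-arc makes the buffer node the head of the stack top, right-arc the reverse), which is an assumption rather than a detail stated above.
\begin{verbatim}
class GraphArcEagerState:
    # Simplified stack/buffer state for the graph-based arc-eager transition system.
    def __init__(self):
        self.stack = []        # node ids
        self.buffer = None     # single node id (None before the first shift)
        self.nodes = {}        # node id -> (predicate, alignment)
        self.arcs = []         # (head id, argument label, dependent id)
        self.next_id = 0

    def shift(self, predicate, alignment):
        # Move the buffer node onto the stack; generate the next node on the buffer.
        if self.buffer is not None:
            self.stack.append(self.buffer)
        self.buffer = self.next_id
        self.nodes[self.buffer] = (predicate, alignment)
        self.next_id += 1

    def left_arc(self, label):
        # Arc from the buffer node to the stack top; stack and buffer unchanged.
        self.arcs.append((self.buffer, label, self.stack[-1]))

    def right_arc(self, label):
        # Arc from the stack top to the buffer node; stack and buffer unchanged.
        self.arcs.append((self.stack[-1], label, self.buffer))

    def reduce(self):
        self.stack.pop()

# A hypothetical fragment for "He wants to go":
# pron <-ARG1- _want_v_1 -ARG2-> _go_v_1 (the arc from _go_v_1 to pron
# would require the omitted cross-arc transition).
state = GraphArcEagerState()
state.shift("pron", 0)
state.shift("_want_v_1", 1)      # pron moves to the stack, _want_v_1 on the buffer
state.left_arc("ARG1")           # (_want_v_1, ARG1, pron)
state.reduce()
state.shift("_go_v_1", 3)        # _want_v_1 moves to the stack
state.right_arc("ARG2")          # (_want_v_1, ARG2, _go_v_1)
print(state.arcs)
\end{verbatim}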
Robust Incremental Neural Semantic Graph Parsing
1704.07092
Table 1: DMRS development set results for attention-based encoder-decoder models with alignments encoded in the linearization, for top-down (TD) and arc-eager (AE) linearizations, and lexicalized and unlexicalized predicate prediction.
[ "Model", "EDM", "EDM [ITALIC] P", "EDM [ITALIC] A" ]
[ [ "TD lex", "81.44", "85.20", "76.87" ], [ "TD unlex", "81.72", "85.59", "77.04" ], [ "AE lex", "81.35", "85.79", "76.02" ], [ "AE unlex", "82.56", "86.76", "77.54" ] ]
First we report results using standard attention-based encoder-decoders, with the alignments encoded as token strings in the linearization. We compare the top-down (TD) and arc-eager (AE) linearizations, as well as the effect of delexicalizing the predicates (factorizing lemmas out of the linearization and predicting them separately.) In both cases constants are predicted with a dictionary lookup based on the predicted spans. A special label is predicted for predicates not in the ERG lexicon – the words and POS tags that make up those predicates are recovered through the alignments during post-processing.
\section{Meaning Representations} We define a common framework for semantic graphs in which we can place both MRS-based graph representations (DMRS and EDS) and AMR. Sentence meaning is represented with rooted, labelled, connected, directed graphs~\cite{KuhlmannO16}. An example graph is visualized in Figure~\ref{fig:eds-graph}. Node labels are referred to as \emph{predicates} (\emph{concepts} in AMR) and edge labels as \emph{arguments} (AMR \emph{relations}). In addition, \emph{constants}, a special type of node modifier, are used to denote the string values of named entities and numbers (including date and time expressions). Every node is aligned to a token or a continuous span of tokens in the sentence the graph corresponds to. Minimal Recursion Semantics (MRS) is a framework for computational semantics that can be used for parsing or generation~\cite{CopenstakeFPS05}. Instances and eventualities are represented with logical variables. Predicates take arguments with labels from a small, fixed set of roles. Arguments are either logical variables or handles, designated formalism-internal variables. Handle equality constraints support scope underspecification; multiple scope-resolved logical representations can be derived from one MRS structure. A predicate corresponds to its intrinsic argument and is aligned to a character span of the (untokenized) input sentence. Predicates representing named entities or numbers are parameterized by strings. Quantification is expressed through predicates that bind instance variables, rather than through logical operators such as $\exists$ or $\forall$. MRS was designed to be integrated with feature-based grammars such as Head-driven Phrase Structure Grammar (HPSG)~\cite{PollardS94} or Lexical Functional Grammar (LFG)~\cite{KaplanB82}. MRS has been implemented in the English Resource Grammar (ERG)~\cite{Flickinger00}, a broad-coverage, high-precision HPSG grammar. \newcite{OepenL06} proposed Elementary Dependency Structure (EDS), a conversion of MRS to variable-free dependency graphs which drops scope underspecification. \newcite{Copenstake09} extended this conversion to avoid information loss, primarily through richer edge labels. The resulting representation, Dependency MRS (DMRS), can be converted back to the original MRS, or used directly in MRS-based applications~\cite{CopenstakeEa16}. We are interested in the empirical performance of parsers for both of these representations: while EDS is more interpretable as an independent semantic graph representation, DMRS can be related back to underspecified logical forms. A bilexical simplification of EDS has previously been used for semantic dependency parsing~\cite{OepenEa14,OepenEa15}. Figure~\ref{fig:eds-graph} illustrates an EDS graph. MRS makes an explicit distinction between surface and abstract predicates (by convention surface predicates are prefixed by an underscore). Surface predicates consist of a lemma followed by a coarse part-of-speech tag and an optional sense label. Predicates absent from the ERG lexicon are represented by their surface forms and POS tags. We convert the character-level predicate spans given by MRS to token-level spans for parsing purposes, but the representation does not require gold tokenization. Surface predicates usually align with the span of the token(s) they represent, while abstract predicates can span longer segments.
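To ground the common graph framework described above, the following is a minimal sketch of the node, edge, constant and alignment structure assumed throughout the paper (our own schematic, not a DELPH-IN data structure; field names are illustrative).
\begin{verbatim}
from dataclasses import dataclass, field
from typing import Dict, List, Optional, Tuple

@dataclass
class Node:
    predicate: str                  # e.g. "_want_v_1" (surface) or "pron" (abstract)
    span: Tuple[int, int]           # token-level alignment (start, end)
    constant: Optional[str] = None  # string value for named entities, numbers, dates

@dataclass
class SemanticGraph:
    nodes: Dict[int, Node] = field(default_factory=dict)
    edges: List[Tuple[int, str, int]] = field(default_factory=list)  # (head, label, dependent)
    root: Optional[int] = None

    def is_surface(self, node_id: int) -> bool:
        # By convention, surface predicates are prefixed with an underscore.
        return self.nodes[node_id].predicate.startswith("_")

# A two-node fragment with one argument edge (token spans are hypothetical).
g = SemanticGraph()
g.nodes[0] = Node("pron", (0, 1))
g.nodes[1] = Node("_want_v_1", (1, 2))
g.edges.append((1, "ARG1", 0))
g.root = 1
print(g.is_surface(1), g.is_surface(0))   # True False
\end{verbatim}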
In full MRS every predicate is annotated with a set of morphosyntactic features, encoding for example tense, aspect and number information; we do not currently model these features. AMR~\cite{BanarescuEa13} graphs can be represented in the same framework, despite a number of linguistic differences with MRS. Some information annotated explicitly in MRS is latent in AMR, including alignments and the distinction between surface (lexical) and abstract concepts. AMR predicates are based on PropBank~\cite{PalmerGK05}, annotated as lemmas plus sense labels, but they form only a subset of concepts. Other concepts are either English words or special keywords, corresponding to overt lexemes in some cases but not others. \section{Introduction} An important goal of Natural Language Understanding (NLU) is to parse sentences to structured, interpretable meaning representations that can be used for query execution, inference and reasoning. Recently end-to-end models have outperformed traditional pipeline approaches, which predict syntactic or semantic structure as intermediate steps, on NLU tasks such as sentiment analysis and semantic relatedness~\cite{LeM14,KirosEa15}, question answering~\cite{HermannEa15} and textual entailment~\cite{RocktaschelEa15}. However, the linguistic structure used in applications has predominantly been shallow, restricted to bilexical dependencies or trees. In this paper we focus on robust parsing into linguistically deep representations. The main representation that we use is Minimal Recursion Semantics (MRS)~\cite{CopenstakeFMRS95,CopenstakeFPS05}, which serves as the semantic representation of the English Resource Grammar (ERG)~\cite{Flickinger00}. Existing parsers for full MRS (as opposed to bilexical semantic graphs derived from, but simplifying, MRS) are grammar-based, performing disambiguation with a maximum entropy model~\cite{ToutanovaMFO05,ZhangOC07}; this approach has high precision but incomplete coverage. Our main contribution is to develop a fast and robust parser for full MRS-based semantic graphs. We exploit the power of global conditioning enabled by deep learning to predict linguistically deep graphs incrementally. The model does not have access to the underlying ERG or syntactic structures from which the MRS analyses were originally derived. We develop parsers for two graph-based conversions of MRS, Elementary Dependency Structure (EDS)~\cite{OepenL06} and Dependency MRS (DMRS)~\cite{Copenstake09}, of which the latter is inter-convertible with MRS. Abstract Meaning Representation (AMR)~\cite{BanarescuEa13} is a graph-based semantic representation that shares the goals of MRS. Aside from differences in the choice of which linguistic phenomena are annotated, MRS is a compositional representation explicitly coupled with the syntactic structure of the sentence, while AMR does not assume compositionality or alignment with the sentence structure. Recently a number of AMR parsers have been developed~\cite{FlaniganTCDS14,WangXP15,ArtziLZ15,DamonteCS16}, but corpora are still under active development and low inter-annotator agreement places an upper bound of $83\%$ F1 on expected parser performance~\cite{BanarescuEa13}. We apply our model to AMR parsing by introducing structure that is present explicitly in MRS but not in AMR~\cite{BuysB17a}. Parsers based on RNNs have achieved state-of-the-art performance for dependency parsing~\cite{DyerBLMS15,KiperwasserG16} and constituency parsing~\cite{VinyalsEa15,DyerKBS16,CrossH16a}.
Robust Incremental Neural Semantic Graph Parsing
1704.07092
Table 2: DMRS development set results of encoder-decoder models with pointer-based alignment prediction, delexicalized predicates and hard or soft attention.
[ "Model", "EDM", "EDM [ITALIC] P", "EDM [ITALIC] A" ]
[ [ "TD soft", "81.53", "85.32", "76.94" ], [ "TD hard", "82.75", "86.37", "78.37" ], [ "AE hard", "84.65", "87.77", "80.85" ], [ "AE stack", "85.28", "88.38", "81.51" ] ]
The results show that the arc-eager models performs better than those based on top-down representation. For the arc-eager model we use hard attention, due to the natural interpretation of the alignment prediction corresponding to the transition system. The stack-based architecture gives further improvements.
\section{Meaning Representations} We define a common framework for semantic graphs in which we can place both MRS-based graph representations (DMRS and EDS) and AMR. Sentence meaning is represented with rooted, labelled, connected, directed graphs~\cite{KuhlmannO16}. An example graph is visualized in Figure~\ref{fig:eds-graph}. representations. Node labels are referred to as \emph{predicates} (\emph{concepts} in AMR) and edge labels as \emph{arguments} (AMR \emph{relations}). In addition \emph{constants}, a special type of node modifiers, are used to denote the string values of named entities and numbers (including date and time expressions). Every node is aligned to a token or a continuous span of tokens in the sentence the graph corresponds to. Minimal Recursion Semantics (MRS) is a framework for computational semantics that can be used for parsing or generation~\cite{CopenstakeFPS05}. Instances and eventualities are represented with logical variables. Predicates take arguments with labels from a small, fixed set of roles. Arguments are either logical variables or handles, designated formalism-internal variables. Handle equality constraints support scope underspecification; multiple scope-resolved logical representations can be derived from one MRS structure. A predicate corresponds to its intrinsic argument and is aligned to a character span of the (untokenized) input sentence. Predicates representing named entities or numbers are parameterized by strings. Quantification is expressed through predicates that bound instance variables, rather than through logical operators such as $\exists$ or $\forall$. MRS was designed to be integrated with feature-based grammars such as Head-driven Phrase Structure Grammar (HPSG)~\cite{PollardS94} or Lexical Functional Grammar (LFG)~\cite{KaplanB82}. MRS has been implement the English Resource Grammar (ERG)~\cite{Flickinger00}, a broad-coverage high-precision HPSG grammar. \newcite{OepenL06} proposed Elementary Dependency Structure (EDS), a conversion of MRS to variable-free dependency graphs which drops scope underspecification. \newcite{Copenstake09} extended this conversion to avoid information loss, primarily through richer edge labels. The resulting representation, Dependency MRS (DMRS), can be converted back to the original MRS, or used directly in MRS-based applications~\cite{CopenstakeEa16}. We are interested in the empirical performance of parsers for both of these representations: while EDS is more interpretable as an independent semantic graph representation, DMRS can be related back to underspecified logical forms. A bilexical simplification of EDS has previously been used for semantic dependency parsing~\cite{OepenEa14,OepenEa15}. Figure~\ref{fig:eds-graph} illustrates an EDS graph. MRS makes an explicit distinction between surface and abstract predicates (by convention surface predicates are prefixed by an underscore). Surface predicates consist of a lemma followed by a coarse part-of-speech tag and an optional sense label. Predicates absent from the ERG lexicon are represented by their surface forms and POS tags. We convert the character-level predicate spans given by MRS to token-level spans for parsing purposes, but the representation does not require gold tokenization. Surface predicates usually align with the span of the token(s) they represent, while abstract predicates can span longer segments. 
In full MRS every predicate is annotated with a set of morphosyntactic features, encoding for example tense, aspect and number information; we do not currently model these features. AMR~\cite{BanarescuEa13} graphs can be represented in the same framework, despite a number of linguistic differences with MRS. Some information annotated explicitly in MRS is latent in AMR, including alignments and the distinction between surface (lexical) and abstract concepts. AMR predicates are based on PropBank~\cite{PalmerGK05}, annotated as lemmas plus sense labels, but they form only a subset of concepts. Other concepts are either English words or special keywords, corresponding to overt lexemes in some cases but not others. \section{Introduction} An important goal of Natural Language Understanding (NLU) is to parse sentences to structured, interpretable meaning representations that can be used for query execution, inference and reasoning. Recently end-to-end models have outperformed traditional pipeline approaches, which predict syntactic or semantic structure as intermediate steps, on NLU tasks such as sentiment analysis and semantic relatedness~\cite{LeM14,KirosEa15}, question answering~\cite{HermannEa15} and textual entailment~\cite{RocktaschelEa15}. However, the linguistic structure used in applications has predominantly been shallow, restricted to bilexical dependencies or trees. In this paper we focus on robust parsing into linguistically deep representations. The main representation that we use is Minimal Recursion Semantics (MRS)~\cite{CopenstakeFMRS95,CopenstakeFPS05}, which serves as the semantic representation of the English Resource Grammar (ERG)~\cite{Flickinger00}. Existing parsers for full MRS (as opposed to bilexical semantic graphs derived from, but simplifying, MRS) are grammar-based, performing disambiguation with a maximum entropy model~\cite{ToutanovaMFO05,ZhangOC07}; this approach has high precision but incomplete coverage. Our main contribution is to develop a fast and robust parser for full MRS-based semantic graphs. We exploit the power of global conditioning enabled by deep learning to predict linguistically deep graphs incrementally. The model does not have access to the underlying ERG or syntactic structures from which the MRS analyses were originally derived. We develop parsers for two graph-based conversions of MRS, Elementary Dependency Structure (EDS)~\cite{OepenL06} and Dependency MRS (DMRS)~\cite{Copenstake09}, of which the latter is inter-convertible with MRS. Abstract Meaning Representation (AMR)~\cite{BanarescuEa13} is a graph-based semantic representation that shares the goals of MRS. Aside from differences in the choice of which linguistic phenomena are annotated, MRS is a compositional representation explicitly coupled with the syntactic structure of the sentence, while AMR does not assume compositionality or alignment with the sentence structure. Recently a number of AMR parsers have been developed~\cite{FlaniganTCDS14,WangXP15,ArtziLZ15,DamonteCS16}, but corpora are still under active development and low inter-annotator agreement places an upper bound of $83\%$ F1 on expected parser performance~\cite{BanarescuEa13}. We apply our model to AMR parsing by introducing structure that is present explicitly in MRS but not in AMR~\cite{BuysB17a}. Parsers based on RNNs have achieved state-of-the-art performance for dependency parsing~\cite{DyerBLMS15,KiperwasserG16} and constituency parsing~\cite{VinyalsEa15,DyerKBS16,CrossH16a}.
One of the main reasons for the prevalence of bilexical dependencies and tree-based representations is that they can be parsed with efficient and well-understood algorithms. However, one of the key advantages of deep learning is the ability to make predictions conditioned on unbounded contexts encoded with RNNs; this enables us to predict more complex structures without increasing algorithmic complexity. In this paper we show how to perform linguistically deep parsing with RNNs. Our parser is based on a transition system for semantic graphs. However, instead of generating arcs over an ordered, fixed set of nodes (the words in the sentence), we generate the nodes and their alignments jointly with the transition actions. We use a graph-based variant of the arc-eager transition system. The sentence is encoded with a bidirectional RNN. The transition sequence, seen as a graph linearization, can be predicted with any encoder-decoder model, but we show that using hard attention, predicting the alignments with a pointer network and conditioning explicitly on stack-based features improves performance. In order to deal with data sparsity, candidate lemmas are predicted as a pre-processing step, so that the RNN decoder predicts unlexicalized node labels. We evaluate our parser on DMRS, EDS and AMR graphs. Our model architecture improves performance from $79.68\%$ to $84.16\%$ F1 over an attention-based encoder-decoder baseline. Although our parser is less accurate than a high-precision grammar-based parser on a test set of sentences parsable by that grammar, incremental prediction and GPU batch processing enable it to parse $529$ tokens per second, against $7$ tokens per second for the grammar-based parser. On AMR parsing our model obtains $60.11\%$ Smatch. \begin{abstract} Parsing sentences to linguistically-expressive semantic representations is a key goal of Natural Language Processing. Yet statistical parsing has focussed almost exclusively on bilexical dependencies or domain-specific logical forms. We propose a neural encoder-decoder transition-based parser which is the first full-coverage semantic graph parser for Minimal Recursion Semantics (MRS). The model architecture uses stack-based embedding features, predicting graphs jointly with unlexicalized predicates and their token alignments. Our parser is more accurate than attention-based baselines on MRS, and on an additional Abstract Meaning Representation (AMR) benchmark, and GPU batch processing makes it an order of magnitude faster than a high-precision grammar-based parser. Further, the $86.69\%$ Smatch score of our MRS parser is higher than the upper bound on AMR parsing, making MRS an attractive choice as a semantic representation.\footnote{Code, models and data preparation scripts are available at \url{https://github.com/janmbuys/DeepDeepParser}.} \end{abstract} \section{Experiments} \label{sec:experiments} \subsection{Data} DeepBank~\cite{FlickingerZK12} is an HPSG and MRS annotation of the Penn Treebank Wall Street Journal (WSJ) corpus. It was developed following an approach known as dynamic treebanking~\cite{OepenFTM04} that couples treebank annotation with grammar development, in this case of the ERG. This approach has been shown to lead to high inter-annotator agreement: $0.94$ against $0.71$ for AMR~\cite{BenderFOPC15}.
Parses are only provided for sentences for which the ERG has an analysis acceptable to the annotator -- this means that we cannot evaluate parsing accuracy for sentences which the ERG cannot parse (approximately $15\%$ of the original corpus). We use DeepBank version $1.1$, corresponding to ERG \texttt{1214}\footnote{\url{http://svn.delph-in.net/erg/tags/1214/}}, following the suggested split of sections $0$ to $19$ as training data, $20$ for development and $21$ for testing. The gold-annotated training data consists of 35,315 sentences. We use the LOGON environment\footnote{\url{http://moin.delph-in.net/LogonTop}} and the pyDelphin library\footnote{\url{https://github.com/delph-in/pydelphin}} to extract DMRS and EDS graphs. For AMR parsing we use LDC2015E86, the dataset released for the SemEval 2016 AMR parsing Shared Task~\cite{May16}. This data includes newswire, weblog and discussion forum text. The training set has 16,144 sentences. We obtain alignments using the rule-based JAMR aligner~\cite{FlaniganTCDS14}. \subsection{Evaluation} \newcite{DridanO11} proposed an evaluation metric called Elementary Dependency Matching (EDM) for MRS-based graphs. EDM computes the F1-score of tuples of predicates and arguments. A predicate tuple consists of the label and character span of a predicate, while an argument tuple consists of the character spans of the head and dependent nodes of the relation, together with the argument label. In order to tolerate subtle tokenization differences with respect to punctuation, we allow span pairs whose ends differ by one character to be matched. The Smatch metric~\cite{CaiK13}, proposed for evaluating AMR graphs, also measures graph overlap, but does not rely on sentence alignments to determine the correspondences between graph nodes. Smatch is instead computed by performing inference over graph alignments to estimate the maximum F1-score obtainable from a one-to-one matching between the predicted and gold graph nodes. \subsection{Model setup} Our parser is implemented in TensorFlow~\cite{AbadiEa15}. For training we use Adam~\cite{KingmaB14} with learning rate $0.01$ and batch-size $64$. Gradient norms are clipped to $5.0$~\cite{PascanuMB13}. We use single-layer LSTMs with dropout of $0.3$ (tuned on the development set) on input and output connections. We use encoder and decoder embeddings of size $256$, and POS and NE tag embeddings of size $32$. For DMRS and EDS graphs the hidden unit size is set to $256$; for AMR it is $128$. This configuration, found using grid search and heuristic search within the range of models that fit into a single GPU, gave the best performance on the development set under multiple graph linearizations. Encoder word embeddings are initialized (in the first 100 dimensions) with pre-trained order-sensitive embeddings~\cite{LingDBT15}. Singletons in the encoder input are replaced with an unknown word symbol with probability $0.5$ for each iteration. \subsection{MRS parsing results} We compare different linearizations and model architectures for parsing DMRS on the development data, showing that our approach is more accurate than baseline neural approaches. We report EDM scores, including scores for predicate (EDM$_P$) and argument (EDM$_A$) prediction. First we report results using standard attention-based encoder-decoders, with the alignments encoded as token strings in the linearization (Table~\ref{tab:dmrs-dev-delex}).
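Before continuing with the model comparison, the EDM metric described above can be sketched as follows (a simplified Python illustration with greedy one-to-one matching, not the official evaluation script; the tuple formats are our assumptions):
\begin{verbatim}
def span_match(a, b, tol=1):
    # character spans match if both endpoints differ by at most `tol`
    return abs(a[0] - b[0]) <= tol and abs(a[1] - b[1]) <= tol

def match_predicate(g, p):
    # predicate tuples: (label, (char_start, char_end))
    return g[0] == p[0] and span_match(g[1], p[1])

def match_argument(g, p):
    # argument tuples: (head_span, dependent_span, label)
    return g[2] == p[2] and span_match(g[0], p[0]) and span_match(g[1], p[1])

def f1(gold, predicted, match):
    used, tp = set(), 0
    for g in gold:
        for i, p in enumerate(predicted):
            if i not in used and match(g, p):
                used.add(i)
                tp += 1
                break
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0
\end{verbatim}
In this sketch EDM$_P$ and EDM$_A$ correspond to calling \texttt{f1} with \texttt{match\_predicate} and \texttt{match\_argument} respectively, while the overall EDM score pools both tuple types.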
We compare the top-down (TD) and arc-eager (AE) linearizations, as well as the effect of delexicalizing the predicates (factorizing lemmas out of the linearization and predicting them separately). In both cases constants are predicted with a dictionary lookup based on the predicted spans. A special label is predicted for predicates not in the ERG lexicon -- the words and POS tags that make up those predicates are recovered through the alignments during post-processing. The arc-eager unlexicalized representation gives the best performance, even though the model has to represent the transition system stack through its recurrent hidden states without any supervision of the transition semantics. The unlexicalized models are more accurate, mostly due to their ability to generalize to sparse or unseen predicates occurring in the lexicon. For the arc-eager representation, the oracle EDM is $99\%$ for the lexicalized representation and $98.06\%$ for the delexicalized representation. The remaining errors are mostly due to discrepancies between the tokenization used by our system and the ERG tokenization. The unlexicalized models are also faster to train, as the decoder's output vocabulary is much smaller, reducing the expense of computing softmaxes over large vocabularies. Next we consider models with delexicalized linearizations that predict the alignments with pointer networks, contrasting soft and hard attention models (Table~\ref{tab:dmrs-dev-point}). The results show that the arc-eager models perform better than those based on the top-down representation. For the arc-eager model we use hard attention, due to the natural interpretation of the alignment prediction corresponding to the transition system. The stack-based architecture gives further improvements. When comparing the effect of different predicate orderings for the arc-eager model, we find that the monotone ordering performs $0.44$ EDM better than the in-order ordering, despite having to parse more non-planar dependencies. We also trained models that only predict predicates (in monotone order) together with their start spans. The hard attention model obtains $91.36\%$ F1 on predicates together with their start spans with the unlexicalized model, compared to $88.22\%$ for lexicalized predicates and $91.65\%$ for the full parsing model. Table \ref{tab:dmrs-test} reports test set results for various evaluation metrics. Start EDM is calculated by requiring only the start of the alignment spans to match, not the ends. We compare the performance of our baseline and stack-based models against ACE, the ERG-based parser. Despite the promising performance of the model, a gap remains between the accuracy of our parser and ACE. One reason for this is that the test set sentences will arguably be easier for ACE to parse, as the test set was restricted to sentences covered by the same grammar that ACE uses. The EDM metrics excluding end-span prediction (Start EDM) show that our parser has relatively more difficulty predicting span ends than the grammar-based parser. We also evaluate the speed of our model compared with ACE. For the unbatched version of our model, the stack-based parser parses $41.63$ tokens per second, while the batched implementation parses $529.42$ tokens per second using a batch size of $128$. In comparison, the setting of ACE for which we report accuracies parses $7.47$ tokens per second.
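For reference, the relative speeds implied by these throughput figures (computed directly from the numbers above):
\begin{verbatim}
unbatched_rnn = 41.63   # tokens/second, stack-based parser, no batching
batched_rnn   = 529.42  # tokens/second, batch size 128
ace           = 7.47    # tokens/second, grammar-based ACE parser

print(f"batching speed-up over unbatched RNN: {batched_rnn / unbatched_rnn:.1f}x")  # ~12.7x
print(f"batched RNN vs. ACE:                  {batched_rnn / ace:.1f}x")            # ~70.9x
print(f"unbatched RNN vs. ACE:                {unbatched_rnn / ace:.1f}x")          # ~5.6x
\end{verbatim}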
By restricting the memory usage of ACE, which restricts its coverage, we see that ACE can parse $11.07$ tokens per second at $87.7\%$ coverage, and $15.11$ tokens per second at $77.8\%$ coverage. Finally we report results for parsing EDS (Table~\ref{tab:eds-test}). The EDS parsing task is slightly simpler than DMRS, due to the absence of rich argument labels and additional graph edges that allow the recovery of full MRS. We see that for ACE the accuracies are very similar, while for our model EDS parsing is more accurate on the EDM metrics. We hypothesize that most of the extra information in DMRS can be obtained through the ERG, to which ACE has access but our model does not. An EDS corpus which consists of about $95\%$ of the DeepBank data has also been released\footnote{\url{http://sdp.delph-in.net/osdp-12.tgz}}, with the goal of enabling comparison with other semantic graph parsing formalisms, including CCG dependencies and Prague Semantic Dependencies, on the same data set~\cite{KuhlmannO16}. On this corpus our model obtains $85.87$ EDM and $85.49$ Smatch. \subsection{AMR parsing} We apply the same approach to AMR parsing. Results on the development set are given in Table~\ref{tab:amr-dev}. The arc-eager-based models again give better performance, mainly due to improved concept prediction accuracy. However, concept prediction remains the most important weakness of the model; \newcite{DamonteCS16} report that state-of-the-art AMR parsers score $83\%$ on concept prediction. We report test set results in Table~\ref{tab:amr-test}. Our best neural model outperforms the baseline JAMR parser~\cite{FlaniganTCDS14}, but still lags behind the performance of state-of-the-art AMR parsers such as CAMR~\cite{WangEa16} and AMR Eager~\cite{DamonteCS16}. These models make extensive use of external resources, including syntactic parsers and semantic role labellers. Our attention-based encoder-decoder model already outperforms previous sequence-to-sequence AMR parsers~\cite{BarzdinsG16,PengWGX17}, and the arc-eager model boosts accuracy further. Our model also outperforms a Synchronous Hyperedge Replacement Grammar model~\cite{PengG16} which is comparable as it does not make extensive use of external resources. \documentclass[11pt,a4paper]{article} \usepackage[hyperref]{acl2017} \aclfinalcopy % Uncomment this line for the final submission \newcommand{\pb}[1]{\textcolor{red}{\bf\small [#1 --PB]}} \newcommand\BibTeX{B{\sc ib}\TeX} \title{Robust Incremental Neural Semantic Graph Parsing} \author{Jan Buys$^1$ and Phil Blunsom$^{1,2}$ \\ $^1$Department of Computer Science, University of Oxford \qquad $^2$DeepMind \\ \texttt{\{jan.buys,phil.blunsom\}@cs.ox.ac.uk} \\ } \date{} \begin{document} \maketitle \input{abstract} \input{introduction} \input{graphs} \input{parsing} \input{models} \input{related-work} \input{experiments} \section{Conclusion} In this paper we advance the state of parsing by employing deep learning techniques to parse sentences to linguistically expressive semantic representations that have not previously been parsed in an end-to-end fashion. We presented a robust, wide-coverage parser for MRS that is faster than existing parsers and amenable to batch processing. We believe that there are many future avenues to explore to further increase the accuracy of such parsers, including different training objectives, more structured architectures and semi-supervised learning. \section*{Acknowledgments} The first author gratefully acknowledges the financial support of the Clarendon Fund and the Skye Foundation.
We thank Stephan Oepen for feedback and help with data preparation, and members of the Oxford NLP group for valuable discussions. \bibliographystyle{acl} \end{document} \section{Encoder-Decoder Models} \label{sec:models} \subsection{Sentence encoder} The sentence $\mathbf{e}$ is encoded with a bidirectional RNN. We use a standard LSTM architecture without peephole connections~\cite{JozefowiczZS15}. For every token $e$ we embed its word, POS tag and named entity (NE) tag as vectors $x_w$, $x_t$ and $x_n$, respectively. The embeddings are concatenated and passed through a linear transformation \[ g(e) = W^{(x)} [x_w; x_t; x_n] + b^{(x)}, \] such that $g(e)$ has the same dimension as the LSTM. Each input position $i$ is represented by a hidden state $h_i$, which is the concatenation of its forward and backward LSTM state vectors. \subsection{Hard attention decoder} We model the alignment of graph nodes to sentence tokens, $\mathbf{a}$, as a random variable. For the arc-eager model, $a_j$ corresponds to the alignment of the node on the buffer after action $t_j$ is executed. The distribution of $t_j$ is over all transitions and predicates (corresponding to shift transitions), predicted with a single softmax. The parser output is predicted by an RNN decoder. Let $s_j$ be the decoder hidden state at output position $j$. We initialize $s_0$ with the final state of the backward encoder. The alignment is predicted with a pointer network~\cite{VinyalsFJ15}. The logits are computed with an MLP scoring the decoder hidden state against each of the encoder hidden states (for $i = 1, \ldots, I$), \[ u_j^i = w^T \tanh(W^{(1)} h_i + W^{(2)} s_j). \] The alignment distribution is then estimated by \[ p(a_j = i | \mathbf{a}_{1:j-1}, \mathbf{t}_{1:j-1}, \mathbf{e}) = \mathrm{softmax}(u_j^i). \] To predict the next transition $t_j$, the output vector is conditioned on the encoder state vector $h_{a_j}$, corresponding to the alignment: \begin{align*} o_j &= W^{(3)} s_j + W^{(4)} h_{a_j} \\ v_j &= R^{(d)} o_j + b^{(d)}, \end{align*} where $R^{(d)}$ and $b^{(d)}$ are the output representation matrix and bias vector, respectively. The transition distribution is then given by \[ p(t_j | \mathbf{a}_{1:j}, \mathbf{t}_{1:j-1}, \mathbf{e}) = \mathrm{softmax}(v_j). \] Let $e(t)$ be the embedding of decoder symbol $t$. The RNN state at the next time-step is computed as \begin{align*} d_{j+1} &= W^{(5)} e(t_{j}) + W^{(6)} h_{a_j} \\ s_{j+1} &= RNN(d_{j+1}, s_{j}). \end{align*} The end-of-span alignment $a_j^{(e)}$ for MRS-based graphs is predicted with another pointer network. The end alignment of a node is predicted only when the node is reduced from the stack; this alignment is therefore not observed at every time-step, and it is not fed back into the model. The hard attention approach, based on supervised alignments, can be contrasted to soft attention, which learns to attend over the input without supervision. The attention weights are computed as in the hard attention model, $\alpha_j^i = \mathrm{softmax}(u_j^i)$. However, instead of making a hard selection, a weighted average over the encoder vectors is computed as $q_j = \sum_{i=1}^{i=I} \alpha_j^i h_i$. This vector is used instead of $h_{a_j}$ for prediction and feeding to the next time-step. \subsection{Stack-based model} We extend the hard attention model to include features based on the transition system stack. These features are embeddings from the bidirectional RNN encoder, corresponding to the alignments of the nodes on the buffer and on top of the stack.
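Before detailing the stack-based extension, the hard attention decoder step defined by the equations above can be sketched in Python (a minimal NumPy illustration with greedy decoding; parameter shapes, the LSTM cell and the embedding lookup are placeholders rather than the TensorFlow implementation used in this work):
\begin{verbatim}
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def hard_attention_step(s_j, H, params, lstm_cell, embed):
    """One decoder step: choose an alignment with the pointer network, then
    predict a transition conditioned on the aligned encoder state.
    s_j: decoder state vector; H: encoder states h_1..h_I as matrix rows."""
    W1, W2, w = params["W1"], params["W2"], params["w"]
    W3, W4, W5, W6 = params["W3"], params["W4"], params["W5"], params["W6"]
    R, b = params["R"], params["b"]

    # pointer-network logits: u_j^i = w^T tanh(W1 h_i + W2 s_j)
    u = np.array([w @ np.tanh(W1 @ h_i + W2 @ s_j) for h_i in H])
    a_j = int(np.argmax(softmax(u)))          # hard (greedy) alignment choice

    # transition scores, conditioned on the aligned encoder state h_{a_j}
    o_j = W3 @ s_j + W4 @ H[a_j]
    t_j = int(np.argmax(softmax(R @ o_j + b)))

    # input to the next decoder state; the stack-based model adds extra
    # W7/W8 terms for the stack-top encoder state (see the next subsection)
    d_next = W5 @ embed(t_j) + W6 @ H[a_j]
    s_next = lstm_cell(d_next, s_j)
    return a_j, t_j, s_next
\end{verbatim}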
The use of stack-based features is similar to the approach proposed by \newcite{KiperwasserG16} and \newcite{CrossH16} for dependency parsing, although they do not use RNN decoders. To implement these features the layer that computes the output vector is extended to \[ o_j = W^{(3)} s_j + W^{(4)} h_{a_j} + W^{(7)} h_{\textrm{st}_0}, \] where $\texttt{st}_0$ is the sentence alignment index of the element on top of the stack. The input layer to the next RNN time-step is similarly extended to \[ d_{j+1} = W^{(5)} e(t_{j}) + W^{(6)} h_{\textrm{buf}} + W^{(8)} h_{\textrm{st}_0}, \] where \texttt{buf} is the buffer alignment after $t_j$ is executed. Our implementation of the stack-based model enables batch processing in static computation graphs, similar to \newcite{BowmanEa16}. We maintain a stack of alignment indexes for each element in the batch, which is updated inside the computation graph after each parsing action. This enables minibatch SGD during training as well as efficient batch decoding. We perform greedy decoding. For the stack-based model we ensure that if the stack is empty, the next transition predicted has to be a shift. For the other models we ensure that the output is well-formed during post-processing by robustly skipping over out-of-place symbols or inserting missing ones. \section{Related Work} Prior work on MRS parsing predominantly predicts structures in the context of grammar-based parsing, where sentences are parsed to HPSG derivations consistent with the grammar, in this case the ERG~\cite{Flickinger00}. The nodes in the derivation trees are feature structures, from which MRS is extracted through unification. This approach fails to parse sentences for which no valid derivation is found. Maximum entropy models are used to score the derivations in order to find the most likely parse~\cite{ToutanovaMFO05}. This approach is implemented in the PET~\cite{Callmeier00} and ACE\footnote{\url{http://sweaglesw.org/linguistics/ace/}} parsers. There have also been some efforts to develop robust MRS parsers. One proposed approach learns a PCFG grammar to approximate the HPSG derivations~\cite{ZhangK11,ZhangEa14}. MRS is then extracted with robust unification to compose potentially incompatible feature structures, although that still fails for a small proportion of sentences. The model is trained on a large corpus of Wikipedia text parsed with the grammar-based parser. \newcite{Ytrestol12} proposed a transition-based approach to HPSG parsing that produces derivations from which both syntactic and semantic (MRS) parses can be extracted. The parser has an option not to be restricted by the ERG. However, neither of these approaches has results available that can be compared directly to our setup, nor a generally available implementation. Although AMR parsers produce graphs that are similar in structure to MRS-based graphs, most of them make assumptions that are invalid for MRS, and rely on extensive external AMR-specific resources. \newcite{FlaniganTCDS14} proposed a two-stage parser that first predicts concepts or subgraphs corresponding to sentence segments, and then parses these concepts into a graph structure. However, MRS has a large proportion of abstract nodes that cannot be predicted from short segments and that interact closely with the graph structure. \newcite{WangXP15,WangXP15a} proposed a custom transition system for AMR parsing that converts dependency trees to AMR graphs, relying on assumptions about the relationship between the two.
\newcite{PustHKMM15} proposed a parser based on syntax-based machine translation (MT), while AMR has also been integrated into CCG Semantic Parsing~\cite{ArtziLZ15,MisraA16}. Recently \newcite{DamonteCS16} and \newcite{PengWGX17} proposed AMR parsers based on neural networks. \section{Incremental Graph Parsing} \label{sec:parsing} We parse sentences to their meaning representations by incrementally predicting semantic graphs together with their alignments. Let $\mathbf{e} = e_1, e_2, \ldots, e_I$ be a tokenized English sentence, $\mathbf{t} = t_1, t_2, \ldots, t_J$ a sequential representation of its graph derivation and $\mathbf{a} = a_1, a_2, \ldots, a_J$ an alignment sequence consisting of integers in the range $1, \ldots, I$. We model the conditional distribution $p(\mathbf{t}, \mathbf{a} | \mathbf{e})$ which decomposes as \[ \prod_{j=1}^{J} p(a_j | \mathbf{a}_{1:j-1}, \mathbf{t}_{1:j-1}, \mathbf{e}) \, p(t_j | \mathbf{a}_{1:j}, \mathbf{t}_{1:j-1}, \mathbf{e}). \] We also predict the end-of-span alignments as a separate sequence $\mathbf{a^{(e)}}$. \subsection{Top-down linearization} We now consider how to linearize the semantic graphs, before defining the neural models to parameterize the parser in Section~\ref{sec:models}. The first approach is to linearize a graph as the pre-order traversal of its spanning tree, starting at a designated root node (see Figure~\ref{fig:linear-eds}). Variants of this approach have been proposed for neural constituency parsing~\cite{VinyalsEa15}, logical form prediction~\cite{DongL16,JiaL16} and AMR parsing~\cite{BarzdinsG16,PengWGX17}. In the linearization, labels of edges whose direction is reversed in the spanning tree are marked by adding \texttt{-of}. Edges not included in the spanning tree, referred to as \emph{reentrancies}, are represented with special edges whose dependents are dummy nodes pointing back to the original nodes. Our (potentially lossy) representation encodes these edges by repeating the dependent node labels and alignments, which are recovered heuristically. The alignment does not influence the linearized node ordering. \subsection{Transition-based parsing} Figure~\ref{fig:eds-graph} shows that the semantic graphs we work with can also be interpreted as dependency graphs, as nodes are aligned to sentence tokens. Transition-based parsing~\cite{Nivre08} has been used extensively to predict dependency graphs incrementally. We apply a variant of the arc-eager transition system that has been proposed for graph (as opposed to tree) parsing~\cite{SagaeTsujii08,TitovHMM09,GomezN10} to derive a transition-based parser for deep semantic graphs. In dependency parsing the sentence tokens also act as nodes in the graph, but here we need to generate the nodes incrementally as the transition system proceeds, conditioning the generation on the given sentence. \newcite{DamonteCS16} proposed an arc-eager AMR parser, but their transition system is more narrowly restricted to AMR graphs. The transition system consists of a \emph{stack} of graph nodes being processed and a \emph{buffer}, holding a single node at a time. The main transition actions are \emph{shift}, \emph{reduce}, \emph{left-arc} and \emph{right-arc}. Figure~\ref{fig:transition-table} shows an example transition sequence together with the stack and buffer after each step. The shift transition moves the element on the buffer to the top of the stack, and generates a predicate and its alignment as the next node on the buffer.
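The transition system just described can be summarised with the following Python sketch (illustrative only; node generation and action scoring are left to the neural model, and the head/dependent direction of the arc actions follows the usual arc-eager convention):
\begin{verbatim}
class ArcEagerGraphState:
    """Stack of generated node indices plus a single-node buffer;
    arcs are collected as (head, dependent, label) triples."""
    def __init__(self, first_node):
        self.stack = []
        self.buffer = first_node   # index of the node currently on the buffer
        self.arcs = []

    def shift(self, new_node):
        # move the buffer node onto the stack and place a newly generated
        # node (predicate plus alignment) on the buffer
        self.stack.append(self.buffer)
        self.buffer = new_node

    def left_arc(self, label):
        # arc between buffer and stack top (buffer node as head);
        # stack and buffer are unchanged
        self.arcs.append((self.buffer, self.stack[-1], label))

    def right_arc(self, label):
        # arc between stack top and buffer (stack-top node as head)
        self.arcs.append((self.stack[-1], self.buffer, label))

    def reduce(self):
        # pop the stack top; its end-of-span alignment is predicted here
        return self.stack.pop()
\end{verbatim}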
Left-arc and right-arc actions add labeled arcs between the buffer and stack top (for DMRS a transition for undirected arcs is included), but do not change the state of the stack or buffer. Finally, reduce pops the top element from the stack, and predicts its end-of-span alignment (if included in the representation). To predict non-planar arcs, we add another transition, which we call \emph{cross-arc}; it first predicts the stack index of a node that is not on top of the stack, and then adds an arc between the node on the buffer and that node. Another special transition designates the buffer node as the root. To derive an oracle for this transition system, it is necessary to determine the order in which the nodes are generated. We consider two approaches. The first ordering is obtained by performing an in-order traversal of the spanning tree, where the node order is determined by the alignment. In the resulting linearization the only non-planar arcs are reentrancies. The second approach lets the ordering be monotone (non-decreasing) with respect to the alignments, while respecting the in-order ordering for nodes with the same alignment. In an arc-eager oracle, arcs are added greedily, while a reduce action can either be performed as soon as the stack top node has been connected to all its dependents, or delayed until a reduce is required to allow the correct parse to be formed. In our model the oracle delays reduce, where possible, until the end alignment of the stack top node spans the node on the buffer. As the span end alignments often cover phrases that they head (e.g. for quantifiers), this gives a natural interpretation to predicting the span end together with the reduce action. \subsection{Delexicalization and lemma prediction} Each token in MRS annotations is aligned to at most one surface predicate. We decompose surface predicate prediction by predicting candidate lemmas for input tokens, and delexicalized predicates consisting only of sense labels. The full surface predicates are then recovered through the predicted alignments. We extract a dictionary mapping words to lemmas from the ERG lexicon. Candidate lemmas are predicted using this dictionary and, where no dictionary entry is available, with a lemmatizer. The same approach is applied to predict constants, along with additional normalizations such as mapping numbers to digit strings. We use the Stanford CoreNLP toolkit~\cite{ManningEa14} to tokenize and lemmatize sentences, and tag tokens with the Stanford Named Entity Recognizer~\cite{FinkelGM05}. The tokenization is customized to correspond closely to the ERG tokenization; hyphens are removed in a pre-processing step. For AMR we use automatic alignments and the graph topology to classify concepts as surface or abstract. The lexicon is restricted to PropBank~\cite{PalmerGK05} predicates; for other concepts we extract a lexicon from the training data.
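As an illustration of the lemma prediction step (a sketch only: NLTK's WordNet lemmatizer stands in for the CoreNLP lemmatizer used in this work, the dictionary is assumed to be extracted from the ERG lexicon, and the exact surface-predicate string format is an assumption):
\begin{verbatim}
from nltk.stem import WordNetLemmatizer

_fallback = WordNetLemmatizer()

def candidate_lemma(token, erg_lemma_dict):
    """Look the token up in the ERG-derived word-to-lemma dictionary;
    fall back to a generic lemmatizer when no entry is available."""
    word = token.lower()
    if word in erg_lemma_dict:
        return erg_lemma_dict[word]
    return _fallback.lemmatize(word)

def surface_predicate(delexicalized, lemma):
    """Recombine a delexicalized prediction (e.g. "_v_1", sense label only)
    with the candidate lemma of the aligned token ("want" -> "_want_v_1")."""
    return "_" + lemma + delexicalized
\end{verbatim}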
Robust Incremental Neural Semantic Graph Parsing
1704.07092
Table 3: DMRS parsing test set results, comparing the standard top-down attention-based and arc-eager stack-based RNN models to the grammar-based ACE parser.
[ "Model", "TD RNN", "AE RNN", "ACE" ]
[ [ "EDM", "79.68", "84.16", "89.64" ], [ "EDM [ITALIC] P", "83.36", "87.54", "92.08" ], [ "EDM [ITALIC] A", "75.16", "80.10", "86.77" ], [ "Start EDM", "84.44", "87.81", "91.91" ], [ "Start EDM [ITALIC] A", "80.93", "85.61", "89.28" ], [ "Smatch", "85.28", "86.69", "93.50" ] ]
Start EDM is calculated by requiring only the start of the alignment spans to match, not the ends. We compare the performance of our baseline and stack-based models against ACE, the ERG-based parser.
\section{Meaning Representations} We define a common framework for semantic graphs in which we can place both MRS-based graph representations (DMRS and EDS) and AMR. Sentence meaning is represented with rooted, labelled, connected, directed graphs~\cite{KuhlmannO16}. An example graph is visualized in Figure~\ref{fig:eds-graph}. representations. Node labels are referred to as \emph{predicates} (\emph{concepts} in AMR) and edge labels as \emph{arguments} (AMR \emph{relations}). In addition \emph{constants}, a special type of node modifiers, are used to denote the string values of named entities and numbers (including date and time expressions). Every node is aligned to a token or a continuous span of tokens in the sentence the graph corresponds to. Minimal Recursion Semantics (MRS) is a framework for computational semantics that can be used for parsing or generation~\cite{CopenstakeFPS05}. Instances and eventualities are represented with logical variables. Predicates take arguments with labels from a small, fixed set of roles. Arguments are either logical variables or handles, designated formalism-internal variables. Handle equality constraints support scope underspecification; multiple scope-resolved logical representations can be derived from one MRS structure. A predicate corresponds to its intrinsic argument and is aligned to a character span of the (untokenized) input sentence. Predicates representing named entities or numbers are parameterized by strings. Quantification is expressed through predicates that bound instance variables, rather than through logical operators such as $\exists$ or $\forall$. MRS was designed to be integrated with feature-based grammars such as Head-driven Phrase Structure Grammar (HPSG)~\cite{PollardS94} or Lexical Functional Grammar (LFG)~\cite{KaplanB82}. MRS has been implement the English Resource Grammar (ERG)~\cite{Flickinger00}, a broad-coverage high-precision HPSG grammar. \newcite{OepenL06} proposed Elementary Dependency Structure (EDS), a conversion of MRS to variable-free dependency graphs which drops scope underspecification. \newcite{Copenstake09} extended this conversion to avoid information loss, primarily through richer edge labels. The resulting representation, Dependency MRS (DMRS), can be converted back to the original MRS, or used directly in MRS-based applications~\cite{CopenstakeEa16}. We are interested in the empirical performance of parsers for both of these representations: while EDS is more interpretable as an independent semantic graph representation, DMRS can be related back to underspecified logical forms. A bilexical simplification of EDS has previously been used for semantic dependency parsing~\cite{OepenEa14,OepenEa15}. Figure~\ref{fig:eds-graph} illustrates an EDS graph. MRS makes an explicit distinction between surface and abstract predicates (by convention surface predicates are prefixed by an underscore). Surface predicates consist of a lemma followed by a coarse part-of-speech tag and an optional sense label. Predicates absent from the ERG lexicon are represented by their surface forms and POS tags. We convert the character-level predicate spans given by MRS to token-level spans for parsing purposes, but the representation does not require gold tokenization. Surface predicates usually align with the span of the token(s) they represent, while abstract predicates can span longer segments. 
In full MRS every predicate is annotated with a set of morphosyntactic features, encoding for example tense, aspect and number information; we do not currently model these features. AMR~\cite{BanarescuEa13} graphs can be represented in the same framework, despite a number of linguistic differences with MRS. Some information annotated explicitly in MRS is latent in AMR, including alignments and the distinction between surface (lexical) and abstract concepts. AMR predicates are based on PropBank~\cite{PalmerGK05}, annotated as lemmas plus sense labels, but they form only a subset of concepts. Other concepts are either English words or special keywords, corresponding to overt lexemes in some cases but not others. \section{Introduction} An important goal of Natural Language Understanding (NLU) is to parse sentences to structured, interpretable meaning representations that can be used for query execution, inference and reasoning. Recently end-to-end models have outperformed traditional pipeline approaches, predicting syntactic or semantic structure as intermediate steps, on NLU tasks such as sentiment analysis and semantic relatedness~\cite{LeM14,KirosEa15}, question answering~\cite{HermannEa15} % WestonCB14, and textual entailment~\cite{RocktaschelEa15}. However the linguistic structure used in applications has predominantly been shallow, restricted to bilexical dependencies or trees. In this paper we focus on robust parsing into linguistically deep representations. The main representation that we use is Minimal Recursion Semantics (MRS)~\cite{CopenstakeFMRS95,CopenstakeFPS05}, which serves as the semantic representation of the English Resource Grammar (ERG)~\cite{Flickinger00}. Existing parsers for full MRS (as opposed to bilexical semantic graphs derived from, but simplifying MRS) are grammar-based, performing disambiguation with a maximum entropy model~\cite{ToutanovaMFO05,ZhangOC07}; this approach has high precision but incomplete coverage. Our main contribution is to develop a fast and robust parser for full MRS-based semantic graphs. We exploit the power of global conditioning enabled by deep learning to predict linguistically deep graphs incrementally. The model does not have access to the underlying ERG or syntactic structures from which the MRS analyses were originally derived. We develop parsers for two graph-based conversions of MRS, Elementary Dependency Structure (EDS)~\cite{OepenL06} and Dependency MRS (DMRS)~\cite{Copenstake09}, of which the latter is inter-convertible with MRS. Abstract Meaning Representation (AMR)~\cite{BanarescuEa13} is a graph-based semantic representation that shares the goals of MRS. Aside from differences in the choice of which linguistic phenomena are annotated, MRS is a compositional representation explicitly coupled with the syntactic structure of the sentence, while AMR does not assume compositionality or alignment with the sentence structure. Recently a number of AMR parsers have been developed~\cite{FlaniganTCDS14,WangXP15,ArtziLZ15,DamonteCS16}, but corpora are still under active development and low inter-annotator agreement places on upper bound of $83\%$ F1 on expected parser performance~\cite{BanarescuEa13}. We apply our model to AMR parsing by introducing structure that is present explicitly in MRS but not in AMR~\cite{BuysB17a}. Parsers based on RNNs have achieved state-of-the-art performance for dependency parsing~\cite{DyerBLMS15,KiperwasserG16} and constituency parsing~\cite{VinyalsEa15,DyerKBS16,CrossH16a}. 
One of the main reasons for the prevalence of bilexical dependencies and tree-based representations is that they can be parsed with efficient and well-understood algorithms. However, one of the key advantages of deep learning is the ability to make predictions conditioned on unbounded contexts encoded with RNNs; this enables us to predict more complex structures without increasing algorithmic complexity. In this paper we show how to perform linguistically deep parsing with RNNs. Our parser is based on a transition system for semantic graphs. However, instead of generating arcs over an ordered, fixed set of nodes (the words in the sentence), we generate the nodes and their alignments jointly with the transition actions. We use a graph-based variant of the arc-eager transition-system. The sentence is encoded with a bidirectional RNN. The transition sequence, seen as a graph linearization, can be predicted with any encoder-decoder model, but we show that using hard attention, predicting the alignments with a pointer network and conditioning explicitly on stack-based features improves performance. In order to deal with data sparsity candidate lemmas are predicted as a pre-processing step, so that the RNN decoder predicts unlexicalized node labels. We evaluate our parser on DMRS, EDS and AMR graphs. Our model architecture improves performance from $79.68\%$ to $84.16\%$ F1 over an attention-based encoder-decoder baseline. Although our parser is less accurate that a high-precision grammar-based parser on a test set of sentences parsable by that grammar, incremental prediction and GPU batch processing enables it to parse $529$ tokens per second, against $7$ tokens per second for the grammar-based parser. On AMR parsing our model obtains $60.11\%$ Smatch. \begin{abstract} Parsing sentences to linguistically-expressive semantic representations is a key goal of Natural Language Processing. Yet statistical parsing has focussed almost exclusively on bilexical dependencies or domain-specific logical forms. We propose a neural encoder-decoder transition-based parser which is the first full-coverage semantic graph parser for Minimal Recursion Semantics (MRS). The model architecture uses stack-based embedding features, predicting graphs jointly with unlexicalized predicates and their token alignments. Our parser is more accurate than attention-based baselines on MRS, and on an additional Abstract Meaning Representation (AMR) benchmark, and GPU batch processing makes it an order of magnitude faster than a high-precision grammar-based parser. Further, the $86.69\%$ Smatch score of our MRS parser is higher than the upper-bound on AMR parsing, making MRS an attractive choice as a semantic representation.\footnote{Code, models and data preparation scripts are available at \url{https://github.com/janmbuys/DeepDeepParser}.} \end{abstract} \section{Experiments} \label{sec:experiments} \subsection{Data} DeepBank~\cite{FlickingerZK12} is an HPSG and MRS annotation of the Penn Treebank Wall Street Journal (WSJ) corpus. It was developed following an approach known as dynamic treebanking~\cite{OepenFTM04} that couples treebank annotation with grammar development, in this case of the ERG. This approach has been shown to lead to high inter-annotator agreement: $0.94$ against $0.71$ for AMR~\cite{BenderFOPC15}. 
Parses are only provided for sentences for which the ERG has an analysis acceptable to the annotator -- this means that we cannot evaluate parsing accuracy for sentences which the ERG cannot parse (approximately $15\%$ of the original corpus). We use Deepbank version $1.1$, corresponding to ERG \texttt{1214}\footnote{\url{http://svn.delph-in.net/erg/tags/1214/}}, following the suggested split of sections $0$ to $19$ as training data data, $20$ for development and $21$ for testing. The gold-annotated training data consists of 35,315 sentences. We use the LOGON environment\footnote{\url{http://moin.delph-in.net/LogonTop}} and the pyDelphin library\footnote{\url{https://github.com/delph-in/pydelphin}} to extract DMRS and EDS graphs. For AMR parsing we use LDC2015E86, the dataset released for the SemEval 2016 AMR parsing Shared Task~\cite{May16}. This data includes newswire, weblog and discussion forum text. The training set has 16,144 sentences. We obtain alignments using the rule-based JAMR aligner~\cite{FlaniganTCDS14}. \subsection{Evaluation} \newcite{DridanO11} proposed an evaluation metric called Elementary Dependency Matching (EDM) for MRS-based graphs. EDM computes the F1-score of tuples of predicates and arguments. A predicate tuple consists of the label and character span of a predicate, while an argument tuple consists of the character spans of the head and dependent nodes of the relation, together with the argument label. In order to tolerate subtle tokenization differences with respect to punctuation, we allow span pairs whose ends differ by one character to be matched. The Smatch metric~\cite{CaiK13}, proposed for evaluating AMR graphs, also measures graph overlap, but does not rely on sentence alignments to determine the correspondences between graph nodes. Smatch is instead computed by performing inference over graph alignments to estimate the maximum F1-score obtainable from a one-to-one matching between the predicted and gold graph nodes. \subsection{Model setup} Our parser is implemented in TensorFlow~\cite{AbadiEa15}. For training we use Adam~\cite{KingmaB14} with learning rate $0.01$ and batch-size $64$. Gradients norms are clipped to $5.0$~\cite{PascanuMB13}. We use single-layer LSTMs with dropout of $0.3$ (tuned on the development set) on input and output connections. We use encoder and decoder embeddings of size $256$, and POS and NE tag embeddings of size $32$, For DMRS and EDS graphs the hidden units size is set to $256$, for AMR it is $128$. This configuration, found using grid search and heuristic search within the range of models that fit into a single GPU, gave the best performance on the development set under multiple graph linearizations. Encoder word embeddings are initialized (in the first 100 dimensions) with pre-trained order-sensitive embeddings~\cite{LingDBT15}. Singletons in the encoder input are replaced with an unknown word symbol with probability $0.5$ for each iteration. \subsection{MRS parsing results} We compare different linearizations and model architectures for parsing DMRS on the development data, showing that our approach is more accurate than baseline neural approaches. We report EDM scores, including scores for predicate (EDM$_P$) and argument (EDM$_A$) prediction. First we report results using standard attention-based encoder-decoders, with the alignments encoded as token strings in the linearization. (Table~\ref{tab:dmrs-dev-delex}). 
We compare the top-down (TD) and arc-eager (AE) linearizations, as well as the effect of delexicalizing the predicates (factorizing lemmas out of the linearization and predicting them separately.) In both cases constants are predicted with a dictionary lookup based on the predicted spans. A special label is predicted for predicates not in the ERG lexicon -- the words and POS tags that make up those predicates are recovered through the alignments during post-processing. The arc-eager unlexicalized representation gives the best performance, even though the model has to learn to model the transition system stack through the recurrent hidden states without any supervision of the transition semantics. The unlexicalized models are more accurate, mostly due to their ability to generalize to sparse or unseen predicates occurring in the lexicon. For the arc-eager representation, the oracle EDM is $99\%$ for the lexicalized representation and $98.06\%$ for the delexicalized representation. The remaining errors are mostly due to discrepancies between the tokenization used by our system and the ERG tokenization. The unlexicalized models are also faster to train, as the decoder's output vocabulary is much smaller, reducing the expense of computing softmaxes over large vocabularies. Next we consider models with delexicalized linearizations that predict the alignments with pointer networks, contrasting soft and hard attention models (Table~\ref{tab:dmrs-dev-point}). The results show that the arc-eager models performs better than those based on top-down representation. For the arc-eager model we use hard attention, due to the natural interpretation of the alignment prediction corresponding to the transition system. The stack-based architecture gives further improvements. When comparing the effect of different predicate orderings for the arc-eager model, we find that the monotone ordering performs $0.44$ EDM better than the in-order ordering, despite having to parse more non-planar dependencies. We also trained models that only predict predicates (in monotone order) together with their start spans. The hard attention model obtains $91.36\%$ F1 on predicates together with their start spans with the unlexicalized model, compared to $88.22\%$ for lexicalized predicates and $91.65\%$ for the full parsing model. Table \ref{tab:dmrs-test} reports test set results for various evaluation metrics. Start EDM is calculated by requiring only the start of the alignment spans to match, not the ends. We compare the performance of our baseline and stack-based models against ACE, the ERG-based parser. Despite the promising performance of the model a gap remains between the accuracy of our parser and ACE. One reason for this is that the test set sentences will arguably be easier for ACE to parse as their choice was restricted by the same grammar that ACE uses. EDM metrics excluding end-span prediction (Start EDM) show that our parser has relatively more difficulty in parsing end-span predictions than the grammar-based parser. We also evaluate the speed of our model compared with ACE. For the unbatched version of our model, the stack-based parser parses $41.63$ tokens per second, while the batched implementation parses $529.42$ tokens per second using a batch size of $128$. In comparison, the setting of ACE for which we report accuracies parses $7.47$ tokens per second. 
By restricting the memory usage of ACE, which restricts its coverage, we see that ACE can parse $11.07$ tokens per second at $87.7\%$ coverage, and $15.11$ tokens per second at $77.8\%$ coverage. Finally we report results for parsing EDS (Table~\ref{tab:eds-test}). The EDS parsing task is slightly simpler than DMRS, due to the absence of rich argument labels and additional graph edges that allow the recovery of full MRS. We see that for ACE the accuracies are very similar, while for our model EDS parsing is more accurate on the EDM metrics. We hypothesize that most of the extra information in DMRS can be obtained through the ERG, to which ACE has access but our model doesn't. An EDS corpus which consists of about $95\%$ of the DeepBank data has also been released\footnote{\url{http://sdp.delph-in.net/osdp-12.tgz}}, with the goal of enabling comparison with other semantic graph parsing formalisms, including CCG dependencies and Prague Semantic Dependencies, on the same data set~\cite{KuhlmannO16}. On this corpus our model obtains $85.87$ EDM and $85.49$ Smatch. \subsection{AMR parsing} We apply the same approach to AMR parsing. Results on the development set are given in Table~\ref{tab:amr-dev}. The arc-eager-based models again give better performance, mainly due to improved concept prediction accuracy. However, concept prediction remains the most important weakness of the model; \newcite{DamonteCS16} reports that state-of-the-art AMR parsers score $83\%$ on concept prediction. We report test set results in Table~\ref{tab:amr-test}. Our best neural model outperforms the baseline JAMR parser~\cite{FlaniganTCDS14}, but still lags behind the performance of state-of-the-art AMR parsers such as CAMR~\cite{WangEa16} and AMR Eager~\cite{DamonteCS16}. These models make extensive use of external resources, including syntactic parsers and semantic role labellers. Our attention-based encoder-decoder model already outperforms previous sequence-to-sequence AMR parsers~\cite{BarzdinsG16,PengWGX17}, and the arc-eager model boosts accuracy further. Our model also outperforms a Synchronous Hyperedge Replacement Grammar model~\cite{PengG16} which is comparable as it does not make extensive use of external resources. \documentclass[11pt,a4paper]{article} \usepackage[hyperref]{acl2017} \aclfinalcopy % Uncomment this line for the final submission \newcommand{\pb}[1]{\textcolor{red}{\bf\small [#1 --PB]}} \newcommand\BibTeX{B{\sc ib}\TeX} \title{Robust Incremental Neural Semantic Graph Parsing} \author{Jan Buys$^1$ and Phil Blunsom$^{1,2}$ \\ $^1$Department of Computer Science, University of Oxford \qquad $^2$DeepMind \\ \texttt{\{jan.buys,phil.blunsom\}@cs.ox.ac.uk} \\ } \date{} \begin{document} \maketitle \input{abstract} \input{introduction} \input{graphs} \input{parsing} \input{models} \input{related-work} \input{experiments} \section{Conclusion} In this paper we advance the state of parsing by employing deep learning techniques to parse sentence to linguistically expressive semantic representations that have not previously been parsed in an end-to-end fashion. We presented a robust, wide-coverage parser for MRS that is faster than existing parsers and amenable to batch processing. We believe that there are many future avenues to explore to further increase the accuracy of such parsers, including different training objectives, more structured architectures and semi-supervised learning. \section*{Acknowledgments} The first author thanks the financial support of the Clarendon Fund and the Skye Foundation. 
We thank Stephan Oepen for feedback and help with data preperation, and members of the Oxford NLP group for valuable discussions. \bibliographystyle{acl} \end{document} \section{Encoder-Decoder Models} \label{sec:models} \subsection{Sentence encoder} The sentence $\mathbf{e}$ is encoded with a bidirectional RNN. We use a standard LSTM architecture without peephole connections~\cite{JozefowiczZS15}. For every token $e$ we embed its word, POS tag and named entity (NE) tag as vectors $x_w$, $x_t$ and $x_n$, respectively. The embeddings are concatenated and passed through a linear transformation \[ g(e) = W^{(x)} [x_w; x_t; x_n] + b^{x}, \] such that $g(e)$ has the same dimension as the LSTM. Each input position $i$ is represented by a hidden state $h_i$, which is the concatenation of its forward and backward LSTM state vectors. \subsection{Hard attention decoder} We model the alignment of graph nodes to sentence tokens, $\mathbf{a}$, as a random variable. For the arc-eager model, $a_j$ corresponds to the alignment of the node of the buffer after action $t_j$ is executed. The distribution of $t_j$ is over all transitions and predicates (corresponding to shift transitions), predicted with a single softmax. The parser output is predicted by an RNN decoder. Let $s_j$ be the decoder hidden state at output position $j$. We initialize $s_0$ with the final state of the backward encoder. The alignment is predicted with a pointer network~\cite{VinyalsFJ15}. The logits are computed with an MLP scoring the decoder hidden state against each of the encoder hidden states (for $i = 1, \ldots, I$), \[ u_j^i = w^T \tanh(W^{(1)} h_i + W^{(2)} s_j). \] The alignment distribution is then estimated by \[ p(a_j = i | \mathbf{a}_{1:j-1}, \mathbf{t}_{1:j-1}, \mathbf{e}) = \mathrm{softmax}(u_j^i). \] To predict the next transition $t_i$, the output vector is conditioned on the encoder state vector $h_{a_j}$, corresponding to the alignment: \begin{align*} o_j &= W^{(3)} s_j + W^{(4)} h_{a_j} \\ v_j &= R^{(d)} o_j + b^{(d)}, \end{align*} where $R^{(d)}$ and $b^{(d)}$ are the output representation matrix and bias vector, respectively. The transition distribution is then given by \[ p(t_j | \mathbf{a}_{1:j}, \mathbf{t}_{1:j-1}, \mathbf{e}) = \mathrm{softmax}(v_j). \] Let $e(t)$ be the embedding of decoder symbol $t$. The RNN state at the next time-step is computed as \begin{align*} d_{j+1} &= W^{(5)} e(t_{j}) + W^{(6)} h_{a_j} \\ s_{j+1} &= RNN(d_{j+1}, s_{j}). \end{align*} The end-of-span alignment $a_j^{(e)}$ for MRS-based graphs is predicted with another pointer network. The end alignment of a token is predicted only when a node is reduced from the stack, therefore this alignment is not observed at each time-step; it is also not fed back into the model. The hard attention approach, based on supervised alignments, can be contrasted to soft attention, which learns to attend over the input without supervision. The attention is computed as with hard attention, as $\alpha_j^i = \mathrm{softmax}(u_j^i)$. However instead of making a hard selection, a weighted average over the encoder vectors is computed as $q_j = \sum_{i=1}^{i=I} \alpha_j^i h_i$. This vector is used instead of $h_{a_j}$ for prediction and feeding to the next time-step. \subsection{Stack-based model} We extend the hard attention model to include features based on the transition system stack. These features are embeddings from the bidirectional RNN encoder, corresponding to the alignments of the nodes on the buffer and on top of the stack. 
This approach is similar to the features proposed by \newcite{KiperwasserG16} and \newcite{CrossH16} for dependency parsing, although they do not use RNN decoders. To implement these features the layer that computes the output vector is extended to \[ o_j = W^{(3)} s_j + W^{(4)} h_{a_j} + W^{(7)} h_{\textrm{st}_0}, \] where $\texttt{st}_0$ is the sentence alignment index of the element on top of the stack. The input layer to the next RNN time-step is similarly extended to \[ d_{j+1} = W^{(5)} e(t_{j}) + W^{(6)} h_{\textrm{buf}} + W^{(8)} h_{\textrm{st}_0}, \] where \texttt{buf} is the buffer alignment after $t_j$ is executed. Our implementation of the stack-based model enables batch processing in static computation graphs, similar to \newcite{BowmanEa16}. We maintain a stack of alignment indexes for each element in the batch, which is updated inside the computation graph after each parsing action. This enables minibatch SGD during training as well as efficient batch decoding. We perform greedy decoding. For the stack-based model we ensure that if the stack is empty, the next transition predicted has to be shift. For the other models we ensure that the output is well-formed during post-processing by robustly skipping over out-of-place symbols or inserting missing ones. \section{Related Work} Prior work for MRS parsing predominantly predicts structures in the context of grammar-based parsing, where sentences are parsed to HPSG derivations consistent with the grammar, in this case the ERG~\cite{Flickinger00}. The nodes in the derivation trees are feature structures, from which MRS is extracted through unification. This approach fails to parse sentences for which no valid derivation is found. Maximum entropy models are used to score the derivations in order to find the most likely parse~\cite{ToutanovaMFO05}. This approach is implemented in the PET~\cite{Callmeier00} and ACE\footnote{\url{http://sweaglesw.org/linguistics/ace/}} parsers. There have also been some efforts to develop robust MRS parsers. One proposed approach learns a PCFG grammar to approximate the HPSG derivations~\cite{ZhangK11,ZhangEa14}. MRS is then extracted with robust unification to compose potentially incompatible feature structures, although that still fails for a small proportion of sentences. The model is trained on a large corpus of Wikipedia text parsed with the grammar-based parser. \newcite{Ytrestol12} proposed a transition-based approach to HPSG parsing that produces derivations from which both syntactic and semantic (MRS) parses can be extracted. The parser has an option not to be restricted by the ERG. However, neither of these approaches have results available that can be compared directly to our setup, or generally available implementations. Although AMR parsers produce graphs that are similar in structure to MRS-based graphs, most of them make assumptions that are invalid for MRS, and rely on extensive external AMR-specific resources. \newcite{FlaniganTCDS14} proposed a two-stage parser that first predicts concepts or subgraphs corresponding to sentence segments, and then parses these concepts into a graph structure. However MRS has a large proportion of abstract nodes that cannot be predicted from short segments, and interact closely with the graph structure. \newcite{WangXP15,WangXP15a} proposed a custom transition-system for AMR parsing that converts dependency trees to AMR graphs, relying on assumptions on the relationship between these. 
\newcite{PustHKMM15} proposed a parser based on syntax-based machine translation (MT), while AMR has also been integrated into CCG Semantic Parsing~\cite{ArtziLZ15,MisraA16}. Recently \newcite{DamonteCS16} and \newcite{PengWGX17} proposed AMR parsers based on neural networks. \section{Incremental Graph Parsing} \label{sec:parsing} We parse sentences to their meaning representations by incrementally predicting semantic graphs together with their alignments. Let $\mathbf{e} = e_1, e_2, \ldots, e_I$ be a tokenized English sentence, $\mathbf{t} = t_1, t_2, \ldots, t_J$ a sequential representation of its graph derivation and $\mathbf{a} = a_1, a_2, \ldots, a_J$ an alignment sequence consisting of integers in the range $1, \ldots, I$. We model the conditional distribution $p(\mathbf{t}, \mathbf{a} | \mathbf{e})$ which decomposes as \[ \prod_{j=1}^{J} p(a_j | \mathbf{(a,t)}_{1:j-1}, \mathbf{e}) p(t_j | \mathbf{a}_{1:j}, \mathbf{t}_{1:j-1}, \mathbf{e}). \] We also predict the end-of-span alignments as a seperate sequence $\mathbf{a^{(e)}}$. \subsection{Top-down linearization} We now consider how to linearize the semantic graphs, before defining the neural models to parameterize the parser in section~\ref{sec:models}. The first approach is to linearize a graph as the pre-order traversal of its spanning tree, starting at a designated root node (see Figure~\ref{fig:linear-eds}). Variants of this approach have been proposed for neural constituency parsing~\cite{VinyalsEa15}, logical form prediction~\cite{DongL16,JiaL16} and AMR parsing~\cite{BarzdinsG16,PengWGX17}. In the linearization, labels of edges whose direction are reversed in the spanning tree are marked by adding \texttt{-of}. Edges not included in the spanning tree, referred to as \emph{reentrancies}, are represented with special edges whose dependents are dummy nodes pointing back to the original nodes. Our potentially lossy representation represents these edges by repeating the dependent node labels and alignments, which are recovered heuristically. The alignment does not influence the linearized node ordering. \subsection{Transition-based parsing} Figure~\ref{fig:eds-graph} shows that the semantic graphs we work with can also be interpreted as dependency graphs, as nodes are aligned to sentence tokens. Transition-based parsing~\cite{Nivre08} has been used extensively to predict dependency graphs incrementally. We apply a variant of the arc-eager transition system that has been proposed for graph (as opposed to tree) parsing~\cite{SagaeTsujii08,TitovHMM09,GomezN10} to derive a transition-based parser for deep semantic graphs. In dependency parsing the sentence tokens also act as nodes in the graph, but here we need to generate the nodes incrementally as the transition-system proceeds, conditioning the generation on the given sentence. \newcite{DamonteCS16} proposed an arc-eager AMR parser, but their transition system is more narrowly restricted to AMR graphs. The transition system consists of a \emph{stack} of graph nodes being processed and a \emph{buffer}, holding a single node at a time. The main transition actions are \emph{shift}, \emph{reduce}, \emph{left-arc}, \emph{right-arc}. Figure~\ref{fig:transition-table} shows an example transition sequence together with the stack and buffer after each step. The shift transition moves the element on the buffer to the top of the stack, and generates a predicate and its alignment as the next node on the buffer. 
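As a small illustration of the top-down linearization described above, the following sketch emits a pre-order traversal of a spanning tree with bracketed edge labels. The example graph, label strings and bracketing format are assumptions, and reversed (\texttt{-of}) edges and reentrancies are omitted.

```python
# Illustrative pre-order linearization of a spanning tree; reversed (-of)
# edges and reentrancies are omitted, and the toy graph is invented.
def linearize(node, tree, label):
    parts = [label[node]]
    for edge, child in tree.get(node, []):
        parts.append(f":{edge} ( {linearize(child, tree, label)} )")
    return " ".join(parts)

# spanning tree of a toy EDS-style graph for "She wanted to sleep"
tree = {"e1": [("ARG1", "x1"), ("ARG2", "e2")]}
label = {"e1": "_want_v_1", "x1": "pron", "e2": "_sleep_v_1"}
print(linearize("e1", tree, label))
# -> _want_v_1 :ARG1 ( pron ) :ARG2 ( _sleep_v_1 )
```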
Left-arc and right-arc actions add labeled arcs between the buffer and stack top (for DMRS a transition for undirected arcs is included), but do not change the state of the stack or buffer. Finally, reduce pops the top element from the stack, and predicts its end-of-span alignment (if included in the representation). To predict non-planar arcs, we add another transition, which we call \emph{cross-arc}, which first predicts the stack index of a node which is not on top of the stack, adding an arc between the head of the buffer and that node. Another special transition designates the buffer node as the root. To derive an oracle for this transition system, it is necessary to determine the order in which the nodes are generated. We consider two approaches. The first ordering is obtained by performing an in-order traversal of the spanning tree, where the node order is determined by the alignment. In the resulting linearization the only non-planar arcs are reentrancies. The second approach lets the ordering be monotone (non-decreasing) with respect to the alignments, while respecting the in-order ordering for nodes with the same alignment. In an arc-eager oracle arcs are added greedily, while a reduce action can either be performed as soon as the stack top node has been connected to all its dependents, or delayed until it has to reduce to allow the correct parse tree to be formed. In our model the oracle delays reduce, where possible, until the end alignment of the stack top node spans the node on the buffer. As the span end alignments often cover phrases that they head (e.g. for quantifiers) this gives a natural interpretation to predicting the span end together with the reduce action. \subsection{Delexicalization and lemma prediction} Each token in MRS annotations is aligned to at most one surface predicate. We decompose surface predicate prediction by predicting candidate lemmas for input tokens, and delexicalized predicates consisting only of sense labels. The full surface predicates are then recovered through the predicted alignments. We extract a dictionary mapping words to lemmas from the ERG lexicon. Candidate lemmas are predicted using this dictionary, and where no dictionary entry is available with a lemmatizer. The same approach is applied to predict constants, along with additional normalizations such as mapping numbers to digit strings. We use the Stanford CoreNLP toolkit~\cite{ManningEa14} to tokenize and lemmatize sentences, and tag tokens with the Stanford Named Entity Recognizer~\cite{FinkelGM05}. The tokenization is customized to correspond closely to the ERG tokenization; hyphens are removed pre-processing step. For AMR we use automatic alignments and the graph topology to classify concepts as surface or abstract. %~\cite{BuysB17a}. The lexicon is restricted to Propbank~\cite{PalmerGK05} predicates; for other concepts we extract a lexicon from the training data.
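The delexicalization scheme above can be illustrated with a short sketch that recovers full surface predicates from delexicalized sense labels and predicted alignments, using a lemma dictionary with a lemmatizer fallback. The dictionary contents, the crude fallback rule and the predicate format are illustrative assumptions rather than the ERG-derived resources used in the paper.

```python
# Illustrative recovery of surface predicates from delexicalized predictions:
# a lemma dictionary (stand-in for the ERG lexicon) with a crude fallback,
# combined with the aligned token to re-lexicalize each predicate.
LEMMA_DICT = {"wanted": "want", "sleeps": "sleep"}

def candidate_lemma(token: str) -> str:
    # fall back to naive suffix stripping when the dictionary has no entry
    return LEMMA_DICT.get(token.lower(), token.lower().rstrip("s"))

def relexicalize(delex_pred: str, aligned_token: str) -> str:
    """Combine a delexicalized predicate such as '_v_1' with the aligned token's lemma."""
    return f"_{candidate_lemma(aligned_token)}{delex_pred}"

tokens = ["She", "wanted", "to", "sleep"]
predicted = [("_v_1", 1), ("_v_1", 3)]   # (delexicalized predicate, alignment index)
print([relexicalize(p, tokens[i]) for p, i in predicted])
# -> ['_want_v_1', '_sleep_v_1']
```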
Robust Incremental Neural Semantic Graph Parsing
1704.07092
Table 5: Development set results for AMR parsing. All the models except the first predict alignments with pointer networks.
[ "Model", "Concept F1", "Smatch" ]
[ [ "TD no pointers", "70.16", "57.95" ], [ "TD soft", "71.25", "59.39" ], [ "TD soft unlex", "72.62", "59.88" ], [ "AE hard unlex", "76.83", "59.83" ], [ "AE stack unlex", "77.93", "61.21" ] ]
We apply the same approach to AMR parsing. The arc-eager-based models again give better performance, mainly due to improved concept prediction accuracy. However, concept prediction remains the most important weakness of the model; Damonte et al. report that state-of-the-art AMR parsers score 83% on concept prediction.
\section{Meaning Representations} We define a common framework for semantic graphs in which we can place both MRS-based graph representations (DMRS and EDS) and AMR. Sentence meaning is represented with rooted, labelled, connected, directed graphs~\cite{KuhlmannO16}. An example graph is visualized in Figure~\ref{fig:eds-graph}. representations. Node labels are referred to as \emph{predicates} (\emph{concepts} in AMR) and edge labels as \emph{arguments} (AMR \emph{relations}). In addition \emph{constants}, a special type of node modifiers, are used to denote the string values of named entities and numbers (including date and time expressions). Every node is aligned to a token or a continuous span of tokens in the sentence the graph corresponds to. Minimal Recursion Semantics (MRS) is a framework for computational semantics that can be used for parsing or generation~\cite{CopenstakeFPS05}. Instances and eventualities are represented with logical variables. Predicates take arguments with labels from a small, fixed set of roles. Arguments are either logical variables or handles, designated formalism-internal variables. Handle equality constraints support scope underspecification; multiple scope-resolved logical representations can be derived from one MRS structure. A predicate corresponds to its intrinsic argument and is aligned to a character span of the (untokenized) input sentence. Predicates representing named entities or numbers are parameterized by strings. Quantification is expressed through predicates that bound instance variables, rather than through logical operators such as $\exists$ or $\forall$. MRS was designed to be integrated with feature-based grammars such as Head-driven Phrase Structure Grammar (HPSG)~\cite{PollardS94} or Lexical Functional Grammar (LFG)~\cite{KaplanB82}. MRS has been implement the English Resource Grammar (ERG)~\cite{Flickinger00}, a broad-coverage high-precision HPSG grammar. \newcite{OepenL06} proposed Elementary Dependency Structure (EDS), a conversion of MRS to variable-free dependency graphs which drops scope underspecification. \newcite{Copenstake09} extended this conversion to avoid information loss, primarily through richer edge labels. The resulting representation, Dependency MRS (DMRS), can be converted back to the original MRS, or used directly in MRS-based applications~\cite{CopenstakeEa16}. We are interested in the empirical performance of parsers for both of these representations: while EDS is more interpretable as an independent semantic graph representation, DMRS can be related back to underspecified logical forms. A bilexical simplification of EDS has previously been used for semantic dependency parsing~\cite{OepenEa14,OepenEa15}. Figure~\ref{fig:eds-graph} illustrates an EDS graph. MRS makes an explicit distinction between surface and abstract predicates (by convention surface predicates are prefixed by an underscore). Surface predicates consist of a lemma followed by a coarse part-of-speech tag and an optional sense label. Predicates absent from the ERG lexicon are represented by their surface forms and POS tags. We convert the character-level predicate spans given by MRS to token-level spans for parsing purposes, but the representation does not require gold tokenization. Surface predicates usually align with the span of the token(s) they represent, while abstract predicates can span longer segments. 
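As a concrete, purely illustrative rendering of the common graph framework described in this section, the sketch below defines nodes aligned to token spans, optional constants, and labelled directed edges. The field names and the toy example are assumptions, not an interface used by the parser.

```python
# Purely illustrative data structures for the common graph framework: nodes
# (predicates) aligned to token spans, optional constants, and labelled
# directed edges (arguments); field names are assumptions, not a parser API.
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class Node:
    predicate: str                  # e.g. "_want_v_1" (surface) or "pron" (abstract)
    span: Tuple[int, int]           # token-level alignment span (start, end)
    constant: Optional[str] = None  # string value for named entities / numbers

@dataclass
class Edge:
    head: int                       # index of head node
    dep: int                        # index of dependent node
    label: str                      # argument label, e.g. "ARG1"

@dataclass
class SemanticGraph:
    nodes: List[Node] = field(default_factory=list)
    edges: List[Edge] = field(default_factory=list)
    root: int = 0

g = SemanticGraph(
    nodes=[Node("_want_v_1", (1, 1)), Node("pron", (0, 0)), Node("_sleep_v_1", (3, 3))],
    edges=[Edge(0, 1, "ARG1"), Edge(0, 2, "ARG2")],
)
print(len(g.nodes), "nodes,", len(g.edges), "edges")
```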
In full MRS every predicate is annotated with a set of morphosyntactic features, encoding for example tense, aspect and number information; we do not currently model these features. AMR~\cite{BanarescuEa13} graphs can be represented in the same framework, despite a number of linguistic differences with MRS. Some information annotated explicitly in MRS is latent in AMR, including alignments and the distinction between surface (lexical) and abstract concepts. AMR predicates are based on PropBank~\cite{PalmerGK05}, annotated as lemmas plus sense labels, but they form only a subset of concepts. Other concepts are either English words or special keywords, corresponding to overt lexemes in some cases but not others. \section{Introduction} An important goal of Natural Language Understanding (NLU) is to parse sentences to structured, interpretable meaning representations that can be used for query execution, inference and reasoning. Recently end-to-end models have outperformed traditional pipeline approaches, predicting syntactic or semantic structure as intermediate steps, on NLU tasks such as sentiment analysis and semantic relatedness~\cite{LeM14,KirosEa15}, question answering~\cite{HermannEa15} % WestonCB14, and textual entailment~\cite{RocktaschelEa15}. However the linguistic structure used in applications has predominantly been shallow, restricted to bilexical dependencies or trees. In this paper we focus on robust parsing into linguistically deep representations. The main representation that we use is Minimal Recursion Semantics (MRS)~\cite{CopenstakeFMRS95,CopenstakeFPS05}, which serves as the semantic representation of the English Resource Grammar (ERG)~\cite{Flickinger00}. Existing parsers for full MRS (as opposed to bilexical semantic graphs derived from, but simplifying MRS) are grammar-based, performing disambiguation with a maximum entropy model~\cite{ToutanovaMFO05,ZhangOC07}; this approach has high precision but incomplete coverage. Our main contribution is to develop a fast and robust parser for full MRS-based semantic graphs. We exploit the power of global conditioning enabled by deep learning to predict linguistically deep graphs incrementally. The model does not have access to the underlying ERG or syntactic structures from which the MRS analyses were originally derived. We develop parsers for two graph-based conversions of MRS, Elementary Dependency Structure (EDS)~\cite{OepenL06} and Dependency MRS (DMRS)~\cite{Copenstake09}, of which the latter is inter-convertible with MRS. Abstract Meaning Representation (AMR)~\cite{BanarescuEa13} is a graph-based semantic representation that shares the goals of MRS. Aside from differences in the choice of which linguistic phenomena are annotated, MRS is a compositional representation explicitly coupled with the syntactic structure of the sentence, while AMR does not assume compositionality or alignment with the sentence structure. Recently a number of AMR parsers have been developed~\cite{FlaniganTCDS14,WangXP15,ArtziLZ15,DamonteCS16}, but corpora are still under active development and low inter-annotator agreement places on upper bound of $83\%$ F1 on expected parser performance~\cite{BanarescuEa13}. We apply our model to AMR parsing by introducing structure that is present explicitly in MRS but not in AMR~\cite{BuysB17a}. Parsers based on RNNs have achieved state-of-the-art performance for dependency parsing~\cite{DyerBLMS15,KiperwasserG16} and constituency parsing~\cite{VinyalsEa15,DyerKBS16,CrossH16a}. 
One of the main reasons for the prevalence of bilexical dependencies and tree-based representations is that they can be parsed with efficient and well-understood algorithms. However, one of the key advantages of deep learning is the ability to make predictions conditioned on unbounded contexts encoded with RNNs; this enables us to predict more complex structures without increasing algorithmic complexity. In this paper we show how to perform linguistically deep parsing with RNNs. Our parser is based on a transition system for semantic graphs. However, instead of generating arcs over an ordered, fixed set of nodes (the words in the sentence), we generate the nodes and their alignments jointly with the transition actions. We use a graph-based variant of the arc-eager transition-system. The sentence is encoded with a bidirectional RNN. The transition sequence, seen as a graph linearization, can be predicted with any encoder-decoder model, but we show that using hard attention, predicting the alignments with a pointer network and conditioning explicitly on stack-based features improves performance. In order to deal with data sparsity candidate lemmas are predicted as a pre-processing step, so that the RNN decoder predicts unlexicalized node labels. We evaluate our parser on DMRS, EDS and AMR graphs. Our model architecture improves performance from $79.68\%$ to $84.16\%$ F1 over an attention-based encoder-decoder baseline. Although our parser is less accurate that a high-precision grammar-based parser on a test set of sentences parsable by that grammar, incremental prediction and GPU batch processing enables it to parse $529$ tokens per second, against $7$ tokens per second for the grammar-based parser. On AMR parsing our model obtains $60.11\%$ Smatch. \begin{abstract} Parsing sentences to linguistically-expressive semantic representations is a key goal of Natural Language Processing. Yet statistical parsing has focussed almost exclusively on bilexical dependencies or domain-specific logical forms. We propose a neural encoder-decoder transition-based parser which is the first full-coverage semantic graph parser for Minimal Recursion Semantics (MRS). The model architecture uses stack-based embedding features, predicting graphs jointly with unlexicalized predicates and their token alignments. Our parser is more accurate than attention-based baselines on MRS, and on an additional Abstract Meaning Representation (AMR) benchmark, and GPU batch processing makes it an order of magnitude faster than a high-precision grammar-based parser. Further, the $86.69\%$ Smatch score of our MRS parser is higher than the upper-bound on AMR parsing, making MRS an attractive choice as a semantic representation.\footnote{Code, models and data preparation scripts are available at \url{https://github.com/janmbuys/DeepDeepParser}.} \end{abstract} \section{Experiments} \label{sec:experiments} \subsection{Data} DeepBank~\cite{FlickingerZK12} is an HPSG and MRS annotation of the Penn Treebank Wall Street Journal (WSJ) corpus. It was developed following an approach known as dynamic treebanking~\cite{OepenFTM04} that couples treebank annotation with grammar development, in this case of the ERG. This approach has been shown to lead to high inter-annotator agreement: $0.94$ against $0.71$ for AMR~\cite{BenderFOPC15}. 
Parses are only provided for sentences for which the ERG has an analysis acceptable to the annotator -- this means that we cannot evaluate parsing accuracy for sentences which the ERG cannot parse (approximately $15\%$ of the original corpus). We use Deepbank version $1.1$, corresponding to ERG \texttt{1214}\footnote{\url{http://svn.delph-in.net/erg/tags/1214/}}, following the suggested split of sections $0$ to $19$ as training data data, $20$ for development and $21$ for testing. The gold-annotated training data consists of 35,315 sentences. We use the LOGON environment\footnote{\url{http://moin.delph-in.net/LogonTop}} and the pyDelphin library\footnote{\url{https://github.com/delph-in/pydelphin}} to extract DMRS and EDS graphs. For AMR parsing we use LDC2015E86, the dataset released for the SemEval 2016 AMR parsing Shared Task~\cite{May16}. This data includes newswire, weblog and discussion forum text. The training set has 16,144 sentences. We obtain alignments using the rule-based JAMR aligner~\cite{FlaniganTCDS14}. \subsection{Evaluation} \newcite{DridanO11} proposed an evaluation metric called Elementary Dependency Matching (EDM) for MRS-based graphs. EDM computes the F1-score of tuples of predicates and arguments. A predicate tuple consists of the label and character span of a predicate, while an argument tuple consists of the character spans of the head and dependent nodes of the relation, together with the argument label. In order to tolerate subtle tokenization differences with respect to punctuation, we allow span pairs whose ends differ by one character to be matched. The Smatch metric~\cite{CaiK13}, proposed for evaluating AMR graphs, also measures graph overlap, but does not rely on sentence alignments to determine the correspondences between graph nodes. Smatch is instead computed by performing inference over graph alignments to estimate the maximum F1-score obtainable from a one-to-one matching between the predicted and gold graph nodes. \subsection{Model setup} Our parser is implemented in TensorFlow~\cite{AbadiEa15}. For training we use Adam~\cite{KingmaB14} with learning rate $0.01$ and batch-size $64$. Gradients norms are clipped to $5.0$~\cite{PascanuMB13}. We use single-layer LSTMs with dropout of $0.3$ (tuned on the development set) on input and output connections. We use encoder and decoder embeddings of size $256$, and POS and NE tag embeddings of size $32$, For DMRS and EDS graphs the hidden units size is set to $256$, for AMR it is $128$. This configuration, found using grid search and heuristic search within the range of models that fit into a single GPU, gave the best performance on the development set under multiple graph linearizations. Encoder word embeddings are initialized (in the first 100 dimensions) with pre-trained order-sensitive embeddings~\cite{LingDBT15}. Singletons in the encoder input are replaced with an unknown word symbol with probability $0.5$ for each iteration. \subsection{MRS parsing results} We compare different linearizations and model architectures for parsing DMRS on the development data, showing that our approach is more accurate than baseline neural approaches. We report EDM scores, including scores for predicate (EDM$_P$) and argument (EDM$_A$) prediction. First we report results using standard attention-based encoder-decoders, with the alignments encoded as token strings in the linearization. (Table~\ref{tab:dmrs-dev-delex}). 
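To illustrate the EDM evaluation described above, the following sketch computes F1 over predicate tuples (label, character span) and argument tuples (head span, dependent span, label), and pools both tuple types for the overall score. The example tuples are invented and the one-character span tolerance is omitted.

```python
# Illustrative EDM-style scoring: F1 over predicate tuples (label, span) and
# argument tuples (head span, dependent span, label), pooled for the overall
# score. Example tuples are invented; the one-character span tolerance used
# in the paper's evaluation is omitted.
def f1(gold: set, pred: set) -> float:
    tp = len(gold & pred)
    if tp == 0:
        return 0.0
    precision, recall = tp / len(pred), tp / len(gold)
    return 2 * precision * recall / (precision + recall)

gold_preds = {("_want_v_1", (4, 10)), ("pron", (0, 3)), ("_sleep_v_1", (14, 19))}
sys_preds = {("_want_v_1", (4, 10)), ("_sleep_v_1", (14, 19))}
gold_args = {((4, 10), (0, 3), "ARG1"), ((4, 10), (14, 19), "ARG2")}
sys_args = {((4, 10), (0, 3), "ARG1")}

# pool both tuple types for overall EDM, tagging them so they cannot collide
gold_all = {("P", *t) for t in gold_preds} | {("A", *t) for t in gold_args}
sys_all = {("P", *t) for t in sys_preds} | {("A", *t) for t in sys_args}

print(f"EDM_P={f1(gold_preds, sys_preds):.2f}  "
      f"EDM_A={f1(gold_args, sys_args):.2f}  EDM={f1(gold_all, sys_all):.2f}")
# -> EDM_P=0.80  EDM_A=0.67  EDM=0.75
```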
We compare the top-down (TD) and arc-eager (AE) linearizations, as well as the effect of delexicalizing the predicates (factorizing lemmas out of the linearization and predicting them separately.) In both cases constants are predicted with a dictionary lookup based on the predicted spans. A special label is predicted for predicates not in the ERG lexicon -- the words and POS tags that make up those predicates are recovered through the alignments during post-processing. The arc-eager unlexicalized representation gives the best performance, even though the model has to learn to model the transition system stack through the recurrent hidden states without any supervision of the transition semantics. The unlexicalized models are more accurate, mostly due to their ability to generalize to sparse or unseen predicates occurring in the lexicon. For the arc-eager representation, the oracle EDM is $99\%$ for the lexicalized representation and $98.06\%$ for the delexicalized representation. The remaining errors are mostly due to discrepancies between the tokenization used by our system and the ERG tokenization. The unlexicalized models are also faster to train, as the decoder's output vocabulary is much smaller, reducing the expense of computing softmaxes over large vocabularies. Next we consider models with delexicalized linearizations that predict the alignments with pointer networks, contrasting soft and hard attention models (Table~\ref{tab:dmrs-dev-point}). The results show that the arc-eager models performs better than those based on top-down representation. For the arc-eager model we use hard attention, due to the natural interpretation of the alignment prediction corresponding to the transition system. The stack-based architecture gives further improvements. When comparing the effect of different predicate orderings for the arc-eager model, we find that the monotone ordering performs $0.44$ EDM better than the in-order ordering, despite having to parse more non-planar dependencies. We also trained models that only predict predicates (in monotone order) together with their start spans. The hard attention model obtains $91.36\%$ F1 on predicates together with their start spans with the unlexicalized model, compared to $88.22\%$ for lexicalized predicates and $91.65\%$ for the full parsing model. Table \ref{tab:dmrs-test} reports test set results for various evaluation metrics. Start EDM is calculated by requiring only the start of the alignment spans to match, not the ends. We compare the performance of our baseline and stack-based models against ACE, the ERG-based parser. Despite the promising performance of the model a gap remains between the accuracy of our parser and ACE. One reason for this is that the test set sentences will arguably be easier for ACE to parse as their choice was restricted by the same grammar that ACE uses. EDM metrics excluding end-span prediction (Start EDM) show that our parser has relatively more difficulty in parsing end-span predictions than the grammar-based parser. We also evaluate the speed of our model compared with ACE. For the unbatched version of our model, the stack-based parser parses $41.63$ tokens per second, while the batched implementation parses $529.42$ tokens per second using a batch size of $128$. In comparison, the setting of ACE for which we report accuracies parses $7.47$ tokens per second. 
By restricting the memory usage of ACE, which restricts its coverage, we see that ACE can parse $11.07$ tokens per second at $87.7\%$ coverage, and $15.11$ tokens per second at $77.8\%$ coverage. Finally we report results for parsing EDS (Table~\ref{tab:eds-test}). The EDS parsing task is slightly simpler than DMRS, due to the absence of rich argument labels and additional graph edges that allow the recovery of full MRS. We see that for ACE the accuracies are very similar, while for our model EDS parsing is more accurate on the EDM metrics. We hypothesize that most of the extra information in DMRS can be obtained through the ERG, to which ACE has access but our model doesn't. An EDS corpus which consists of about $95\%$ of the DeepBank data has also been released\footnote{\url{http://sdp.delph-in.net/osdp-12.tgz}}, with the goal of enabling comparison with other semantic graph parsing formalisms, including CCG dependencies and Prague Semantic Dependencies, on the same data set~\cite{KuhlmannO16}. On this corpus our model obtains $85.87$ EDM and $85.49$ Smatch. \subsection{AMR parsing} We apply the same approach to AMR parsing. Results on the development set are given in Table~\ref{tab:amr-dev}. The arc-eager-based models again give better performance, mainly due to improved concept prediction accuracy. However, concept prediction remains the most important weakness of the model; \newcite{DamonteCS16} reports that state-of-the-art AMR parsers score $83\%$ on concept prediction. We report test set results in Table~\ref{tab:amr-test}. Our best neural model outperforms the baseline JAMR parser~\cite{FlaniganTCDS14}, but still lags behind the performance of state-of-the-art AMR parsers such as CAMR~\cite{WangEa16} and AMR Eager~\cite{DamonteCS16}. These models make extensive use of external resources, including syntactic parsers and semantic role labellers. Our attention-based encoder-decoder model already outperforms previous sequence-to-sequence AMR parsers~\cite{BarzdinsG16,PengWGX17}, and the arc-eager model boosts accuracy further. Our model also outperforms a Synchronous Hyperedge Replacement Grammar model~\cite{PengG16} which is comparable as it does not make extensive use of external resources. \documentclass[11pt,a4paper]{article} \usepackage[hyperref]{acl2017} \aclfinalcopy % Uncomment this line for the final submission \newcommand{\pb}[1]{\textcolor{red}{\bf\small [#1 --PB]}} \newcommand\BibTeX{B{\sc ib}\TeX} \title{Robust Incremental Neural Semantic Graph Parsing} \author{Jan Buys$^1$ and Phil Blunsom$^{1,2}$ \\ $^1$Department of Computer Science, University of Oxford \qquad $^2$DeepMind \\ \texttt{\{jan.buys,phil.blunsom\}@cs.ox.ac.uk} \\ } \date{} \begin{document} \maketitle \input{abstract} \input{introduction} \input{graphs} \input{parsing} \input{models} \input{related-work} \input{experiments} \section{Conclusion} In this paper we advance the state of parsing by employing deep learning techniques to parse sentence to linguistically expressive semantic representations that have not previously been parsed in an end-to-end fashion. We presented a robust, wide-coverage parser for MRS that is faster than existing parsers and amenable to batch processing. We believe that there are many future avenues to explore to further increase the accuracy of such parsers, including different training objectives, more structured architectures and semi-supervised learning. \section*{Acknowledgments} The first author thanks the financial support of the Clarendon Fund and the Skye Foundation. 
We thank Stephan Oepen for feedback and help with data preperation, and members of the Oxford NLP group for valuable discussions. \bibliographystyle{acl} \end{document} \section{Encoder-Decoder Models} \label{sec:models} \subsection{Sentence encoder} The sentence $\mathbf{e}$ is encoded with a bidirectional RNN. We use a standard LSTM architecture without peephole connections~\cite{JozefowiczZS15}. For every token $e$ we embed its word, POS tag and named entity (NE) tag as vectors $x_w$, $x_t$ and $x_n$, respectively. The embeddings are concatenated and passed through a linear transformation \[ g(e) = W^{(x)} [x_w; x_t; x_n] + b^{x}, \] such that $g(e)$ has the same dimension as the LSTM. Each input position $i$ is represented by a hidden state $h_i$, which is the concatenation of its forward and backward LSTM state vectors. \subsection{Hard attention decoder} We model the alignment of graph nodes to sentence tokens, $\mathbf{a}$, as a random variable. For the arc-eager model, $a_j$ corresponds to the alignment of the node of the buffer after action $t_j$ is executed. The distribution of $t_j$ is over all transitions and predicates (corresponding to shift transitions), predicted with a single softmax. The parser output is predicted by an RNN decoder. Let $s_j$ be the decoder hidden state at output position $j$. We initialize $s_0$ with the final state of the backward encoder. The alignment is predicted with a pointer network~\cite{VinyalsFJ15}. The logits are computed with an MLP scoring the decoder hidden state against each of the encoder hidden states (for $i = 1, \ldots, I$), \[ u_j^i = w^T \tanh(W^{(1)} h_i + W^{(2)} s_j). \] The alignment distribution is then estimated by \[ p(a_j = i | \mathbf{a}_{1:j-1}, \mathbf{t}_{1:j-1}, \mathbf{e}) = \mathrm{softmax}(u_j^i). \] To predict the next transition $t_i$, the output vector is conditioned on the encoder state vector $h_{a_j}$, corresponding to the alignment: \begin{align*} o_j &= W^{(3)} s_j + W^{(4)} h_{a_j} \\ v_j &= R^{(d)} o_j + b^{(d)}, \end{align*} where $R^{(d)}$ and $b^{(d)}$ are the output representation matrix and bias vector, respectively. The transition distribution is then given by \[ p(t_j | \mathbf{a}_{1:j}, \mathbf{t}_{1:j-1}, \mathbf{e}) = \mathrm{softmax}(v_j). \] Let $e(t)$ be the embedding of decoder symbol $t$. The RNN state at the next time-step is computed as \begin{align*} d_{j+1} &= W^{(5)} e(t_{j}) + W^{(6)} h_{a_j} \\ s_{j+1} &= RNN(d_{j+1}, s_{j}). \end{align*} The end-of-span alignment $a_j^{(e)}$ for MRS-based graphs is predicted with another pointer network. The end alignment of a token is predicted only when a node is reduced from the stack, therefore this alignment is not observed at each time-step; it is also not fed back into the model. The hard attention approach, based on supervised alignments, can be contrasted to soft attention, which learns to attend over the input without supervision. The attention is computed as with hard attention, as $\alpha_j^i = \mathrm{softmax}(u_j^i)$. However instead of making a hard selection, a weighted average over the encoder vectors is computed as $q_j = \sum_{i=1}^{i=I} \alpha_j^i h_i$. This vector is used instead of $h_{a_j}$ for prediction and feeding to the next time-step. \subsection{Stack-based model} We extend the hard attention model to include features based on the transition system stack. These features are embeddings from the bidirectional RNN encoder, corresponding to the alignments of the nodes on the buffer and on top of the stack. 
This approach is similar to the features proposed by \newcite{KiperwasserG16} and \newcite{CrossH16} for dependency parsing, although they do not use RNN decoders. To implement these features the layer that computes the output vector is extended to \[ o_j = W^{(3)} s_j + W^{(4)} h_{a_j} + W^{(7)} h_{\textrm{st}_0}, \] where $\texttt{st}_0$ is the sentence alignment index of the element on top of the stack. The input layer to the next RNN time-step is similarly extended to \[ d_{j+1} = W^{(5)} e(t_{j}) + W^{(6)} h_{\textrm{buf}} + W^{(8)} h_{\textrm{st}_0}, \] where \texttt{buf} is the buffer alignment after $t_j$ is executed. Our implementation of the stack-based model enables batch processing in static computation graphs, similar to \newcite{BowmanEa16}. We maintain a stack of alignment indexes for each element in the batch, which is updated inside the computation graph after each parsing action. This enables minibatch SGD during training as well as efficient batch decoding. We perform greedy decoding. For the stack-based model we ensure that if the stack is empty, the next transition predicted has to be shift. For the other models we ensure that the output is well-formed during post-processing by robustly skipping over out-of-place symbols or inserting missing ones. \section{Related Work} Prior work for MRS parsing predominantly predicts structures in the context of grammar-based parsing, where sentences are parsed to HPSG derivations consistent with the grammar, in this case the ERG~\cite{Flickinger00}. The nodes in the derivation trees are feature structures, from which MRS is extracted through unification. This approach fails to parse sentences for which no valid derivation is found. Maximum entropy models are used to score the derivations in order to find the most likely parse~\cite{ToutanovaMFO05}. This approach is implemented in the PET~\cite{Callmeier00} and ACE\footnote{\url{http://sweaglesw.org/linguistics/ace/}} parsers. There have also been some efforts to develop robust MRS parsers. One proposed approach learns a PCFG grammar to approximate the HPSG derivations~\cite{ZhangK11,ZhangEa14}. MRS is then extracted with robust unification to compose potentially incompatible feature structures, although that still fails for a small proportion of sentences. The model is trained on a large corpus of Wikipedia text parsed with the grammar-based parser. \newcite{Ytrestol12} proposed a transition-based approach to HPSG parsing that produces derivations from which both syntactic and semantic (MRS) parses can be extracted. The parser has an option not to be restricted by the ERG. However, neither of these approaches have results available that can be compared directly to our setup, or generally available implementations. Although AMR parsers produce graphs that are similar in structure to MRS-based graphs, most of them make assumptions that are invalid for MRS, and rely on extensive external AMR-specific resources. \newcite{FlaniganTCDS14} proposed a two-stage parser that first predicts concepts or subgraphs corresponding to sentence segments, and then parses these concepts into a graph structure. However MRS has a large proportion of abstract nodes that cannot be predicted from short segments, and interact closely with the graph structure. \newcite{WangXP15,WangXP15a} proposed a custom transition-system for AMR parsing that converts dependency trees to AMR graphs, relying on assumptions on the relationship between these. 
\newcite{PustHKMM15} proposed a parser based on syntax-based machine translation (MT), while AMR has also been integrated into CCG Semantic Parsing~\cite{ArtziLZ15,MisraA16}. Recently \newcite{DamonteCS16} and \newcite{PengWGX17} proposed AMR parsers based on neural networks. \section{Incremental Graph Parsing} \label{sec:parsing} We parse sentences to their meaning representations by incrementally predicting semantic graphs together with their alignments. Let $\mathbf{e} = e_1, e_2, \ldots, e_I$ be a tokenized English sentence, $\mathbf{t} = t_1, t_2, \ldots, t_J$ a sequential representation of its graph derivation and $\mathbf{a} = a_1, a_2, \ldots, a_J$ an alignment sequence consisting of integers in the range $1, \ldots, I$. We model the conditional distribution $p(\mathbf{t}, \mathbf{a} | \mathbf{e})$ which decomposes as \[ \prod_{j=1}^{J} p(a_j | \mathbf{(a,t)}_{1:j-1}, \mathbf{e}) p(t_j | \mathbf{a}_{1:j}, \mathbf{t}_{1:j-1}, \mathbf{e}). \] We also predict the end-of-span alignments as a seperate sequence $\mathbf{a^{(e)}}$. \subsection{Top-down linearization} We now consider how to linearize the semantic graphs, before defining the neural models to parameterize the parser in section~\ref{sec:models}. The first approach is to linearize a graph as the pre-order traversal of its spanning tree, starting at a designated root node (see Figure~\ref{fig:linear-eds}). Variants of this approach have been proposed for neural constituency parsing~\cite{VinyalsEa15}, logical form prediction~\cite{DongL16,JiaL16} and AMR parsing~\cite{BarzdinsG16,PengWGX17}. In the linearization, labels of edges whose direction are reversed in the spanning tree are marked by adding \texttt{-of}. Edges not included in the spanning tree, referred to as \emph{reentrancies}, are represented with special edges whose dependents are dummy nodes pointing back to the original nodes. Our potentially lossy representation represents these edges by repeating the dependent node labels and alignments, which are recovered heuristically. The alignment does not influence the linearized node ordering. \subsection{Transition-based parsing} Figure~\ref{fig:eds-graph} shows that the semantic graphs we work with can also be interpreted as dependency graphs, as nodes are aligned to sentence tokens. Transition-based parsing~\cite{Nivre08} has been used extensively to predict dependency graphs incrementally. We apply a variant of the arc-eager transition system that has been proposed for graph (as opposed to tree) parsing~\cite{SagaeTsujii08,TitovHMM09,GomezN10} to derive a transition-based parser for deep semantic graphs. In dependency parsing the sentence tokens also act as nodes in the graph, but here we need to generate the nodes incrementally as the transition-system proceeds, conditioning the generation on the given sentence. \newcite{DamonteCS16} proposed an arc-eager AMR parser, but their transition system is more narrowly restricted to AMR graphs. The transition system consists of a \emph{stack} of graph nodes being processed and a \emph{buffer}, holding a single node at a time. The main transition actions are \emph{shift}, \emph{reduce}, \emph{left-arc}, \emph{right-arc}. Figure~\ref{fig:transition-table} shows an example transition sequence together with the stack and buffer after each step. The shift transition moves the element on the buffer to the top of the stack, and generates a predicate and its alignment as the next node on the buffer. 
Left-arc and right-arc actions add labeled arcs between the buffer and stack top (for DMRS a transition for undirected arcs is included), but do not change the state of the stack or buffer. Finally, reduce pops the top element from the stack, and predicts its end-of-span alignment (if included in the representation). To predict non-planar arcs, we add another transition, which we call \emph{cross-arc}, which first predicts the stack index of a node which is not on top of the stack, adding an arc between the head of the buffer and that node. Another special transition designates the buffer node as the root. To derive an oracle for this transition system, it is necessary to determine the order in which the nodes are generated. We consider two approaches. The first ordering is obtained by performing an in-order traversal of the spanning tree, where the node order is determined by the alignment. In the resulting linearization the only non-planar arcs are reentrancies. The second approach lets the ordering be monotone (non-decreasing) with respect to the alignments, while respecting the in-order ordering for nodes with the same alignment. In an arc-eager oracle arcs are added greedily, while a reduce action can either be performed as soon as the stack top node has been connected to all its dependents, or delayed until it has to reduce to allow the correct parse tree to be formed. In our model the oracle delays reduce, where possible, until the end alignment of the stack top node spans the node on the buffer. As the span end alignments often cover phrases that they head (e.g. for quantifiers) this gives a natural interpretation to predicting the span end together with the reduce action. \subsection{Delexicalization and lemma prediction} Each token in MRS annotations is aligned to at most one surface predicate. We decompose surface predicate prediction by predicting candidate lemmas for input tokens, and delexicalized predicates consisting only of sense labels. The full surface predicates are then recovered through the predicted alignments. We extract a dictionary mapping words to lemmas from the ERG lexicon. Candidate lemmas are predicted using this dictionary, and where no dictionary entry is available with a lemmatizer. The same approach is applied to predict constants, along with additional normalizations such as mapping numbers to digit strings. We use the Stanford CoreNLP toolkit~\cite{ManningEa14} to tokenize and lemmatize sentences, and tag tokens with the Stanford Named Entity Recognizer~\cite{FinkelGM05}. The tokenization is customized to correspond closely to the ERG tokenization; hyphens are removed pre-processing step. For AMR we use automatic alignments and the graph topology to classify concepts as surface or abstract. %~\cite{BuysB17a}. The lexicon is restricted to Propbank~\cite{PalmerGK05} predicates; for other concepts we extract a lexicon from the training data.
Robust Incremental Neural Semantic Graph Parsing
1704.07092
Table 6: AMR parsing test set results (Smatch F1 scores). Published results are shown with the number of decimal places reported in the original papers.
[ "Model", "Smatch" ]
[ [ "FlaniganTCDS14", "56" ], [ "WangEa16", "66.54" ], [ "DamonteCS16", "64" ], [ "PengG16", "55" ], [ "PengWGX17", "52" ], [ "BarzdinsG16", "43.3" ], [ "TD no pointers", "56.56" ], [ "AE stack delex", "60.11" ] ]
Our best neural model outperforms the baseline JAMR parser (Flanigan et al.), but still lags behind state-of-the-art AMR parsers such as CAMR (Wang et al.) and AMR Eager (Damonte et al.). These models make extensive use of external resources, including syntactic parsers and semantic role labellers. Our attention-based encoder-decoder model already outperforms previous sequence-to-sequence AMR parsers, and the arc-eager model boosts accuracy further. Our model also outperforms a Synchronous Hyperedge Replacement Grammar model (Peng and Gildea), which is comparable as it does not make extensive use of external resources.
\section{Meaning Representations} We define a common framework for semantic graphs in which we can place both MRS-based graph representations (DMRS and EDS) and AMR. Sentence meaning is represented with rooted, labelled, connected, directed graphs~\cite{KuhlmannO16}. An example graph is visualized in Figure~\ref{fig:eds-graph}. representations. Node labels are referred to as \emph{predicates} (\emph{concepts} in AMR) and edge labels as \emph{arguments} (AMR \emph{relations}). In addition \emph{constants}, a special type of node modifiers, are used to denote the string values of named entities and numbers (including date and time expressions). Every node is aligned to a token or a continuous span of tokens in the sentence the graph corresponds to. Minimal Recursion Semantics (MRS) is a framework for computational semantics that can be used for parsing or generation~\cite{CopenstakeFPS05}. Instances and eventualities are represented with logical variables. Predicates take arguments with labels from a small, fixed set of roles. Arguments are either logical variables or handles, designated formalism-internal variables. Handle equality constraints support scope underspecification; multiple scope-resolved logical representations can be derived from one MRS structure. A predicate corresponds to its intrinsic argument and is aligned to a character span of the (untokenized) input sentence. Predicates representing named entities or numbers are parameterized by strings. Quantification is expressed through predicates that bound instance variables, rather than through logical operators such as $\exists$ or $\forall$. MRS was designed to be integrated with feature-based grammars such as Head-driven Phrase Structure Grammar (HPSG)~\cite{PollardS94} or Lexical Functional Grammar (LFG)~\cite{KaplanB82}. MRS has been implement the English Resource Grammar (ERG)~\cite{Flickinger00}, a broad-coverage high-precision HPSG grammar. \newcite{OepenL06} proposed Elementary Dependency Structure (EDS), a conversion of MRS to variable-free dependency graphs which drops scope underspecification. \newcite{Copenstake09} extended this conversion to avoid information loss, primarily through richer edge labels. The resulting representation, Dependency MRS (DMRS), can be converted back to the original MRS, or used directly in MRS-based applications~\cite{CopenstakeEa16}. We are interested in the empirical performance of parsers for both of these representations: while EDS is more interpretable as an independent semantic graph representation, DMRS can be related back to underspecified logical forms. A bilexical simplification of EDS has previously been used for semantic dependency parsing~\cite{OepenEa14,OepenEa15}. Figure~\ref{fig:eds-graph} illustrates an EDS graph. MRS makes an explicit distinction between surface and abstract predicates (by convention surface predicates are prefixed by an underscore). Surface predicates consist of a lemma followed by a coarse part-of-speech tag and an optional sense label. Predicates absent from the ERG lexicon are represented by their surface forms and POS tags. We convert the character-level predicate spans given by MRS to token-level spans for parsing purposes, but the representation does not require gold tokenization. Surface predicates usually align with the span of the token(s) they represent, while abstract predicates can span longer segments. 
In full MRS every predicate is annotated with a set of morphosyntactic features, encoding for example tense, aspect and number information; we do not currently model these features. AMR~\cite{BanarescuEa13} graphs can be represented in the same framework, despite a number of linguistic differences with MRS. Some information annotated explicitly in MRS is latent in AMR, including alignments and the distinction between surface (lexical) and abstract concepts. AMR predicates are based on PropBank~\cite{PalmerGK05}, annotated as lemmas plus sense labels, but they form only a subset of concepts. Other concepts are either English words or special keywords, corresponding to overt lexemes in some cases but not others. \section{Introduction} An important goal of Natural Language Understanding (NLU) is to parse sentences to structured, interpretable meaning representations that can be used for query execution, inference and reasoning. Recently end-to-end models have outperformed traditional pipeline approaches, predicting syntactic or semantic structure as intermediate steps, on NLU tasks such as sentiment analysis and semantic relatedness~\cite{LeM14,KirosEa15}, question answering~\cite{HermannEa15} % WestonCB14, and textual entailment~\cite{RocktaschelEa15}. However the linguistic structure used in applications has predominantly been shallow, restricted to bilexical dependencies or trees. In this paper we focus on robust parsing into linguistically deep representations. The main representation that we use is Minimal Recursion Semantics (MRS)~\cite{CopenstakeFMRS95,CopenstakeFPS05}, which serves as the semantic representation of the English Resource Grammar (ERG)~\cite{Flickinger00}. Existing parsers for full MRS (as opposed to bilexical semantic graphs derived from, but simplifying MRS) are grammar-based, performing disambiguation with a maximum entropy model~\cite{ToutanovaMFO05,ZhangOC07}; this approach has high precision but incomplete coverage. Our main contribution is to develop a fast and robust parser for full MRS-based semantic graphs. We exploit the power of global conditioning enabled by deep learning to predict linguistically deep graphs incrementally. The model does not have access to the underlying ERG or syntactic structures from which the MRS analyses were originally derived. We develop parsers for two graph-based conversions of MRS, Elementary Dependency Structure (EDS)~\cite{OepenL06} and Dependency MRS (DMRS)~\cite{Copenstake09}, of which the latter is inter-convertible with MRS. Abstract Meaning Representation (AMR)~\cite{BanarescuEa13} is a graph-based semantic representation that shares the goals of MRS. Aside from differences in the choice of which linguistic phenomena are annotated, MRS is a compositional representation explicitly coupled with the syntactic structure of the sentence, while AMR does not assume compositionality or alignment with the sentence structure. Recently a number of AMR parsers have been developed~\cite{FlaniganTCDS14,WangXP15,ArtziLZ15,DamonteCS16}, but corpora are still under active development and low inter-annotator agreement places on upper bound of $83\%$ F1 on expected parser performance~\cite{BanarescuEa13}. We apply our model to AMR parsing by introducing structure that is present explicitly in MRS but not in AMR~\cite{BuysB17a}. Parsers based on RNNs have achieved state-of-the-art performance for dependency parsing~\cite{DyerBLMS15,KiperwasserG16} and constituency parsing~\cite{VinyalsEa15,DyerKBS16,CrossH16a}. 
One of the main reasons for the prevalence of bilexical dependencies and tree-based representations is that they can be parsed with efficient and well-understood algorithms. However, one of the key advantages of deep learning is the ability to make predictions conditioned on unbounded contexts encoded with RNNs; this enables us to predict more complex structures without increasing algorithmic complexity. In this paper we show how to perform linguistically deep parsing with RNNs. Our parser is based on a transition system for semantic graphs. However, instead of generating arcs over an ordered, fixed set of nodes (the words in the sentence), we generate the nodes and their alignments jointly with the transition actions. We use a graph-based variant of the arc-eager transition-system. The sentence is encoded with a bidirectional RNN. The transition sequence, seen as a graph linearization, can be predicted with any encoder-decoder model, but we show that using hard attention, predicting the alignments with a pointer network and conditioning explicitly on stack-based features improves performance. In order to deal with data sparsity candidate lemmas are predicted as a pre-processing step, so that the RNN decoder predicts unlexicalized node labels. We evaluate our parser on DMRS, EDS and AMR graphs. Our model architecture improves performance from $79.68\%$ to $84.16\%$ F1 over an attention-based encoder-decoder baseline. Although our parser is less accurate that a high-precision grammar-based parser on a test set of sentences parsable by that grammar, incremental prediction and GPU batch processing enables it to parse $529$ tokens per second, against $7$ tokens per second for the grammar-based parser. On AMR parsing our model obtains $60.11\%$ Smatch. \begin{abstract} Parsing sentences to linguistically-expressive semantic representations is a key goal of Natural Language Processing. Yet statistical parsing has focussed almost exclusively on bilexical dependencies or domain-specific logical forms. We propose a neural encoder-decoder transition-based parser which is the first full-coverage semantic graph parser for Minimal Recursion Semantics (MRS). The model architecture uses stack-based embedding features, predicting graphs jointly with unlexicalized predicates and their token alignments. Our parser is more accurate than attention-based baselines on MRS, and on an additional Abstract Meaning Representation (AMR) benchmark, and GPU batch processing makes it an order of magnitude faster than a high-precision grammar-based parser. Further, the $86.69\%$ Smatch score of our MRS parser is higher than the upper-bound on AMR parsing, making MRS an attractive choice as a semantic representation.\footnote{Code, models and data preparation scripts are available at \url{https://github.com/janmbuys/DeepDeepParser}.} \end{abstract} \section{Experiments} \label{sec:experiments} \subsection{Data} DeepBank~\cite{FlickingerZK12} is an HPSG and MRS annotation of the Penn Treebank Wall Street Journal (WSJ) corpus. It was developed following an approach known as dynamic treebanking~\cite{OepenFTM04} that couples treebank annotation with grammar development, in this case of the ERG. This approach has been shown to lead to high inter-annotator agreement: $0.94$ against $0.71$ for AMR~\cite{BenderFOPC15}. 
Parses are only provided for sentences for which the ERG has an analysis acceptable to the annotator -- this means that we cannot evaluate parsing accuracy for sentences which the ERG cannot parse (approximately $15\%$ of the original corpus). We use Deepbank version $1.1$, corresponding to ERG \texttt{1214}\footnote{\url{http://svn.delph-in.net/erg/tags/1214/}}, following the suggested split of sections $0$ to $19$ as training data data, $20$ for development and $21$ for testing. The gold-annotated training data consists of 35,315 sentences. We use the LOGON environment\footnote{\url{http://moin.delph-in.net/LogonTop}} and the pyDelphin library\footnote{\url{https://github.com/delph-in/pydelphin}} to extract DMRS and EDS graphs. For AMR parsing we use LDC2015E86, the dataset released for the SemEval 2016 AMR parsing Shared Task~\cite{May16}. This data includes newswire, weblog and discussion forum text. The training set has 16,144 sentences. We obtain alignments using the rule-based JAMR aligner~\cite{FlaniganTCDS14}. \subsection{Evaluation} \newcite{DridanO11} proposed an evaluation metric called Elementary Dependency Matching (EDM) for MRS-based graphs. EDM computes the F1-score of tuples of predicates and arguments. A predicate tuple consists of the label and character span of a predicate, while an argument tuple consists of the character spans of the head and dependent nodes of the relation, together with the argument label. In order to tolerate subtle tokenization differences with respect to punctuation, we allow span pairs whose ends differ by one character to be matched. The Smatch metric~\cite{CaiK13}, proposed for evaluating AMR graphs, also measures graph overlap, but does not rely on sentence alignments to determine the correspondences between graph nodes. Smatch is instead computed by performing inference over graph alignments to estimate the maximum F1-score obtainable from a one-to-one matching between the predicted and gold graph nodes. \subsection{Model setup} Our parser is implemented in TensorFlow~\cite{AbadiEa15}. For training we use Adam~\cite{KingmaB14} with learning rate $0.01$ and batch-size $64$. Gradients norms are clipped to $5.0$~\cite{PascanuMB13}. We use single-layer LSTMs with dropout of $0.3$ (tuned on the development set) on input and output connections. We use encoder and decoder embeddings of size $256$, and POS and NE tag embeddings of size $32$, For DMRS and EDS graphs the hidden units size is set to $256$, for AMR it is $128$. This configuration, found using grid search and heuristic search within the range of models that fit into a single GPU, gave the best performance on the development set under multiple graph linearizations. Encoder word embeddings are initialized (in the first 100 dimensions) with pre-trained order-sensitive embeddings~\cite{LingDBT15}. Singletons in the encoder input are replaced with an unknown word symbol with probability $0.5$ for each iteration. \subsection{MRS parsing results} We compare different linearizations and model architectures for parsing DMRS on the development data, showing that our approach is more accurate than baseline neural approaches. We report EDM scores, including scores for predicate (EDM$_P$) and argument (EDM$_A$) prediction. First we report results using standard attention-based encoder-decoders, with the alignments encoded as token strings in the linearization. (Table~\ref{tab:dmrs-dev-delex}). 
We compare the top-down (TD) and arc-eager (AE) linearizations, as well as the effect of delexicalizing the predicates (factorizing lemmas out of the linearization and predicting them separately.) In both cases constants are predicted with a dictionary lookup based on the predicted spans. A special label is predicted for predicates not in the ERG lexicon -- the words and POS tags that make up those predicates are recovered through the alignments during post-processing. The arc-eager unlexicalized representation gives the best performance, even though the model has to learn to model the transition system stack through the recurrent hidden states without any supervision of the transition semantics. The unlexicalized models are more accurate, mostly due to their ability to generalize to sparse or unseen predicates occurring in the lexicon. For the arc-eager representation, the oracle EDM is $99\%$ for the lexicalized representation and $98.06\%$ for the delexicalized representation. The remaining errors are mostly due to discrepancies between the tokenization used by our system and the ERG tokenization. The unlexicalized models are also faster to train, as the decoder's output vocabulary is much smaller, reducing the expense of computing softmaxes over large vocabularies. Next we consider models with delexicalized linearizations that predict the alignments with pointer networks, contrasting soft and hard attention models (Table~\ref{tab:dmrs-dev-point}). The results show that the arc-eager models performs better than those based on top-down representation. For the arc-eager model we use hard attention, due to the natural interpretation of the alignment prediction corresponding to the transition system. The stack-based architecture gives further improvements. When comparing the effect of different predicate orderings for the arc-eager model, we find that the monotone ordering performs $0.44$ EDM better than the in-order ordering, despite having to parse more non-planar dependencies. We also trained models that only predict predicates (in monotone order) together with their start spans. The hard attention model obtains $91.36\%$ F1 on predicates together with their start spans with the unlexicalized model, compared to $88.22\%$ for lexicalized predicates and $91.65\%$ for the full parsing model. Table \ref{tab:dmrs-test} reports test set results for various evaluation metrics. Start EDM is calculated by requiring only the start of the alignment spans to match, not the ends. We compare the performance of our baseline and stack-based models against ACE, the ERG-based parser. Despite the promising performance of the model a gap remains between the accuracy of our parser and ACE. One reason for this is that the test set sentences will arguably be easier for ACE to parse as their choice was restricted by the same grammar that ACE uses. EDM metrics excluding end-span prediction (Start EDM) show that our parser has relatively more difficulty in parsing end-span predictions than the grammar-based parser. We also evaluate the speed of our model compared with ACE. For the unbatched version of our model, the stack-based parser parses $41.63$ tokens per second, while the batched implementation parses $529.42$ tokens per second using a batch size of $128$. In comparison, the setting of ACE for which we report accuracies parses $7.47$ tokens per second. 
By restricting the memory usage of ACE, which restricts its coverage, we see that ACE can parse $11.07$ tokens per second at $87.7\%$ coverage, and $15.11$ tokens per second at $77.8\%$ coverage. Finally we report results for parsing EDS (Table~\ref{tab:eds-test}). The EDS parsing task is slightly simpler than DMRS, due to the absence of rich argument labels and additional graph edges that allow the recovery of full MRS. We see that for ACE the accuracies are very similar, while for our model EDS parsing is more accurate on the EDM metrics. We hypothesize that most of the extra information in DMRS can be obtained through the ERG, to which ACE has access but our model doesn't. An EDS corpus which consists of about $95\%$ of the DeepBank data has also been released\footnote{\url{http://sdp.delph-in.net/osdp-12.tgz}}, with the goal of enabling comparison with other semantic graph parsing formalisms, including CCG dependencies and Prague Semantic Dependencies, on the same data set~\cite{KuhlmannO16}. On this corpus our model obtains $85.87$ EDM and $85.49$ Smatch. \subsection{AMR parsing} We apply the same approach to AMR parsing. Results on the development set are given in Table~\ref{tab:amr-dev}. The arc-eager-based models again give better performance, mainly due to improved concept prediction accuracy. However, concept prediction remains the most important weakness of the model; \newcite{DamonteCS16} reports that state-of-the-art AMR parsers score $83\%$ on concept prediction. We report test set results in Table~\ref{tab:amr-test}. Our best neural model outperforms the baseline JAMR parser~\cite{FlaniganTCDS14}, but still lags behind the performance of state-of-the-art AMR parsers such as CAMR~\cite{WangEa16} and AMR Eager~\cite{DamonteCS16}. These models make extensive use of external resources, including syntactic parsers and semantic role labellers. Our attention-based encoder-decoder model already outperforms previous sequence-to-sequence AMR parsers~\cite{BarzdinsG16,PengWGX17}, and the arc-eager model boosts accuracy further. Our model also outperforms a Synchronous Hyperedge Replacement Grammar model~\cite{PengG16} which is comparable as it does not make extensive use of external resources. \documentclass[11pt,a4paper]{article} \usepackage[hyperref]{acl2017} \aclfinalcopy % Uncomment this line for the final submission \newcommand{\pb}[1]{\textcolor{red}{\bf\small [#1 --PB]}} \newcommand\BibTeX{B{\sc ib}\TeX} \title{Robust Incremental Neural Semantic Graph Parsing} \author{Jan Buys$^1$ and Phil Blunsom$^{1,2}$ \\ $^1$Department of Computer Science, University of Oxford \qquad $^2$DeepMind \\ \texttt{\{jan.buys,phil.blunsom\}@cs.ox.ac.uk} \\ } \date{} \begin{document} \maketitle \input{abstract} \input{introduction} \input{graphs} \input{parsing} \input{models} \input{related-work} \input{experiments} \section{Conclusion} In this paper we advance the state of parsing by employing deep learning techniques to parse sentence to linguistically expressive semantic representations that have not previously been parsed in an end-to-end fashion. We presented a robust, wide-coverage parser for MRS that is faster than existing parsers and amenable to batch processing. We believe that there are many future avenues to explore to further increase the accuracy of such parsers, including different training objectives, more structured architectures and semi-supervised learning. \section*{Acknowledgments} The first author thanks the financial support of the Clarendon Fund and the Skye Foundation. 
We thank Stephan Oepen for feedback and help with data preparation, and members of the Oxford NLP group for valuable discussions. \bibliographystyle{acl} \end{document} \section{Encoder-Decoder Models} \label{sec:models} \subsection{Sentence encoder} The sentence $\mathbf{e}$ is encoded with a bidirectional RNN. We use a standard LSTM architecture without peephole connections~\cite{JozefowiczZS15}. For every token $e$ we embed its word, POS tag and named entity (NE) tag as vectors $x_w$, $x_t$ and $x_n$, respectively. The embeddings are concatenated and passed through a linear transformation \[ g(e) = W^{(x)} [x_w; x_t; x_n] + b^{x}, \] such that $g(e)$ has the same dimension as the LSTM. Each input position $i$ is represented by a hidden state $h_i$, which is the concatenation of its forward and backward LSTM state vectors. \subsection{Hard attention decoder} We model the alignment of graph nodes to sentence tokens, $\mathbf{a}$, as a random variable. For the arc-eager model, $a_j$ corresponds to the alignment of the node on the buffer after action $t_j$ is executed. The distribution of $t_j$ is over all transitions and predicates (corresponding to shift transitions), predicted with a single softmax. The parser output is predicted by an RNN decoder. Let $s_j$ be the decoder hidden state at output position $j$. We initialize $s_0$ with the final state of the backward encoder. The alignment is predicted with a pointer network~\cite{VinyalsFJ15}. The logits are computed with an MLP scoring the decoder hidden state against each of the encoder hidden states (for $i = 1, \ldots, I$), \[ u_j^i = w^T \tanh(W^{(1)} h_i + W^{(2)} s_j). \] The alignment distribution is then estimated by \[ p(a_j = i | \mathbf{a}_{1:j-1}, \mathbf{t}_{1:j-1}, \mathbf{e}) = \mathrm{softmax}(u_j^i). \] To predict the next transition $t_j$, the output vector is conditioned on the encoder state vector $h_{a_j}$, corresponding to the alignment: \begin{align*} o_j &= W^{(3)} s_j + W^{(4)} h_{a_j} \\ v_j &= R^{(d)} o_j + b^{(d)}, \end{align*} where $R^{(d)}$ and $b^{(d)}$ are the output representation matrix and bias vector, respectively. The transition distribution is then given by \[ p(t_j | \mathbf{a}_{1:j}, \mathbf{t}_{1:j-1}, \mathbf{e}) = \mathrm{softmax}(v_j). \] Let $e(t)$ be the embedding of decoder symbol $t$. The RNN state at the next time-step is computed as \begin{align*} d_{j+1} &= W^{(5)} e(t_{j}) + W^{(6)} h_{a_j} \\ s_{j+1} &= RNN(d_{j+1}, s_{j}). \end{align*} The end-of-span alignment $a_j^{(e)}$ for MRS-based graphs is predicted with another pointer network. The end alignment of a token is predicted only when a node is reduced from the stack, therefore this alignment is not observed at each time-step; it is also not fed back into the model. The hard attention approach, based on supervised alignments, can be contrasted to soft attention, which learns to attend over the input without supervision. The attention weights are computed as in hard attention: $\alpha_j^i = \mathrm{softmax}(u_j^i)$. However, instead of making a hard selection, a weighted average over the encoder vectors is computed as $q_j = \sum_{i=1}^{i=I} \alpha_j^i h_i$. This vector is used instead of $h_{a_j}$ for prediction and feeding to the next time-step. \subsection{Stack-based model} We extend the hard attention model to include features based on the transition system stack. These features are embeddings from the bidirectional RNN encoder, corresponding to the alignments of the nodes on the buffer and on top of the stack. 
This approach is similar to the features proposed by \newcite{KiperwasserG16} and \newcite{CrossH16} for dependency parsing, although they do not use RNN decoders. To implement these features the layer that computes the output vector is extended to \[ o_j = W^{(3)} s_j + W^{(4)} h_{a_j} + W^{(7)} h_{\textrm{st}_0}, \] where $\texttt{st}_0$ is the sentence alignment index of the element on top of the stack. The input layer to the next RNN time-step is similarly extended to \[ d_{j+1} = W^{(5)} e(t_{j}) + W^{(6)} h_{\textrm{buf}} + W^{(8)} h_{\textrm{st}_0}, \] where \texttt{buf} is the buffer alignment after $t_j$ is executed. Our implementation of the stack-based model enables batch processing in static computation graphs, similar to \newcite{BowmanEa16}. We maintain a stack of alignment indexes for each element in the batch, which is updated inside the computation graph after each parsing action. This enables minibatch SGD during training as well as efficient batch decoding. We perform greedy decoding. For the stack-based model we ensure that if the stack is empty, the next transition predicted has to be shift. For the other models we ensure that the output is well-formed during post-processing by robustly skipping over out-of-place symbols or inserting missing ones. \section{Related Work} Prior work for MRS parsing predominantly predicts structures in the context of grammar-based parsing, where sentences are parsed to HPSG derivations consistent with the grammar, in this case the ERG~\cite{Flickinger00}. The nodes in the derivation trees are feature structures, from which MRS is extracted through unification. This approach fails to parse sentences for which no valid derivation is found. Maximum entropy models are used to score the derivations in order to find the most likely parse~\cite{ToutanovaMFO05}. This approach is implemented in the PET~\cite{Callmeier00} and ACE\footnote{\url{http://sweaglesw.org/linguistics/ace/}} parsers. There have also been some efforts to develop robust MRS parsers. One proposed approach learns a PCFG grammar to approximate the HPSG derivations~\cite{ZhangK11,ZhangEa14}. MRS is then extracted with robust unification to compose potentially incompatible feature structures, although that still fails for a small proportion of sentences. The model is trained on a large corpus of Wikipedia text parsed with the grammar-based parser. \newcite{Ytrestol12} proposed a transition-based approach to HPSG parsing that produces derivations from which both syntactic and semantic (MRS) parses can be extracted. The parser has an option not to be restricted by the ERG. However, neither of these approaches have results available that can be compared directly to our setup, or generally available implementations. Although AMR parsers produce graphs that are similar in structure to MRS-based graphs, most of them make assumptions that are invalid for MRS, and rely on extensive external AMR-specific resources. \newcite{FlaniganTCDS14} proposed a two-stage parser that first predicts concepts or subgraphs corresponding to sentence segments, and then parses these concepts into a graph structure. However MRS has a large proportion of abstract nodes that cannot be predicted from short segments, and interact closely with the graph structure. \newcite{WangXP15,WangXP15a} proposed a custom transition-system for AMR parsing that converts dependency trees to AMR graphs, relying on assumptions on the relationship between these. 
\newcite{PustHKMM15} proposed a parser based on syntax-based machine translation (MT), while AMR has also been integrated into CCG Semantic Parsing~\cite{ArtziLZ15,MisraA16}. Recently \newcite{DamonteCS16} and \newcite{PengWGX17} proposed AMR parsers based on neural networks. \section{Incremental Graph Parsing} \label{sec:parsing} We parse sentences to their meaning representations by incrementally predicting semantic graphs together with their alignments. Let $\mathbf{e} = e_1, e_2, \ldots, e_I$ be a tokenized English sentence, $\mathbf{t} = t_1, t_2, \ldots, t_J$ a sequential representation of its graph derivation and $\mathbf{a} = a_1, a_2, \ldots, a_J$ an alignment sequence consisting of integers in the range $1, \ldots, I$. We model the conditional distribution $p(\mathbf{t}, \mathbf{a} | \mathbf{e})$ which decomposes as \[ \prod_{j=1}^{J} p(a_j | \mathbf{(a,t)}_{1:j-1}, \mathbf{e}) p(t_j | \mathbf{a}_{1:j}, \mathbf{t}_{1:j-1}, \mathbf{e}). \] We also predict the end-of-span alignments as a separate sequence $\mathbf{a^{(e)}}$. \subsection{Top-down linearization} We now consider how to linearize the semantic graphs, before defining the neural models to parameterize the parser in section~\ref{sec:models}. The first approach is to linearize a graph as the pre-order traversal of its spanning tree, starting at a designated root node (see Figure~\ref{fig:linear-eds}). Variants of this approach have been proposed for neural constituency parsing~\cite{VinyalsEa15}, logical form prediction~\cite{DongL16,JiaL16} and AMR parsing~\cite{BarzdinsG16,PengWGX17}. In the linearization, labels of edges whose direction is reversed in the spanning tree are marked by adding \texttt{-of}. Edges not included in the spanning tree, referred to as \emph{reentrancies}, are represented with special edges whose dependents are dummy nodes pointing back to the original nodes. Our potentially lossy representation represents these edges by repeating the dependent node labels and alignments, which are recovered heuristically. The alignment does not influence the linearized node ordering. \subsection{Transition-based parsing} Figure~\ref{fig:eds-graph} shows that the semantic graphs we work with can also be interpreted as dependency graphs, as nodes are aligned to sentence tokens. Transition-based parsing~\cite{Nivre08} has been used extensively to predict dependency graphs incrementally. We apply a variant of the arc-eager transition system that has been proposed for graph (as opposed to tree) parsing~\cite{SagaeTsujii08,TitovHMM09,GomezN10} to derive a transition-based parser for deep semantic graphs. In dependency parsing the sentence tokens also act as nodes in the graph, but here we need to generate the nodes incrementally as the transition-system proceeds, conditioning the generation on the given sentence. \newcite{DamonteCS16} proposed an arc-eager AMR parser, but their transition system is more narrowly restricted to AMR graphs. The transition system consists of a \emph{stack} of graph nodes being processed and a \emph{buffer}, holding a single node at a time. The main transition actions are \emph{shift}, \emph{reduce}, \emph{left-arc} and \emph{right-arc}. Figure~\ref{fig:transition-table} shows an example transition sequence together with the stack and buffer after each step. The shift transition moves the element on the buffer to the top of the stack, and generates a predicate and its alignment as the next node on the buffer. 
Left-arc and right-arc actions add labeled arcs between the buffer and stack top (for DMRS a transition for undirected arcs is included), but do not change the state of the stack or buffer. Finally, reduce pops the top element from the stack, and predicts its end-of-span alignment (if included in the representation). To predict non-planar arcs, we add another transition, which we call \emph{cross-arc}, which first predicts the stack index of a node which is not on top of the stack, adding an arc between the head of the buffer and that node. Another special transition designates the buffer node as the root. To derive an oracle for this transition system, it is necessary to determine the order in which the nodes are generated. We consider two approaches. The first ordering is obtained by performing an in-order traversal of the spanning tree, where the node order is determined by the alignment. In the resulting linearization the only non-planar arcs are reentrancies. The second approach lets the ordering be monotone (non-decreasing) with respect to the alignments, while respecting the in-order ordering for nodes with the same alignment. In an arc-eager oracle, arcs are added greedily, while a reduce action can either be performed as soon as the stack top node has been connected to all its dependents, or delayed until it has to reduce to allow the correct parse tree to be formed. In our model the oracle delays reduce, where possible, until the end alignment of the stack top node spans the node on the buffer. As the span end alignments often cover phrases that they head (e.g. for quantifiers), this gives a natural interpretation to predicting the span end together with the reduce action. \subsection{Delexicalization and lemma prediction} Each token in MRS annotations is aligned to at most one surface predicate. We decompose surface predicate prediction by predicting candidate lemmas for input tokens, and delexicalized predicates consisting only of sense labels. The full surface predicates are then recovered through the predicted alignments. We extract a dictionary mapping words to lemmas from the ERG lexicon. Candidate lemmas are predicted using this dictionary and, where no dictionary entry is available, with a lemmatizer. The same approach is applied to predict constants, along with additional normalizations such as mapping numbers to digit strings. We use the Stanford CoreNLP toolkit~\cite{ManningEa14} to tokenize and lemmatize sentences, and tag tokens with the Stanford Named Entity Recognizer~\cite{FinkelGM05}. The tokenization is customized to correspond closely to the ERG tokenization; hyphens are removed in a pre-processing step. For AMR we use automatic alignments and the graph topology to classify concepts as surface or abstract. %~\cite{BuysB17a}. The lexicon is restricted to Propbank~\cite{PalmerGK05} predicates; for other concepts we extract a lexicon from the training data.
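To make the lemma-prediction step concrete, the sketch below shows one plausible way to recover a full surface predicate from a predicted sense label and its alignment, using a word-to-lemma dictionary with a crude suffix-stripping fallback. The dictionary contents, suffix rules and function names are illustrative assumptions and do not correspond to the actual ERG-derived resources or the Stanford tools used above.

\begin{verbatim}
# Sketch of delexicalized surface-predicate recovery: the model predicts a
# sense label and an alignment; the lemma comes from a word-to-lemma
# dictionary, falling back to a toy suffix-stripping lemmatizer here.
LEXICON = {"wanted": "want", "sleeps": "sleep"}   # stand-in dictionary

def candidate_lemma(word):
    word = word.lower()
    if word in LEXICON:
        return LEXICON[word]
    for suffix in ("ing", "ed", "s"):             # crude fallback lemmatizer
        if word.endswith(suffix):
            return word[: -len(suffix)]
    return word

def recover_predicate(sense_label, aligned_word):
    """Combine a delexicalized sense label with the lemma of the aligned token."""
    return "_{}_{}".format(candidate_lemma(aligned_word), sense_label)

tokens = ["Kim", "wanted", "to", "sleep"]
print(recover_predicate("v_1", tokens[1]))        # -> _want_v_1
\end{verbatim}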
Diverse beam Search: Decoding Diverse Solutions from Neural Sequence Models
1610.02424
Table 1: Top: Oracle SPICE@k and distinct n-grams on the COCO image captioning task at B=20. While we report SPICE, we observe similar trends in other metrics (reported in the supplement). Bottom: Breakdown of results by difficulty class, highlighting the relative improvement over BS.
[ "[EMPTY]", "[BOLD] Method", "[BOLD] SPICE", "[BOLD] Oracle SPICE@k @5", "[BOLD] Oracle SPICE@k @10", "[BOLD] Oracle SPICE@k @20", "[BOLD] Distinct n-Grams n = 1", "[BOLD] Distinct n-Grams 2", "[BOLD] Distinct n-Grams 3", "[BOLD] Distinct n-Grams 4" ]
[ [ "COCO", "BS", "16.27", "22.96", "25.14", "27.34", "0.40", "1.51", "3.25", "5.67" ], [ "COCO", "Li & Jurafsky ( 2016 )", "16.35", "22.71", "25.23", "27.59", "0.54", "2.40", "5.69", "8.94" ], [ "COCO", "DBS", "[BOLD] 16.783", "[BOLD] 23.08", "[BOLD] 26.08", "[BOLD] 28.09", "[BOLD] 0.56", "[BOLD] 2.96", "[BOLD] 7.38", "[BOLD] 13.44" ], [ "COCO", "Li et al. ( 2015 )", "16.74", "23.27", "26.10", "27.94", "0.42", "1.37", "3.46", "6.10" ], [ "PASCAL-50S", "BS", "4.93", "7.04", "7.94", "8.74", "0.12", "0.57", "1.35", "2.50" ], [ "PASCAL-50S", "Li & Jurafsky ( 2016 )", "5.08", "7.24", "8.09", "8.91", "0.15", "0.97", "2.43", "5.31" ], [ "PASCAL-50S", "DBS", "[BOLD] 5.357", "[BOLD] 7.357", "[BOLD] 8.269", "[BOLD] 9.293", "[BOLD] 0.18", "[BOLD] 1.26", "[BOLD] 3.67", "[BOLD] 7.33" ], [ "PASCAL-50S", "Li et al. ( 2015 )", "5.12", "7.17", "8.16", "8.56", "0.13", "1.15", "3.58", "8.42" ] ]
Our method produces significantly more distinct n-grams (almost 300% increase in the number of 4-grams produced) as compared to BS. We also note that our method tends to produce slightly longer captions compared to beam search on average. Moreover, on the PASCAL-50S test split we observe that DBS finds more likely top-1 solutions on average – DBS obtains a maximum log-probability of -6.53, compared to -6.91 obtained by BS with the same beam width. While the performance of DBS is guaranteed to be better than a BS of size B/G, this experimental evidence suggests that using DBS as a replacement for BS leads to better or at least comparable performance. Results by Image Complexity. For example, at Oracle Spice@20, DBS achieves significant improvements over BS of 0.67, 0.91, and 1.13 for Simple, Average, and Complex images respectively. While DBS improves over BS in all settings, complex images benefit even more from diversity-inducing inference than simple images.
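For reference, oracle@k scores of the kind reported in this table are simply the maximum metric value over the top-k decoded candidates. The sketch below illustrates this with a placeholder unigram-overlap score standing in for SPICE; the candidate and reference captions are invented.

\begin{verbatim}
# Illustrative oracle@k computation over a decoded candidate list.
# `metric` stands in for a task metric such as SPICE or BLEU; a trivial
# unigram-overlap F1 is used here purely as a placeholder.
def unigram_f1(candidate, reference):
    c, r = set(candidate.split()), set(reference.split())
    if not c or not r:
        return 0.0
    p, rec = len(c & r) / len(c), len(c & r) / len(r)
    return 0.0 if p + rec == 0 else 2 * p * rec / (p + rec)

def oracle_at_k(candidates, reference, k, metric=unigram_f1):
    """Best metric value achievable among the top-k candidates."""
    return max(metric(c, reference) for c in candidates[:k])

beams = ["a man riding a horse",
         "a person on a horse",
         "a man on a brown horse in a field"]
print(oracle_at_k(beams, "a man rides a brown horse", k=3))
\end{verbatim}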
\parskip=3pt \abovedisplayskip 3.0pt plus2pt minus2pt% \belowdisplayskip \abovedisplayskip \renewcommand{\baselinestretch}{0.98} \newenvironment{packed_enum}{ \begin{enumerate} \setlength{\itemsep}{1pt} \setlength{\parskip}{0pt} \setlength{\parsep}{0pt} } {\end{enumerate}} \newenvironment{packed_item}{ \begin{itemize} \setlength{\itemsep}{1pt} \setlength{\parskip}{0pt} \setlength{\parsep}{0pt} }{\end{itemize}} \newlength{\sectionReduceTop} \newlength{\sectionReduceBot} \newlength{\subsectionReduceTop} \newlength{\subsectionReduceBot} \newlength{\abstractReduceTop} \newlength{\abstractReduceBot} \newlength{\captionReduceTop} \newlength{\captionReduceBot} \newlength{\subsubsectionReduceTop} \newlength{\subsubsectionReduceBot} \newlength{\eqnReduceTop} \newlength{\eqnReduceBot} \newlength{\horSkip} \newlength{\verSkip} \newlength{\figureHeight} \setlength{\figureHeight}{1.7in} \setlength{\horSkip}{-.09in} \setlength{\verSkip}{-.1in} \setlength{\subsectionReduceTop}{-0.12in} \setlength{\subsectionReduceBot}{-0.09in} \setlength{\sectionReduceTop}{-0.08in} \setlength{\sectionReduceBot}{-0.10in} \setlength{\subsubsectionReduceTop}{-0.06in} \setlength{\subsubsectionReduceBot}{-0.05in} \setlength{\abstractReduceTop}{-0.05in} \setlength{\abstractReduceBot}{-0.10in} \setlength{\eqnReduceTop}{-0.05in} \setlength{\eqnReduceBot}{-0.05in} \setlength{\captionReduceTop}{-0.14in} \setlength{\captionReduceBot}{-0.12in} \subsection{Sensitivity Analysis and Effect of Diversity Functions} \label{div_choice} \vspace{-5pt} \label{div_type} In this section, we study the impact of the number of groups, the strength of diversity penalty, and various forms of diversity functions for language models. Further discussion and experimental details are included in the supplementary materials. \textbf{Number of Groups ($\mathbf{G}$).} Setting $G{=}B$ allows for the maximum exploration of the space, while setting $G{=}1$ reduces our method to BS, resulting in increased exploitation of the search-space around the 1-best decoding. Thus, increasing the number of groups enables us to explore various modes of the model. Empirically, we find that maximum exploration correlates with improved oracle accuracy and hence use $G{=}B$ to report results unless mentioned otherwise. \iffalse \fi \textbf{Diversity Strength ($\mathbf{\lambda}$).} The diversity strength $\lambda$ specifies the trade-off between the joint $\log$-probability and the diversity terms. As expected, we find that a higher value of $\lambda$ produces a more diverse list; however, excessively high values of $\lambda$ can overpower model probability and result in grammatically incorrect outputs. We set $\lambda$ by performing a grid search on the validation set for all experiments. We find a wide range of $\lambda$ values (0.2 to 0.8) work well for most tasks and datasets. \textbf{Choice of Diversity Function ($\delta$).} \ar{As mentioned in \ref{form}, the sequence level dissimilarity term $\delta(\cdot, \cdot)$ can be design to satisfy different design choices.} We discuss some of these below: \setlength{\plitemsep}{-1.5ex} \begin{compactenum}[\hspace{0pt}-] \item \emph{Hamming Diversity.} This form penalizes the selection of tokens used in previous groups proportional to the number of times it was selected before. \\ \item \emph{Cumulative Diversity.} Once two sequences have diverged sufficiently, it seems unnecessary and perhaps harmful to restrict that they cannot use the same words at the same time. 
To encode this `backing-off' of the diversity penalty, we introduce cumulative diversity, which keeps a count of identical words used at every time step, indicative of overall dissimilarity. Specifically, $\delta(\yb_{[t]}, \yb_{b,[t]}^g) = \exp\{-\nicefrac{\left(\sum_{\tau{\in}t}\sum_{b{\in}B'}I[y_{b,\tau}^h\neq y_{b,\tau}^g]\right)}{\Gamma}\}$ where $\Gamma$ is a temperature parameter controlling the strength of the cumulative diversity term and $I[\cdot]$ is the indicator function. \\ \item \emph{n-gram Diversity.} The current group is penalized for producing the same n-grams as previous groups, regardless of alignment in time -- similar to \cite{gimpel_emnlp13}. This is proportional to the number of times each n-gram in a candidate occurred in previous groups. Unlike hamming diversity, n-grams capture higher order structures in the sequences. \\ \item \emph{Neural-embedding Diversity.} While all the previous diversity functions discussed above perform exact matches, neural embeddings such as word2vec \citep{mikolov_nips13} can penalize semantically similar words like synonyms. This is incorporated in each of the previous diversity functions by replacing the hamming similarity with a soft version obtained by computing the cosine similarity between word2vec representations. When used with n-gram diversity, the representation of the n-gram is obtained by summing the vectors of the constituent words. \end{compactenum} Each of these various forms encodes different notions of diversity. Hamming diversity ensures different words are used at different times, but can be circumvented by small changes in sequence alignment. While n-gram diversity captures higher order statistics, it ignores sentence alignment. Neural-embedding based encodings can be seen as a semantic blurring of either the hamming or n-gram metrics, with word2vec representation similarity propagating diversity penalties not only to exact matches but also to close synonyms. We find that using any of the above functions helps outperform BS in the tasks we examine; hamming diversity achieves the best oracle performance despite its simplicity. A comparison of the performance of these functions for image-captioning is provided in the supplementary. \iffalse \fi \newcommand{\yb}{\mathbf{y}} \newcommand{\xb}{\mathbf{x}} In the last few years, Recurrent Neural Networks (RNNs), Long Short-Term Memory networks (LSTMs) or more generally, neural sequence models have become the standard choice for modeling time-series data for a wide range of applications such as speech recognition \citep{graves_arxiv13}, machine translation \citep{bahdanau_arxiv14}, conversation modeling \citep{vinyals_arxiv15}, image and video captioning \citep{vinyals_cvpr15, venugopalan_cvpr15}, and visual question answering \citep{antol_iccv15}. RNN based sequence generation architectures model the conditional probability $\prob(\yb | \xb)$ of an output sequence $\yb = (y_1,\ldots,y_T)$ given an input $\xb$ (possibly also a sequence); where the output tokens $y_t$ are from a finite vocabulary, $\calV$. \textbf{Inference in RNNs.} Maximum a Posteriori (MAP) inference for RNNs is the task of finding the most likely output sequence given the input. Since the number of possible sequences grows as $|\calV|^T$, exact inference is NP-hard, so approximate inference algorithms like Beam Search (BS) are commonly employed. BS is a heuristic graph-search algorithm that maintains the $B$ top-scoring partial sequences expanded in a greedy left-to-right fashion. 
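A minimal sketch of one such step is given below; the vocabulary and the scoring function are toy stand-ins for the RNN's conditional log-probabilities, invented purely for illustration.

\begin{verbatim}
# Minimal sketch of left-to-right beam search with beam width B.
# `log_prob` is a toy stand-in for the model's log p(token | prefix, x).
VOCAB = ["a", "man", "rides", "horse"]

def log_prob(prefix, token):
    return -1.0 if token not in prefix else -3.0   # illustrative scorer

def beam_step(beams, B):
    """Expand every (prefix, score) pair with all tokens and keep the top-B."""
    expansions = [(prefix + [tok], score + log_prob(prefix, tok))
                  for prefix, score in beams
                  for tok in VOCAB]
    return sorted(expansions, key=lambda e: e[1], reverse=True)[:B]

beams = [([], 0.0)]
for _ in range(3):                                 # decode three tokens
    beams = beam_step(beams, B=2)
print([(" ".join(p), round(s, 2)) for p, s in beams])
\end{verbatim}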
Fig.~\ref{fig:cover} shows a sample BS search tree. \textbf{Lack of Diversity in BS.} Despite the widespread usage of BS, it has long been understood that solutions decoded by BS are generic and lacking in diversity~\citep{finkel_emnlp06,gimpel_emnlp13,li_arxiv15,li_arxiv16}. To illustrate this, a comparison of captions provided by humans (bottom) and BS (topmost) are shown in \figref{fig:cover}. While this behavior of BS is disadvantageous for many reasons, we highlight the three most crucial ones here: \setlength{\plitemsep}{-1.5ex} \begin{compactenum}[\hspace{-1pt}i)] \item The production of near-identical beams make BS a computationally wasteful algorithm, with essentially the same computation being repeated for no significant gain in performance.\\ \item Due to \emph{loss-evaluation mismatch} \ie improvements in posterior-probabilities not necessarily corresponding to improvements in task-specific metrics, it is common practice \citep{vinyals_cvpr15, karpathy_cvpr15, ferraro_arxiv16} to \emph{deliberately throttle BS to become a poorer optimization algorithm} by using reduced beam widths. This treatment of an optimization algorithm as a hyper-parameter is not only intellectually dissatisfying but also has a significant practical side-effect -- it leads to the decoding of largely bland, generic, and ``safe'' outputs, \eg always saying ``I don't know'' in conversation models~\citep{corrado_blog15}. \\ \item Most importantly, lack of diversity in the decoded solutions is fundamentally crippling in AI problems with \emph{significant ambiguity} -- \eg there are multiple ways of describing an image or responding in a conversation that are ``correct'' and it is important to capture this ambiguity by finding several diverse plausible hypotheses. \end{compactenum} \setlength{\plitemsep}{1ex} \textbf{Overview and Contributions.} To address these shortcomings, we propose \emph{Diverse Beam Search (DBS)} -- a general framework to decode a list of diverse sequences that can be used as an \emph{alternative} to BS. At a high level, DBS decodes diverse lists by dividing the beam budget into groups and enforcing diversity between groups of beams. Drawing from recent work in the probabilistic graphical models literature on Diverse M-Best (\texttt{DivMBest}) MAP inference~\citep{batra_eccv12, prasad_nips14,kirillov_cvpr15}, we optimize an objective that consists of two terms -- the sequence likelihood under the model and a dissimilarity term that encourages beams across groups to differ. This diversity-augmented model score is optimized in a \emph{doubly greedy} manner -- greedily optimizing along both time (like BS) and groups (like DivMBest). \\ \\ To summarize, our primary technical contribution is Diverse Beam Search, a doubly greedy approximate inference algorithm for decoding diverse sequences. \ar{To demonstrate its broad applicability, we report results on two image-grounded language generation tasks, captioning and question generation and on machine translation. Our method consistently outperforms BS while being comparable in terms of both run-time and memory requirements. We find that DBS results in improvements on both oracle task-specific and diversity-related metrics against baselines. Further, we notice that these gains are more pronounced as the image becomes more complex consisting of multiple objects and interactions.} We \ar{also} conduct human studies to evaluate the role of diversity in human preferences between BS and DBS for image captions. 
We also analyze the parameters of DBS and show they are robust over a wide range of values. Finally, we also show that our method is general enough to incorporate various forms for the dissimilarity term. Our implementation is available at \url{https://github.com/ashwinkalyan/dbs}. Also, a demo of DBS on image-captioning is available at \url{dbs.cloudcv.org}. \documentclass{article} % For LaTeX2e \usepackage[pagebackref=true,breaklinks=true,letterpaper=true,colorlinks,bookmarks=false,citecolor=blue]{hyperref} \usepackage[T1]{fontenc} % use 8-bit T1 fonts % simple URL typesetting % professional-quality tables % blackboard math symbols % compact symbols for 1/2, etc. \usepackage[belowskip=0pt,aboveskip=0pt,font=small]{caption} \usepackage[belowskip=0pt,aboveskip=0pt,font=small]{subcaption} \setlength{\intextsep}{7pt plus 0pt minus 0pt} \usepackage[olditem,oldenum]{paralist} \usepackage[ruled,vlined,oldcommands,linesnumbered]{algorithm2e} \SetCommentSty{mycommfont} \usetikzlibrary{arrows,shapes, calc, shapes.misc} \newcommand\mycommfont[1]{\footnotesize\ttfamily\textcolor{blue}{#1}} \newcommand{\xhdr}[1]{{\vspace{2pt}\noindent\textbf{#1}}} \newcommand{\mac}[1]{\textcolor{green}{#1}} % --Michael}} \newcommand{\ar}[1]{\textcolor{black}{#1}} % --Michael}} \title{Diverse beam Search: \\ Decoding Diverse Solutions from \\ Neural Sequence Models} \author{Ashwin K Vijayakumar$^1$, Michael Cogswell$^1$, Ramprasath R. Selvaraju$^1$, Qing Sun$^1$ \\ \textbf{Stefan Lee$^1$, David Crandall$^2$ \& Dhruv Batra$^{1}$} \\ \texttt{\{ashwinkv,cogswell,ram21,sunqing,steflee\}@vt.edu} \\ \texttt{djcran@indiana.edu}, \texttt{dbatra@vt.edu} \\ \\ $^1$ Department of Electrical and Computer Engineering, \\ Virginia Tech, Blacksburg, VA, USA \\ \\ $^2$ School of Informatics and Computing\\ Indiana University, Bloomington, IN, USA } \newcommand{\fix}{\marginpar{FIX}} \newcommand{\new}{\marginpar{NEW}} \begin{document} \maketitle \begin{abstract} Neural sequence models are widely used to model time-series data. % in many fields. Equally ubiquitous is the usage of beam search (BS) as an approximate inference algorithm to decode output sequences from these models. BS explores the search space in a greedy left-right fashion retaining only the top-$B$ candidates -- resulting in sequences that differ only slightly from each other. Producing lists of nearly identical sequences is not only computationally wasteful but also typically fails to capture the inherent ambiguity of complex AI tasks. To overcome this problem, we propose \emph{Diverse Beam Search} (DBS), an alternative to BS that decodes a list of diverse outputs by optimizing for a diversity-augmented objective. We observe that our method finds better top-1 solutions by controlling for the exploration and exploitation of the search space -- implying that DBS is a \emph{better search algorithm}. Moreover, these gains are achieved with minimal computational or memory overhead as compared to beam search. To demonstrate the broad applicability of our method, we present results on image captioning, machine translation and visual question generation using both standard quantitative metrics and qualitative human studies. \ar{Further, we study the role of diversity for image-grounded language generation tasks as the complexity of the image changes.} \ar{We observe that} our method consistently outperforms BS and previously proposed techniques for diverse decoding from neural sequence models. 
\end{abstract} \section{Introduction} \label{intro} \input{intro.tex} \input{formulation.tex} \input{rel_work.tex} \input{results.tex} \input{conclusion.tex} \bibliographystyle{iclr2017_conference} \clearpage \input{appendix.tex} \end{document} \section{Conclusion} Beam search is the most commonly used approximate inference algorithm to decode sequences from RNNs; however, it suffers from a lack of diversity. Producing multiple highly similar and generic outputs is not only wasteful in terms of computation but also detrimental for tasks with inherent ambiguity like image captioning. In this work, we presented \emph{Diverse Beam Search}, which describes beam search as an optimization problem and augments the objective with a diversity term. The result is a `doubly greedy' approximate algorithm that produces diverse decodings while using about the same time and resources as beam search. Our method consistently outperforms beam search and other baselines across all our experiments without \emph{extra computation} or \emph{task-specific overhead}. \ar{Further, in the case of image-grounded language generation tasks, we find that DBS provides increased gains as the complexity of the images increases.} DBS is \emph{task-agnostic} and can be applied to any case where BS is used -- making it applicable in multiple domains. \section{Related Work} \label{rel} \xhdr{Diverse M-Best Lists.} The task of generating diverse structured outputs from probabilistic models has been studied extensively \citep{park_iccv11,batra_eccv12,kirillov_cvpr15,prasad_nips14}. \cite{batra_eccv12} formalized this task for Markov Random Fields as the \texttt{DivMBest} problem and presented a greedy approach which solves for outputs iteratively, conditioning on previous solutions to induce diversity. \cite{kirillov_cvpr15} show how these solutions can be found jointly for certain kinds of energy functions. The techniques developed by Kirillov are not directly applicable to decoding from RNNs, which do not satisfy the assumptions made. Most related to our proposed approach is that of \cite{gimpel_emnlp13} who apply the \texttt{DivMBest} approach to machine translation using beam search as a black-box inference algorithm. To obtain diverse solutions, beam searches of arbitrary size are sequentially performed while retaining the top-scoring candidate and using it to update the diversity term. This approach is extremely wasteful because in each iteration only one solution returned by beam search is kept. Consequently, the iterative method is time consuming and is poorly suited for batch processing or producing a large number of solutions. Our algorithm avoids these shortcomings by integrating diversity within BS such that \emph{no} beams are discarded. By running multiple beam searches \emph{in parallel} and at staggered time offsets, we obtain large time savings making our method comparable to classical BS. One potential advantage over our method is that more complex diversity measures at the sentence-level can be incorporated. However, as observed empirically by us and \cite{li_arxiv15}, initial words tend to significantly impact the diversity of the resultant sequences -- suggesting that later words may not contribute significantly to diverse inference. \xhdr{Diverse Decoding for RNNs.} Some efforts have been made to produce diverse decodings from recurrent models for conversation modeling and machine translation. 
In this context, our work is closely related to \cite{li_arxiv16}, who propose a BS diversification heuristic to overcome the shortcomings of \cite{gimpel_emnlp13}. This discourages sequences from sharing common roots, implicitly resulting in diverse lists. Introducing diversity through a modified objective as in DBS rather than a heuristic provides easier generalization to incorporate different notions of diversity and control for the exploration-exploitation trade-off as detailed in \secref{div_type}. Furthermore, we find that DBS outperforms this method. Through a novel decoding objective that maximizes mutual information between inputs and predicted outputs, \cite{li_arxiv15} penalize decoding generic, input independent sequences. This is achieved by training an additional target language model. Although this work and DBS share the same goals (producing diverse decodings), the techniques developed are disjoint and complementary -- \cite{li_arxiv15} develops a new model (RNN translation model with an RNN target language model), while DBS is a modified \emph{inference} algorithm that can be applied to \emph{any} model where BS is applicable. Combination of these complementary techniques is left as interesting future work. \vspace{-10pt} \section{Experiments} \label{sec:results} \vspace{-5pt} We first explain the baselines and evaluation metrics used in this paper. Next, we proceed to the analysis of the effects of DBS parameters. Further, we report results on image-captioning, machine translation and visual question generation. \ar{In the context of image-grounded language generation tasks, we additionally study the role of diversity with varying \emph{complexity} of the image.} Although results are reported on these tasks, it should be noted that DBS is a task-agnostic algorithm that can replace BS to decode diverse solutions. \xhdr{Baselines.} We compare with beam search and the following existing methods: \begin{compactenum}[\hspace{0pt} -] \item \cite{li_arxiv16}: This work modifies BS by introducing an intra-sibling rank. For each partial solution, the set of $|\calV|$ continuations are sorted and assigned intra-sibling ranks $k\in[L]$ in order of decreasing log-probabilities, $\theta_t(y_t)$. The log-probability of an extenstion is then reduced in proportion to its rank, and continuations are re-sorted under these modified log-probabilities to select the top-B \emph{diverse} beam extensions. \item \cite{li_arxiv15}: These models are decoded using a modified objective, $P(\mathbf{y}|x) - \lambda U(\mathbf{y})$, where $U(\mathbf{y})$ is an unconditioned target sequence model. This additional term penalizes generic input independent decoding. \end{compactenum} Both works use secondary mechanisms such as \emph{re-rankers} to pick a single solution from the generated lists. As we are interested in evaluating the quality of the generated lists and in isolating the gains due to diverse decoding, we do not implement any re-rankers. Instead, we simply sort the list based on log-probability. We compare to our own implementations of these methods as none are publicly available. \xhdr{Evaluation Metrics.} We evaluate the performance of the generated lists using the following two metrics that quantify complementary details: \begin{compactenum}[\hspace{0pt} -] \item \emph{Oracle Accuracy}: Oracle or top-$k$ accuracy for some task-specific metric like BLEU is the maximum value of the metric over a list of $k$ potential solutions. 
It is an upper bound on the potential impact diversity plays in finding relevant solutions. \\ \item \emph{Diversity Statistics}: We count the number of distinct n-grams present in the list of generated outputs. Similar to \cite{li_arxiv15}, we divide these counts by the total number of words generated to bias against long sentences. \end{compactenum} \emph{Simultaneous improvements} in both metrics indicate that output lists have increased diversity without sacrificing fluency and correctness with respect to target tasks. Human preference studies which compare image captions produced by DBS and BS also compare these methods. Finally, We discuss the role of diversity by relating it to intrinsic details contained in images. \vspace{-10pt} \input{div_choice.tex} \vspace{-5pt} \subsection{Estimating Image Complexity} Diversity in the output space is often dependent on the input. For example, ``complex'' scenes consisting of various objects and interactions tend to be described in multiple ways as compared to ``simple'' images that tend to focus on one specific object. We study this by inspecting the gains due to DBS with varying complexity of images. One notion of image complexity is studied by Ionescu \etal \cite{ionescu_cvpr16}, defining a ``difficulty score'' as the human response time for solving a visual search task for images in PASCAL-50S \cite{vedantam_cvpr15}. Using the data from \cite{ionescu_cvpr16}, we train a Support Vector Regressor on ResNet \citep{he2016deep} features to predict this difficulty score. This model achieves a 0.41 correlation with the ground truth (comparable to the best model of \cite{ionescu_cvpr16} at 0.47). To evaluate the relationship between image complexity and performance gains from diverse decoding, we use this trained predictor to estimate a difficulty score $s$ for each image in the COCO \cite{coco} dataset. We compute the mean ($\mu=3.3$) and standard deviation ($\sigma=0.61$) and divide the images into three bins, \texttt{Simple} ($s\leq \mu-\sigma$), \texttt{Average} ($\mu{-}\sigma > s <\mu{+}\sigma$), and \texttt{Complex} ($s \geq \mu+\sigma$) consisting of 745, 3416 and 839 images respectively. Figure 3 shows some sample \texttt{Simple}, \texttt{Average}, and \texttt{Complex} images from the PASCAL-50S dataset. While simple images like close-up pictures of cats may only be described in a handful of ways by human captioners (first column), complex images with multiple objects and interactions will be described in many different ways depending on what is the focus of the captioner (last column). In the subsequent experiments on image-grounded language generation tasks, we show that improvements from DBS are greater for more complex images. \subsection{Image Captioning} \vspace{-5pt} \xhdr{Dataset and Models.} We evaluate on two datasets -- COCO \citep{coco} and PASCAL-50S \citep{vedantam_cvpr15}. We use the public splits as in \cite{karpathy_cvpr15} for COCO. PASCAL-50S is used only for testing save 200 validation images used to tune hyperparameters. We train a captioning model \citep{vinyals_cvpr15} using the \texttt{neuraltalk2}\footnote{\texttt{\url{https://github.com/karpathy/neuraltalk2}}} code repository. \xhdr{Results.} As it can be observed from \tabref{tab:coco_quant} \ar{(Top)}, DBS outperforms both BS and \cite{li_arxiv16} on both COCO and PASCAL-50S datasets. We observe that gains on PASCAL-50S are more pronounced (7.24\% and 9.60\% Oracle@20 improvements against BS and \cite{li_arxiv16}) than COCO. 
This suggests diverse predictions are especially advantageous when there is a mismatch between training and testing sets making DBS a better inference strategy in real-world applications. \iffalse \fi Table \ref{tab:coco_quant} \ar{(Top)} also shows the number of distinct n-grams produced by different techniques. Our method produces significantly more distinct n-grams (almost 300\% increase in the number of 4-grams produced) as compared to BS. We also note that our method tends to produce slightly longer captions compared to beam search on average. Moreover, on the PASCAL-50S test split we observe that DBS finds more likely top-1 solutions on average -- DBS obtains a maximum $\log$-probability of -6.53 as against -6.91 got by BS of same beam width. While the performance of DBS is guaranteed to be better than a BS of size $\nicefrac{B}{G}$, this experimental evidence suggests that using DBS as a replacement to BS leads to better or at least comparable performance. \iffalse \fi \iffalse \fi \iffalse \fi \xhdr{Results by Image Complexity.} From Table \ref{tab:coco_quant}, we can see that as the complexity of images increases DBS outperforms standard beam search (difference shown in parentheses) and other baselines by larger margins for all values of $k$. For example, at Oracle Spice@20, DBS achieves significant improvements over BS of 0.67, 0.91, and 1.13 for \texttt{Simple}, \texttt{Average}, and \texttt{Complex} images respectively. While DBS improves over BS in all settings, complex images benefit even more from diversity-inducing inference than simple images. \newcommand{\imin}[1]{\includegraphics[width=60px, height=40px]{#1}} \newcommand{\iminb}[1]{\setlength{\fboxsep}{-2pt}\setlength{\fboxrule}{2pt}\fbox{\includegraphics[width=60px, height=40px]{#1}}} \ar{\xhdr{Human Preference by Difficulty.} To further establish the effectiveness of our method, we evaluate human preference between captions decoded using DBS and BS. In this forced-choice test, DBS captions were preferred over BS 60\% of the time by human annotators. Further, they were preferred about 50\%, 69\% and 83\% of the times for \texttt{Simple}, \texttt{Average} and \texttt{Difficult} images respectively. Furthermore, we observe a positive correlation ($\rho = 0.73$) between difficulty scores and humans preferring DBS to BS. Further details about this experiment are provided in the supplement.} \iffalse \fi \subsection{Visual Question Generation} We also report results on Visual Question Generation (VQG) \citep{mostafazadeh_arxiv16}, where a model is trained to produce questions \emph{about an image}. Generating visually focused questions requires reasoning about multiple problems that are central to vision -- \eg, object attributes, relationships between objects, and natural language. Similar to captioning, there are many sensible questions for a given image. The VQG dataset \citep{mostafazadeh_arxiv16} consists of 5 human-generated questions per image for 5000 images from COCO \citep{coco}. We use a model similar to the one used for captioning, except that it is now trained to output questions rather than captions. Similar to previous results, using beam search to sample outputs results in similarly worded question while DBS decoded questions ask about multiple details of the image (see Fig.~\ref{fig:vqg}). We show quantitative evaluations in Table \ref{tab:vqg_quant} for the VQG dataset as a whole and when partitioned by image difficulty. 
We find DBS significantly outperforms the baseline methods on this task both on standard metrics (SPICE) and measure of diversity. We also observe that gap between DBS and the baseline methods is more pronounced than in the captioning task and attribute this to the increased variety of possible visually grounded questions compared to captions which often describe only a few major salient objects. The general trend that more complex images benefit more from diverse decoding also persists in this setting. \subsection{Machine Translation} \xhdr{Dataset and Models.} We use the English-French parallel data from the \emph{europarl} corpus as the training set. We report results on \emph{news-test-2013} and \emph{news-test-2014} and use the \emph{newstest2012} to tune DBS parameters. We train a encoder-decoder architecture as proposed in \cite{bahdanau_arxiv14} using the \texttt{dl4mt-tutorial}\footnote{\url{https://github.com/nyu-dl/dl4mt-tutorial}} code repository. The encoder consists of a bi-directional recurrent network (Gated Recurrent Unit) with attention. We use sentence level BLEU scores \citep{papineni_acl02} to compute oracle metrics and report distinct n-grams similar to image-captioning. From \tabref{tab: mt_quant}, we see that DBS consistently outperforms standard baselines with respect to both metrics. \section*{Appendix} \subsection*{Sensivity Studies} \textbf{Number of Groups.} \figref{fig:G} presents snapshots of the transition from BS to DBS at $B=6$ and $G=\{1,3,6\}$. As beam width moves from 1 to $G$, the exploration of the method increases resulting in more diverse lists. \textbf{Diversity Strength.} As noted in \secref{div_type}, our method is robust to a wide range of values of the diversity strength ($\lambda$). \figref{fig:lambdagrid} shows a grid search of $\lambda$ for image-captioning on the PASCAL-50S dataset. \textbf{Choice of Diversity Function.} \figref{fig:divtype} shows the oracle performace of various forms of the diversity function described in \secref{div_type}. We observe that hamming diversity surprisingly performs the best. Other forms perform comparably while outperforming BS. \subsection*{Human Studies} For image-captioning, we conduct a human preference study between BS and DBS captions as explained in \secref{sec:results}. A screen shot of the interface used to collect human preferences for captions generated using DBS and BS is presented in \figref{fig:amt}. The lists were shuffled to guard the task from being gamed by a turker. As mentioned in \secref{sec:results}, we observe that \emph{difficulty score} of an image and human preference for DBS captions are positively correlated. The dataset contains more images that are less difficulty and so, we analyze the correlation by dividing the data into three bins. For each bin, we report the \% of images for which DBS captions were preferred after a majority vote (\ie at least 3/5 turkers voted in favor of DBS) in \tabref{tab:amt}. At low difficulty scores consisting mostly of iconic images -- one might expect that BS would be preferred more often than chance. However, mismatch between the statistics of the training and testing data results in a better performance of DBS. Some examples for this case are provided in \figref{fig:qual_2}. More general qualitative examples are provided in \figref{fig:qual_1}. \section{Preliminaries: Decoding RNNs with Beam Search} \label{sec:prelim} We begin with a refresher on BS, before describing our \ar{extension}, Diverse Beam Search. 
For notational convenience, let $[n]$ denote the set of natural numbers from $1$ to $n$ and let $\mathbf{v}_{[n]} = [v_1, v_2, \dots v_n]$ index the first $n$ elements of a vector $\mathbf{v} \in \RR^m$, where $n\leq m$. \textbf{The Decoding Problem.} RNNs are trained to estimate the likelihood of sequences of tokens from a finite dictionary $\calV$ given an input $\xb$. The RNN updates its internal state and estimates the conditional probability distribution over the next output given the input and all previous output tokens. We denote the logarithm of this conditional probability distribution over all tokens at time $t$ as $\theta(y_t) = \log \prob(y_t | y_{t-1},\ldots,y_1, \xb)$. To simplify notation, we index $\theta(\cdot)$ with a single variable $y_t$; but it should be clear that it depends on the previous outputs, $\yb_{[t-1]}$ from the context. The $\log$-probability of a partial solution (\ie the sum of $\log$-probabilities of all previous tokens decoded) can now be written as $\Theta(\yb_{[t]})=\sum_{\tau\in[t]} \theta(y_\tau)$. The decoding problem is then the task of finding a sequence $\yb$ that maximizes $\Theta(\yb)$. As each output is conditioned on all the previous outputs, decoding the optimal length-$T$ sequence in this setting can be viewed as MAP inference on $T$-order Markov chain with the $T$ nodes corresponding to output tokens. Not only does the size of the largest factor in such a graph grow as $|\calV|^T$, but also requires wastefully forwarding of the RNN repeatedly to compute entries in the factors. Thus, approximate algorithms are employed. \textbf{Beam Search.} The most prevalent method for approximate decoding is BS, which stores the top-$B$ highly scoring candidates at each time step; where $B$ is known as the \emph{beam width}. Let us denote the set of $B$ solutions held by BS at the start of time $t$ as $Y_{[t-1]} = \{\yb_{1,[t-1]},\dots,\yb_{B,[t-1]}\}$. At each time step, BS considers all possible single token extensions of these beams given by the set $\calY_t = Y_{[t-1]}\times\calV$ and selects the $B$ most likely extensions. More formally, at each step, \begin{equation} \label{eq: bs} Y_{[t]} \ \ = \argmax_{\yb_{1,[t]},\dots,\yb_{B,[t]} \in \calY_t} \sum_{b \in [B]} \Theta(\yb_{b,[t]}) ~~~~~~s.t.~~\yb_{i,[t]} \neq \yb_{j,[t]} \end{equation} The above objective can be trivially maximized by sorting all $B\times|\calV|$ members of $\calY_t$ by their $\log$-probabilities and selecting the top-$B$. This process is repeated until time $T$ and the most likely sequence is selected by ranking the B beams based on $\log$-probabilities. While this method allows for multiple sequences to be explored in parallel, most completions tend to stem from a single highly valued beam -- resulting in outputs that are typically only minor perturbations of a single sequence. \section{Diverse Beam Search: Formulation and Algorithm} \label{form} To overcome this shortcoming, we consider augmenting the objective in Eq. \ref{eq: bs} with a dissimilarity term $\Delta(Y_{[t]})$ that measures the diversity between candidate sequences. Jointly optimizing for all $B$ candidates at each time step is intractable as the number of possible solutions grows with $|\calV|^B$ (which can easily reach $10^{60}$ for typical language modeling settings). To avoid this joint optimization problem, we divide the beam budget $B$ into $G$ groups and greedily optimize each group using beam search while holding previous groups fixed. 
This doubly greedy approximation along both time and across groups turns $\Delta(Y_{[t]})$ into a function of only the current group's possible extensions. We detail the specifics of our approach in this section. \textbf{Diverse Beam Search.} Let $Y_{[t]}$, the set of all $B$ beams at time $t$ be partitioned into $G$ {non-empty}, {disjoint} subsets $Y_{[t]}^g, \ g{\in}[G]$. Without loss of generality, consider an equal partition such that each group contains $B'=\nicefrac{B}{G}$ groups. Beam search can be applied to each group to produce $B$ solutions; however, each group would produce identical outputs. \ar{Unlike BS, we optimize a modified version of the objective of eq. \ref{eq: bs} which adds a dissimilarity term $\Delta(\yb_{[t]}, Y_{[t]}^g)$, measuring the dissimilarity of a sequence $\yb_{[t]}$ against a group $Y_{[t]}^g$.} While $\Delta(\cdot, \cdot)$ can take various forms, for simplicity we define one broad class that decomposes across beams within each group as: \begin{equation}\label{eq:diversity-term} \Delta(\yb_{[t]}, Y_{[t]}^g) = \sum_{b=1}^{B'}\delta\left(\yb_{[t]}, \yb_{b,[t]}^g\right) \end{equation} where $\delta(\cdot, \cdot)$ is a measure of sequence dissimilarity -- \eg a negative cost for each co-occurring n-gram in two sentences or distance between distributed sentence representations. The exact form of the sequence-level dissimilarity term can vary and we discuss some choices in \secref{div_type}. As we optimize each group with the previous groups fixed, extending group $g$ at time $t$ amounts to a standard BS using dissimilarity augmented $\log$-probabilities and can be written as: \begin{eqnarray} \label{eq:divminB} Y^g_{[t]} \ \ = \argmax_{\yb^g_{1,[t]}, \dots, \yb^g_{B',[t]} \in \mathcal{Y}^g_{t}} && \sum_{b \in [B']} \Theta(\yb^g_{b,[t]}) + \sum_{h=1}^{g-1}\lambda_g\Delta\left(\yb_{b,[t]}^g, Y_{[t]}^h\right)\\ && \st \ \yb^g_{i,[t]} \neq \yb^g_{j,[t]},\ \lambda_g \geq 0 \nonumber \end{eqnarray} \tikzstyle{g1vertex}=[rounded rectangle,fill=green!25,minimum size=20pt,inner sep=3pt] \tikzstyle{g2vertex}=[rounded rectangle,fill=red!25,minimum size=20pt,inner sep=3pt] \tikzstyle{g3vertex}=[rounded rectangle,fill=blue!25,minimum size=20pt,inner sep=3pt] \tikzstyle{g3shade}=[rounded rectangle,opacity=0.65, fill=blue!25,minimum size=20pt,inner sep=3pt] \tikzstyle{edge} = [draw,thick,<-] \tikzstyle{bedge} = [draw,opacity=0.75,thick,<-,bend right=35] \tikzstyle{weight} = [font=\small] \tikzstyle{selected edge} = [draw,line width=5pt,-,red!50] \tikzstyle{ignored edge} = [draw,line width=5pt,-,black!20] \tikzstyle{group box} = [thick, rounded corners=0.25cm, minimum width=30pt, minimum height=50pt] \tikzstyle{searchbox} = [thick, fill=white, rounded corners=0.25cm, minimum width=120pt, minimum height=50pt] \newcommand{\vertspace}{-1.15} \newcommand{\inspace}{-0.45} \newcommand{\of}{1.25} \newcommand{\hz}{1.3} This approach, which we call Diverse Beam Search (DBS) is detailed in Algorithm \ref{alg:dbs}. An example run of DBS is shown in Figure \ref{fig:overview} for decoding image-captions. In the example, $B{=}6$ and $G{=}3$ and so, each group performs a smaller, diversity-augmented BS of size 2. In the snapshot shown, group 3 is being stepped forward and the diversity augmented score of all words in the dictionary is computed conditioned on previous groups. The score of all words are adjusted by their similarity to previously chosen words -- `birds', `the' and `an' (Algorithm \ref{alg:dbs}, Line \ref{alg:aug}). 
The optimal continuations are then found by standard BS (Algorithm \ref{alg:dbs}, Line \ref{alg:dobs}). \begin{algorithm}[h] \caption{Diverse Beam Search} \label{alg:dbs} Perform a diverse beam search with $G$ groups using a beam width of $B$ \\ \For{$t=1,\ \ldots \,T$}{ \tcp{perform one step of beam search for first group without diversity} $Y_{[t]}^1 \leftarrow \argmax_{(\yb_{1,[t]}^1, \dots, \yb_{B',[t]}^1)} \sum_{b\in[B']}\Theta(\yb_{b,[t]}^1)$ \\ \For{$g=2,\ \ldots \,G$}{ \tcp{augment log-probabilities with diversity penalty} $\Theta(\yb_{b,[t]}^g) \leftarrow \Theta(\yb_{b,[t]}^g) + \sum_h\lambda_g \Delta(\yb_{b,[t]}^g, Y_{[t]}^h) \quad$ $b \in [B'], \yb_{b,[t]}^g \in \mathcal{Y}^g$ and $\lambda_g > 0$ \label{alg:aug}\\ \tcp{perform one step of beam search for the group} $Y_{[t]}^g \leftarrow \argmax_{(\yb_{1,[t]}^g, \dots, \yb_{B',[t]}^g)} \sum_{b\in[B']}\Theta(\yb_{b,[t]}^g)$ \label{alg:dobs} \\ } } Return set of B solutions, $Y_{[T]} = \bigcup^G_{g=1} Y_{[T]}^g$ \end{algorithm} There are a number of advantages worth noting about our approach. By encouraging diversity between beams at each step (rather than just between highest ranked solutions like in \citet{gimpel_emnlp13}, our approach rewards each group for spending its beam budget to explore different parts of the output space rather than repeatedly chasing sub-optimal beams from prior groups. Furthermore, the staggered group structure enables each group beam search to be performed in parallel with a time offset. This parallel algorithm completes in $T + G$ time steps compared to $T\times G$ running time for a black-box approach of \citet{gimpel_emnlp13}. \\ \\ In summary, DBS is a task agnostic, doubly greedy algorithm that incorporates diversity in beam search with little memory or computational overhead. Moreover, as the first group is not conditioned on other groups during optimization, our method is guaranteed to be at least as good as a beam search of size $B/G$.
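For concreteness, the standard BS update of Eq.~\ref{eq: bs} amounts to scoring every single-token extension of every beam and keeping the top-$B$. The following is a minimal NumPy sketch of this step; the array names and the flat top-$B$ selection are illustrative assumptions rather than any particular released implementation.
\begin{verbatim}
import numpy as np

def beam_search_step(beam_scores, next_token_logprobs, B):
    """One step of standard beam search (Eq. 1).

    beam_scores:         (B,)   running log-probabilities Theta of each beam
    next_token_logprobs: (B, V) log p(y_t | beam b, x) for every vocabulary token
    Returns the (beam, token) indices of the B best extensions and their scores.
    """
    # Theta(y_[t]) = Theta(y_[t-1]) + theta(y_t) for every candidate extension
    extension_scores = beam_scores[:, None] + next_token_logprobs      # (B, V)
    # sort all B*V candidates by score and keep the top-B
    flat = np.argsort(extension_scores, axis=None)[::-1][:B]
    beam_ids, token_ids = np.unravel_index(flat, extension_scores.shape)
    return beam_ids, token_ids, extension_scores[beam_ids, token_ids]
\end{verbatim}
Repeating this step for $t=1,\dots,T$ and finally ranking the $B$ finished beams by log-probability recovers the procedure described above.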
Diverse beam Search: Decoding Diverse Solutions from Neural Sequence Models
1610.02424
Table 2: Top: Oracle SPICE@k and distinct n-grams on the VQG task at B=20. Bottom: Results by difficulty class, highlighting the relative improvement over BS.
[ "[EMPTY]", "[BOLD] Method", "[BOLD] SPICE", "[BOLD] Oracle SPICE@k @5", "[BOLD] Oracle SPICE@k @10", "[BOLD] Oracle SPICE@k @20", "[BOLD] Distinct n-Grams n = 1", "[BOLD] Distinct n-Grams 2", "[BOLD] Distinct n-Grams 3", "[BOLD] Distinct n-Grams 4" ]
[ [ "VQG", "BS", "15.17", "21.96", "23.16", "26.74", "0.31", "1.36", "3.15", "5.23" ], [ "VQG", "Li & Jurafsky ( 2016 )", "15.45", "22.41", "25.23", "27.59", "0.34", "2.40", "5.69", "8.94" ], [ "VQG", "DBS", "[BOLD] 16.49", "[BOLD] 23.11", "[BOLD] 25.71", "[BOLD] 27.94", "[BOLD] 0.43", "[BOLD] 2.17", "[BOLD] 6.49", "[BOLD] 12.24" ], [ "VQG", "Li et al. ( 2015 )", "16.34", "22.92", "25.12", "27.19", "0.35", "1.56", "3.69", "7.21" ] ]
We find DBS significantly outperforms the baseline methods on this task, both on standard metrics (SPICE) and on measures of diversity. We also observe that the gap between DBS and the baseline methods is more pronounced than in the captioning task, and attribute this to the increased variety of possible visually grounded questions compared to captions, which often describe only a few major salient objects. The general trend that more complex images benefit more from diverse decoding also persists in this setting.
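For reference, the distinct n-gram fractions and oracle numbers reported in tables such as the one above can be computed with a short script along the following lines; this is an illustrative sketch in which the task metric (here SPICE) is treated as an opaque scoring function supplied by the caller.
\begin{verbatim}
def distinct_ngrams(sentences, n):
    """Distinct n-grams in a decoded list, divided by the total number of
    generated words to bias against long sentences (as in Li et al., 2015)."""
    ngrams, total_words = set(), 0
    for sentence in sentences:
        tokens = sentence.split()
        total_words += len(tokens)
        ngrams.update(zip(*(tokens[i:] for i in range(n))))
    return len(ngrams) / max(total_words, 1)

def oracle_at_k(candidates, score_fn, k):
    """Oracle (top-k) accuracy: the best task-metric value over the first k
    candidates, assuming the list is already sorted by log-probability."""
    return max(score_fn(c) for c in candidates[:k])
\end{verbatim}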
\parskip=3pt \abovedisplayskip 3.0pt plus2pt minus2pt% \belowdisplayskip \abovedisplayskip \renewcommand{\baselinestretch}{0.98} \newenvironment{packed_enum}{ \begin{enumerate} \setlength{\itemsep}{1pt} \setlength{\parskip}{0pt} \setlength{\parsep}{0pt} } {\end{enumerate}} \newenvironment{packed_item}{ \begin{itemize} \setlength{\itemsep}{1pt} \setlength{\parskip}{0pt} \setlength{\parsep}{0pt} }{\end{itemize}} \newlength{\sectionReduceTop} \newlength{\sectionReduceBot} \newlength{\subsectionReduceTop} \newlength{\subsectionReduceBot} \newlength{\abstractReduceTop} \newlength{\abstractReduceBot} \newlength{\captionReduceTop} \newlength{\captionReduceBot} \newlength{\subsubsectionReduceTop} \newlength{\subsubsectionReduceBot} \newlength{\eqnReduceTop} \newlength{\eqnReduceBot} \newlength{\horSkip} \newlength{\verSkip} \newlength{\figureHeight} \setlength{\figureHeight}{1.7in} \setlength{\horSkip}{-.09in} \setlength{\verSkip}{-.1in} \setlength{\subsectionReduceTop}{-0.12in} \setlength{\subsectionReduceBot}{-0.09in} \setlength{\sectionReduceTop}{-0.08in} \setlength{\sectionReduceBot}{-0.10in} \setlength{\subsubsectionReduceTop}{-0.06in} \setlength{\subsubsectionReduceBot}{-0.05in} \setlength{\abstractReduceTop}{-0.05in} \setlength{\abstractReduceBot}{-0.10in} \setlength{\eqnReduceTop}{-0.05in} \setlength{\eqnReduceBot}{-0.05in} \setlength{\captionReduceTop}{-0.14in} \setlength{\captionReduceBot}{-0.12in} \subsection{Sensitivity Analysis and Effect of Diversity Functions} \label{div_choice} \vspace{-5pt} \label{div_type} In this section, we study the impact of the number of groups, the strength of diversity penalty, and various forms of diversity functions for language models. Further discussion and experimental details are included in the supplementary materials. \textbf{Number of Groups ($\mathbf{G}$).} Setting $G{=}B$ allows for the maximum exploration of the space, while setting $G{=}1$ reduces our method to BS, resulting in increased exploitation of the search-space around the 1-best decoding. Thus, increasing the number of groups enables us to explore various modes of the model. Empirically, we find that maximum exploration correlates with improved oracle accuracy and hence use $G{=}B$ to report results unless mentioned otherwise. \iffalse \fi \textbf{Diversity Strength ($\mathbf{\lambda}$).} The diversity strength $\lambda$ specifies the trade-off between the joint $\log$-probability and the diversity terms. As expected, we find that a higher value of $\lambda$ produces a more diverse list; however, excessively high values of $\lambda$ can overpower model probability and result in grammatically incorrect outputs. We set $\lambda$ by performing a grid search on the validation set for all experiments. We find a wide range of $\lambda$ values (0.2 to 0.8) work well for most tasks and datasets. \textbf{Choice of Diversity Function ($\delta$).} \ar{As mentioned in \ref{form}, the sequence level dissimilarity term $\delta(\cdot, \cdot)$ can be design to satisfy different design choices.} We discuss some of these below: \setlength{\plitemsep}{-1.5ex} \begin{compactenum}[\hspace{0pt}-] \item \emph{Hamming Diversity.} This form penalizes the selection of tokens used in previous groups proportional to the number of times it was selected before. \\ \item \emph{Cumulative Diversity.} Once two sequences have diverged sufficiently, it seems unnecessary and perhaps harmful to restrict that they cannot use the same words at the same time. 
To encode this `backing-off' of the diversity penalty we introduce cumulative diversity, which keeps a count of identical words used at every time step, indicative of overall dissimilarity. Specifically, $\delta(\yb_{[t]}, \yb_{b,[t]}^g) = \exp\{-\nicefrac{\left(\sum_{\tau{\in}t}\sum_{b{\in}B'}I[y_{b,\tau}^h\neq y_{b,\tau}^g]\right)}{\Gamma}\}$ where $\Gamma$ is a temperature parameter controlling the strength of the cumulative diversity term and $I[\cdot]$ is the indicator function. \\ \item \emph{n-gram Diversity.} The current group is penalized for producing the same n-grams as previous groups, regardless of alignment in time -- similar to \cite{gimpel_emnlp13}. This is proportional to the number of times each n-gram in a candidate occurred in previous groups. Unlike hamming diversity, n-grams capture higher order structures in the sequences. \\ \item \emph{Neural-embedding Diversity.} While all the previous diversity functions discussed above perform exact matches, neural embeddings such as word2vec \citep{mikolov_nips13} can penalize semantically similar words like synonyms. This is incorporated in each of the previous diversity functions by replacing the hamming similarity with a soft version obtained by computing the cosine similarity between word2vec representations. When used with n-gram diversity, the representation of the n-gram is obtained by summing the vectors of the constituent words. \end{compactenum} Each of these various forms encodes a different notion of diversity. Hamming diversity ensures different words are used at different times, but can be circumvented by small changes in sequence alignment. While n-gram diversity captures higher order statistics, it ignores sentence alignment. Neural-embedding based encodings can be seen as a semantic blurring of either the hamming or n-gram metrics, with word2vec representation similarity propagating diversity penalties not only to exact matches but also to close synonyms. We find that using any of the above functions helps outperform BS in the tasks we examine; hamming diversity achieves the best oracle performance despite its simplicity. A comparison of the performance of these functions for image-captioning is provided in the supplementary. \iffalse \fi \newcommand{\yb}{\mathbf{y}} \newcommand{\xb}{\mathbf{x}} In the last few years, Recurrent Neural Networks (RNNs), Long Short-Term Memory networks (LSTMs), or more generally neural sequence models have become the standard choice for modeling time-series data for a wide range of applications such as speech recognition \citep{graves_arxiv13}, machine translation \citep{bahdanau_arxiv14}, conversation modeling \citep{vinyals_arxiv15}, image and video captioning \citep{vinyals_cvpr15, venugopalan_cvpr15}, and visual question answering \citep{antol_iccv15}. RNN based sequence generation architectures model the conditional probability $\prob(\yb | \xb)$ of an output sequence $\yb = (y_1,\ldots,y_T)$ given an input $\xb$ (possibly also a sequence); where the output tokens $y_t$ are from a finite vocabulary, $\calV$. \textbf{Inference in RNNs.} Maximum a Posteriori (MAP) inference for RNNs is the task of finding the most likely output sequence given the input. Since the number of possible sequences grows as $|\calV|^T$, exact inference is NP-hard, so approximate inference algorithms like Beam Search (BS) are commonly employed. BS is a heuristic graph-search algorithm that maintains the $B$ top-scoring partial sequences expanded in a greedy left-to-right fashion. 
Fig.~\ref{fig:cover} shows a sample BS search tree. \textbf{Lack of Diversity in BS.} Despite the widespread usage of BS, it has long been understood that solutions decoded by BS are generic and lacking in diversity~\citep{finkel_emnlp06,gimpel_emnlp13,li_arxiv15,li_arxiv16}. To illustrate this, a comparison of captions provided by humans (bottom) and BS (topmost) is shown in \figref{fig:cover}. While this behavior of BS is disadvantageous for many reasons, we highlight the three most crucial ones here: \setlength{\plitemsep}{-1.5ex} \begin{compactenum}[\hspace{-1pt}i)] \item The production of near-identical beams makes BS a computationally wasteful algorithm, with essentially the same computation being repeated for no significant gain in performance.\\ \item Due to \emph{loss-evaluation mismatch} \ie improvements in posterior-probabilities not necessarily corresponding to improvements in task-specific metrics, it is common practice \citep{vinyals_cvpr15, karpathy_cvpr15, ferraro_arxiv16} to \emph{deliberately throttle BS to become a poorer optimization algorithm} by using reduced beam widths. This treatment of an optimization algorithm as a hyper-parameter is not only intellectually dissatisfying but also has a significant practical side-effect -- it leads to the decoding of largely bland, generic, and ``safe'' outputs, \eg always saying ``I don't know'' in conversation models~\citep{corrado_blog15}. \\ \item Most importantly, lack of diversity in the decoded solutions is fundamentally crippling in AI problems with \emph{significant ambiguity} -- \eg there are multiple ways of describing an image or responding in a conversation that are ``correct'' and it is important to capture this ambiguity by finding several diverse plausible hypotheses. \end{compactenum} \setlength{\plitemsep}{1ex} \textbf{Overview and Contributions.} To address these shortcomings, we propose \emph{Diverse Beam Search (DBS)} -- a general framework to decode a list of diverse sequences that can be used as an \emph{alternative} to BS. At a high level, DBS decodes diverse lists by dividing the beam budget into groups and enforcing diversity between groups of beams. Drawing from recent work in the probabilistic graphical models literature on Diverse M-Best (\texttt{DivMBest}) MAP inference~\citep{batra_eccv12, prasad_nips14,kirillov_cvpr15}, we optimize an objective that consists of two terms -- the sequence likelihood under the model and a dissimilarity term that encourages beams across groups to differ. This diversity-augmented model score is optimized in a \emph{doubly greedy} manner -- greedily optimizing along both time (like BS) and groups (like DivMBest). \\ \\ To summarize, our primary technical contribution is Diverse Beam Search, a doubly greedy approximate inference algorithm for decoding diverse sequences. \ar{To demonstrate its broad applicability, we report results on two image-grounded language generation tasks, captioning and question generation, and on machine translation. Our method consistently outperforms BS while being comparable in terms of both run-time and memory requirements. We find that DBS results in improvements on both oracle task-specific and diversity-related metrics against baselines. Further, we notice that these gains are more pronounced as the image becomes more complex, consisting of multiple objects and interactions.} We \ar{also} conduct human studies to evaluate the role of diversity in human preferences between BS and DBS for image captions. 
We also analyze the parameters of DBS and show they are robust over a wide range of values. Finally, we also show that our method is general enough to incorporate various forms for the dissimilarity term. Our implementation is available at \url{https://github.com/ashwinkalyan/dbs}. Also, a demo of DBS on image-captioning is available at \url{dbs.cloudcv.org}. \documentclass{article} % For LaTeX2e \usepackage[pagebackref=true,breaklinks=true,letterpaper=true,colorlinks,bookmarks=false,citecolor=blue]{hyperref} \usepackage[T1]{fontenc} % use 8-bit T1 fonts % simple URL typesetting % professional-quality tables % blackboard math symbols % compact symbols for 1/2, etc. \usepackage[belowskip=0pt,aboveskip=0pt,font=small]{caption} \usepackage[belowskip=0pt,aboveskip=0pt,font=small]{subcaption} \setlength{\intextsep}{7pt plus 0pt minus 0pt} \usepackage[olditem,oldenum]{paralist} \usepackage[ruled,vlined,oldcommands,linesnumbered]{algorithm2e} \SetCommentSty{mycommfont} \usetikzlibrary{arrows,shapes, calc, shapes.misc} \newcommand\mycommfont[1]{\footnotesize\ttfamily\textcolor{blue}{#1}} \newcommand{\xhdr}[1]{{\vspace{2pt}\noindent\textbf{#1}}} \newcommand{\mac}[1]{\textcolor{green}{#1}} % --Michael}} \newcommand{\ar}[1]{\textcolor{black}{#1}} % --Michael}} \title{Diverse beam Search: \\ Decoding Diverse Solutions from \\ Neural Sequence Models} \author{Ashwin K Vijayakumar$^1$, Michael Cogswell$^1$, Ramprasath R. Selvaraju$^1$, Qing Sun$^1$ \\ \textbf{Stefan Lee$^1$, David Crandall$^2$ \& Dhruv Batra$^{1}$} \\ \texttt{\{ashwinkv,cogswell,ram21,sunqing,steflee\}@vt.edu} \\ \texttt{djcran@indiana.edu}, \texttt{dbatra@vt.edu} \\ \\ $^1$ Department of Electrical and Computer Engineering, \\ Virginia Tech, Blacksburg, VA, USA \\ \\ $^2$ School of Informatics and Computing\\ Indiana University, Bloomington, IN, USA } \newcommand{\fix}{\marginpar{FIX}} \newcommand{\new}{\marginpar{NEW}} \begin{document} \maketitle \begin{abstract} Neural sequence models are widely used to model time-series data. % in many fields. Equally ubiquitous is the usage of beam search (BS) as an approximate inference algorithm to decode output sequences from these models. BS explores the search space in a greedy left-right fashion retaining only the top-$B$ candidates -- resulting in sequences that differ only slightly from each other. Producing lists of nearly identical sequences is not only computationally wasteful but also typically fails to capture the inherent ambiguity of complex AI tasks. To overcome this problem, we propose \emph{Diverse Beam Search} (DBS), an alternative to BS that decodes a list of diverse outputs by optimizing for a diversity-augmented objective. We observe that our method finds better top-1 solutions by controlling for the exploration and exploitation of the search space -- implying that DBS is a \emph{better search algorithm}. Moreover, these gains are achieved with minimal computational or memory overhead as compared to beam search. To demonstrate the broad applicability of our method, we present results on image captioning, machine translation and visual question generation using both standard quantitative metrics and qualitative human studies. \ar{Further, we study the role of diversity for image-grounded language generation tasks as the complexity of the image changes.} \ar{We observe that} our method consistently outperforms BS and previously proposed techniques for diverse decoding from neural sequence models. 
\end{abstract} \section{Introduction} \label{intro} \input{intro.tex} \input{formulation.tex} \input{rel_work.tex} \input{results.tex} \input{conclusion.tex} \bibliographystyle{iclr2017_conference} \clearpage \input{appendix.tex} \end{document} \section{Conclusion} Beam search is the most commonly used approximate inference algorithm to decode sequences from RNNs; however, it suffers from a lack of diversity. Producing multiple highly similar and generic outputs is not only wasteful in terms of computation but also detrimental for tasks with inherent ambiguity like image captioning. In this work, we presented \emph{Diverse Beam Search}, which describes beam search as an optimization problem and augments the objective with a diversity term. The result is a `doubly greedy' approximate algorithm that produces diverse decodings while using about the same time and resources as beam search. Our method consistently outperforms beam search and other baselines across all our experiments without \emph{extra computation} or \emph{task-specific overhead}. \ar{Further, in the case of image-grounded language generation tasks, we find that DBS provides increased gains as the complexity of the images increases.} DBS is \emph{task-agnostic} and can be applied to any case where BS is used -- making it applicable in multiple domains. \section{Related Work} \label{rel} \xhdr{Diverse M-Best Lists.} The task of generating diverse structured outputs from probabilistic models has been studied extensively \citep{park_iccv11,batra_eccv12,kirillov_cvpr15,prasad_nips14}. \cite{batra_eccv12} formalized this task for Markov Random Fields as the \texttt{DivMBest} problem and presented a greedy approach which solves for outputs iteratively, conditioning on previous solutions to induce diversity. \cite{kirillov_cvpr15} show how these solutions can be found jointly for certain kinds of energy functions. The techniques developed by Kirillov are not directly applicable to decoding from RNNs, which do not satisfy the assumptions made. Most related to our proposed approach is that of \cite{gimpel_emnlp13} who apply the \texttt{DivMBest} approach to machine translation using beam search as a black-box inference algorithm. To obtain diverse solutions, beam searches of arbitrary size are sequentially performed while retaining the top-scoring candidate and using it to update the diversity term. This approach is extremely wasteful because in each iteration only one solution returned by beam search is kept. Consequently, the iterative method is time consuming and is poorly suited for batch processing or producing a large number of solutions. Our algorithm avoids these shortcomings by integrating diversity within BS such that \emph{no} beams are discarded. By running multiple beam searches \emph{in parallel} and at staggered time offsets, we obtain large time savings making our method comparable to classical BS. One potential advantage over our method is that more complex diversity measures at the sentence-level can be incorporated. However, as observed empirically by us and \cite{li_arxiv15}, initial words tend to significantly impact the diversity of the resultant sequences -- suggesting that later words may not contribute significantly to diverse inference. \xhdr{Diverse Decoding for RNNs.} Some efforts have been made to produce diverse decodings from recurrent models for conversation modeling and machine translation. 
In this context, our work is closely related to \cite{li_arxiv16}, who propose a BS diversification heuristic to overcome the shortcomings of \cite{gimpel_emnlp13}. This discourages sequences from sharing common roots, implicitly resulting in diverse lists. Introducing diversity through a modified objective as in DBS rather than a heuristic provides easier generalization to incorporate different notions of diversity and control for the exploration-exploitation trade-off as detailed in \secref{div_type}. Furthermore, we find that DBS outperforms this method. Through a novel decoding objective that maximizes mutual information between inputs and predicted outputs, \cite{li_arxiv15} penalize decoding generic, input independent sequences. This is achieved by training an additional target language model. Although this work and DBS share the same goals (producing diverse decodings), the techniques developed are disjoint and complementary -- \cite{li_arxiv15} develops a new model (RNN translation model with an RNN target language model), while DBS is a modified \emph{inference} algorithm that can be applied to \emph{any} model where BS is applicable. Combination of these complementary techniques is left as interesting future work. \vspace{-10pt} \section{Experiments} \label{sec:results} \vspace{-5pt} We first explain the baselines and evaluation metrics used in this paper. Next, we proceed to the analysis of the effects of DBS parameters. Further, we report results on image-captioning, machine translation and visual question generation. \ar{In the context of image-grounded language generation tasks, we additionally study the role of diversity with varying \emph{complexity} of the image.} Although results are reported on these tasks, it should be noted that DBS is a task-agnostic algorithm that can replace BS to decode diverse solutions. \xhdr{Baselines.} We compare with beam search and the following existing methods: \begin{compactenum}[\hspace{0pt} -] \item \cite{li_arxiv16}: This work modifies BS by introducing an intra-sibling rank. For each partial solution, the set of $|\calV|$ continuations are sorted and assigned intra-sibling ranks $k\in[L]$ in order of decreasing log-probabilities, $\theta_t(y_t)$. The log-probability of an extenstion is then reduced in proportion to its rank, and continuations are re-sorted under these modified log-probabilities to select the top-B \emph{diverse} beam extensions. \item \cite{li_arxiv15}: These models are decoded using a modified objective, $P(\mathbf{y}|x) - \lambda U(\mathbf{y})$, where $U(\mathbf{y})$ is an unconditioned target sequence model. This additional term penalizes generic input independent decoding. \end{compactenum} Both works use secondary mechanisms such as \emph{re-rankers} to pick a single solution from the generated lists. As we are interested in evaluating the quality of the generated lists and in isolating the gains due to diverse decoding, we do not implement any re-rankers. Instead, we simply sort the list based on log-probability. We compare to our own implementations of these methods as none are publicly available. \xhdr{Evaluation Metrics.} We evaluate the performance of the generated lists using the following two metrics that quantify complementary details: \begin{compactenum}[\hspace{0pt} -] \item \emph{Oracle Accuracy}: Oracle or top-$k$ accuracy for some task-specific metric like BLEU is the maximum value of the metric over a list of $k$ potential solutions. 
It is an upper bound on the potential impact diversity plays in finding relevant solutions. \\ \item \emph{Diversity Statistics}: We count the number of distinct n-grams present in the list of generated outputs. Similar to \cite{li_arxiv15}, we divide these counts by the total number of words generated to bias against long sentences. \end{compactenum} \emph{Simultaneous improvements} in both metrics indicate that output lists have increased diversity without sacrificing fluency and correctness with respect to target tasks. These methods are also compared through human preference studies on image captions produced by DBS and BS. Finally, we discuss the role of diversity by relating it to intrinsic details contained in images. \vspace{-10pt} \input{div_choice.tex} \vspace{-5pt} \subsection{Estimating Image Complexity} Diversity in the output space is often dependent on the input. For example, ``complex'' scenes consisting of various objects and interactions tend to be described in multiple ways as compared to ``simple'' images that tend to focus on one specific object. We study this by inspecting the gains due to DBS as the complexity of images varies. One notion of image complexity is studied by Ionescu \etal \cite{ionescu_cvpr16}, defining a ``difficulty score'' as the human response time for solving a visual search task for images in PASCAL-50S \cite{vedantam_cvpr15}. Using the data from \cite{ionescu_cvpr16}, we train a Support Vector Regressor on ResNet \citep{he2016deep} features to predict this difficulty score. This model achieves a 0.41 correlation with the ground truth (comparable to the best model of \cite{ionescu_cvpr16} at 0.47). To evaluate the relationship between image complexity and performance gains from diverse decoding, we use this trained predictor to estimate a difficulty score $s$ for each image in the COCO \cite{coco} dataset. We compute the mean ($\mu=3.3$) and standard deviation ($\sigma=0.61$) and divide the images into three bins, \texttt{Simple} ($s\leq \mu-\sigma$), \texttt{Average} ($\mu{-}\sigma < s < \mu{+}\sigma$), and \texttt{Complex} ($s \geq \mu+\sigma$), consisting of 745, 3416 and 839 images respectively. Figure 3 shows some sample \texttt{Simple}, \texttt{Average}, and \texttt{Complex} images from the PASCAL-50S dataset. While simple images like close-up pictures of cats may only be described in a handful of ways by human captioners (first column), complex images with multiple objects and interactions will be described in many different ways depending on the focus of the captioner (last column). In the subsequent experiments on image-grounded language generation tasks, we show that improvements from DBS are greater for more complex images. \subsection{Image Captioning} \vspace{-5pt} \xhdr{Dataset and Models.} We evaluate on two datasets -- COCO \citep{coco} and PASCAL-50S \citep{vedantam_cvpr15}. We use the public splits as in \cite{karpathy_cvpr15} for COCO. PASCAL-50S is used only for testing, except for 200 validation images used to tune hyperparameters. We train a captioning model \citep{vinyals_cvpr15} using the \texttt{neuraltalk2}\footnote{\texttt{\url{https://github.com/karpathy/neuraltalk2}}} code repository. \xhdr{Results.} As can be observed from \tabref{tab:coco_quant} \ar{(Top)}, DBS outperforms both BS and \cite{li_arxiv16} on both the COCO and PASCAL-50S datasets. We observe that gains on PASCAL-50S are more pronounced (7.24\% and 9.60\% Oracle@20 improvements against BS and \cite{li_arxiv16}) than on COCO. 
This suggests diverse predictions are especially advantageous when there is a mismatch between training and testing sets making DBS a better inference strategy in real-world applications. \iffalse \fi Table \ref{tab:coco_quant} \ar{(Top)} also shows the number of distinct n-grams produced by different techniques. Our method produces significantly more distinct n-grams (almost 300\% increase in the number of 4-grams produced) as compared to BS. We also note that our method tends to produce slightly longer captions compared to beam search on average. Moreover, on the PASCAL-50S test split we observe that DBS finds more likely top-1 solutions on average -- DBS obtains a maximum $\log$-probability of -6.53 as against -6.91 got by BS of same beam width. While the performance of DBS is guaranteed to be better than a BS of size $\nicefrac{B}{G}$, this experimental evidence suggests that using DBS as a replacement to BS leads to better or at least comparable performance. \iffalse \fi \iffalse \fi \iffalse \fi \xhdr{Results by Image Complexity.} From Table \ref{tab:coco_quant}, we can see that as the complexity of images increases DBS outperforms standard beam search (difference shown in parentheses) and other baselines by larger margins for all values of $k$. For example, at Oracle Spice@20, DBS achieves significant improvements over BS of 0.67, 0.91, and 1.13 for \texttt{Simple}, \texttt{Average}, and \texttt{Complex} images respectively. While DBS improves over BS in all settings, complex images benefit even more from diversity-inducing inference than simple images. \newcommand{\imin}[1]{\includegraphics[width=60px, height=40px]{#1}} \newcommand{\iminb}[1]{\setlength{\fboxsep}{-2pt}\setlength{\fboxrule}{2pt}\fbox{\includegraphics[width=60px, height=40px]{#1}}} \ar{\xhdr{Human Preference by Difficulty.} To further establish the effectiveness of our method, we evaluate human preference between captions decoded using DBS and BS. In this forced-choice test, DBS captions were preferred over BS 60\% of the time by human annotators. Further, they were preferred about 50\%, 69\% and 83\% of the times for \texttt{Simple}, \texttt{Average} and \texttt{Difficult} images respectively. Furthermore, we observe a positive correlation ($\rho = 0.73$) between difficulty scores and humans preferring DBS to BS. Further details about this experiment are provided in the supplement.} \iffalse \fi \subsection{Visual Question Generation} We also report results on Visual Question Generation (VQG) \citep{mostafazadeh_arxiv16}, where a model is trained to produce questions \emph{about an image}. Generating visually focused questions requires reasoning about multiple problems that are central to vision -- \eg, object attributes, relationships between objects, and natural language. Similar to captioning, there are many sensible questions for a given image. The VQG dataset \citep{mostafazadeh_arxiv16} consists of 5 human-generated questions per image for 5000 images from COCO \citep{coco}. We use a model similar to the one used for captioning, except that it is now trained to output questions rather than captions. Similar to previous results, using beam search to sample outputs results in similarly worded question while DBS decoded questions ask about multiple details of the image (see Fig.~\ref{fig:vqg}). We show quantitative evaluations in Table \ref{tab:vqg_quant} for the VQG dataset as a whole and when partitioned by image difficulty. 
We find DBS significantly outperforms the baseline methods on this task, both on standard metrics (SPICE) and on measures of diversity. We also observe that the gap between DBS and the baseline methods is more pronounced than in the captioning task, and attribute this to the increased variety of possible visually grounded questions compared to captions, which often describe only a few major salient objects. The general trend that more complex images benefit more from diverse decoding also persists in this setting. \subsection{Machine Translation} \xhdr{Dataset and Models.} We use the English-French parallel data from the \emph{europarl} corpus as the training set. We report results on \emph{news-test-2013} and \emph{news-test-2014} and use \emph{news-test-2012} to tune DBS parameters. We train an encoder-decoder architecture as proposed in \cite{bahdanau_arxiv14} using the \texttt{dl4mt-tutorial}\footnote{\url{https://github.com/nyu-dl/dl4mt-tutorial}} code repository. The encoder consists of a bi-directional recurrent network (Gated Recurrent Unit) with attention. We use sentence-level BLEU scores \citep{papineni_acl02} to compute oracle metrics and report distinct n-grams as in image-captioning. From \tabref{tab: mt_quant}, we see that DBS consistently outperforms standard baselines with respect to both metrics. \section*{Appendix} \subsection*{Sensitivity Studies} \textbf{Number of Groups.} \figref{fig:G} presents snapshots of the transition from BS to DBS at $B=6$ and $G=\{1,3,6\}$. As the number of groups moves from 1 to $B$, the exploration of the method increases, resulting in more diverse lists. \textbf{Diversity Strength.} As noted in \secref{div_type}, our method is robust to a wide range of values of the diversity strength ($\lambda$). \figref{fig:lambdagrid} shows a grid search of $\lambda$ for image-captioning on the PASCAL-50S dataset. \textbf{Choice of Diversity Function.} \figref{fig:divtype} shows the oracle performance of various forms of the diversity function described in \secref{div_type}. We observe that hamming diversity surprisingly performs the best. Other forms perform comparably while outperforming BS. \subsection*{Human Studies} For image-captioning, we conduct a human preference study between BS and DBS captions as explained in \secref{sec:results}. A screen shot of the interface used to collect human preferences for captions generated using DBS and BS is presented in \figref{fig:amt}. The lists were shuffled to guard the task from being gamed by a turker. As mentioned in \secref{sec:results}, we observe that the \emph{difficulty score} of an image and human preference for DBS captions are positively correlated. The dataset contains more images that are less difficult, and so we analyze the correlation by dividing the data into three bins. For each bin, we report the \% of images for which DBS captions were preferred after a majority vote (\ie at least 3/5 turkers voted in favor of DBS) in \tabref{tab:amt}. At low difficulty scores, consisting mostly of iconic images, one might expect that BS would be preferred more often than chance. However, a mismatch between the statistics of the training and testing data results in a better performance of DBS. Some examples for this case are provided in \figref{fig:qual_2}. More general qualitative examples are provided in \figref{fig:qual_1}. \section{Preliminaries: Decoding RNNs with Beam Search} \label{sec:prelim} We begin with a refresher on BS, before describing our \ar{extension}, Diverse Beam Search. 
For notational convenience, let $[n]$ denote the set of natural numbers from $1$ to $n$ and let $\mathbf{v}_{[n]} = [v_1, v_2, \dots v_n]$ index the first $n$ elements of a vector $\mathbf{v} \in \RR^m$, where $n\leq m$. \textbf{The Decoding Problem.} RNNs are trained to estimate the likelihood of sequences of tokens from a finite dictionary $\calV$ given an input $\xb$. The RNN updates its internal state and estimates the conditional probability distribution over the next output given the input and all previous output tokens. We denote the logarithm of this conditional probability distribution over all tokens at time $t$ as $\theta(y_t) = \log \prob(y_t | y_{t-1},\ldots,y_1, \xb)$. To simplify notation, we index $\theta(\cdot)$ with a single variable $y_t$; but it should be clear that it depends on the previous outputs, $\yb_{[t-1]}$ from the context. The $\log$-probability of a partial solution (\ie the sum of $\log$-probabilities of all previous tokens decoded) can now be written as $\Theta(\yb_{[t]})=\sum_{\tau\in[t]} \theta(y_\tau)$. The decoding problem is then the task of finding a sequence $\yb$ that maximizes $\Theta(\yb)$. As each output is conditioned on all the previous outputs, decoding the optimal length-$T$ sequence in this setting can be viewed as MAP inference on $T$-order Markov chain with the $T$ nodes corresponding to output tokens. Not only does the size of the largest factor in such a graph grow as $|\calV|^T$, but also requires wastefully forwarding of the RNN repeatedly to compute entries in the factors. Thus, approximate algorithms are employed. \textbf{Beam Search.} The most prevalent method for approximate decoding is BS, which stores the top-$B$ highly scoring candidates at each time step; where $B$ is known as the \emph{beam width}. Let us denote the set of $B$ solutions held by BS at the start of time $t$ as $Y_{[t-1]} = \{\yb_{1,[t-1]},\dots,\yb_{B,[t-1]}\}$. At each time step, BS considers all possible single token extensions of these beams given by the set $\calY_t = Y_{[t-1]}\times\calV$ and selects the $B$ most likely extensions. More formally, at each step, \begin{equation} \label{eq: bs} Y_{[t]} \ \ = \argmax_{\yb_{1,[t]},\dots,\yb_{B,[t]} \in \calY_t} \sum_{b \in [B]} \Theta(\yb_{b,[t]}) ~~~~~~s.t.~~\yb_{i,[t]} \neq \yb_{j,[t]} \end{equation} The above objective can be trivially maximized by sorting all $B\times|\calV|$ members of $\calY_t$ by their $\log$-probabilities and selecting the top-$B$. This process is repeated until time $T$ and the most likely sequence is selected by ranking the B beams based on $\log$-probabilities. While this method allows for multiple sequences to be explored in parallel, most completions tend to stem from a single highly valued beam -- resulting in outputs that are typically only minor perturbations of a single sequence. \section{Diverse Beam Search: Formulation and Algorithm} \label{form} To overcome this shortcoming, we consider augmenting the objective in Eq. \ref{eq: bs} with a dissimilarity term $\Delta(Y_{[t]})$ that measures the diversity between candidate sequences. Jointly optimizing for all $B$ candidates at each time step is intractable as the number of possible solutions grows with $|\calV|^B$ (which can easily reach $10^{60}$ for typical language modeling settings). To avoid this joint optimization problem, we divide the beam budget $B$ into $G$ groups and greedily optimize each group using beam search while holding previous groups fixed. 
This doubly greedy approximation along both time and across groups turns $\Delta(Y_{[t]})$ into a function of only the current group's possible extensions. We detail the specifics of our approach in this section. \textbf{Diverse Beam Search.} Let $Y_{[t]}$, the set of all $B$ beams at time $t$, be partitioned into $G$ {non-empty}, {disjoint} subsets $Y_{[t]}^g, \ g{\in}[G]$. Without loss of generality, consider an equal partition such that each group contains $B'=\nicefrac{B}{G}$ beams. Beam search can be applied to each group to produce $B$ solutions; however, each group would produce identical outputs. \ar{Unlike BS, we optimize a modified version of the objective of eq. \ref{eq: bs} which adds a dissimilarity term $\Delta(\yb_{[t]}, Y_{[t]}^g)$, measuring the dissimilarity of a sequence $\yb_{[t]}$ against a group $Y_{[t]}^g$.} While $\Delta(\cdot, \cdot)$ can take various forms, for simplicity we define one broad class that decomposes across beams within each group as: \begin{equation}\label{eq:diversity-term} \Delta(\yb_{[t]}, Y_{[t]}^g) = \sum_{b=1}^{B'}\delta\left(\yb_{[t]}, \yb_{b,[t]}^g\right) \end{equation} where $\delta(\cdot, \cdot)$ is a measure of sequence dissimilarity -- \eg a negative cost for each co-occurring n-gram in two sentences or distance between distributed sentence representations. The exact form of the sequence-level dissimilarity term can vary and we discuss some choices in \secref{div_type}. As we optimize each group with the previous groups fixed, extending group $g$ at time $t$ amounts to a standard BS using dissimilarity augmented $\log$-probabilities and can be written as: \begin{eqnarray} \label{eq:divminB} Y^g_{[t]} \ \ = \argmax_{\yb^g_{1,[t]}, \dots, \yb^g_{B',[t]} \in \mathcal{Y}^g_{t}} && \sum_{b \in [B']} \Theta(\yb^g_{b,[t]}) + \sum_{h=1}^{g-1}\lambda_g\Delta\left(\yb_{b,[t]}^g, Y_{[t]}^h\right)\\ && \st \ \yb^g_{i,[t]} \neq \yb^g_{j,[t]},\ \lambda_g \geq 0 \nonumber \end{eqnarray} \tikzstyle{g1vertex}=[rounded rectangle,fill=green!25,minimum size=20pt,inner sep=3pt] \tikzstyle{g2vertex}=[rounded rectangle,fill=red!25,minimum size=20pt,inner sep=3pt] \tikzstyle{g3vertex}=[rounded rectangle,fill=blue!25,minimum size=20pt,inner sep=3pt] \tikzstyle{g3shade}=[rounded rectangle,opacity=0.65, fill=blue!25,minimum size=20pt,inner sep=3pt] \tikzstyle{edge} = [draw,thick,<-] \tikzstyle{bedge} = [draw,opacity=0.75,thick,<-,bend right=35] \tikzstyle{weight} = [font=\small] \tikzstyle{selected edge} = [draw,line width=5pt,-,red!50] \tikzstyle{ignored edge} = [draw,line width=5pt,-,black!20] \tikzstyle{group box} = [thick, rounded corners=0.25cm, minimum width=30pt, minimum height=50pt] \tikzstyle{searchbox} = [thick, fill=white, rounded corners=0.25cm, minimum width=120pt, minimum height=50pt] \newcommand{\vertspace}{-1.15} \newcommand{\inspace}{-0.45} \newcommand{\of}{1.25} \newcommand{\hz}{1.3} This approach, which we call Diverse Beam Search (DBS), is detailed in Algorithm \ref{alg:dbs}. An example run of DBS is shown in Figure \ref{fig:overview} for decoding image-captions. In the example, $B{=}6$ and $G{=}3$ and so, each group performs a smaller, diversity-augmented BS of size 2. In the snapshot shown, group 3 is being stepped forward and the diversity-augmented score of all words in the dictionary is computed conditioned on previous groups. The scores of all words are adjusted by their similarity to previously chosen words -- `birds', `the' and `an' (Algorithm \ref{alg:dbs}, Line \ref{alg:aug}). 
The optimal continuations are then found by standard BS (Algorithm \ref{alg:dbs}, Line \ref{alg:dobs}). \begin{algorithm}[h] \caption{Diverse Beam Search} \label{alg:dbs} Perform a diverse beam search with $G$ groups using a beam width of $B$ \\ \For{$t=1,\ \ldots \,T$}{ \tcp{perform one step of beam search for first group without diversity} $Y_{[t]}^1 \leftarrow \argmax_{(\yb_{1,[t]}^1, \dots, \yb_{B',[t]}^1)} \sum_{b\in[B']}\Theta(\yb_{b,[t]}^1)$ \\ \For{$g=2,\ \ldots \,G$}{ \tcp{augment log-probabilities with diversity penalty} $\Theta(\yb_{b,[t]}^g) \leftarrow \Theta(\yb_{b,[t]}^g) + \sum_h\lambda_g \Delta(\yb_{b,[t]}^g, Y_{[t]}^h) \quad$ $b \in [B'], \yb_{b,[t]}^g \in \mathcal{Y}^g$ and $\lambda_g > 0$ \label{alg:aug}\\ \tcp{perform one step of beam search for the group} $Y_{[t]}^g \leftarrow \argmax_{(\yb_{1,[t]}^g, \dots, \yb_{B',[t]}^g)} \sum_{b\in[B']}\Theta(\yb_{b,[t]}^g)$ \label{alg:dobs} \\ } } Return set of B solutions, $Y_{[T]} = \bigcup^G_{g=1} Y_{[T]}^g$ \end{algorithm} There are a number of advantages worth noting about our approach. By encouraging diversity between beams at each step (rather than just between highest ranked solutions like in \citet{gimpel_emnlp13}, our approach rewards each group for spending its beam budget to explore different parts of the output space rather than repeatedly chasing sub-optimal beams from prior groups. Furthermore, the staggered group structure enables each group beam search to be performed in parallel with a time offset. This parallel algorithm completes in $T + G$ time steps compared to $T\times G$ running time for a black-box approach of \citet{gimpel_emnlp13}. \\ \\ In summary, DBS is a task agnostic, doubly greedy algorithm that incorporates diversity in beam search with little memory or computational overhead. Moreover, as the first group is not conditioned on other groups during optimization, our method is guaranteed to be at least as good as a beam search of size $B/G$.
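Putting the pieces together, a rough Python sketch of Algorithm~\ref{alg:dbs} with hamming diversity is given below. Here \texttt{model\_logprobs} stands in for a forward pass of the RNN that returns next-token log-probabilities for a batch of partial sequences; the names and the simplified handling of the first time step are assumptions of this sketch, not the released implementation.
\begin{verbatim}
import numpy as np
from collections import Counter

def diverse_beam_search(model_logprobs, V, B, G, T, lam):
    """Doubly greedy DBS (Algorithm 1) with a hamming diversity penalty.
    model_logprobs(prefixes) -> (len(prefixes), V) next-token log-probs."""
    Bp = B // G                                       # beams per group, B' = B/G
    groups = [[([], 0.0) for _ in range(Bp)] for _ in range(G)]   # (prefix, log-prob)
    for t in range(T):
        used = Counter()                              # tokens chosen by earlier groups at step t
        for g in range(G):
            prefixes = [p for p, _ in groups[g]]
            scores = np.array([s for _, s in groups[g]])
            logp = model_logprobs(prefixes)           # (B', V)
            # hamming diversity: push this group away from tokens already used (Line 5)
            penalty = np.array([used[w] for w in range(V)], dtype=float)
            aug = scores[:, None] + logp - lam * penalty
            # one standard beam-search step on the augmented scores (Line 7)
            flat = np.argsort(aug, axis=None)[::-1][:Bp]
            b_ids, w_ids = np.unravel_index(flat, aug.shape)
            groups[g] = [(prefixes[b] + [int(w)], float(scores[b] + logp[b, w]))
                         for b, w in zip(b_ids, w_ids)]
            used.update(int(w) for w in w_ids)
    # a real decoder special-cases t = 0 (all prefixes identical) and end-of-sequence tokens
    return [beam for grp in groups for beam in grp]   # B = G * B' beams; rank by log-prob at the end
\end{verbatim}
Swapping the hamming penalty for the n-gram or embedding-based variants of \secref{div_type} only changes how \texttt{penalty} is computed.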
Pointer Sentinel Mixture Models
1609.07843
Table 2: Single model perplexity on validation and test sets for the Penn Treebank language modeling task. For our models and the models of Zaremba et al. (2014) and Gal (2015), medium and large refer to a 650 and 1500 units two layer LSTM respectively. The medium pointer sentinel-LSTM model achieves lower perplexity than the large LSTM model of Gal (2015) while using a third of the parameters and without using the computationally expensive Monte Carlo (MC) dropout averaging at test time. Parameter numbers with ‡ are estimates based upon our understanding of the model and with reference to Kim et al. (2016).
[ "[BOLD] Model", "[BOLD] Parameters", "[BOLD] Validation", "[BOLD] Test" ]
[ [ "Mikolov & Zweig ( 2012 ) - KN-5", "2M‡", "−", "141.2" ], [ "Mikolov & Zweig ( 2012 ) - KN5 + cache", "2M‡", "−", "125.7" ], [ "Mikolov & Zweig ( 2012 ) - RNN", "6M‡", "−", "124.7" ], [ "Mikolov & Zweig ( 2012 ) - RNN-LDA", "7M‡", "−", "113.7" ], [ "Mikolov & Zweig ( 2012 ) - RNN-LDA + KN-5 + cache", "9M‡", "−", "92.0" ], [ "Pascanu et al. ( 2013a ) - Deep RNN", "6M", "−", "107.5" ], [ "Cheng et al. ( 2014 ) - Sum-Prod Net", "5M‡", "−", "100.0" ], [ "Zaremba et al. ( 2014 ) - LSTM (medium)", "20M", "86.2", "82.7" ], [ "Zaremba et al. ( 2014 ) - LSTM (large)", "66M", "82.2", "78.4" ], [ "Gal ( 2015 ) - Variational LSTM (medium, untied)", "20M", "81.9±0.2", "79.7±0.1" ], [ "Gal ( 2015 ) - Variational LSTM (medium, untied, MC)", "20M", "−", "78.6±0.1" ], [ "Gal ( 2015 ) - Variational LSTM (large, untied)", "66M", "77.9±0.3", "75.2±0.2" ], [ "Gal ( 2015 ) - Variational LSTM (large, untied, MC)", "66M", "−", "73.4±0.0" ], [ "Kim et al. ( 2016 ) - CharCNN", "19M", "−", "78.9" ], [ "Zilly et al. ( 2016 ) - Variational RHN", "32M", "72.8", "71.3" ], [ "Zoneout + Variational LSTM (medium)", "20M", "84.4", "80.6" ], [ "Pointer Sentinel-LSTM (medium)", "21M", "72.4", "[BOLD] 70.9" ] ]
The medium pointer sentinel-LSTM model also achieves lower perplexity than the large LSTM models. Note that the best performing large variational LSTM model uses computationally intensive Monte Carlo (MC) dropout averaging. Monte Carlo dropout averaging is a general improvement for any sequence model that uses dropout but comes at a greatly increased test time cost. The pointer sentinel-LSTM is able to achieve these results with far fewer parameters than other models with comparable performance, specifically with less than a third the parameters used in the large variational LSTM models.
\documentclass{article} % more modern % for top, mid, bottom rule on tables % for FloatBarrier % for URLs \DeclareMathOperator*{\softmax}{softmax} \newcommand{\theHalgorithm}{\arabic{algorithm}} \usepackage[accepted]{icml2016} \icmltitlerunning{Pointer Sentinel Mixture Models} % for Language Modeling} \begin{document} \twocolumn[ \icmltitle{Pointer Sentinel Mixture Models} % for Language Modeling} \icmlauthor{Stephen Merity}{smerity@salesforce.com} \icmlauthor{Caiming Xiong}{cxiong@salesforce.com} \icmlauthor{James Bradbury}{james.bradbury@salesforce.com} \icmlauthor{Richard Socher}{rsocher@salesforce.com} \icmladdress{MetaMind - A Salesforce Company, Palo Alto, CA, USA} \icmlkeywords{deep learning, rnn, lstm, pointer networks, pointer sentinel} \vskip 0.3in ] \begin{abstract} Recent neural network sequence models with $\softmax$ classifiers have achieved their best language modeling performance only with very large hidden states and large vocabularies. Even then they struggle to predict rare or unseen words even if the context makes the prediction unambiguous. We introduce the pointer sentinel mixture architecture for neural sequence models which has the ability to either reproduce a word from the recent context or produce a word from a standard $\softmax$ classifier. Our pointer sentinel-LSTM model achieves state of the art language modeling performance on the Penn Treebank (70.9 perplexity) while using far fewer parameters than a standard $\softmax$ LSTM. In order to evaluate how well language models can exploit longer contexts and deal with more realistic vocabularies and larger corpora we also introduce the freely available WikiText corpus.\footnote{Available for download at the \href{http://metamind.io/research/the-wikitext-long-term-dependency-language-modeling-dataset/}{WikiText dataset site}} \end{abstract} \section{Introduction} A major difficulty in language modeling is learning when to predict specific words from the immediate context. For instance, imagine a new person is introduced and two paragraphs later the context would allow one to very accurately predict this person's name as the next word. For standard neural sequence models to predict this name, they would have to encode the name, store it for many time steps in their hidden state, and then decode it when appropriate. As the hidden state is limited in capacity and the optimization of such models suffer from the vanishing gradient problem, this is a lossy operation when performed over many timesteps. This is especially true for rare words. Models with soft attention or memory components have been proposed to help deal with this challenge, aiming to allow for the retrieval and use of relevant previous hidden states, in effect increasing hidden state capacity and providing a path for gradients not tied to timesteps. Even with attention, the standard $\softmax$ classifier that is being used in these models often struggles to correctly predict rare or previously unknown words. Pointer networks \cite{Vinyals2015} provide one potential solution for rare and out of vocabulary (OoV) words as a pointer network uses attention to select an element from the input as output. This allows it to produce previously unseen input tokens. While pointer networks improve performance on rare words and long-term dependencies they are unable to select words that do not exist in the input. 
We introduce a mixture model, illustrated in Fig.~\ref{fig:ModelIllustration}, that combines the advantages of standard $\softmax$ classifiers with those of a pointer component for effective and efficient language modeling. Rather than relying on the RNN hidden state to decide when to use the pointer, as in the recent work of \citet{Gulcehre2016}, we allow the pointer component itself to decide when to use the $\softmax$ vocabulary through a sentinel. The model improves the state of the art perplexity on the Penn Treebank. Since this commonly used dataset is small and no other freely available alternative exists that allows for learning long range dependencies, we also introduce a new benchmark dataset for language modeling called WikiText. \section{The Pointer Sentinel for Language Modeling} Given a sequence of words $w_1, \ldots, w_{N-1}$, our task is to predict the next word $w_N$. \subsection{The $\softmax$-RNN Component} Recurrent neural networks (RNNs) have seen widespread use for language modeling \cite{Mikolov2010} due to their ability to, at least in theory, retain long term dependencies. RNNs employ the chain rule to factorize the joint probabilities over a sequence of tokens: $p(w_1, \ldots, w_N) = \prod^N_{i=1} p(w_i|w_1, \ldots, w_{i-1}).$ More precisely, at each time step $i$, we compute the RNN hidden state $h_i$ according to the previous hidden state $h_{i-1}$ and the input $x_i$ such that $h_i = RNN(x_i, h_{i-1})$. When all the $N-1$ words have been processed by the RNN, the final state $h_{N-1}$ is fed into a $\softmax$ layer which computes the probability over a vocabulary of possible words: \begin{equation} p_{\text{vocab}}(w) = \softmax(U h_{N-1}), \end{equation} where $p_{\text{vocab}} \in \mathbb{R}^V$, $U \in \mathbb{R}^{V \times H}$, $H$ is the hidden size, and $V$ the vocabulary size. RNNs can suffer from the vanishing gradient problem. The LSTM \cite{Hochreiter1997} architecture has been proposed to deal with this by updating the hidden state according to a set of gates. Our work focuses on the LSTM but can be applied to any RNN architecture that ends in a vocabulary $\softmax$. \subsection{The Pointer Network Component} In this section, we propose a modification to pointer networks for language modeling. To predict the next word in the sequence, a pointer network would select the member of the input sequence $p(w_1, \ldots, w_{N-1})$ with the maximal attention score as the output. The simplest way to compute an attention score for a specific hidden state is an inner product with all the past hidden states $h$, with each hidden state $h_i \in \mathbb{R}^H$. However, if we want to compute such a score for the most recent word (since this word may be repeated), we need to include the last hidden state itself in this inner product. Taking the inner product of a vector with itself results in the vector's magnitude squared, meaning the attention scores would be strongly biased towards the most recent word. Hence we project the current hidden state to a query vector $q$ first. To produce the query $q$ we compute \begin{align} q = \tanh(W h_{N-1} + b), \end{align} where $W \in \mathbb{R}^{H \times H}$, $b \in \mathbb{R}^{H}$, and $q \in \mathbb{R}^H$. 
To generate the pointer attention scores, we compute the match between the previous RNN output states $h_i$ and the query $q$ by taking the inner product, followed by a $\softmax$ activation function to obtain a probability distribution: \begin{eqnarray} z_i = q^T h_i, \label{eq:attn_z} \\ a = \softmax(z), \label{eq:attn_softmax} \end{eqnarray} where $z \in \mathbb{R}^L$, $a \in \mathbb{R}^L$, and $L$ is the total number of hidden states. The probability mass assigned to a given word is the sum of the probability mass given to all token positions where the given word appears: \begin{align} \label{eq:pointerprob} p_{\text{ptr}}(w) = \sum_{i \in I(w, x)} a_i, \end{align} where $I(w, x)$ results in all positions of the word $w$ in the input $x$ and $p_{\text{ptr}} \in \mathbb{R}^V$. This technique, referred to as pointer sum attention, has been used for question answering \cite{Kadlec2016}. Given the length of the documents used in language modeling, it may not be feasible for the pointer network to evaluate an attention score for all the words back to the beginning of the dataset. Instead, we may elect to maintain only a window of the $L$ most recent words for the pointer to match against. The length $L$ of the window is a hyperparameter that can be tuned on a held out dataset or by empirically analyzing how frequently a word at position $t$ appears within the last $L$ words. To illustrate the advantages of this approach, consider a long article featuring two sentences \emph{President Obama discussed the economy} and \emph{President Obama then flew to Prague}. If the query was \emph{Which President is the article about?}, probability mass could be applied to \emph{Obama} in either sentence. If the question was instead \emph{Who flew to Prague?}, only the latter occurrence of \emph{Obama} provides the proper context. The attention sum model ensures that, as long as the entire attention probability mass is distributed on the occurrences of \emph{Obama}, the pointer network can achieve zero loss. This flexibility provides supervision without forcing the model to put mass on supervision signals that may be incorrect or lack proper context. This feature becomes an important component in the pointer sentinel mixture model. \subsection{The Pointer Sentinel Mixture Model} While pointer networks have proven to be effective, they cannot predict output words that are not present in the input, a common scenario in language modeling. We propose to resolve this by using a mixture model that combines a standard $\softmax$ with a pointer. Our mixture model has two base distributions: the $\softmax$ vocabulary of the RNN output and the positional vocabulary of the pointer model. We refer to these as the RNN component and the pointer component respectively. To combine the two base distributions, we use a gating function $g = p(z_i = k|x_i)$ where $z_i$ is the latent variable stating which base distribution the data point belongs to. As we only have two base distributions, $g$ can produce a scalar in the range $[0, 1]$. A value of $0$ implies that only the pointer is used and $1$ means only the $\softmax$-RNN is used. \begin{align} \label{eq:simplemix} p(y_i|x_i) = g~p_{\text{vocab}}(y_i|x_i) + (1 - g)~p_{\text{ptr}}(y_i|x_i). \end{align} While the models could be entirely separate, we re-use many of the parameters for the $\softmax$-RNN and pointer components. This sharing minimizes the total number of parameters in the model and capitalizes on the pointer network's supervision for the RNN component. 
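A minimal NumPy sketch of the pointer component described above is given below; it assumes \texttt{hiddens} holds the $L$ window hidden states, \texttt{h\_last} is $h_{N-1}$, and \texttt{window\_ids} maps each window position to its vocabulary id so that attention mass can be summed per word as in Eq.~\ref{eq:pointerprob}. The names are illustrative and not part of the model definition.
\begin{verbatim}
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def pointer_distribution(h_last, hiddens, window_ids, W, b, V):
    """Pointer sum attention over an L-word window.

    h_last:     (H,)   final RNN hidden state h_{N-1}
    hiddens:    (L, H) hidden states of the last L positions
    window_ids: (L,)   vocabulary id of the word at each window position
    """
    q = np.tanh(W @ h_last + b)         # query vector q = tanh(W h_{N-1} + b)
    z = hiddens @ q                     # attention scores z_i = q^T h_i
    a = softmax(z)                      # probability distribution over window positions
    p_ptr = np.zeros(V)
    np.add.at(p_ptr, window_ids, a)     # pointer sum attention: add mass over repeated words
    return p_ptr
\end{verbatim}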
\subsection{Details of the Gating Function} To compute the new pointer sentinel gate $g$, we modify the pointer component. In particular, we add an additional element to $z$, the vector of attention scores as defined in Eq.~\ref{eq:attn_z}. This element is computed using an inner product between the query and the sentinel\footnote{A sentinel value is inserted at the end of a search space in order to ensure a search algorithm terminates if no matching item is found. Our sentinel value terminates the pointer search space and distributes the rest of the probability mass to the RNN vocabulary.} vector $s \in \mathbb{R}^{H}$. This change can be summarized by changing Eq.~\ref{eq:attn_softmax} to: \begin{equation} a = \softmax \left( \left[z ; q^Ts\right] \right). \end{equation} We define $a \in \mathbb{R}^{V+1}$ to be the attention distribution over both the words in the pointer window as well as the sentinel state. We interpret the last element of this vector to be the gate value: $g = a[V+1]$. Any probability mass assigned to $g$ is given to the standard $\softmax$ vocabulary of the RNN. The final updated, normalized pointer probability over the vocabulary in the window then becomes: \begin{equation}\label{eq:newPtr} p_{\text{ptr}}(y_i|x_i) = \frac{1}{1 - g} \; a[1:V], \end{equation} where we denoted $[1:V]$ to mean the first $V$ elements of the vector. The final mixture model is the same as Eq.~\ref{eq:simplemix} but with the updated Eq.~\ref{eq:newPtr} for the pointer probability. This setup encourages the model to have both components compete: use pointers whenever possible and back-off to the standard $\softmax$ otherwise. This competition, in particular, was crucial to obtain our best model. By integrating the gating function directly into the pointer computation, it is influenced by both the RNN hidden state and the pointer window's hidden states. \subsection{Motivation for the Sentinel as Gating Function} To make the best decision possible regarding which component to use the gating function must have as much context as possible. As we increase both the number of timesteps and the window of words for the pointer component to consider, the RNN hidden state by itself isn't guaranteed to accurately recall the identity or order of words it has recently seen \cite{Adi2016}. This is an obvious limitation of encoding a variable length sequence into a fixed dimensionality vector. In our task, where we may want a pointer window where the length $L$ is in the hundreds, accurately modeling all of this information within the RNN hidden state is impractical. The position of specific words is also a vital feature as relevant words eventually fall out of the pointer component's window. To correctly model this would require the RNN hidden state to store both the identity and position of each word in the pointer window. This is far beyond what the fixed dimensionality hidden state of an RNN is able to accurately capture. For this reason, we integrate the gating function directly into the pointer network by use of the sentinel. The decision to back-off to the $\softmax$ vocabulary is then informed by both the query $q$, generated using the RNN hidden state $h_{N-1}$, and from the contents of the hidden states in the pointer window itself. This allows the model to accurately query what hidden states are contained in the pointer window and avoid having to maintain state for when a word may have fallen out of the pointer window. 
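Continuing the sketch above, the sentinel only requires appending one extra score, $q^T s$, before the $\softmax$; the last attention value becomes the gate $g$ of Eq.~\ref{eq:simplemix}, and because the pointer distribution of Eq.~\ref{eq:newPtr} is renormalized by $1-g$, the two $1-g$ factors cancel in the final mixture. Variable names below are illustrative.
\begin{verbatim}
import numpy as np

def pointer_sentinel_mixture(h_last, hiddens, window_ids, p_vocab, W, b, s, V):
    """Mix the softmax-RNN distribution with the sentinel-gated pointer."""
    q = np.tanh(W @ h_last + b)                # query vector
    z = np.append(hiddens @ q, q @ s)          # window scores plus the sentinel score
    a = np.exp(z - z.max())                    # joint softmax over L positions + sentinel
    a /= a.sum()
    g = a[-1]                                  # gate: mass handed back to the RNN softmax
    p_ptr_mass = np.zeros(V)
    np.add.at(p_ptr_mass, window_ids, a[:-1])  # unnormalized pointer mass per word
    # p(y|x) = g * p_vocab + (1 - g) * (p_ptr_mass / (1 - g)) = g * p_vocab + p_ptr_mass
    return g * p_vocab + p_ptr_mass
\end{verbatim}
At training time, the cross-entropy loss of the next subsection is simply the negative log of this mixed probability evaluated at the target word.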
\subsection{Pointer Sentinel Loss Function} We minimize the cross-entropy loss of $-\sum_{j} \hat{y_{ij}} \log p(y_{ij}|x_i)$, where $\hat{y_{i}}$ is a one hot encoding of the correct output. During training, as $\hat{y_{i}}$ is one hot, only a single mixed probability $p(y_{ij})$ must be computed for calculating the loss. This can result in a far more efficient GPU implementation. At prediction time, when we want all values for $p(y_i|x_i)$, a maximum of $L$ word probabilities must be mixed, as there is a maximum of $L$ unique words in the pointer window of length $L$. This mixing can occur on the CPU where random access indexing is more efficient than on the GPU. Following the pointer sum attention network, the aim is to place probability mass from the attention mechanism on the correct output $\hat{y_{i}}$ if it exists in the input. In the case of our mixture model, the pointer loss instead becomes: \begin{align} - \log \left( g + \sum_{i \in I(y, x)} a_i\right), \end{align} where $I(y, x)$ results in all positions of the correct output $y$ in the input $x$. The gate $g$ may be assigned all probability mass if, for instance, the correct output $\hat{y_{i}}$ exists only in the $\softmax$-RNN vocabulary. Furthermore, there is no penalty if the model places the entire probability mass on any of the instances of the correct word in the input window. If the pointer component places the entirety of the probability mass on the gate $g$, the pointer network incurs no penalty and the loss is entirely determined by the loss of the $\softmax$-RNN component.
\subsection{Parameters and Computation Time} The pointer sentinel-LSTM mixture model results in a relatively minor increase in parameters and computation time, especially when compared to the size of the models required to achieve similar performance using standard LSTM models. The only additional parameters required by the model are those for computing $q$, specifically $W \in \mathbb{R}^{H \times H}$ and $b \in \mathbb{R}^{H}$, and the sentinel vector embedding, $s \in \mathbb{R}^{H}$. This is independent of the depth of the RNN, as the pointer component only interacts with the output of the final RNN layer. The additional $H ^ 2 + 2H$ parameters are minor compared to a single LSTM layer's $8 H^2 + 4H$ parameters. Most state of the art models also require multiple LSTM layers. In terms of additional computation, a pointer sentinel-LSTM of window size $L$ only requires computing the query $q$ (a linear layer with $\tanh$ activation), a total of $L$ parallelizable inner product calculations, and the attention scores for the $L$ resulting scalars via the $\softmax$ function.
\section{Related Work} Considerable research has been dedicated to the task of language modeling, from traditional machine learning techniques such as n-grams to neural sequence models in deep learning. Mixture models composed of various knowledge sources have been proposed in the past for language modeling. \citet{Rosenfeld1996} uses a maximum entropy model to combine a variety of information sources to improve language modeling on news text and speech. These information sources include complex overlapping n-gram distributions and n-gram caches that aim to capture rare words. The n-gram cache could be considered similar in some ways to our model's pointer network, where rare or contextually relevant words are stored for later use.
Beyond n-grams, neural sequence models such as recurrent neural networks have been shown to achieve state of the art results \cite{Mikolov2010}. A variety of RNN regularization methods have been explored, including a number of dropout variations \cite{Zaremba2014, Gal2015} which prevent overfitting of complex LSTM language models. Other work has improved language modeling performance by modifying the RNN architecture to better handle increased recurrence depth \cite{Zilly2016}. In order to increase capacity and minimize the impact of vanishing gradients, some language and translation models have also added a soft attention or memory component \cite{Bahdanau2015, Sukhbaatar2015, Cheng2016, Kumar2016, Xiong2016, Ahn2016}. These mechanisms allow for the retrieval and use of relevant previous hidden states. Soft attention mechanisms need to first encode the relevant word into a state vector and then decode it again, even if the output word is identical to the input word used to compute that hidden state or memory. A drawback to soft attention is that if, for instance, \emph{January} and \emph{March} are both equally attended candidates, the attention mechanism may blend the two vectors, resulting in a context vector closest to \emph{February} \cite{Kadlec2016}. Even with attention, the standard $\softmax$ classifier being used in these models often struggles to correctly predict rare or previously unknown words. Attention-based pointer mechanisms were introduced in \citet{Vinyals2015} where the pointer network is able to select elements from the input as output. In the above example, only \emph{January} or \emph{March} would be available as options, as \emph{February} does not appear in the input. The use of pointer networks has been shown to help with geometric problems \cite{Vinyals2015}, code generation \cite{Ling2016}, summarization \cite{Gu2016, Gulcehre2016}, and question answering \cite{Kadlec2016}. While pointer networks improve performance on rare words and long-term dependencies, they are unable to select words that do not exist in the input. \citet{Gulcehre2016} introduce a pointer $\softmax$ model that can generate output from either the vocabulary $\softmax$ of an RNN or the location $\softmax$ of the pointer network. Not only does this allow for producing out-of-vocabulary (OOV) words which are not in the input, but the pointer $\softmax$ model is also able to better deal with rare and unknown words than a model only featuring an RNN $\softmax$. Rather than constructing a mixture model as in our work, they use a switching network to decide which component to use. For neural machine translation, the switching network is conditioned on the representation of the context of the source text and the hidden state of the decoder. The pointer network is not used as a source of information for the switching network as in our model. The pointer and RNN $\softmax$ are scaled according to the switching network, and the word or location with the highest final attention score is selected for output. Although this approach uses both a pointer and RNN component, it is not a mixture model and does not combine the probabilities for a word if it occurs in both the pointer location $\softmax$ and the RNN vocabulary $\softmax$. In our model, the word probability is a mix of both the RNN and pointer components, allowing for better predictions when the context may be ambiguous.
Extending this concept further, the latent predictor network \cite{Ling2016} generates an output sequence conditioned on an arbitrary number of base models where each base model may have differing granularity. In their task of code generation, the output could be produced one character at a time using a standard $\softmax$ or instead copy entire words from referenced text fields using a pointer network. As opposed to \citet{Gulcehre2016}, all states which produce the same output are merged by summing their probabilities. Their model however requires a more complex training process involving the forward-backward algorithm for Semi-Markov models to prevent an exponential explosion in potential paths. \section{WikiText - A Benchmark for Language Modeling} We first describe the most commonly used language modeling dataset and its pre-processing in order to then motivate the need for a new benchmark dataset. \subsection{Penn Treebank} In order to compare our model to the many recent neural language models, we conduct word-level prediction experiments on the Penn Treebank (PTB) dataset \cite{Marcus1993}, pre-processed by \citet{Mikolov2010}. The dataset consists of 929k training words, 73k validation words, and 82k test words. As part of the pre-processing performed by \citet{Mikolov2010}, words were lower-cased, numbers were replaced with N, newlines were replaced with $\langle eos \rangle$, and all other punctuation was removed. The vocabulary is the most frequent 10k words with the rest of the tokens being replaced by an $\langle unk \rangle$ token. For full statistics, refer to Table \ref{table:data}. \subsection{Reasons for a New Dataset} While the processed version of the PTB above has been frequently used for language modeling, it has many limitations. The tokens in PTB are all lower case, stripped of any punctuation, and limited to a vocabulary of only 10k words. These limitations mean that the PTB is unrealistic for real language use, especially when far larger vocabularies with many rare words are involved. Fig.~\ref{fig:zipf} illustrates this using a Zipfian plot over the training partition of the PTB. The curve stops abruptly when hitting the 10k vocabulary. Given that accurately predicting rare words, such as named entities, is an important task for many applications, the lack of a long tail for the vocabulary is problematic. Other larger scale language modeling datasets exist. Unfortunately, they either have restrictive licensing which prevents widespread use or have randomized sentence ordering \cite{Chelba2013} which is unrealistic for most language use and prevents the effective learning and evaluation of longer term dependencies. Hence, we constructed a language modeling dataset using text extracted from Wikipedia and will make this available to the community. \subsection{Construction and Pre-processing} We selected articles only fitting the \emph{Good} or \emph{Featured} article criteria specified by editors on Wikipedia. These articles have been reviewed by humans and are considered well written, factually accurate, broad in coverage, neutral in point of view, and stable. This resulted in 23,805 Good articles and 4,790 Featured articles. The text for each article was extracted using the Wikipedia API. Extracting the raw text from Wikipedia mark-up is nontrivial due to the large number of macros in use. These macros are used extensively and include metric conversion, abbreviations, language notation, and date handling. 
Once extracted, specific sections which primarily featured lists were removed by default. Other minor bugs, such as sort keys and Edit buttons that leaked in from the HTML, were also removed. Mathematical formulae and \LaTeX\ code were replaced with $\langle formula \rangle$ tokens. Normalization and tokenization were performed using the Moses tokenizer \cite{Koehn2007}, slightly augmented to further split numbers (8,600 $\rightarrow$ 8 @,@ 600) and with some additional minor fixes. Following \citet{Chelba2013}, a vocabulary was constructed by discarding all words with a count below 3. Words outside of the vocabulary were mapped to the $\langle unk \rangle$ token, also a part of the vocabulary. To ensure the dataset is immediately usable by existing language modeling tools, we have provided the dataset in the same format and following the same conventions as that of the PTB dataset above.
\subsection{Statistics} The full WikiText dataset is over 103 million words in size, a hundred times larger than the PTB. It is also a tenth the size of the One Billion Word Benchmark \cite{Chelba2013}, one of the largest publicly available language modeling benchmarks, whilst consisting of articles that allow for the capture and usage of longer term dependencies as might be found in many real world tasks. The dataset is available in two different sizes: WikiText-2 and WikiText-103. Both feature punctuation, original casing, a larger vocabulary, and numbers. WikiText-2 is two times the size of the Penn Treebank dataset. WikiText-103 features all extracted articles. Both datasets use the same articles for validation and testing with the only difference being the vocabularies. For full statistics, refer to Table \ref{table:data}.
\section{Experiments} \subsection{Training Details} As the pointer sentinel mixture model uses the outputs of the RNN from up to $L$ timesteps back, this presents a challenge for training. If we do not regenerate the stale historical outputs of the RNN when we update the gradients, backpropagation through these stale outputs may result in incorrect gradient updates. If we do regenerate all stale outputs of the RNN, the training process is far slower. As we can make no theoretical guarantees on the impact of stale outputs on gradient updates, we opt to regenerate the window of RNN outputs used by the pointer component after each gradient update. We also use truncated backpropagation through time (BPTT) in a different manner from many other RNN language models. Truncated BPTT allows for practical time-efficient training of RNN models but has fundamental trade-offs that are rarely discussed. \begin{algorithm}[t!] \caption{Calculate truncated BPTT where every $k_1$ timesteps we run back propagation for $k_2$ timesteps} \label{tbptt} \begin{algorithmic} \FOR{t = 1 \TO t = T} \STATE Run the RNN for one step, computing $h_t$ and $z_t$ \STATE \IF {$k_1$ \textbf{divides} $t$} \STATE Run BPTT from $t$ down to $t - k_2$ \ENDIF \ENDFOR \end{algorithmic} \end{algorithm} When running truncated BPTT, backpropagation is run for $k_2$ timesteps every $k_1$ timesteps, as seen in Algorithm \ref{tbptt}. For many RNN language modeling training schemes, $k_1 = k_2$, meaning that every $k$ timesteps truncated BPTT is performed for the $k$ previous timesteps. This results in only a single RNN output receiving backpropagation for $k$ timesteps, with the other extreme being that the first token receives backpropagation for $0$ timesteps.
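As a small illustration of Algorithm~\ref{tbptt}, the following sketch only enumerates which timesteps each backward pass covers; the RNN forward and backward computations themselves are elided, and the function name is ours, not part of any released code.
\begin{verbatim}
# Illustrative sketch (not training code): enumerate the truncated BPTT
# schedule of Algorithm 1. Every k1 timesteps, backpropagation is run
# over the k2 timesteps ending at the current position t.
def truncated_bptt_schedule(T, k1, k2):
    for t in range(1, T + 1):
        # the RNN forward step for timestep t would happen here
        if t % k1 == 0:
            span = list(range(max(1, t - k2 + 1), t + 1))
            yield t, span    # timesteps covered by this backward pass

for t, span in truncated_bptt_schedule(T=10, k1=5, k2=5):
    print(t, span)           # t=5 -> [1..5], t=10 -> [6..10]
\end{verbatim}
With $k_1 = k_2$, only the final timestep of each block is backpropagated through the full $k_2$ steps, which is exactly the imbalance described above.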
This issue is compounded by the fact that most language modeling code splits the data temporally such that the boundaries are always the same. As such, most words in the training data will never experience a full backpropagation for $k$ timesteps. In our task, the pointer component always looks $L$ timesteps into the past if $L$ past timesteps are available. We select $k_1 = 1$ and $k_2 = L$ such that for each timestep we perform backpropagation for $L$ timesteps and advance one timestep at a time. Only the loss for the final predicted word is used for backpropagation through the window.
\subsection{Model Details} Our experimental setup reflects that of \citet{Zaremba2014} and \citet{Gal2015}. We increased the number of timesteps used during training from 35 to 100, matching the length of the window $L$. Batch size was increased to 32 from 20. We also halve the learning rate when validation perplexity is worse than the previous iteration, stopping training when validation perplexity fails to improve for three epochs or when 64 epochs are reached. The gradients are rescaled if their global norm exceeds 1 \cite{Pascanu2013}.\footnote{The highly aggressive clipping is likely due to the increased BPTT length. Even with such clipping, early batches may experience excessively high perplexity, though this settles rapidly.} We evaluate the medium model configuration which features a hidden size of $H=650$ and a two-layer LSTM. We compare against the large model configuration which features a hidden size of 1500 and a two-layer LSTM. We produce results for two model types: an LSTM model that uses dropout regularization, and the pointer sentinel-LSTM model. The variants of dropout used were zoneout \cite{Krueger2016} and variational inference based dropout \cite{Gal2015}. Zoneout, which stochastically forces some recurrent units to maintain their previous values, was used for the recurrent connections within the LSTM. Variational inference based dropout, where the dropout mask for a layer is locked across timesteps, was used on the input to each RNN layer and also on the output of the final RNN layer. We used a value of 0.5 for both dropout connections.
\subsection{Comparison over Penn Treebank} Table \ref{table:PTBresults} compares the pointer sentinel-LSTM to a variety of other models on the Penn Treebank dataset. The pointer sentinel-LSTM achieves the lowest perplexity, followed by the recent Recurrent Highway Networks \cite{Zilly2016}. The medium pointer sentinel-LSTM model also achieves lower perplexity than the large LSTM models. Note that the best performing large variational LSTM model uses computationally intensive Monte Carlo (MC) dropout averaging. Monte Carlo dropout averaging is a general improvement for any sequence model that uses dropout but comes at a greatly increased test time cost. In \citet{Gal2015} it requires rerunning the test model with $1000$ different dropout masks. The pointer sentinel-LSTM is able to achieve these results with far fewer parameters than other models with comparable performance, specifically with less than a third of the parameters used in the large variational LSTM models. We also test a variational LSTM that uses zoneout, which serves as the RNN component of our pointer sentinel-LSTM mixture. This variational LSTM model performs BPTT for the same length $L$ as the pointer sentinel-LSTM, where $L = 100$ timesteps. The results for this model ablation are worse than those of \citet{Gal2015}'s variational LSTM without Monte Carlo dropout averaging.
\subsection{Comparison over WikiText-2} As WikiText-2 is being introduced in this work, there are no existing baselines. We provide two baselines to compare the pointer sentinel-LSTM against: our variational LSTM using zoneout and the medium variational LSTM used in \citet{Gal2015}.\footnote{https://github.com/yaringal/BayesianRNN} Attempts to run the \citet{Gal2015} large model variant, a two-layer LSTM with hidden size 1500, resulted in out-of-memory errors on a 12GB K80 GPU, likely due to the increased vocabulary size. We chose the best hyperparameters from the PTB experiments for all models. Table \ref{table:WikiText2results} shows a similar gain made by the pointer sentinel-LSTM over the variational LSTM models. The variational LSTM from \citet{Gal2015} again beats out the variational LSTM used as a base for our experiments.
\section{Analysis} \subsection{Impact on Rare Words} A hypothesis as to why the pointer sentinel-LSTM can outperform an LSTM is that the pointer component allows the model to effectively reproduce rare words. An RNN may be able to better use the hidden state capacity by deferring to the pointer component. The pointer component may also allow for a sharper selection of a single word than may be possible using only the $\softmax$. Figure \ref{fig:PTBdiff} shows the improvement of perplexity when comparing the LSTM to the pointer sentinel-LSTM with words split across buckets according to frequency. It shows that the pointer sentinel-LSTM has stronger improvements as words become rarer. Even on the Penn Treebank, where there is a relative absence of rare words due to only selecting the most frequent 10k words, we can see the pointer sentinel-LSTM mixture model provides a direct benefit. While the improvements are largest on rare words, we can see that the pointer sentinel-LSTM is still helpful on relatively frequent words. This may be due to the pointer component directly selecting the word or to the pointer supervision signal improving the RNN by allowing gradients to flow directly to other occurrences of the word in that window.
\subsection{Qualitative Analysis of Pointer Usage} In a qualitative analysis, we visualized the gate use and pointer attention for a variety of examples in the validation set, focusing on predictions where the gate primarily used the pointer component. These visualizations are available in the supplementary material. As expected, the pointer component is heavily used for rare names such as \emph{Seidman} (23 times in training), \emph{Iverson} (7 times in training), and \emph{Rosenthal} (3 times in training). The pointer component was also heavily used when it came to other named entities such as companies like \emph{Honeywell} (8 times in training) and \emph{Integrated} (41 times in training, though due to lowercasing of words this includes \emph{integrated circuits}, \emph{fully integrated}, and other generic usage). Surprisingly, the pointer component was also used for many frequent tokens. For selecting the unit of measurement (tons, kilograms, \ldots) or the short scale of numbers (thousands, millions, billions, \ldots), the pointer would refer to previous recent usage. This is to be expected, especially when phrases are of the form \emph{increased from N tons to N tons}. The model can even be found relying on a mixture of the $\softmax$ and the pointer for predicting certain frequent verbs such as \textit{said}.
Finally, the pointer component can be seen pointing to words at the very end of the 100 word window (position 97), a far longer horizon than the 35 steps that most language models truncate their backpropagation training to. This illustrates why the gating function must be integrated into the pointer component. If the gating function could only use the RNN hidden state, it would need to be wary of words that were near the tail of the pointer, especially if it was not able to accurately track exactly how long it was since seeing a word. By integrating the gating function into the pointer component, we avoid the RNN hidden state having to maintain this intensive bookkeeping. \section{Conclusion} We introduced the pointer sentinel mixture model and the WikiText language modeling dataset. This model achieves state of the art results in language modeling over the Penn Treebank while using few additional parameters and little additional computational complexity at prediction time. We have also motivated the need to move from Penn Treebank to a new language modeling dataset for long range dependencies, providing WikiText-2 and WikiText-103 as potential options. We hope this new dataset can serve as a platform to improve handling of rare words and the usage of long term dependencies in language modeling. \bibliographystyle{icml2016} \clearpage \onecolumn \section*{Supplementary material} \subsection*{Pointer usage on the Penn Treebank} For a qualitative analysis, we visualize how the pointer component is used within the pointer sentinel mixture model. The gate refers to the result of the gating function, with 1 indicating the RNN component is exclusively used whilst 0 indicates the pointer component is exclusively used. We begin with predictions that are using the RNN component primarily and move to ones that use the pointer component primarily. \clearpage \subsection*{Zipfian plot over WikiText-103} \end{document}
Vocabulary Selection Strategies for Neural Machine Translation
1610.00072
Table 1: Final decoding accuracy results for WMT English-German and English-Romanian on various test sets (newstest 2010 – 2016, except 2013 our validation set). We report the average vocabulary size per sentence, coverage of the reference and decoding time in milliseconds for newstest2015 and newstest2016. The parameter column indicates the maximum number of selected candidates per source word or phrase.
[ "[BOLD] EN-DE", "[BOLD] Param.", "[BOLD] 2010", "[BOLD] 2011", "[BOLD] 2012", "[BOLD] 2014", "[BOLD] 2015", "[BOLD] Voc.", "[BOLD] Cov.", "[BOLD] Time" ]
[ [ "Full vocabulary", "–", "18.5", "16.5", "16.8", "19.0", "22.5", "100,000", "93.3%", "1,524" ], [ "Co-occurrences", "top 300", "17.2", "15.6", "15.8", "18.1", "20.6", "1,036", "81.1%", "156" ], [ "PCA", "top 100", "15.4", "13.7", "14.2", "14.5", "18.6", "966", "74.8%", "143" ], [ "Word align", "top 100", "18.5", "16.4", "16.7", "19.0", "22.2", "1,093", "88.5%", "143" ], [ "Phrase pairs", "top 200", "18.1", "16.2", "16.6", "18.9", "22.0", "857", "86.2%", "153" ], [ "SVM", "–", "18.3", "16.2", "16.6", "18.8", "21.9", "1,284", "86.6%", "-" ], [ "[BOLD] EN-RO", "[BOLD] Param.", "[EMPTY]", "[EMPTY]", "[EMPTY]", "[EMPTY]", "[BOLD] 2016", "[BOLD] Voc.", "[BOLD] Cov.", "[BOLD] Time" ], [ "Full vocabulary", "–", "[EMPTY]", "[EMPTY]", "[EMPTY]", "[EMPTY]", "27.9", "50,000", "96.0%", "966" ], [ "Word align", "top 50", "[EMPTY]", "[EMPTY]", "[EMPTY]", "[EMPTY]", "28.1", "691", "89.3%", "186" ] ]
What is the exact speed and accuracy trade-off when reducing the output vocabulary? For our best methods (word alignments, phrase alignments and SVMs) we pick points such that vocabularies are kept small while maintaining good accuracy compared to the full vocabulary setting. For co-occurrence counts and bilingual PCA we choose settings with comparable speed. On English-German translation we achieve more than a 10-fold speed-up over the full vocabulary setting. Accuracy for the word alignment-based selection matches the full vocabulary setting on most test sets or decreases only slightly. On English-Romanian, we achieve a speed-up of over 5 times with word alignments at 28.1 BLEU versus 27.9 BLEU for the full vocabulary baseline. The smaller speed-up on English-Romanian is due to the smaller vocabulary of the baseline in this setting, which is 50k compared to 100k for English-German. Decoding benefits more from vocabulary selection than training because training scores the vocabulary exactly once per target position, while beam search has to score multiple hypotheses at each generation step.
\documentclass[11pt]{article} \graphicspath{ {figures/} } \newcommand\BibTeX{B{\sc ib}\TeX} \title{Instructions for EACL-2017 Proceedings} \author{First Author \\ Affiliation / Address line 1 \\ Affiliation / Address line 2 \\ Affiliation / Address line 3 \\ {\tt email@domain} \\\And Second Author \\ Affiliation / Address line 1 \\ Affiliation / Address line 2 \\ Affiliation / Address line 3 \\ {\tt email@domain} \\} \date{} \begin{document} \maketitle \begin{abstract} Vocabulary size is an important challenge of neural machine translation because scoring all tokens during decoding is expensive in both memory and computation time. We extend the approach developed in \newcite{Jean} and \newcite{IBM}, which consists in dynamically choosing small subsets of the vocabulary so that only relevant tokens are scored. We experiment with several strategies for selecting this vocabulary and we provide data on accuracy retention and speed gains, showing that a few hundred tokens are enough to match the accuracy of big vocabularies. We show that a simple strategy based on word alignments provides the best results, with a 15-fold speed increase at decoding time and accuracy losses smaller than 0.1 BLEU. \end{abstract}
\section{Introduction}%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Machine translation is a sequence to sequence problem; encoder-decoder deep learning architectures are therefore particularly appropriate for it. Neural machine translation has become good enough to challenge traditional statistical methods in recent years \cite{Bahdanau}. One of the issues NMT faces is related to vocabulary sizes. The output layer of the decoder is of linear complexity with respect to the target vocabulary size because, to generate the next token, one needs to score every possible token every time. The number of possible target words is thus generally set between 30k and 80k, which makes NMT systems produce a lot of \emph{out-of-vocabulary} (OOV) tokens to fill the gaps. One approach to deal with that issue is to use OOV tokens that contain alignment information, which allows a finer post-processing step of OOV replacement \cite{LuongUnk}. However, this is only a way of doing damage control and does not provide a way to train large vocabularies. Characters, whether as a last resort for rare words \cite{LuongHybrid} or for everything \cite{Ling}, and sub-word units \cite{Senrich} can be used instead of words to manage open vocabularies. To use very large vocabularies, \newcite{Jean} groups training data in sets that rely on smaller sub-vocabularies, and an alignment model is used to select the relevant vocabulary at decoding time. \newcite{IBM} build on this approach by leveraging phrase-level alignments on top of the word-based dictionary and by using it at both training and decoding time. Our approach extends the latter, making the following contributions: \begin{itemize} \item we explore several other vocabulary selection methods; \item we quantify the gains obtained by these vocabulary selection methods. \end{itemize}
\section{Translation model}%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% The neural system we use is an in-house Torch reimplementation of an encoder-decoder architecture similar to that of \newcite{Bahdanau}. We have one embedding for the source language and three for the target language: one for the attention module, one as input to the decoder and one to be multiplied with the decoder output for scoring.
The former embedding needs to be fully kept in memory, but the other three are look-up tables from which we use only a few columns at a time. A bidirectional LSTM is used as encoder; hidden representations of the forward and the backward passes are simply stacked. The attention module is a global dot-product style soft attention mechanism \cite{LuongAttention}: the previous decoder hidden state is projected into the source word space, added to the attentional embedding of the previous target token, and dot-multiplied with every encoder output. These products are then used to compute a weighted sum of the encoder outputs, the attentional vector. The decoder is a conditional LSTM which takes as inputs the attentional vector, the decoding embedding of the previous target token and the previous hidden state to output the next hidden state. Scoring embedding vectors are then dot-multiplied with this hidden representation to obtain token-wise scores. Training samples are bucket-batched on target sentence length: we order translation pairs by target length, using source length to break ties (this limits source-side padding), and make batches that contain only samples of the same target length, leaving some batches not full if necessary.
\section{Vocabulary selection methods}%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \label{selection} One important challenge of current neural translation systems is the vocabulary size of the target language. In figure \ref{bleu_vs_fullvocab}, displaying BLEU scores obtained on the English-German WMT 2013 test set with varying vocabulary sizes, we see that no plateau has been reached; it would be useful to use even more tokens. Some languages that are more inflected than German, such as Czech, may benefit from bigger vocabularies even more. However, using 100k tokens or more creates two problems: \begin{itemize} \item for every token that is added to an output sentence, we have to compute the scalar product of the hidden representation produced by the decoder with all target token embeddings for scoring purposes. This cost grows linearly with the vocabulary size; \item storing target embeddings in the limited GPU memory creates a trade-off between vocabulary size and hidden representation sizes. Best performance is obtained with a vocabulary size of 100k in our model, but it would be better to use larger vocabularies and larger hidden representations. \end{itemize} \paragraph*{} We address this output layer bottleneck by using a small dynamic vocabulary. For each batch that is presented to the network, a subset of the large global vocabulary is selected and only its tokens are scored in the output layer. To implement this, we add a look-up table from which the target vocabulary embedding matrix is extracted for each batch and multiplied with the decoder output. We next explain how these small vocabularies are obtained.
\subsection{Selection methods} The selection strategies are functions that take a sentence in the source language as input and return a set of tokens in the target language. These functions are learned on a parallel translation corpus. Most of them build on a word-level selection function $f$ whose selections we aggregate at the sentence and batch levels. At training time, we add the reference tokens to this selection.
\begin{align*} f_\text{sentence}(x) &= \bigcup_{0\leq i < n} f(x_i) \\ f_\text{batch}(X) &= \bigcup_{0\leq j < k} f_\text{sentence}(X_j) \\ f_\text{training}(X) &= f_\text{batch}(X) \cup Y\\ \end{align*}
\paragraph*{Word co-occurrence} The simplest method is to count co-occurrences of each \texttt{(source token, target token)} pair in the dataset, which can be thought of as an estimation of $p(s, t)$ with $s$ and $t$ source language and target language tokens. These probabilities are then used to fetch a fixed number of target tokens for every source token (the top \emph{k}, \emph{k} being a hyperparameter). The selected vocabulary is the union of all of these selections over a sentence.\footnote{A similar approach where co-occurrence counts are normalized by target token counts (which estimates $p(s|t)$) was tried but gave poor results because of instabilities in the very low frequencies.}
\paragraph*{Principal component analysis} A PCA on the co-occurrence matrix of the joint source and target vocabulary can be used to compute a bilingual token embedding \cite{PCA}; for every source token, we retrieve and rank the closest target tokens in this space and take the top \emph{k}. We then aggregate all top selections to define the sentence-wise subset. A PCA of dimension 1024 is used. This approach is a form of regularization of the co-occurrence method; while the latter requires exact matches between word pairs, PCA takes advantage of the co-occurrence sparsity to reduce the space to a dimension in which we can perform nearest neighbor search to retrieve semantically similar words.
\paragraph*{Word alignment} We use the unsupervised word aligner \texttt{fast\_align} \cite{FastAlign} to estimate which \texttt{(source token, target token)} pairs are aligned between two parallel sentences to design another selection method. Everything works as in the co-occurrence method, with co-occurrence counts replaced by alignment counts.
\paragraph*{Phrase alignment} A more advanced alignment method consists in aligning n-grams directly. We consider every n-gram in the source sentence ($n \leq 5$), retrieve aligned target n-grams and output the union of all the tokens that are present in the n-grams. Our alignment dictionary is thresholded at five alignments so that only relevant n-gram translations appear in it; therefore, most source n-grams (which are not likely to be general phrases) will not have any correspondence in the target language. This method aims at capturing more subtle links that might be missed by word-based methods. For instance, the tokens of ``pass out'', taken individually, are unlikely to make a word-based selection function retrieve the tokens necessary to translate it as ``s'\'evanouir'' or ``tomber dans les pommes'', but a phrase-based method should.
\paragraph*{Support vector machines} Taking this idea one step further, we try to leverage the whole source sentence to select target tokens by using cheap classifiers, one per target token, that take the one-hot encoded sentence as input and output a score related to the probability of presence of that token in a valid translation. We use sparse linear support vector machines trained by stochastic gradient descent \cite{bottou-2010}. SVMs are trained separately and thus need to be calibrated by coherently defining one threshold for each SVM above which its predictions are considered positive; we do this either by using recall on the validation set or by matching token frequencies in the validation set with a given multiplier.
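Once the per-word candidate lists have been estimated, most of the selection functions above reduce to a few dictionary look-ups and set unions. The sketch below assumes alignment (or co-occurrence) counts have already been collected into a nested dictionary; the function names and toy counts are ours and are not part of the system described here.
\begin{verbatim}
# Illustrative sketch of word-level vocabulary selection (our own naming).
# counts[s][t] holds alignment or co-occurrence counts for source token s
# and target token t.

def build_candidates(counts, k):
    """f: map each source token to its top-k target tokens."""
    return {s: set(sorted(t_counts, key=t_counts.get, reverse=True)[:k])
            for s, t_counts in counts.items()}

def select_sentence(table, source_tokens):
    """f_sentence: union of the per-word selections over the sentence."""
    vocab = set()
    for s in source_tokens:
        vocab |= table.get(s, set())
    return vocab

def select_batch(table, batch, references=None):
    """f_batch; when reference sentences are given this is f_training."""
    vocab = set()
    for sent in batch:
        vocab |= select_sentence(table, sent)
    if references is not None:     # at training time, add reference tokens
        for ref in references:
            vocab |= set(ref)
    return vocab

# Toy example with made-up counts:
counts = {"house": {"Haus": 8, "Gebaeude": 3},
          "the": {"das": 20, "die": 18, "der": 15}}
table = build_candidates(counts, k=2)
print(select_batch(table, [["the", "house"]]))  # {'das','die','Haus','Gebaeude'}
\end{verbatim}
At decoding time the selected set indexes into the scoring embedding look-up table, so that only its columns are used in the output layer.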
\subsection{Common words} On top of the token set that is selected by the aforementioned strategies, we can add a set of \emph{common words} that are always used -- for example, the 1000 most frequent words. Doing this is interesting for two reasons: it makes sure no important tokens such as syntactic words or punctuation are omitted, and it reduces the cost of embedding reordering that happens in the look-up table by keeping fixed the columns that would be frequently moved.
\section{Experimental setup}%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Experiments are based on the English-German WMT15 dataset, which is composed of 1.64M sentences from the EU parliament, 1.81M from a web crawl and 0.2M from news commentaries. Sentences longer than 50 tokens are filtered out and the dataset is shuffled. One percent of it is used as a development set (36495 sentences) and the test set is a news dataset of 2169 sentences. We also use the 2013 test set (3000 sentences) for motivating experiments in section \ref{selection}. All models have hidden representations of size 512; they are trained on a Tesla M40 (12GB of memory) using adaptive moment estimation with a learning rate of 0.01, and gradients are clipped at 10. Training takes four days on the full 100k vocabulary model. Data processing, including vocabulary selection, is done in separate threads and does not affect reported times. Testing is done with beams of size 5. A CPU cluster was used to train the 100k SVMs.
\section{Results}%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \paragraph*{Co-occurrence methods} Our aim is to reduce the average vocabulary size as much as possible while retaining similar accuracy, and all methods using co-occurrences failed to do so. The method based on co-occurrences has a bias towards frequent words: for example, if ``John'' is translated half the time as ``John'' and half the time as ``Johannes'', very frequent tokens such as ``.'', ``,'' and ``die'' are likely to co-occur more often and would thus rank higher. We cannot get sufficient coverage out of this method without having vocabulary sizes nearing the original size. PCA, while based on the same co-occurrence matrix as the former method, produces slightly better results. However, coverage is still too small to get an accuracy close to the full vocabulary model.
\paragraph*{Alignment-based} Word-level alignment achieves the best results; they are reported in table \ref{result_table}. Three possible trade-offs between using a fixed common words vocabulary and fetching more candidates are shown, all of which get within 0.06 BLEU points of the full vocabulary model. (SOME OF THESE ARE BLEU SCORES OBTAINED WITH PRETRAINED FULL MODEL, UPDATE WITH ACTUAL REDUCED TRAININGS) Taking this further with phrase-based alignments did not help and even performs a little worse: while we can retain satisfying accuracy with 2k common words and fetching the top 50 candidates, accuracy degrades faster than with word alignment methods when we try to reduce the vocabulary size through either common words or selection.
\paragraph{} Last, various calibration heuristics were tried for the SVM method to ensure consistency across classifiers, but none produced interesting results. We remark that while coverage generally correlates with BLEU scores, the word alignment system achieves better accuracy than the phrase-alignment system at a given coverage.
\paragraph{Vocabulary selection} Figure \ref{vocab_vs_candidates} shows average vocabulary sizes for various combinations of common words and candidate selection with the word alignment method on batches of size 1 and 32. We observe that having no common words when decoding produces vocabularies smaller than a thousand words, with the top 50 setting being competitive with the full vocabulary method in terms of performance. Adding common words is therefore not needed at decoding time. However, when using batches of size 32 like in our training setting, vocabulary size is dominated by subset selection sizes. It is thus interesting to use common words in order to fix the most used embeddings in memory and avoid reordering them at every batch through the look-up table.
\paragraph*{Selection at decoding time} Selection can also be applied at decoding time on a model that was trained with a full vocabulary; such results are shown in figure \ref{bleu_vs_vocab}. COMMENT MORE
\paragraph*{Speed-up} We measure the total time it takes to achieve one epoch over the development set in a training and in a decoding situation. (EITHER USE ACTUAL TRAINING TIMES OR BE EXPLICIT ABOUT THIS BEING FORWARD ONLY) Figure \ref{cpu1_duration} plots times obtained with the align method using the top 10, 20 or 50 candidates. There is no batching here, so average vocabulary sizes are tiny. The training setup (figure \ref{gpu32_duration}) uses batches of size 32. Vocabulary reduction gives a smaller speedup on the GPU than on the CPU, at about a third of the duration of the full model. We see important offsets at vocabulary size zero: the output layer is not the only bottleneck in neural machine translation. The encoder, which is not affected by the vocabulary reduction strategies, makes for a significant portion of global time.
\paragraph*{Memory gains} In the full vocabulary model, a large part of the limited GPU memory is used to store the scoring embeddings which are dot-multiplied with the decoder output. In our setup, training with batches of size 32, the translation model uses 3.4GiB with the full 100k scoring embedding but only 1.3GiB when using a vocabulary size of 5k tokens. The memory bottleneck that is encountered when using big vocabularies is removed; the global vocabulary size can be as big as fits in RAM since it does not impact GPU memory use, and hidden representation sizes can be increased significantly. For instance, doubling the hidden size in our model from 512 to 1024 with 5k tokens raises memory use to 2.7GiB.
\section{Conclusion}%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% We have explored new strategies for vocabulary selection for addressing the neural machine translation target vocabulary bottleneck. We gather from our experiments that a simple word alignment system provides the best strategy, improving training speed as well as decoding speed. Vocabulary reduction makes systems faster at current vocabulary sizes, but it would also be interesting to use it as a way to handle much larger vocabularies and hidden representations. Since only a small part of the embedding is held in memory at any time, it would be possible to store larger embeddings in RAM regardless of GPU memory capacities.
\bibliographystyle{eacl2017} \end{document}
\documentclass[11pt]{article} \graphicspath{ {figures/} } \eaclfinalcopy % Uncomment this line for the final submission \newcommand{\toggle}[1]{} \renewcommand{\toggle}[1]{#1} % uncomment to de-anonymize \newif\ifcomment\commenttrue \ifcomment \newcommand{\macomment}[1]{\marginpar{\begin{center}\textcolor{red}{#1}\end{center}}} \newcommand{\mamark}[1]{\textcolor{red}{{#1}}} \newcommand{\gurvanmark}[1]{\textcolor{blue}{{#1}}} \else \newcommand{\macomment}[1]{} %uncomment to hide comments \newcommand{\mamark}[1]{#1} %uncomment to hide marks \fi \def\textit#1{{\it #1}} \def\textbf#1{{\bf #1}} \def\textsl#1{{\sl #1}} \def\texttt#1{{\tt #1}} \newcommand{\citegen}[1]{\citeauthor{#1}'s (\citeyear{#1})} \newcommand{\citet}{\newcite} \newcommand{\citep}{\cite} \title{Vocabulary Selection Strategies for Neural Machine Translation} \author{Gurvan L'Hostis \\ \'Ecole polytechnique \\ Palaiseau, France \\\And David Grangier \\ Facebook AI Research \\ Menlo Park, CA \\\And Michael Auli \\ Facebook AI Research \\ Menlo Park, CA} \date{} \newcommand{\davidcomment}[1]{{% \color{red}~\\ \rule{\linewidth}{0.5mm} #1 ~\\\rule{\linewidth}{0.5mm}% }} \begin{document} \maketitle \begin{abstract} Classical translation models constrain the space of possible outputs by selecting a subset of translation rules based on the input sentence. Recent work on improving the efficiency of neural translation models adopted a similar strategy by restricting the output vocabulary to a subset of likely candidates given the source. In this paper we experiment with context and embedding-based selection methods and extend previous work by examining speed and accuracy trade-offs in more detail. We show that decoding time on CPUs can be reduced by up to $90$\% and training time by $25$\% on the WMT15 English-German and WMT16 English-Romanian tasks at the same or only negligible change in accuracy. This brings the time to decode with a state of the art neural translation system to just over $140$ msec per sentence on a single CPU core for English-German. \end{abstract} \section{Introduction} Neural Machine Translation (NMT) has made great progress in recent years and improved the state of the art on several benchmarks~\cite{jean:wmt15,sennrich:wmt16,zhou:baidu}. However, neural systems are typically less efficient than traditional phrase-based translation models (PBMT; Koehn et al. 2003)\nocite{koehn:2003:naacl}, both at training and decoding time. The efficiency of neural models depends on the size of the target vocabulary and previous work has shown that vocabularies of well over 50k word types are necessary to achieve good accuracy~\cite{jean:nmt_large_vocab,zhou:baidu}. Neural translation systems compute the probability of the next target word given both the previously generated target words as well as the source sentence. Estimating this conditional distribution is linear in the size of the target vocabulary which can be very large for many language pairs~\cite{grave:softmax}. Recent work in neural translation has adopted sampling techniques from language modeling which do not leverage the input sentence~\cite{mikolov:2011:asru,jean:nmt_large_vocab,chen:lm,zhou:baidu}.
On the other hand, classical translation models generate outputs in an efficient two-step \emph{selection} procedure: first, a subset of promising translation rules is chosen by matching rules to the source sentence, and by pruning them based on local scores such as translation probabilities. Second, translation hypotheses are generated that incorporate non-local scores such as language model probabilities. Recently, \citet{mi:manipulation} proposed a similar strategy for neural translation: a selection method restricts the target vocabulary to a small subset, specific to the input sentence. The subset is then scored by the neural model. Their results demonstrate that vocabulary subsets that are only about 1\% of the original size result in very little to no degradation in accuracy. This paper complements their study by experimenting with additional selection techniques and by analyzing speed and accuracy in more detail. Similar to \citet{mi:manipulation}, we consider selecting target words based either on a dictionary built from Viterbi word alignments, or by matching phrases in a traditional phrase-table, or by using the $k$ most frequent words in the target language. In addition, we investigate methods that do not rely on a traditional phrase-based translation model or alignment model to select target words. We investigate bilingual co-occurrence counts, bilingual embeddings as well as a discriminative classifier to leverage context information via features extracted from the entire source sentence (\textsection\ref{sec:strategies}). Our experiments show speed-ups in CPU decoding by up to a factor of $10$ at very little degradation in accuracy. Training speed on GPUs can be increased by a factor of $1.33$. We find that word alignments as the sole selection method is sufficient to obtain good accuracy. This is in contrast to \citet{mi:manipulation} who used a combination of the $2,000$ most frequent words, word alignments as well as phrase-pairs. Selection methods often fall short in retrieving all words of the gold standard human translation. However, we find that with a reduced vocabulary of $\sim 600$ words they can recover over $99$\% of the words that are actually chosen by a model that decodes over the entire vocabulary. Finally, the speed-ups obtained by vocabulary selection become even more significant if faster encoder models are used, since selection removes the burden of scoring large vocabularies (\textsection\ref{sec:exp}). \section{Vocabulary Selection Strategies} \label{sec:strategies} This section presents different selection strategies inspired by phrase-based translation. We improve on a simple word co-occurrence method by estimating bilingual word embeddings with Hellinger PCA and then by using word alignments instead of co-occurrence counts. Finally, we leverage richer context in the source via bilingual phrases from a phrase-table or by using the entire sentence in a Support Vector Machine classifier. \subsection{Word Co-occurrences} This is the simplest approach we consider. We estimate a co-occurrence table which counts how many times each source word $s$ co-occurs with each target word $t$ in the training bitext. The table allows us to estimate the joint distribution $P(s, t)$. Next, we create a list of the $k$ target words that co-occur most with each source word, i.e., the words $t$ which maximize $P(s, t)$ for a given $s$. Vocabulary selection then simply computes the union of the target word lists associated with each source word in the input. 
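For illustration, the co-occurrence strategy amounts to a pre-computed table lookup followed by a set union. The sketch below is a simplified reimplementation under our own assumptions (plain token lists, a fixed cut-off $k$); it is not the code used in our experiments.
\begin{verbatim}
from collections import Counter, defaultdict

def build_topk_cooccurrence_table(bitext, k=100):
    # bitext: iterable of (source_tokens, target_tokens) pairs.
    # For each source word, keep the k target words with the highest
    # co-occurrence count, i.e. an estimate of argmax_t P(s, t).
    counts = defaultdict(Counter)
    for src, tgt in bitext:
        for s in set(src):
            for t in set(tgt):
                counts[s][t] += 1
    return {s: [t for t, _ in c.most_common(k)]
            for s, c in counts.items()}

def select_vocabulary(source_sentence, topk_table):
    # Union of the candidate lists of all source words in the input.
    candidates = set()
    for s in source_sentence:
        candidates.update(topk_table.get(s, []))
    return candidates
\end{verbatim}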
We were concerned that this strategy over-selects frequent target words which have higher co-occurrence counts than rare words, regardless of the source word. Therefore, we experimented with selecting target words maximizing point-wise mutual information (PMI) instead, i.e., $$ PMI(s,t) = \log \frac{P(s,t)}{P(s)P(t)} $$ However, this estimate was deemed too unreliable for low $P(t)$ in preliminary experiments and did not perform better than just $P(s,t)$. \subsection{Bilingual Embeddings} We build bilingual embeddings by applying Hellinger Principal Component Analysis (PCA) to the matrix of co-occurrence-based conditional probabilities $M_{i,j} = P(t = i| s = j)$; this extends the work on monolingual embeddings of~\citet{lebret:pca} to the bilingual case. The resulting low rank estimate of the matrix can be more robust for rare counts. Hellinger PCA has been shown to produce embeddings which perform similarly to word2vec but at higher speed~\cite{mikolov:word2vec,gouws:bilbowa}. For selection, the estimated co-occurrence can be used instead of the raw counts as described in the above section. This strategy is equivalent to using the low rank representation of each source word (source embedding, i.e., column vectors from the PCA) and finding the target words with the closest low rank representations (target embeddings, i.e., row vectors from the PCA). \subsection{Word Alignments} This strategy uses word alignments learned from a bilingual corpus~\cite{brown:mt}. Word alignment introduces latent variables to model $P(t|s)$, the probability of target word $t$ given source word $s$. The latent variables indicate the source position corresponding to each target position in a sentence pair~\cite{koehn:smt}. We use FastAlign, a popular reparameterization of IBM Model 2~\cite{dyer:fastalign}. For each source word $s$, we build a list of the top $k$ target words maximizing $P(t|s)$. The candidate target vocabulary is the union of the lists for all source words. Compared to co-occurrence counts, this strategy avoids selecting frequent target words when conditioning on a rare source word: word alignments will only link a frequent target word to a rare source word if no better explanation is present in the source sentence. \subsection{Phrase Pairs} This strategy relies on a phrase translation table, i.e., a table pairing source phrases with corresponding target phrases. The phrase table is constructed by reading off all bilingual phrases that are consistent with the word alignments according to an extraction heuristic~\cite{koehn:smt}. For selection, we restrict the phrase table to the phrases present in the source sentence and consider the union of the word types appearing in all corresponding target phrases~\cite{mi:manipulation}. Compared to word alignments, we expect this strategy to fetch more relevant target words, as it can rely on longer source phrases to leverage richer source context. \subsection{Support Vector Machines} Support Vector Machines (SVMs) for vocabulary selection have previously been proposed by \citet{bangalore:svmmt}. The idea is to determine a target vocabulary based on the entire source sentence rather than individual words or phrases. In particular, we train one SVM per target word, taking as input a sparse vector encoding the source sentence as a bag of words. The SVM then predicts whether the considered target word is present or absent from the target sentence.
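To make the classifier-based strategy concrete, the following sketch trains one linear classifier per target word on sparse bag-of-words source features. It uses scikit-learn's LinearSVC purely for illustration; the experiments in this paper use SvmSgd, and the feature extraction and decision threshold shown here are simplifying assumptions.
\begin{verbatim}
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC

def train_selection_svms(bitext, target_vocab):
    # One binary classifier per target word: does this word appear
    # in the reference translation of the given source sentence?
    sources = [" ".join(src) for src, _ in bitext]
    vectorizer = CountVectorizer(binary=True)   # sparse bag of source words
    X = vectorizer.fit_transform(sources)
    classifiers = {}
    for t in target_vocab:
        y = [1 if t in tgt else 0 for _, tgt in bitext]
        if 0 < sum(y) < len(y):                 # skip degenerate labels
            classifiers[t] = LinearSVC().fit(X, y)
    return vectorizer, classifiers

def svm_select(source_sentence, vectorizer, classifiers):
    x = vectorizer.transform([" ".join(source_sentence)])
    return {t for t, clf in classifiers.items()
            if clf.decision_function(x)[0] > 0}
\end{verbatim}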
This classifier-based method has several advantages compared to phrase alignments: the input is not restricted to a few contiguous source words and can leverage all words in the source sentence. The model can express anti-correlation with negative weights, marking that the presence of a source word is a negative indicator for the presence of a target word. A disadvantage of this approach is that we need to feed the source sentence to all SVMs in order to get scores, instead of just reading from a pre-computed table. However, SVMs can be evaluated efficiently since (i) the features are sparse, and (ii) only features corresponding to words from the source sentence are used at each evaluation. Finally, this framework formulates the selection of each target word as an independent binary classification problem, which might not favor competition between target words. \subsection{Common Words} Following \citet{mi:manipulation}, we consider adding the $k$ most frequent target words to the above selection methods. This set includes conjunctions, determiners, prepositions and frequent verbs. Pruning any such word through restrictive vocabulary selection may adversely affect the system output and is addressed by this technique. \section{Related Work} \label{sec:previous} The selection of a limited target vocabulary from the source sentence is a classical topic in machine translation. It is often referred to as \emph{lexical selection}. As mentioned above, word-based and phrase-based systems perform implicit lexical selection by building a word or phrase table from alignments to constrain the possible target words. Other approaches to lexical selection include discriminative models such as SVMs and Maximum Entropy models~\cite{bangalore:svmmt} as well as rule-based systems~\cite{tufis:lexicon,tyers:selection}. In the context of neural machine translation, vocabulary size has always been a concern. Various strategies have been proposed to improve training and decoding efficiency. Approaches inspired by importance sampling reduce the vocabulary for training~\cite{jean:nmt_large_vocab}, byte pair encoding segments words into more frequent sub-units~\cite{sennrich:bpe}, while \citet{luong:char} segment words into characters. Related work in neural language modeling is also relevant~\cite{bengio:lm,mnih:hsm,chen:lm}. One can refer to~\cite{sennrich:tutorial} for further references. Closer to our work, \citet{mi:manipulation} present preliminary results on using lexical selection techniques in an NMT system. Compared to their work, we investigate more selection methods (SVM, PCA, co-occurrences) and analyze the speed/accuracy trade-offs at various operating points. We report efficiency gains and distinguish the impact of selection in training and decoding. \section{Experiments \& Results} \label{sec:exp} This section presents our experimental setup, then discusses the impact of vocabulary selection at decoding time and during training. \subsection{Experimental Setup} \label{sec:expsetup} We use an encoder-decoder style neural machine translation system based on Torch.\footnote{\parbox[t][4em][s]{0.42\textwidth}{We will release the code with the camera ready.}} The encoder is a bidirectional recurrent neural network (Long Short-Term Memory, LSTM) and the decoder is an LSTM with attention over the encoder states; the resulting context vector is fed to the decoder, which generates the output~\cite{bahdanau:iclr15,luong:attention}.
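The following fragment illustrates why selection pays off at decoding time: with a candidate set, only the corresponding rows of the output projection are scored at each step. It is written in PyTorch-style pseudocode for readability and is not the Torch implementation used in our experiments.
\begin{verbatim}
import torch

def output_scores(decoder_state, context, W_out, b_out, candidate_ids=None):
    # decoder_state, context: vectors of size [hidden] for the current step.
    # W_out: [vocab_size, 2*hidden] output projection, b_out: [vocab_size].
    # If candidate_ids is given, only that subset of the vocabulary is scored.
    features = torch.cat([decoder_state, context], dim=-1)
    if candidate_ids is None:
        logits = W_out @ features + b_out            # cost grows with |V|
    else:
        # cost grows with |candidates|, typically a few hundred words
        logits = W_out[candidate_ids] @ features + b_out[candidate_ids]
    return torch.log_softmax(logits, dim=-1)
\end{verbatim}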
We use a single layer setup in both the encoder and the decoder, each with $512$ hidden units. Decoding experiments are run on a CPU since this is the most common type of hardware for inference. For training we use GPUs, which are the most common hardware for neural network training. Specifically, we rely on $2.5$GHz Intel Xeon 5 CPUs and Nvidia Tesla M40 GPUs. Decoding times are based on a single CPU core and training times are based on a single GPU card. Word alignments are computed with FastAlign~\cite{dyer:fastalign} in both language directions and then symmetrized with 'grow-diag-final-and'. Phrase tables are computed with Moses~\cite{koehn:moses} and we train support vector machines with SvmSgd~\cite{bottou:svmsgd}. We also use the Moses preprocessing scripts to tokenize the training data. We experiment on two language pairs. The majority of experiments are on WMT-15 English to German data~\cite{wmt:2015}; we use newstest2013 for validation and newstest2010-2012 as well as newstest2014 and newstest2015 to present final test results. Training is restricted to sentences of no more than $50$ words which results in $3.6$m sentence pairs. We chose the vocabulary sizes following the same methodology. We use the $100$k most frequent words both for the source and target vocabulary. At decoding time we use a standard beam search with a beam width of $5$ in all experiments. Unknown output words are simply replaced with the source word whose attention score is largest~\cite{luong:unk}. We also experiment with WMT-16 English to Romanian data using a similar setting but allowing sentences of up to $125$ words~\cite{wmt:2016}. Since the training set provided by WMT is limited to $600$k sentence pairs, we add the synthetic training data provided by \citet{sennrich:wmt16}. This results in a total of $2.4$m sentence pairs. Our source vocabulary comprises the $200$k most frequent words and the target vocabulary contains $50$k words. \subsection{Selection for Efficient Decoding} Decoding efficiency of neural machine translation is still much lower than for traditional phrase-based translation. For NMT, the running time of beam search on a CPU is dominated by the last linear layer that computes a score for each target word. Vocabulary selection can therefore have a large impact on decoding speed. Figure~\ref{fig:decoding_time_vs_vocab} shows that a reduced vocabulary of $\sim460$ types ($144$ msec) can achieve a $10$X speedup over using the full 100k-vocabulary ($\sim1,600$ msec). Next we investigate the impact of reduced vocabularies on accuracy. Figure~\ref{fig:BLEU_vs_vocab} compares BLEU for the various selection strategies on a wide range of vocabulary sizes. Broadly, there are two groups of techniques: first, co-occurrence counts and bilingual embeddings (PCA) are not able to match the baseline performance (Full $100$k) even with over $5$k candidate words per sentence. Second, even with fewer than $1,000$ candidates per sentence, word alignments, phrase pairs and SVMs nearly match the full vocabulary accuracy. Although co-occurrence counts and PCA have proven useful for measuring semantic relatedness~\cite{brown:clustering,lebret:pca}, it seems that considering the whole source sentence as the explanation of a target word, without latent alignment variables, undermines their selection ability. Overall, word alignments work as well or better than the other techniques relying on a wider input context (phrase pairs and SVMs).
Querying a word table is also more efficient than querying a phrase-table or evaluating SVMs. We therefore use word alignment-based selection for the rest of our analysis. \citet{mi:manipulation} suggest that adding common words to a selection technique could have a positive impact. We therefore consider adding the most frequent $k$ words to our word alignment-based selection. Figure~\ref{fig:BLEU_common_vs_vocab} shows that this actually has little impact on BLEU in our setting. In fact, the overlap of the results for $n=0$ and $50$ indicates that most of the top 50 words are already selected, even with small candidate sets. Next we try to get a better sense of how precise selection is with respect to the words used by a human translator or with respect to the translations generated by the full vocabulary model. We use word alignments for this experiment. Figure~\ref{fig:coverage_vs_vocab} shows coverage with respect to the reference (left) and with respect to the output of the full vocabulary system (right). We do not count unknown words (UNK) in all settings, even if they may later be replaced by a source word (\textsection\ref{sec:expsetup}). Not counting UNKs is the reason why the full vocabulary models do not achieve $100$\% coverage in either setting. The two graphs show different trends: On the left, coverage with respect to the reference for the full vocabulary is $95.1$\%, while selection achieves $87.5$\% with a vocabulary of 614 words (3rd point on graph). However, when coverage is measured with respect to the full vocabulary system output, the coverage of selection is very close to the full vocabulary model with respect to itself, i.e., when unknown words are not counted. In fact, the selection model covers over $99$\% of the non-UNK words in the full vocabulary output. This result shows that selection can recover almost all of the words which are effectively selected by a full vocabulary model while discarding many words which are not chosen by the full model. What is the exact speed and accuracy trade-off when reducing the output vocabulary? Figure~\ref{fig:bleu_vs_decoding_time} plots BLEU against decoding speed. We pick a number of operating points from this graph for our final test set experiments (Table~\ref{tbl:test_bleu}). For our best methods (word alignments, phrase alignments and SVMs) we pick points such that vocabularies are kept small while maintaining good accuracy compared to the full vocabulary setting. For co-occurrence counts and bilingual PCA we choose settings with comparable speed. Our test results (Table~\ref{tbl:test_bleu}) confirm the validation experiments. On English-German translation we achieve more than a $10$-fold speed-up over the full vocabulary setting. Accuracy for the word alignment-based selection matches the full vocabulary setting on most test sets or decreases only slightly. For example with word alignment selection, the largest drop is on newstest2015 which achieves $22.2$ BLEU compared to $22.5$ BLEU for the full setting on English-German; the best single-system neural setup at WMT15 achieved $22.4$ BLEU on this dataset~\cite{jean:wmt15}. On English-Romanian, we achieve a speed-up of over $5$ times with word alignments at $28.1$ BLEU versus $27.9$ BLEU for the full vocabulary baseline. This matches the state-of-the-art on this dataset~\cite{sennrich:wmt16} from WTM16. The smaller speed-up on English-Romanian is due to the smaller vocab of the baseline in this setting which is $50$k compared to $100$k for English-German. 
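A coverage measure of the kind discussed above can be computed along the following lines; the tokenization and UNK handling in this sketch are assumptions, and the exact bookkeeping in our experiments may differ.
\begin{verbatim}
def coverage(candidate_vocab, tokens, unk="<unk>"):
    # Fraction of tokens (UNK excluded) contained in the selected
    # candidate vocabulary. Pass either the reference tokens or the
    # tokens produced by the full-vocabulary system.
    kept = [t for t in tokens if t != unk]
    if not kept:
        return 1.0
    return sum(1 for t in kept if t in candidate_vocab) / len(kept)
\end{verbatim}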
\subsection{Selection for Better Training} So far our evaluation focused on vocabulary selection for decoding, relying on a model trained with the full vocabulary. Next we address the question of whether the efficiency advantages observed for decoding translate to training as well. Selection at training time may impact generalization performance either way: it makes training and testing conditions more similar, which could positively impact accuracy. However, training with a reduced vocabulary could result in worse parameter estimates, especially for rare words, which would receive far fewer updates because they would be selected less often. We run training experiments on WMT English to German with word alignment-based selection. In addition to the selected words, we include the target words of the reference and train a batch with the union of the sentence-specific vocabularies of all samples~\cite{mi:manipulation}. Figure~\ref{fig:bleu_vs_vocab_train} compares validation accuracy of models trained with selection or with the full vocabulary. Selection in both training and decoding gives a small accuracy improvement. However, this improvement disappears for vocabulary sizes of $500$ and larger; we found the same pattern on other test sets. Similar to our decoding experiments, adding common words during training did not improve accuracy. Table~\ref{tbl:train_time} shows the impact of selection on training speed. Our bi-directional LSTM model (BLSTM) can process the same number of samples in 25\% less time on a GPU with a batch size of 32 sentences. We do not observe changes in the number of epochs required to obtain the best validation BLEU. The speed-ups for training are significantly smaller than for decoding (Table~\ref{tbl:test_bleu}). This is because training scores the vocabulary exactly once per target position, while beam search has to score multiple hypotheses at each generation step. We suspect that training is now dominated by the bi-directional LSTM encoder. To confirm this, we replaced the encoder with a simple average pooling model which encodes source words as the mean of word and position embeddings over a local context \cite{ranzato:2015:iclr}. Table~\ref{tbl:train_time} shows that in this setting the efficiency gains of vocabulary selection are more substantial ($40\%$ less time per epoch). This model is not as accurate and achieves only $18.5$ BLEU on newstest2015, compared to $22.5$ for the bi-directional LSTM encoder. However, it shows that improving the efficiency of the encoder is a promising direction for future work. \section{Conclusions} \label{sec:ccl} This paper presents a comprehensive analysis of vocabulary selection techniques for neural machine translation. Vocabulary selection constrains the output words to be scored to a small subset relevant to the current source sentence. The idea is to avoid using the full model to score a large number of unlikely candidates that can be ruled out by simpler means. We extend previous work by considering a wide range of simple and complex selection techniques, including bilingual word co-occurrence counts, bilingual embeddings built with Hellinger PCA, word alignments, phrase pairs, and discriminative SVM classifiers. We explore the trade-off between speed and accuracy for different vocabulary sizes and validate results on two language pairs and several test sets. Our experiments show that decoding time can be reduced by up to $90$\% without compromising accuracy.
Word alignments, bilingual phrases and SVMs can achieve high accuracy, even when considering fewer than $1,000$ word types per sentence. At training time, we achieve a speed-up of up to $1.33$ with a bi-directional LSTM encoder and $1.66$ with a faster alternative. Efficiency increases are less pronounced during training because of two combined factors. First, vocabulary scoring at the final layer of the model is a smaller part of the computation compared to beam search. Second, state-of-the-art bi-directional LSTM encoders~\cite{bahdanau:iclr15} are relatively costly compared to scoring the vocabulary on GPU hardware. Efficiency gains from vocabulary selection highlight the importance of progress towards efficient, accurate encoder and decoder architectures. \bibliographystyle{eacl2017} \end{document}
Learning a Neural Semantic Parser from User Feedback
1704.08760
Table 7: Percentage of examples that required annotation (i.e., where the model initially made an incorrect prediction) on Geo880 vs. batch size.
[ "[BOLD] Batch Size", "150", "100", "50" ]
[ [ "[BOLD] % Wrong", "70.2", "60.4", "54.3" ] ]
As in the live experiment, accuracy improves with successive batches. Data augmentation using templates helps in the initial stages of Geo880, but its advantage is reduced as more labeled data is obtained. Templates did not improve accuracy on ATIS, possibly because most ATIS queries involve two entities, i.e., a source city and a destination city, whereas our templates only generate questions with a single entity type. Nevertheless, templates are important in a live system to motivate users to interact with it in early stages. As observed before, paraphrasing improves performance at all stages. Smaller batch sizes reduce annotation effort, with a batch size of 50 requiring only 54.3% of the examples to be annotated. This result demonstrates that more frequent deployments of improved models leads to fewer mistakes.
In this work, we demonstrate how a semantic parser can be learned from scratch for a new domain of academic papers to map utterances directly to SQL queries. We release all data gathered from this experiment. Additionally, we compare t he performance of our model with two standard semantic parsing benchmarks i.e. Geo880 and ATIS \subsection{Benchmark Datasets} \documentclass[11pt,a4paper]{article} \pdfoutput=1 \usepackage[hyperref]{acl2017} \usepackage[skip=5pt]{caption} \usepackage[linesnumbered]{algorithm2e} \setlength{\belowcaptionskip}{-1pt} \newcommand\luke[1]{[\textcolor{magenta}{LZ: {#1}}]} \newcommand\alvin[1]{[\textcolor{blue}{AC: {#1}}]} \newcommand\srini[1]{[\textcolor{red}{SI: {#1}}]} \newcommand\jayant[1]{[\textcolor{orange}{JK: {#1}}]} \DeclareMathOperator*{\argminB}{argmin} % Jan Hlavacek \DeclareMathOperator*{\argmaxB}{argmax} % Jan Hlavacek \newcommand{\cs}[1]{\texttt{\symbol{`\\}#1}} \newcommand\Tstrut{\rule{0pt}{2.6ex}} % "top" strut \newcommand\Bstrut{\rule[-0.9ex]{0pt}{0pt}} % "bottom" strut \newcommand{\TBstrut}{\Tstrut\Bstrut} % top&bottom struts \definecolor{mygreen}{rgb}{0,0.6,0} \definecolor{mygray}{rgb}{0.5,0.5,0.5} \definecolor{mymauve}{rgb}{0.58,0,0.82} \lstset{ % escapeinside={\%*}{*)}, % if you want to add LaTeX within keywordstyle=\color{blue}, % keyword style language=Sql, % the language of the code basicstyle=\ttfamily\scriptsize, frame=single, showstringspaces=false } \definecolor{javaGreen}{RGB}{63,127,95} \lstdefinestyle{mysql}{ language=Sql, stringstyle=\color{javaGreen}, deletekeywords={year} } \makeatletter \def\blfootnote{\xdef\@thefnmark{}\@footnotetext} \makeatother \aclfinalcopy % Uncomment this line for the final submission \def\aclpaperid{726} % Enter the acl Paper ID here \newcommand\BibTeX{B{\sc ib}\TeX} \title{Learning a Neural Semantic Parser from User Feedback} \author{Srinivasan Iyer$^{\dagger\diamond}$, Ioannis Konstas$^{\dagger}$, Alvin Cheung$^{\dagger}$\\ \textbf{Jayant Krishnamurthy$^{\ddagger}$ \and Luke Zettlemoyer$^{\dagger}$$^{\ddagger}$}\\ \\ $^{\dagger}$Paul G. Allen School of Computer Science \& Engineering, Univ. of Washington, Seattle, WA \\ \texttt{\{sviyer,ikonstas,akcheung,lsz\}@cs.washington.edu} \\\\ $^{\ddagger}$Allen Institute for Artificial Intelligence, Seattle, WA\\ \texttt{\{jayantk,lukez\}@allenai.org} \\ } \date{} \begin{document} \setlength{\abovedisplayskip}{4pt} \setlength{\belowdisplayskip}{4pt} \maketitle \begin{abstract} \blfootnote{\hspace{-4pt}$^{\diamond}$Work done partly during an internship at the Allen Institute for Artificial Intelligence.} \subfile{abstract.tex} \end{abstract} \section{Introduction} \subfile{intro.tex} \section{Related Work} \subfile{related.tex} \subfile{interactive_learning.tex} \section{Semantic Parsing to SQL} \label{sec:model} \subfile{model.tex} \section{Benchmark Experiments} \subfile{benchmark_experiments.tex} \section{Interactive Learning Experiments} \label{sec:interactive_learning_experiments} \subfile{results.tex} \section{Conclusion} \subfile{conclusion.tex} \section*{Acknowledgments} The research was supported in part by DARPA, under the DEFT program through the AFRL (FA8750-13-2-0019), the ARO (W911NF-16-1-0121), the NSF (IIS-1252835, IIS-1562364, IIS-1546083, IIS-1651489, CNS-1563788), the DOE (DE-SC0016260), an Allen Distinguished Investigator Award, and gifts from NVIDIA, Adobe, and Google. The authors thank Rik Koncel-Kedziorski and the anonymous reviewers for their helpful comments. 
\bibliographystyle{acl_natbib} \end{document} Existing semantic parsing approaches for building natural language interfaces to databases (NLIDBs) either use special-purpose intermediate meaning representations that lack the full expressivity of database query languages or require extensive feature engineering, making it difficult to deploy them in new domains. We present a robust approach to quickly and easily learn and deploy semantic parsers from scratch, whose performance improves over time based on user feedback, and requires very little expert intervention. To learn these semantic parsers, we (1) adapt neural sequence models to map utterances directly to SQL thereby bypassing intermediate representations and taking full advantage of SQL's querying capabilities, (2) immediately deploy the model online to solicit questions and user feedback on results to reduce SQL annotation efforts, and (3) use crowd workers from skilled markets to provide SQL annotations that can directly be used for model improvement, in addition to being easier and cheaper to obtain than logical meaning representations. We demonstrate the effectiveness of the complete approach by successfully learning a semantic parser for an academic domain by simply deploying it online for three days. \subfile{example} This type of interactive learning is related to a number of recent ideas in semantic parsing, including batch learning of models that directly produce programs (e.g., regular expressions~\cite{locascio-EtAl:2016:EMNLP2016}), learning from paraphrases (often gathered through crowdsourcing~\cite{wang2015}), data augmentation (e.g. based on manually engineered semantic grammars~\cite{jia2016}) and learning through direct interaction with users (e.g., where a single user teaches the model new concepts~\cite{wang2016games}). However, there are unique advantages to our approach, including showing (1) that non-linguists can write SQL to encode complex, compositional computations (see Fig~\ref{fig:example} for an example), (2) that external paraphrase resources and the structure of facts from the target database itself can be used for effective data augmentation, and (3) that actual database users can effectively drive the overall learning by simply providing feedback about what the model is currently getting correct. Our experiments measure the performance of these learning advances, both in batch on existing datasets and through a simple online experiment for the full interactive setting. For the batch evaluation, we use sentences from the benchmark GeoQuery and ATIS domains, converted to contain SQL meaning representations. Our neural learning with data augmentation achieves reasonably high accuracies, despite the extra complexities of mapping directly to SQL. We also perform simulated interactive learning on this data, showing that with perfect user feedback our full approach could learn high quality parsers with only 55\% of the data. Finally, we do a small scale online experiment for a new domain, academic paper metadata search, demonstrating that actual users can provide useful feedback and our full approach is an effective method for learning a high quality parser that continues to improve over time as it is used. We describe an approach to rapidly train a semantic parser as a NLIDB that iteratively improves parser accuracy over time while requiring minimal intervention. 
Our approach uses an attention-based neural sequence-to-sequence model, with data augmentation from the target database and paraphrasing, to parse utterances to SQL. This model is deployed in an online system, where user feedback on its predictions is used to select utterances to send for crowd worker annotation. We find that the semantic parsing model is comparable in performance to previous systems that either map from utterances to logical forms, or generate SQL, on two benchmark datasets, \textsc{Geo880} and \textsc{ATIS}. We further demonstrate the effectiveness of our online system by learning a semantic parser from scratch for an academic domain. A key advantage of our approach is that it is not language-specific, and can easily be ported to other commonly used query languages, such as SPARQL or ElasticSearch. Finally, we also release a new dataset of utterances and SQL queries for an academic domain. We present an approach to rapidly and easily build natural language interfaces to databases for new domains, whose performance improves over time based on user feedback, and requires minimal intervention. To achieve this, we adapt neural sequence models to map utterances directly to SQL with its full expressivity, bypassing any intermediate meaning representations. These models are immediately deployed online to solicit feedback from real users to flag incorrect queries. Finally, the popularity of SQL facilitates gathering annotations for incorrect predictions using the crowd, which is directly used to improve our models. This complete feedback loop, without intermediate representations or database specific engineering, opens up new ways of building high quality semantic parsers. Experiments suggest that this approach can be deployed quickly for any new target domain, as we show by learning a semantic parser for an online academic database from scratch. Although diverse meaning representation languages have been used with semantic parsers -- such as regular expressions \cite{kushman2013,locascio-EtAl:2016:EMNLP2016}, Abstract Meaning Representations (AMR) \cite{artzi2015,D16-1183}, and systems of equations \cite{kushman2014,roy2016} -- parsers for querying databases have typically used either logic programs \cite{zelle1996}, lambda calculus \cite{zettlemoyer05}, or $\lambda$-DCS \cite{liang2011learning} as the meaning representation language. All three of these languages are modeled after natural language to simplify parsing. However, none of them is used to query databases outside of the semantic parsing literature; therefore, they are understood by few people and not supported by standard database implementations. In contrast, we parse directly to SQL, which is a popular database query language with wide usage and support. Learning parsers directly from SQL queries has the added benefit that we can potentially hire programmers on skilled-labor crowd markets to provide labeled examples, such as UpWork\footnote{\url{http://www.upwork.com}}, which we demonstrate in this work. A few systems have been developed to directly generate SQL queries from natural language \cite{popescu2003towards,giordani-moschitti:2012:POSTERS,poon2013}. However, all of these systems make strong assumptions on the structure of queries: they use manually engineered rules that can only generate a subset of SQL, require lexical matches between question tokens and \text{table/column} names, or require questions to have a certain syntactic structure. 
In contrast, our approach can generate arbitrary SQL queries, only uses lexical matching for entity names, and does not depend on syntactic parsing. We use a neural sequence-to-sequence model to directly generate SQL queries from natural language questions. This approach builds on recent work demonstrating that such models are effective for tasks such as machine translation \cite{bahdanau2014neural} and natural language generation \cite{kiddon-zettlemoyer-choi:2016:EMNLP2016}. Recently, neural models have been successfully applied to semantic parsing with simpler meaning representation languages \cite{dong2016,jia2016} and short regular expressions \cite{locascio-EtAl:2016:EMNLP2016}. Our work extends these results to the task of SQL generation. Finally, \newcite{ling2016} generate Java/Python code for trading cards given a natural language description; however, this system suffers from low overall accuracy. A final direction of related work studies methods for reducing the annotation effort required to train a semantic parser. Semantic parsers have been trained from various kinds of annotations, including labeled queries \cite{zelle1996,wong2007generation,zettlemoyer05}, question/answer pairs \cite{liang2011learning,kwiatkowski-EtAl:2013:EMNLP,berant2013semantic}, distant supervision \cite{krishnamurthy2012weakly,choi2015}, and binary correct/incorrect feedback signals \cite{clarke2010driving,artzi-zettlemoyer:2013:TACL}. Each of these schemes presents a particular trade-off between annotation effort and parser accuracy; however, recent work has suggested that labeled queries are the most effective \cite{yih2016}. Our approach trains on fully labeled SQL queries to maximize accuracy, but uses binary feedback from users to reduce the number of queries that need to be labeled. Annotation effort can also be reduced by using crowd workers to paraphrase automatically generated questions \cite{wang2015}; however, this approach may not generate the questions that users actually want to ask the database -- an experiment in this paper demonstrated that 48\% of users' questions in a calendar domain could not be generated. In this section, we learn a semantic parser for an academic domain from scratch by deploying an online system using our interactive learning algorithm (Section \ref{sec:interactive_learning}). After three train-deploy cycles, the system correctly answered 63.51\% of user's questions. To our knowledge, this is the first effort to learn a semantic parser using a live system, and is enabled by our models that can directly parse language to SQL without manual intervention. \subsection{User Interface} We developed a web interface for accepting natural language questions to an academic database from users, using our model to generate a SQL query, and displaying the results after execution. Several example utterances are also displayed to help users understand the domain. Together with the results of the generated SQL query, users are prompted to provide feedback which is used for interactive learning. Screenshots of our interface are included in our Supplementary Materials. Collecting accurate user feedback on predicted queries is a key challenge in the interactive learning setting for two reasons. First, the system's results can be incorrect due to poor entity identification or incompleteness in the database, neither of which are under the semantic parser's control. Second, it can be difficult for users to determine if the presented results are in fact correct. 
This determination is especially challenging if the system responds with the correct type of result, for example, if the user requests ``papers at ACL 2016'' and the system responds with all ACL papers. We address this challenge by providing users with two assists for understanding the system's behavior, and allowing users to provide more granular feedback than simply correct/incorrect. The first assist is \textbf{type highlighting}, which highlights entities identified in the utterance, for example, ``paper by \textit{Michael I. Jordan (AUTHOR)} in \textit{ICRA (VENUE)} in \textit{2016 (YEAR)}.'' This assist is especially helpful because the academic database contains noisy keyword and dataset tables that were automatically extracted from the papers. The second assist is \textbf{utterance paraphrasing}, which shows the user another utterance that maps to the same SQL query. For example, for the above query, the system may show ``what papers does \textit{Michael I. Jordan (AUTHOR)} have in \textit{ICRA (VENUE)} in \textit{2016 (YEAR)}.'' This assist only appears if a matching query (after entity anonymization) exists in the model's training set. Using these assists and the predicted results, users are asked to select from five feedback options: \textit{Correct}, \textit{Wrong Types}, \textit{Incomplete Result}, \textit{Wrong Result} and \textit{Can't Tell}. The \textit{Correct} and \textit{Wrong Result} options represent scenarios when the user is satisfied with the result, or the result is identifiably wrong, respectively. \textit{Wrong Types} indicates incorrect entity identification, which can be determined from type highlighting. \textit{Incomplete Result} indicates that the query is correct but the result is not; this outcome can occur because the database is incomplete. \textit{Can't Tell} indicates that the user is unsure about the feedback to provide. \subsection{Three-Stage Online Experiment} In this experiment, using our developed user interface, we use Algorithm \ref{alg:interactive} to learn a semantic parser from scratch. The experiment had three stages; in each stage, we recruited 10 new users (computer science graduate students) and asked them to issue at least 10 utterances each to the system and to provide feedback on the results. We considered results marked as either \textit{Correct} or \textit{Incomplete Result} as correct queries for learning. The remaining incorrect utterances were sent to a crowd worker for annotation and were used to retrain the system for the next stage. The crowd worker had prior experience in writing SQL queries and was hired from Upwork after completing a short SQL test. The worker was also given access to the database to be able to execute the queries and ensure that they are correct. For the first stage, the system was trained using 640 examples generated using templates, that were augmented to 1746 examples using paraphrasing (see Section \ref{sec:data_augmentation}). The complexity of the utterances issued in each of the three phases were comparable, in that, the average length of the correct SQL query for the utterances, and the number of tables required to be queried, were similar. Table \ref{tab:live} shows the percent of utterances judged by users as either \textit{Correct} or \textit{Incomplete Result} in each stage. In the first stage, we do not have any labeled examples, and the model is trained using only synthetically generated data from schema templates and paraphrases (see Section \ref{sec:data_augmentation}). 
Despite the lack of real examples, the system correctly answers 25\% of questions. The system's accuracy increases and annotation effort decreases in each successive stage as additional utterances are contributed and incorrect utterances are labeled. This result demonstrates that we can successfully build semantic parsers for new domains by using neural models to generate SQL with crowd-sourced annotations driven by user feedback. We analyzed the feedback signals provided by the users in the final stage of the experiment to measure the quality of feedback. We found that 22.3\% of the generated queries did not execute (and hence were incorrect). 6.1\% of correctly generated queries were marked wrong by users (see Table \ref{tab:liveablation}). This erroneous feedback results in redundant annotation of already correct examples. The main cause of this erroneous feedback was incomplete data for aggregation queries, where users chose \textit{Wrong} instead of \textit{Incomplete}. 6.3\% of incorrect queries were erroneously deemed correct by users. It is important that this fraction be low, as these queries become incorrectly-labeled examples in the training set that may contribute to the deterioration of model accuracy over time. This quality of feedback is already sufficient for our neural models to improve with usage, and creating better interfaces to make feedback more accurate is an important task for future work. \subsection{SCHOLAR dataset} \label{sec:scholar} We release a new semantic parsing dataset for academic database search using the utterances gathered in the user study. We augment these labeled utterances with additional utterances labeled by crowd workers. (Note that these additional utterances were not used in the online experiment). The final dataset comprises 816 natural language utterances labeled with SQL, divided into a 600/216 train/test split. We also provide a database on which to execute these queries containing academic papers with their authors, citations, journals, keywords and datasets used. Table \ref{tab:stats} shows statistics of this dataset. Our parser achieves an accuracy of 67\% on this train/test split in the fully supervised setting. In comparison, a nearest neighbor strategy that uses the cosine similarity metric using a TF-IDF representation for the utterances yields an accuracy of 52.75\%. We found that 15\% of the predicted queries did not execute, predominantly owing to (1) accessing table columns without joining with those tables, and (2) generating incorrect types that could not be deanonymized using the utterance. The main types of errors in the remaining well-formed queries that produced incorrect results were (1) portions of the utterance (such as `top' and `cited by both') were ignored, and (2) some types from the utterance were not transferred to the SQL query. \subsection{Simulated Interactive Experiments} We conducted additional simulated interactive learning experiments using \textsc{Geo880} and \textsc{ATIS} to better understand the behavior of our train-deploy feedback loop, the effects of our data augmentation approaches, and the annotation effort required. We randomly divide each training set into $K$ batches and present these batches sequentially to our interactive learning algorithm. Correctness feedback is provided by comparing the result of the predicted query to the gold query, i.e., we assume that users are able to perfectly distinguish correct results from incorrect ones. 
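Concretely, the simulation can be sketched as follows; train, predict and execute stand in for the corresponding components of our system, and comparing execution results of the predicted and gold queries encodes the perfect-feedback assumption. The function names and data layout are illustrative only.
\begin{verbatim}
def simulate_interactive_learning(batches, initial_data,
                                  train, predict, execute):
    # batches: list of lists of (utterance, gold_sql) pairs.
    labeled = list(initial_data)      # e.g. template-generated examples
    accuracies = []
    for batch in batches:
        model = train(labeled)
        correct = 0
        for utterance, gold_sql in batch:
            predicted_sql = predict(model, utterance)
            if execute(predicted_sql) == execute(gold_sql):
                correct += 1
                labeled.append((utterance, predicted_sql))  # "correct" feedback
            else:
                labeled.append((utterance, gold_sql))       # sent for annotation
        accuracies.append(correct / len(batch))
    return accuracies
\end{verbatim}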
Figure \ref{fig:geoatisint} shows accuracies on \textsc{Geo880} and \textsc{ATIS} respectively of each batch when the model is trained on all previous batches. As in the live experiment, accuracy improves with successive batches. Data augmentation using templates helps in the initial stages of \textsc{Geo880}, but its advantage is reduced as more labeled data is obtained. Templates did not improve accuracy on \textsc{ATIS}, possibly because most \textsc{ATIS} queries involve two entities, i.e., a source city and a destination city, whereas our templates only generate questions with a single entity type. Nevertheless, templates are important in a live system to motivate users to interact with it in early stages. As observed before, paraphrasing improves performance at all stages. Table \ref{savings} shows the percent of examples that require annotation using various batch sizes for \textsc{Geo880}. Smaller batch sizes reduce annotation effort, with a batch size of $50$ requiring only $54.3\%$ of the examples to be annotated. This result demonstrates that more frequent deployments of improved models leads to fewer mistakes. Our first set of experiments demonstrates that our semantic parsing model has comparable accuracy to previous work, despite the increased difficulty of directly producing SQL. We demonstrate this result by running our model on two benchmark datasets for semantic parsing, \textsc{Geo880} and \textsc{ATIS}. \subsection{Data sets} \textsc{Geo880} is a collection of 880 utterances issued to a database of US geographical facts (Geobase), originally in Prolog format. \newcite{popescu2003towards} created a relational database schema for Geobase together with SQL queries for a subset of 700 utterances. To compare against prior work on the full corpus, we annotated the remaining utterances and used the standard 600/280 training/test split \cite{zettlemoyer05}. \textsc{ATIS} is a collection of 5,418 utterances to a flight booking system, accompanied by a relational database and SQL queries to answer the questions. We use 4,473 utterances for training, 497 for development and 448 for test, following \newcite{kwiatkowski2011lexical}. The original SQL queries were very inefficient to execute due to the use of \texttt{IN} clauses, so we converted them to joins \cite{Ramakrishnan:2002:DMS:560733} while verifying that the output of the queries was unchanged. Table \ref{tab:stats} shows characteristics of both data sets. \textsc{Geo880} has shorter queries but is more compositional: almost 40\% of the SQL queries have at least one nested subquery. \textsc{ATIS} has the longest utterances and queries, with an average utterance length of 11 words and an average SQL query length of 67 tokens. They also operate on approximately 6 tables per query on average. We will release our processed versions of both datasets. \subfile{datastats} \subsection{Experimental Methodology} We follow a standard train/dev/test methodology for our experiments. The training set is augmented using schema templates and 3 paraphrases per training example, as described in Section \ref{sec:model}. Utterances were anonymized by replacing them with their corresponding types and all words that occur only once were replaced by UNK symbols. The development set is used for hyperparameter tuning and early stopping. For \textsc{Geo880}, we use cross validation on the training set to tune hyperparameters. 
We used a minibatch size of 100 and used Adam \cite{kingma2014adam} with a learning rate of 0.001 for 70 epochs for all our experiments. We used a beam size of 5 for decoding. We report test set accuracy of our SQL query predictions by executing them on the target database and comparing the result with the true result. \subsection{Results} Tables \ref{geofull} and \ref{atisfull} show test accuracies based on denotations for our model on \textsc{Geo880} and \textsc{ATIS} respectively, compared with previous work.\footnote{Note that 2.8\% of \textsc{Geo880} and 5\% \textsc{ATIS} gold test set SQL queries (before any processing) produced empty results.} To our knowledge, this is the first result on directly parsing to SQL to achieve comparable performance to prior work without using any database-specific feature engineering. \newcite{popescu2003towards} and \newcite{giordani-moschitti:2012:POSTERS} also directly produce SQL queries but on a subset of 700 examples from \textsc{Geo880}. The former only works on semantically tractable utterances where words can be unambiguously mapped to schema elements, while the latter uses a reranking approach that also limits the complexity of SQL queries that can be handled. GUSP \cite{poon2013} creates an intermediate representation that is then deterministically converted to SQL to obtain an accuracy of $74.8\%$ on \textsc{ATIS}, which is boosted to 83.5\% using manually introduced disambiguation rules. However, it requires a lot of SQL specific engineering (for example, special nodes for argmax) and is hard to extend to more complex SQL queries. On both datasets, our SQL model achieves reasonably high accuracies approaching that of the best non-SQL results. Most relevant to this work are the neural sequence based approaches of \newcite{dong2016} and \newcite{jia2016}. We note that \newcite{jia2016} use a data recombination technique that boosts accuracy from 85.0 on \textsc{geo880} and 76.3 on \textsc{ATIS}; this technique is also compatible with our model and we hope to experiment with this in future work. Our results demonstrate that these models are powerful enough to directly produce SQL queries. Thus, our methods enable us to utilize the full expressivity of the SQL language without any extensions that certain logical representations require to answer more complex queries. More importantly, it can be immediately deployed for users in new domains, with a large programming community available for annotation, and thus, fits effectively into a framework for interactive learning. We perform ablation studies on the development sets (see Table \ref{ablation}) and find that paraphrasing using PPDB consistently helps boost performance. However, unlike in the interactive experiments (Section \ref{sec:interactive_learning_experiments}), data augmentation using schema templates does not improve performance in the fully supervised setting. \section{Feedback-based Learning} \label{sec:interactive_learning} Our feedback-based learning approach can be used to quickly deploy semantic parsers to create NLIDBs for any new domain. It is a simple interactive learning algorithm that deploys a preliminary semantic parser, then iteratively improves this parser using user feedback and selective query annotation. A key requirement of this algorithm is the ability to cheaply and efficiently annotate queries for chosen user utterances. 
We address this requirement by developing a model that directly outputs SQL queries (Section \ref{sec:model}), which can also be produced by crowd workers. Our algorithm alternates between stages of training the model and making predictions to gather user feedback, with the goal of improving performance in each successive stage. The procedure is described in Algorithm \ref{alg:interactive}. Our neural model $\mathscr{N}$ is initially trained on synthetic data $T$ generated by domain-independent schema templates (see Section \ref{sec:model}), and is then ready to answer new user questions, $n$. The results $\mathscr{R}$ of executing the predicted SQL query $q$ are presented to the user who provides a binary correct/incorrect feedback signal. If the user marks the result correct, the pair $(n, q)$ is added to the training set. If the user marks the result incorrect, the algorithm asks a crowd worker to annotate the utterance with the correct query, $\hat{q}$, and adds $(n, \hat{q})$ to the training set. This procedure can be repeated indefinitely, ideally increasing parser accuracy and requesting fewer annotations in each successive stage. \begin{algorithm}[h] \SetKwProg{myproc}{Procedure}{}{end} \myproc{LEARN(schema)}{ $ T \gets$ initial\_data(\textit{schema}) \While {true}{ $\mathscr{T} \gets T\ \cup$ paraphrase($T$)\\ $\mathscr{N} \gets $ train\_model($\mathscr{T}$)\\ \For {$n \in $ new utterances}{ $q \gets $ predict($\mathscr{N}, n$)\\ $\mathscr{R} \gets $ execute($q$)\\ $f \gets $ feedback($\mathscr{R}$)\\ \uIf {$f = $ correct}{ $T \gets T \cup (n, q)$ } \ElseIf {$f = $ wrong}{ $\hat{q} \gets $ annotate($n$)\\ $T \gets T \cup (n, \hat{q})$ } } } } \caption{Feedback-based learning.}% - Learning is divided into several stages. Our models are initially trained using synthetically generated data. When users issue new utterances, the SQL query predicted by our model is executed, and its results are presented to the user. Based on feedback from the user, either the generated utterance-query pair is used for subsequent training, or, the utterance is sent to a crowd worker for annotation. The model is retrained at the end of the phase.} \label{alg:interactive} \end{algorithm} We use a neural sequence-to-sequence model for mapping natural language questions directly to SQL queries and this allows us to scale our feedback-based learning approach, by easily crowdsourcing labels when necessary. We further present two data augmentation techniques which use content from the database schema and external paraphrase resources. \subsection{Model} We use an encoder-decoder model with global attention, similar to \newcite{luong-pham-manning:2015:EMNLP}, where the anonymized utterance (see Section \ref{sec:anonymization}) is encoded using a bidirectional LSTM network, then decoded to directly predict SQL query tokens. Fixed pre-trained word embeddings from word2vec \cite{mikolov2013distributed} are concatenated to the embeddings that are learned for source tokens from the training data. The decoder predicts a conditional probability distribution over possible values for the next SQL token given the previous tokens using a combination of the previous SQL token embedding, attention over the hidden states of the encoder network, and an attention signal from the previous time step. 
Formally, if $\mathbf{q_i}$ represents an embedding for the i$^{th}$ SQL token $q_i$, the decoder distribution is \begin{align*} p(q_i | q_{1},\dots, q_{i - 1}) &\propto \exp{(\mathbf{W} ~ \text{tanh} (\mathbf{\hat{W}} [\mathbf{h_i} : \mathbf{c_i}] ))} \end{align*} where $\mathbf{h_i}$ represents the hidden state output of the decoder LSTM at the $i^{th}$ timestep, $\mathbf{c_i}$ represents the context vector generated using an attention weighted sum of encoder hidden states based on $\mathbf{h_i}$, and, $\mathbf{W}$ and $\mathbf{\hat{W}}$ are linear transformations. If $\mathbf{s_j}$ is the hidden representation generated by the encoder for the j$^{th}$ word in the utterance ($k$ words long), then the context vectors are defined to be: \begin{align*} \mathbf{c_i} = \sum_{j=1}^k \alpha_{i, j} \cdot \mathbf{s_j} \end{align*} The attention weights $\alpha_{i, j}$ are computed using an inner product between the decoder hidden state for the current timestep $\mathbf{h_i}$, and the hidden representation of the j$^{th}$ source token $\mathbf{s_j}$: \begin{align*} \alpha_{i, j} &= \frac{\text{exp}(\mathbf{h_i}^\text{T} \mathbf{F} \mathbf{s_j})}{\sum_{j=1}^k \text{exp}(\mathbf{h_i}^\text{T} \mathbf{F} \mathbf{s_j})} \end{align*} where $\mathbf{F}$ is a linear transformation. The decoder LSTM cell $f$ computes the next hidden state $\mathbf{h_i}$, and cell state, $\mathbf{m_i}$, based on the previous hidden and cell states, $\mathbf{h_{i-1}}, \mathbf{m_{i-1}}$, the embeddings of the previous SQL token $\mathbf{q_{i-1}}$ and the context vector of the previous timestep, $\mathbf{c_{i-1}}$ \begin{align*} \mathbf{h_i}, \mathbf{m_i} &= f(\mathbf{h_{i-1}}, \mathbf{m_{i-1}}, \mathbf{q_{i-1}}, \mathbf{c_{i-1}}) \end{align*} We apply dropout on non-recurrent connections for regularization, as suggested by \newcite{pham2014}. Beam search is used for decoding the SQL queries after learning. \subsection{Entity Anonymization} \label{sec:anonymization} We handle entities in the utterances and SQL by replacing them with their types, using incremental numbering to model multiple entities of the same type (e.g., \texttt{CITY\_NAME\_1}). During training, when the SQL is available, we infer the type from the associated column name; for example, Boston is a city in \texttt{city.city\_name} $=$ \texttt{'Boston'}. To recognize entities in the utterances at test time, we build a search engine on all entities from the target database. For every span of words (starting with a high span size and progressively reducing it), we query the search engine using a TF-IDF scheme to retrieve the entity that most closely matches the span, then replace the span with the entity's type. We store these mappings and apply them to the generated SQL to fill in the entity names. TF-IDF matching allows some flexibility in matching entity names in utterances, for example, a user could say \textit{Donald Knuth} instead of \textit{Donald E. Knuth}. \subsection{Data Augmentation} \label{sec:data_augmentation} We present two data augmentation strategies that either (1) provide the initial training data to start the interactive learning, before more labeled examples become available, or (2) use external paraphrase resources to improve generalization. \paragraph{Schema Templates} To bootstrap the model to answer simple questions initially, we defined 22 language/SQL templates that are schema-agnostic, so they can be applied to any database. These templates contain slots whose values are populated given a database schema. 
An example template is shown in Figure \ref{fig:aug1}. The \texttt{<ENT>} types represent tables in the database schema, \texttt{<ENT>.<COL>} represents a column in the particular table and \texttt{<ENT>.<COL>.<TYPE>} represents the type associated with the particular column. A template is instantiated by first choosing the entities and attributes. Next, join conditions, i.e., \texttt{JOIN\_FROM} and \texttt{JOIN\_WHERE} clauses, are generated from the tables on the shortest path between the chosen tables in the database schema graph, which connects tables (graph nodes) using foreign key constraints. Figure \ref{fig:aug3} shows an instantiation of a template using the path author - writes - paper - paperdataset - dataset. SQL queries generated in this manner are guaranteed to be executable on the target database. On the language side, an English name of each entity is plugged into the template to generate an utterance for the query. \paragraph{Paraphrasing} The second data augmentation strategy uses the Paraphrase Database (PPDB) \cite{ganitkevitch2013ppdb} to automatically generate paraphrases of training utterances. Such methods have been recently used to improve performance for parsing to logical forms \cite{chen-EtAl:2016:P16-12}. PPDB contains over 220 million paraphrase pairs divided into 6 sets (small to XXXL) based on precision of the paraphrases. We use the one-one and one-many paraphrases from the large version of PPDB. To paraphrase a training utterance, we pick a random word in the utterance that is not a stop word or entity and replace it with a random paraphrase. We perform paraphrase expansion on all examples labeled during learning, as well as the initial seed examples from schema templates. %expansion in the last paragraph.
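As an editorial illustration of the two augmentation strategies above, the following Python sketch instantiates one hypothetical schema-agnostic template by joining tables along the shortest foreign-key path in a toy schema graph (mirroring the author--writes--paper--paperdataset--dataset example), and then paraphrases the resulting utterance by replacing a random non-stopword, non-entity token with a PPDB-style substitute. The template string, schema graph, and paraphrase table are made-up placeholders rather than the actual 22 templates or the large PPDB collection used in the paper, and the generated SQL omits the foreign-key join conditions for brevity.
\begin{verbatim}
import random
from collections import deque

# Toy schema graph: tables are nodes, foreign-key links are edges (hypothetical).
SCHEMA_EDGES = {
    "author": ["writes"],
    "writes": ["author", "paper"],
    "paper": ["writes", "paperdataset"],
    "paperdataset": ["paper", "dataset"],
    "dataset": ["paperdataset"],
}

def shortest_join_path(src, dst):
    """BFS over the schema graph; returns the list of tables to join."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in SCHEMA_EDGES[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

def instantiate_template(ent1, ent2, col):
    """Fill a single made-up template of the form
    'which <ENT1> has a <ENT2> with <COL> <VALUE> ?'."""
    tables = shortest_join_path(ent1, ent2)
    join_from = ", ".join(tables)  # ON conditions from foreign keys omitted
    sql = (f"SELECT DISTINCT {ent1}.name FROM {join_from} "
           f"WHERE {ent2}.{col} = '<VALUE>'")
    utterance = f"which {ent1} has a {ent2} with {col} <VALUE> ?"
    return utterance, sql

# Toy PPDB-style one-to-one paraphrase table (placeholder, not the real PPDB).
PARAPHRASES = {"which": ["what"], "has": ["holds", "contains"]}
STOP_WORDS = {"a", "the", "with", "?"}

def paraphrase(utterance):
    """Replace one random non-stopword, non-entity token with a paraphrase."""
    tokens = utterance.split()
    candidates = [i for i, t in enumerate(tokens)
                  if t in PARAPHRASES and t not in STOP_WORDS and t != "<VALUE>"]
    if not candidates:
        return utterance
    i = random.choice(candidates)
    tokens[i] = random.choice(PARAPHRASES[tokens[i]])
    return " ".join(tokens)

utt, sql = instantiate_template("author", "dataset", "name")
print(utt)
print(sql)
print(paraphrase(utt))
\end{verbatim}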
Quasi-Recurrent Neural Networks
1611.01576
Table 2: Single model perplexity on validation and test sets for the Penn Treebank language modeling task. Lower is better. “Medium” refers to a two-layer network with 640 or 650 hidden units per layer. All QRNN models include dropout of 0.5 on embeddings and between layers. MC refers to Monte Carlo dropout averaging at test time.
[ "[BOLD] Model", "[BOLD] Parameters", "[BOLD] Validation", "[BOLD] Test" ]
[ [ "LSTM (medium) (Zaremba et al., 2014 )", "20M", "86.2", "82.7" ], [ "Variational LSTM (medium, MC) (Gal & Ghahramani, 2016 )", "20M", "81.9", "79.7" ], [ "LSTM with CharCNN embeddings (Kim et al., 2016 )", "19M", "−", "78.9" ], [ "Zoneout + Variational LSTM (medium) (Merity et al., 2016 )", "20M", "84.4", "80.6" ], [ "[ITALIC] Our models", "[EMPTY]", "[EMPTY]", "[EMPTY]" ], [ "LSTM (medium)", "20M", "85.7", "82.0" ], [ "QRNN (medium)", "18M", "82.9", "79.9" ], [ "QRNN + zoneout ( [ITALIC] p=0.1) (medium)", "18M", "82.1", "78.3" ] ]
The QRNN without zoneout strongly outperforms both our medium LSTM and the medium LSTM of Zaremba et al. (2014), which do not use recurrent dropout, and is even competitive with variational LSTMs. This may be due to the limited computational capacity that the QRNN’s pooling layer has relative to the LSTM’s recurrent weights, providing structural regularization over the recurrence.
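A note on the "MC" entry above: Monte Carlo dropout averaging keeps dropout active at test time and averages the predictive distribution over many sampled masks (the paper body mentions 1000 masks for the variational LSTM). The following NumPy sketch is an editorial illustration of that general technique only; predict_with_dropout is a hypothetical stand-in for a stochastic forward pass, and this is not the evaluation code behind Table 2.
\begin{verbatim}
import numpy as np

def mc_dropout_average(predict_with_dropout, x, num_samples=1000):
    """Average per-token probability distributions over `num_samples`
    stochastic forward passes, each with a freshly sampled dropout mask.
    `predict_with_dropout(x)` is assumed to return an array of shape
    (sequence_length, vocab_size)."""
    return np.mean([predict_with_dropout(x) for _ in range(num_samples)],
                   axis=0)

def perplexity(probs, targets):
    """Perplexity of the averaged distribution on the target token ids."""
    token_log_probs = np.log(probs[np.arange(len(targets)), targets])
    return float(np.exp(-token_log_probs.mean()))
\end{verbatim}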
\documentclass{article} % For LaTeX2e \usepackage[bookmarks=false]{hyperref} % allow newline in caption % toprule etc for tables \usepackage[pdftex]{graphicx} % images \DeclareMathOperator*{\softmax}{softmax} \DeclareMathOperator*{\sigmoid}{\sigma} \hypersetup{ colorlinks, linkcolor={red!50!black}, citecolor={blue!50!black}, urlcolor={blue!80!black} } \title{Quasi-Recurrent Neural Networks} \author{James Bradbury\thanks{Equal contribution}, Stephen Merity\footnotemark[1], Caiming Xiong \& Richard Socher \\ Salesforce Research\\ Palo Alto, California \\ \texttt{\{james.bradbury,smerity,cxiong,rsocher\}@salesforce.com}} \newcommand{\fix}{\marginpar{FIX}} \newcommand{\new}{\marginpar{NEW}} \def\rot#1{\rotatebox{90}{#1}} \begin{document} \maketitle \begin{abstract} Recurrent neural networks are a powerful tool for modeling sequential data, but the dependence of each timestep's computation on the previous timestep's output limits parallelism and makes RNNs unwieldy for very long sequences. We introduce quasi-recurrent neural networks (QRNNs), an approach to neural sequence modeling that alternates convolutional layers, which apply in parallel across timesteps, and a minimalist recurrent pooling function that applies in parallel across channels. Despite lacking trainable recurrent layers, stacked QRNNs have better predictive accuracy than stacked LSTMs of the same hidden size. Due to their increased parallelism, they are up to 16 times faster at train and test time. Experiments on language modeling, sentiment classification, and character-level neural machine translation demonstrate these advantages and underline the viability of QRNNs as a basic building block for a variety of sequence tasks. \end{abstract} \section{Introduction} Recurrent neural networks (RNNs), including gated variants such as the long short-term memory (LSTM) \citep{Hochreiter1997} have become the standard model architecture for deep learning approaches to sequence modeling tasks. RNNs repeatedly apply a function with trainable parameters to a hidden state. Recurrent layers can also be stacked, increasing network depth, representational power and often accuracy. RNN applications in the natural language domain range from sentence classification \citep{Wang2015} to word- and character-level language modeling \citep{Zaremba2014}. RNNs are also commonly the basic building block for more complex models for tasks such as machine translation \citep{Bahdanau2015,Luong2015,Bradbury2016} or question answering \citep{Kumar2016,Xiong2016}. Unfortunately standard RNNs, including LSTMs, are limited in their capability to handle tasks involving very long sequences, such as document classification or character-level machine translation, as the computation of features or states for different parts of the document cannot occur in parallel. Convolutional neural networks (CNNs) \citep{Krizhevsky2012}, though more popular on tasks involving image data, have also been applied to sequence encoding tasks \citep{Zhang2015}. Such models apply time-invariant filter functions in parallel to windows along the input sequence. CNNs possess several advantages over recurrent models, including increased parallelism and better scaling to long sequences such as those often seen with character-level language data. 
Convolutional models for sequence processing have been more successful when combined with RNN layers in a hybrid architecture \citep{Lee2016}, because traditional max- and average-pooling approaches to combining convolutional features across timesteps assume time invariance and hence cannot make full use of large-scale sequence order information. We present quasi-recurrent neural networks for neural sequence modeling. QRNNs address both drawbacks of standard models: like CNNs, QRNNs allow for parallel computation across both timestep and minibatch dimensions, enabling high throughput and good scaling to long sequences. Like RNNs, QRNNs allow the output to depend on the overall order of elements in the sequence. We describe QRNN variants tailored to several natural language tasks, including document-level sentiment classification, language modeling, and character-level machine translation. These models outperform strong LSTM baselines on all three tasks while dramatically reducing computation time. \section{Model} Each layer of a quasi-recurrent neural network consists of two kinds of subcomponents, analogous to convolution and pooling layers in CNNs. The convolutional component, like convolutional layers in CNNs, allows fully parallel computation across both minibatches and spatial dimensions, in this case the sequence dimension. The pooling component, like pooling layers in CNNs, lacks trainable parameters and allows fully parallel computation across minibatch and feature dimensions. Given an input sequence $\mathbf{X}\in\mathbb{R}^{T\times n}$ of $T$ $n$-dimensional vectors $\mathbf{x}_1\ldots\mathbf{x}_T$, the convolutional subcomponent of a QRNN performs convolutions in the timestep dimension with a bank of $m$ filters, producing a sequence $\mathbf{Z}\in\mathbb{R}^{T\times m}$ of $m$-dimensional candidate vectors $\mathbf{z}_t$. In order to be useful for tasks that include prediction of the next token, the filters must not allow the computation for any given timestep to access information from future timesteps. That is, with filters of width $k$, each $\mathbf{z}_t$ depends only on $\mathbf{x}_{t-k+1}$ through $\mathbf{x}_t$. This concept, known as a masked convolution \citep{vandenOord2016}, is implemented by padding the input to the left by the convolution's filter size minus one. We apply additional convolutions with separate filter banks to obtain sequences of vectors for the elementwise gates that are needed for the pooling function. While the candidate vectors are passed through a $\tanh$ nonlinearity, the gates use an elementwise sigmoid. If the pooling function requires a forget gate $\mathbf{f}_t$ and an output gate $\mathbf{o}_t$ at each timestep, the full set of computations in the convolutional component is then: \begin{align} \begin{split}\label{conv} \mathbf{Z}&=\tanh(\mathbf{W}_z*\mathbf{X})\\ \mathbf{F}&=\sigmoid(\mathbf{W}_f*\mathbf{X})\\ \mathbf{O}&=\sigmoid(\mathbf{W}_o*\mathbf{X}), \end{split} \end{align} where $\mathbf{W}_z$,$\mathbf{W}_f$, and $\mathbf{W}_o$, each in $\mathbb{R}^{k\times n\times m}$, are the convolutional filter banks and $*$ denotes a masked convolution along the timestep dimension. Note that if the filter width is 2, these equations reduce to the LSTM-like \begin{align} \begin{split}\label{lstm-like} \mathbf{z}_t&=\tanh(\mathbf{W}^1_z\mathbf{x}_{t-1}+\mathbf{W}^2_z\mathbf{x}_t)\\ \mathbf{f}_t&=\sigmoid(\mathbf{W}^1_f\mathbf{x}_{t-1}+\mathbf{W}^2_f\mathbf{x}_t)\\ \mathbf{o}_t&=\sigmoid(\mathbf{W}^1_o\mathbf{x}_{t-1}+\mathbf{W}^2_o\mathbf{x}_t). 
\end{split} \end{align} Convolution filters of larger width effectively compute higher $n$-gram features at each timestep; thus larger widths are especially important for character-level tasks. Suitable functions for the pooling subcomponent can be constructed from the familiar elementwise gates of the traditional LSTM cell. We seek a function controlled by gates that can mix states across timesteps, but which acts independently on each channel of the state vector. The simplest option, which \cite{Balduzzi2016} term ``dynamic average pooling'', uses only a forget gate: \begin{align} \begin{split}\label{f-pool} \mathbf{h}_t&=\mathbf{f}_t\odot \mathbf{h}_{t-1}+(1-\mathbf{f}_t)\odot \mathbf{z}_t, \end{split} \intertext{where $\odot$ denotes elementwise multiplication. The function may also include an output gate:} \begin{split}\label{fo-pool} \mathbf{c}_t&=\mathbf{f}_t\odot \mathbf{c}_{t-1}+(1-\mathbf{f}_t)\odot \mathbf{z}_t\\ \mathbf{h}_t&=\mathbf{o}_t\odot \mathbf{c}_t. \end{split} \intertext{Or the recurrence relation may include an independent input and forget gate:} \begin{split}\label{ifo-pool} \mathbf{c}_t&=\mathbf{f}_t\odot \mathbf{c}_{t-1}+\mathbf{i}_t\odot \mathbf{z}_t\\ \mathbf{h}_t&=\mathbf{o}_t\odot \mathbf{c}_t. \end{split}\end{align} We term these three options \emph{f}-pooling, \emph{fo}-pooling, and \emph{ifo}-pooling respectively; in each case we initialize $\mathbf{h}$ or $\mathbf{c}$ to zero. Although the recurrent parts of these functions must be calculated for each timestep in sequence, their simplicity and parallelism along feature dimensions means that, in practice, evaluating them over even long sequences requires a negligible amount of computation time. A single QRNN layer thus performs an input-dependent pooling, followed by a gated linear combination of convolutional features. As with convolutional neural networks, two or more QRNN layers should be stacked to create a model with the capacity to approximate more complex functions. \subsection{Variants} Motivated by several common natural language tasks, and the long history of work on related architectures, we introduce several extensions to the stacked QRNN described above. Notably, many extensions to both recurrent and convolutional models can be applied directly to the QRNN as it combines elements of both model types. \textbf{Regularization} \label{sec:qrnn_reg} An important extension to the stacked QRNN is a robust regularization scheme inspired by recent work in regularizing LSTMs. The need for an effective regularization method for LSTMs, and dropout's relative lack of efficacy when applied to recurrent connections, led to the development of recurrent dropout schemes, including variational inference--based dropout \citep{Gal2015} and zoneout \citep{Krueger2016}. These schemes extend dropout to the recurrent setting by taking advantage of the repeating structure of recurrent networks, providing more powerful and less destructive regularization. Variational inference--based dropout locks the dropout mask used for the recurrent connections across timesteps, so a single RNN pass uses a single stochastic subset of the recurrent weights. Zoneout stochastically chooses a new subset of channels to ``zone out'' at each timestep; for these channels the network copies states from one timestep to the next without modification. As QRNNs lack recurrent weights, the variational inference approach does not apply. 
Thus we extended zoneout to the QRNN architecture by modifying the pooling function to keep the previous pooling state for a stochastic subset of channels. Conveniently, this is equivalent to stochastically setting a subset of the QRNN's $f$ gate channels to 1, or applying dropout on $1-f$: \begin{align} \begin{split}\label{qrnn_dropout} \mathbf{F}&=1-\text{dropout}(1-\sigma(\mathbf{W}_f*\mathbf{X})))\\ \end{split} \end{align} Thus the pooling function itself need not be modified at all. We note that when using an off-the-shelf dropout layer in this context, it is important to remove automatic rescaling functionality from the implementation if it is present. In many experiments, we also apply ordinary dropout between layers, including between word embeddings and the first QRNN layer. \textbf{Densely-Connected Layers} We can also extend the QRNN architecture using techniques introduced for convolutional networks. For sequence classification tasks, we found it helpful to use skip-connections between every QRNN layer, a technique termed ``dense convolution'' by \cite{Huang2016}. Where traditional feed-forward or convolutional networks have connections only between subsequent layers, a ``DenseNet'' with $L$ layers has feed-forward or convolutional connections between every pair of layers, for a total of $L(L-1)$. This can improve gradient flow and convergence properties, especially in deeper networks, although it requires a parameter count that is quadratic in the number of layers. When applying this technique to the QRNN, we include connections between the input embeddings and every QRNN layer and between every pair of QRNN layers. This is equivalent to concatenating each QRNN layer's input to its output along the channel dimension before feeding the state into the next layer. The output of the last layer alone is then used as the overall encoding result. \textbf{Encoder--Decoder Models} \label{sec:seq2seq} To demonstrate the generality of QRNNs, we extend the model architecture to sequence-to-sequence tasks, such as machine translation, by using a QRNN as encoder and a modified QRNN, enhanced with attention, as decoder. The motivation for modifying the decoder is that simply feeding the last encoder hidden state (the output of the encoder's pooling layer) into the decoder's recurrent pooling layer, analogously to conventional recurrent encoder--decoder architectures, would not allow the encoder state to affect the gate or update values that are provided to the decoder's pooling layer. This would substantially limit the representational power of the decoder. Instead, the output of each decoder QRNN layer's convolution functions is supplemented at every timestep with the final encoder hidden state. This is accomplished by adding the result of the convolution for layer $\ell$ (e.g., $\mathbf{W}_z^\ell*\mathbf{X}^\ell$, in $\mathbb{R}^{T\times m}$) with broadcasting to a linearly projected copy of layer $\ell$'s last encoder state (e.g., $\mathbf{V}_z^\ell\mathbf{\tilde{h}}_T^\ell$, in $\mathbb{R}^m$): \begin{align} \begin{split}\label{decoder-conv} \mathbf{Z}^\ell&=\tanh(\mathbf{W}_z^\ell*\mathbf{X}^\ell+\mathbf{V}_z^\ell\mathbf{\tilde{h}}^\ell_T)\\ \mathbf{F}^\ell&=\sigmoid(\mathbf{W}_f^\ell*\mathbf{X}^\ell+\mathbf{V}_f^\ell\mathbf{\tilde{h}}^\ell_T)\\ \mathbf{O}^\ell&=\sigmoid(\mathbf{W}_o^\ell*\mathbf{X}^\ell+\mathbf{V}_o^\ell\mathbf{\tilde{h}}^\ell_T), \end{split} \end{align} where the tilde denotes that $\mathbf{\tilde{h}}$ is an encoder variable. 
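As an illustrative aside before the attention mechanism is introduced, the following NumPy sketch pulls together the masked convolution, \emph{fo}-pooling, and the zoneout-on-$(1-f)$ trick described above for a single QRNN layer. It is a simplified editorial re-implementation (random weights, one sequence, no minibatch dimension), not the Chainer code used for the experiments reported below.
\begin{verbatim}
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def masked_conv(X, W):
    """Masked convolution along the timestep dimension.
    X: (T, n) inputs; W: (k, n, m) filter bank.  The input is padded on the
    left with k-1 zero timesteps, so output t depends only on x_{t-k+1..t}."""
    k, n, m = W.shape
    T = X.shape[0]
    Xpad = np.vstack([np.zeros((k - 1, n)), X])
    out = np.zeros((T, m))
    for t in range(T):
        out[t] = np.einsum('kn,knm->m', Xpad[t:t + k], W)
    return out

def qrnn_layer(X, Wz, Wf, Wo, zoneout_p=0.0, rng=None):
    """One QRNN layer with fo-pooling and optional zoneout on the f gate."""
    Z = np.tanh(masked_conv(X, Wz))
    F = sigmoid(masked_conv(X, Wf))
    O = sigmoid(masked_conv(X, Wo))
    if zoneout_p > 0.0:
        if rng is None:
            rng = np.random.default_rng(0)
        # Zoneout as dropout on (1 - f): with probability p a channel's f is
        # forced to 1, so that channel copies the previous pooled state.
        # No rescaling is applied, as noted in the text.
        keep = rng.random(F.shape) >= zoneout_p
        F = 1.0 - (1.0 - F) * keep
    T, m = Z.shape
    c = np.zeros(m)
    H = np.zeros((T, m))
    for t in range(T):                      # sequential over time, elementwise
        c = F[t] * c + (1.0 - F[t]) * Z[t]  # fo-pooling cell update
        H[t] = O[t] * c                     # gated output
    return H

# Tiny usage example with random weights and hypothetical sizes.
rng = np.random.default_rng(0)
T, n, m, k = 7, 4, 5, 2
X = rng.standard_normal((T, n))
Wz, Wf, Wo = (0.1 * rng.standard_normal((k, n, m)) for _ in range(3))
H = qrnn_layer(X, Wz, Wf, Wo, zoneout_p=0.1, rng=rng)
print(H.shape)   # (7, 5)
\end{verbatim}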
Encoder--decoder models which operate on long sequences are made significantly more powerful with the addition of soft attention \citep{Bahdanau2015}, which removes the need for the entire input representation to fit into a fixed-length encoding vector. In our experiments, we computed an attentional sum of the encoder's last layer's hidden states. We used the dot products of these encoder hidden states with the decoder's last layer's un-gated hidden states, applying a $\softmax$ along the encoder timesteps, to weight the encoder states into an attentional sum $\mathbf{k}_t$ for each decoder timestep. This context, and the decoder state, are then fed into a linear layer followed by the output gate: \begin{align} \begin{split}\label{attn} \alpha_{st}&=\softmax_\text{all s}(\mathbf{c}^L_t\cdot\mathbf{\tilde{h}}^L_s)\\ \mathbf{k}_t&=\sum_s\alpha_{st}\mathbf{\tilde{h}}^L_s\\ \mathbf{h}^L_t&=\mathbf{o}_t\odot(\mathbf{W}_k\mathbf{k}_t+\mathbf{W}_c \mathbf{c}^L_t), \end{split} \end{align} where $L$ is the last layer. While the first step of this attention procedure is quadratic in the sequence length, in practice it takes significantly less computation time than the model's linear and convolutional layers due to the simple and highly parallel dot-product scoring function. \section{Experiments} We evaluate the performance of the QRNN on three different natural language tasks: document-level sentiment classification, language modeling, and character-based neural machine translation. Our QRNN models outperform LSTM-based models of equal hidden size on all three tasks while dramatically improving computation speed. Experiments were implemented in Chainer \citep{Tokui2015}. \subsection{Sentiment Classification} We evaluate the QRNN architecture on a popular document-level sentiment classification benchmark, the IMDb movie review dataset \citep{Maas2011}. The dataset consists of a balanced sample of 25,000 positive and 25,000 negative reviews, divided into equal-size train and test sets, with an average document length of 231 words \citep{Wang2012}. We compare only to other results that do not make use of additional unlabeled data (thus excluding e.g., \citet{Miyato2016}). Our best performance on a held-out development set was achieved using a four-layer densely-connected QRNN with 256 units per layer and word vectors initialized using 300-dimensional cased GloVe embeddings \citep{Pennington2014}. Dropout of 0.3 was applied between layers, and we used $L^2$ regularization of $4\times 10^{-6}$. Optimization was performed on minibatches of 24 examples using RMSprop \citep{Tieleman2012} with learning rate of $0.001$, $\alpha=0.9$, and $\epsilon=10^{-8}$. Small batch sizes and long sequence lengths provide an ideal situation for demonstrating the QRNN's performance advantages over traditional recurrent architectures. We observed a speedup of 3.2x on IMDb train time per epoch compared to the optimized LSTM implementation provided in NVIDIA's cuDNN library. For specific batch sizes and sequence lengths, a 16x speed gain is possible. Figure \ref{fig:QRNNspeed} provides extensive speed comparisons. In Figure \ref{fig:IMDBviz}, we visualize the hidden state vectors $\mathbf{c}^L_t$ of the final QRNN layer on part of an example from the IMDb dataset. Even without any post-processing, changes in the hidden state are visible and interpretable in regards to the input. 
This is a consequence of the elementwise nature of the recurrent pooling function, which delays direct interaction between different channels of the hidden state until the computation of the next QRNN layer. \subsection{Language Modeling} We replicate the language modeling experiment of \citet{Zaremba2014} and \citet{Gal2015} to benchmark the QRNN architecture for natural language sequence prediction. The experiment uses a standard preprocessed version of the Penn Treebank (PTB) by \citet{Mikolov2010}. We implemented a gated QRNN model with medium hidden size: 2 layers with 640 units in each layer. Both QRNN layers use a convolutional filter width $k$ of two timesteps. While the ``medium'' models used in other work \citep{Zaremba2014,Gal2015} consist of 650 units in each layer, it was more computationally convenient to use a multiple of 32. As the Penn Treebank is a relatively small dataset, preventing overfitting is of considerable importance and a major focus of recent research. It is not obvious in advance which of the many RNN regularization schemes would perform well when applied to the QRNN. Our tests showed encouraging results from zoneout applied to the QRNN's recurrent pooling layer, implemented as described in Section \ref{sec:qrnn_reg}. The experimental settings largely followed the ``medium'' setup of \citet{Zaremba2014}. Optimization was performed by stochastic gradient descent (SGD) without momentum. The learning rate was set at 1 for six epochs, then decayed by 0.95 for each subsequent epoch, for a total of 72 epochs. We additionally used $L^2$ regularization of $2\times 10^{-4}$ and rescaled gradients with norm above 10. Zoneout was applied by performing dropout with ratio 0.1 on the forget gates of the QRNN, without rescaling the output of the dropout function. Batches consist of 20 examples, each 105 timesteps. Comparing our results on the gated QRNN with zoneout to the results of LSTMs with both ordinary and variational dropout in Table \ref{table:PTBresults}, we see that the QRNN is highly competitive. The QRNN without zoneout strongly outperforms both our medium LSTM and the medium LSTM of \citet{Zaremba2014} which do not use recurrent dropout and is even competitive with variational LSTMs. This may be due to the limited computational capacity that the QRNN's pooling layer has relative to the LSTM's recurrent weights, providing structural regularization over the recurrence. Without zoneout, early stopping based upon validation loss was required as the QRNN would begin overfitting. By applying a small amount of zoneout ($p=0.1$), no early stopping is required and the QRNN achieves competitive levels of perplexity to the variational LSTM of \citet{Gal2015}, which had variational inference based dropout of 0.2 applied recurrently. Their best performing variation also used Monte Carlo (MC) dropout averaging at test time of 1000 different masks, making it computationally more expensive to run. When training on the PTB dataset with an NVIDIA K40 GPU, we found that the QRNN is substantially faster than a standard LSTM, even when comparing against the optimized cuDNN LSTM. In Figure \ref{fig:QRNNspeed} we provide a breakdown of the time taken for Chainer's default LSTM, the cuDNN LSTM, and QRNN to perform a full forward and backward pass on a single batch during training of the RNN LM on PTB. For both LSTM implementations, running time was dominated by the RNN computations, even with the highly optimized cuDNN implementation. 
For the QRNN implementation, however, the ``RNN'' layers are no longer the bottleneck. Indeed, there are diminishing returns from further optimization of the QRNN itself as the softmax and optimization overhead take equal or greater time. Note that the softmax, over a vocabulary size of only 10,000 words, is relatively small; for tasks with larger vocabularies, the softmax would likely dominate computation time. It is also important to note that the cuDNN library's RNN primitives do not natively support any form of recurrent dropout. That is, running an LSTM that uses a state-of-the-art regularization scheme at cuDNN-like speeds would likely require an entirely custom kernel. \subsection{Character-level Neural Machine Translation} We evaluate the sequence-to-sequence QRNN architecture described in \ref{sec:seq2seq} on a challenging neural machine translation task, IWSLT German--English spoken-domain translation, applying fully character-level segmentation. This dataset consists of 209,772 sentence pairs of parallel training data from transcribed TED and TEDx presentations, with a mean sentence length of 103 characters for German and 93 for English. We remove training sentences with more than 300 characters in English or German, and use a unified vocabulary of 187 Unicode code points. Our best performance on a development set (TED.tst2013) was achieved using a four-layer encoder--decoder QRNN with 320 units per layer, no dropout or $L^2$ regularization, and gradient rescaling to a maximum magnitude of 5. Inputs were supplied to the encoder reversed, while the encoder convolutions were not masked. The first encoder layer used convolutional filter width $k=6$, while the other encoder layers used $k=2$. Optimization was performed for 10 epochs on minibatches of 16 examples using Adam \citep{kingma2014adam} with $\alpha=0.001$, $\beta_1=0.9$, $\beta_2=0.999$, and $\epsilon=10^{-8}$. Decoding was performed using beam search with beam width 8 and length normalization $\alpha=0.6$. The modified log-probability ranking criterion is provided in the appendix. Results using this architecture were compared to an equal-sized four-layer encoder--decoder LSTM with attention, applying dropout of 0.2. We again optimized using Adam; other hyperparameters were equal to their values for the QRNN and the same beam search procedure was applied. Table \ref{table:MTresults} shows that the QRNN outperformed the character-level LSTM, almost matching the performance of a word-level attentional baseline. \section{Related Work} Exploring alternatives to traditional RNNs for sequence tasks is a major area of current research. Quasi-recurrent neural networks are related to several such recently described models, especially the strongly-typed recurrent neural networks (T-RNN) introduced by \cite{Balduzzi2016}. While the motivation and constraints described in that work are different, \cite{Balduzzi2016}'s concepts of ``learnware'' and ``firmware'' parallel our discussion of convolution-like and pooling-like subcomponents. As the use of a fully connected layer for recurrent connections violates the constraint of ``strong typing'', all strongly-typed RNN architectures (including the T-RNN, T-GRU, and T-LSTM) are also quasi-recurrent. However, some QRNN models (including those with attention or skip-connections) are not ``strongly typed''. In particular, a T-RNN differs from a QRNN as described in this paper with filter size 1 and \emph{f}-pooling only in the absence of an activation function on $\mathbf{z}$. 
Similarly, T-GRUs and T-LSTMs differ from QRNNs with filter size 2 and \emph{fo}- or \emph{ifo}-pooling respectively in that they lack $\tanh$ on $\mathbf{z}$ and use $\tanh$ rather than sigmoid on $\mathbf{o}$. The QRNN is also related to work in hybrid convolutional--recurrent models. \citet{Zhou2015b} apply CNNs at the word level to generate $n$-gram features used by an LSTM for text classification. \citet{Xiao2016} also tackle text classification by applying convolutions at the character level, with a stride to reduce sequence length, then feeding these features into a bidirectional LSTM. A similar approach was taken by \citet{Lee2016} for character-level machine translation. Their model's encoder uses a convolutional layer followed by max-pooling to reduce sequence length, a four-layer highway network, and a bidirectional GRU. The parallelism of the convolutional, pooling, and highway layers allows training speed comparable to subword-level models without hard-coded text segmentation. The QRNN encoder--decoder model shares the favorable parallelism and path-length properties exhibited by the ByteNet \citep{Kalchbrenner2016}, an architecture for character-level machine translation based on residual convolutions over binary trees. Their model was constructed to achieve three desired properties: parallelism, linear-time computational complexity, and short paths between any pair of words in order to better propagate gradient signals. \section{Conclusion} Intuitively, many aspects of the semantics of long sequences are context-invariant and can be computed in parallel (e.g., convolutionally), but some aspects require long-distance context and must be computed recurrently. Many existing neural network architectures either fail to take advantage of the contextual information or fail to take advantage of the parallelism. QRNNs exploit both parallelism and context, exhibiting advantages from both convolutional and recurrent neural networks. QRNNs have better predictive accuracy than LSTM-based models of equal hidden size, even though they use fewer parameters and run substantially faster. Our experiments show that the speed and accuracy advantages remain consistent across tasks and at both word and character levels. Extensions to both CNNs and RNNs are often directly applicable to the QRNN, while the model's hidden states are more interpretable than those of other recurrent architectures as its channels maintain their independence across timesteps. We believe that QRNNs can serve as a building block for long-sequence tasks that were previously impractical with traditional RNNs. \bibliographystyle{iclr2017_conference} \setcounter{figure}{0} \renewcommand{\thefigure}{A\arabic{figure}} \clearpage \section*{Appendix} \subsection*{Beam search ranking criterion} The modified log-probability ranking criterion we used in beam search for translation experiments is: \begin{align} \begin{split}\label{length-bonus} \log(P_{\rm cand})&=\frac{T+\alpha}{T}\hdots\frac{T_{\rm trg}+\alpha}{T_{\rm trg}}\sum_{i=1}^T \log(p(w_i|w_1\hdots w_{i-1})), \end{split} \end{align} where $\alpha$ is a length normalization parameter \citep{Wu2016}, $w_i$ is the $i$\textsuperscript{th} output character, and $T_{\rm trg}$ is a ``target length'' equal to the source sentence length plus five characters. 
This reduces at $\alpha=0$ to ordinary beam search with probabilities: \begin{align} \begin{split}\label{alpha-zero} \log(P_{\rm cand})&=\sum_{i=1}^T \log(p(w_i|w_1\hdots w_{i-1})), \end{split} \end{align} and at $\alpha=1$ to beam search with probabilities normalized by length (up to the target length): \begin{align} \begin{split}\label{alpha-one} \log(P_{\rm cand})&\sim \frac{1}{T}\sum_{i=1}^T \log(p(w_i|w_1\hdots w_{i-1})). \end{split} \end{align} Conveniently, this ranking criterion can be computed at intermediate beam-search timesteps, obviating the need to apply a separate reranking on complete hypotheses. \end{document}
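As an editorial footnote to the ranking criterion above, the short Python sketch below scores a hypothesis from its token log-probabilities: the product of $(t+\alpha)/t$ factors runs from the hypothesis length $T$ up to the target length $T_{\rm trg}$, so the score reduces to the plain sum of log-probabilities at $\alpha=0$ and to (approximately) length-normalized scoring at $\alpha=1$. The example hypotheses and their log-probabilities are made-up values for illustration only.
\begin{verbatim}
def ranking_score(token_log_probs, source_length, alpha=0.6):
    """Modified log-probability used to rank beam hypotheses.

    Scales the summed log-probability by prod_{t=T}^{T_trg} (t + alpha) / t,
    where T is the hypothesis length and T_trg is the source length plus
    five characters; hypotheses longer than T_trg receive no scaling."""
    T = len(token_log_probs)
    T_trg = source_length + 5
    scale = 1.0
    for t in range(T, T_trg + 1):
        scale *= (t + alpha) / t
    return scale * sum(token_log_probs)

# Two made-up hypotheses for a 20-character source sentence.
short_hyp = [-0.40] * 10   # shorter, higher per-character probability
long_hyp = [-0.35] * 24    # longer, closer to the target length of 25
for alpha in (0.0, 0.6, 1.0):
    print(alpha,
          round(ranking_score(short_hyp, 20, alpha), 2),
          round(ranking_score(long_hyp, 20, alpha), 2))
# Larger alpha shifts the preference toward the hypothesis that reaches
# the target length.
\end{verbatim}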
Quasi-Recurrent Neural Networks
1611.01576
Table 3: Translation performance, measured by BLEU, and train speed in hours per epoch, for the IWSLT German-English spoken language translation task. All models were trained on in-domain data only, and use negative log-likelihood as the training criterion. Our models were trained for 10 epochs. The QRNN model uses k=2 for all layers other than the first encoder layer.
[ "[BOLD] Model", "[BOLD] Train Time", "[BOLD] BLEU (TED.tst2014)" ]
[ [ "Word-level LSTM w/attn (Ranzato et al., 2016 )", "−", "20.2" ], [ "Word-level CNN w/attn, input feeding (Wiseman & Rush, 2016 )", "−", "24.0" ], [ "[ITALIC] Our models", "[EMPTY]", "[EMPTY]" ], [ "Char-level 4-layer LSTM", "4.2 hrs/epoch", "16.53" ], [ "Char-level 4-layer QRNN with [ITALIC] k=6", "1.0 hrs/epoch", "19.41" ] ]
Results using this architecture were compared to an equal-sized four-layer encoder–decoder LSTM with attention, applying dropout of 0.2. We again optimized using Adam; other hyperparameters were equal to their values for the QRNN and the same beam search procedure was applied.
\documentclass{article} % For LaTeX2e \usepackage[bookmarks=false]{hyperref} % allow newline in caption % toprule etc for tables \usepackage[pdftex]{graphicx} % images \DeclareMathOperator*{\softmax}{softmax} \DeclareMathOperator*{\sigmoid}{\sigma} \hypersetup{ colorlinks, linkcolor={red!50!black}, citecolor={blue!50!black}, urlcolor={blue!80!black} } \title{Quasi-Recurrent Neural Networks} \author{James Bradbury\thanks{Equal contribution}, Stephen Merity\footnotemark[1], Caiming Xiong \& Richard Socher \\ Salesforce Research\\ Palo Alto, California \\ \texttt{\{james.bradbury,smerity,cxiong,rsocher\}@salesforce.com}} \newcommand{\fix}{\marginpar{FIX}} \newcommand{\new}{\marginpar{NEW}} \def\rot#1{\rotatebox{90}{#1}} \begin{document} \maketitle \begin{abstract} Recurrent neural networks are a powerful tool for modeling sequential data, but the dependence of each timestep's computation on the previous timestep's output limits parallelism and makes RNNs unwieldy for very long sequences. We introduce quasi-recurrent neural networks (QRNNs), an approach to neural sequence modeling that alternates convolutional layers, which apply in parallel across timesteps, and a minimalist recurrent pooling function that applies in parallel across channels. Despite lacking trainable recurrent layers, stacked QRNNs have better predictive accuracy than stacked LSTMs of the same hidden size. Due to their increased parallelism, they are up to 16 times faster at train and test time. Experiments on language modeling, sentiment classification, and character-level neural machine translation demonstrate these advantages and underline the viability of QRNNs as a basic building block for a variety of sequence tasks. \end{abstract} \section{Introduction} Recurrent neural networks (RNNs), including gated variants such as the long short-term memory (LSTM) \citep{Hochreiter1997} have become the standard model architecture for deep learning approaches to sequence modeling tasks. RNNs repeatedly apply a function with trainable parameters to a hidden state. Recurrent layers can also be stacked, increasing network depth, representational power and often accuracy. RNN applications in the natural language domain range from sentence classification \citep{Wang2015} to word- and character-level language modeling \citep{Zaremba2014}. RNNs are also commonly the basic building block for more complex models for tasks such as machine translation \citep{Bahdanau2015,Luong2015,Bradbury2016} or question answering \citep{Kumar2016,Xiong2016}. Unfortunately standard RNNs, including LSTMs, are limited in their capability to handle tasks involving very long sequences, such as document classification or character-level machine translation, as the computation of features or states for different parts of the document cannot occur in parallel. Convolutional neural networks (CNNs) \citep{Krizhevsky2012}, though more popular on tasks involving image data, have also been applied to sequence encoding tasks \citep{Zhang2015}. Such models apply time-invariant filter functions in parallel to windows along the input sequence. CNNs possess several advantages over recurrent models, including increased parallelism and better scaling to long sequences such as those often seen with character-level language data. 
Convolutional models for sequence processing have been more successful when combined with RNN layers in a hybrid architecture \citep{Lee2016}, because traditional max- and average-pooling approaches to combining convolutional features across timesteps assume time invariance and hence cannot make full use of large-scale sequence order information. We present quasi-recurrent neural networks for neural sequence modeling. QRNNs address both drawbacks of standard models: like CNNs, QRNNs allow for parallel computation across both timestep and minibatch dimensions, enabling high throughput and good scaling to long sequences. Like RNNs, QRNNs allow the output to depend on the overall order of elements in the sequence. We describe QRNN variants tailored to several natural language tasks, including document-level sentiment classification, language modeling, and character-level machine translation. These models outperform strong LSTM baselines on all three tasks while dramatically reducing computation time. \section{Model} Each layer of a quasi-recurrent neural network consists of two kinds of subcomponents, analogous to convolution and pooling layers in CNNs. The convolutional component, like convolutional layers in CNNs, allows fully parallel computation across both minibatches and spatial dimensions, in this case the sequence dimension. The pooling component, like pooling layers in CNNs, lacks trainable parameters and allows fully parallel computation across minibatch and feature dimensions. Given an input sequence $\mathbf{X}\in\mathbb{R}^{T\times n}$ of $T$ $n$-dimensional vectors $\mathbf{x}_1\ldots\mathbf{x}_T$, the convolutional subcomponent of a QRNN performs convolutions in the timestep dimension with a bank of $m$ filters, producing a sequence $\mathbf{Z}\in\mathbb{R}^{T\times m}$ of $m$-dimensional candidate vectors $\mathbf{z}_t$. In order to be useful for tasks that include prediction of the next token, the filters must not allow the computation for any given timestep to access information from future timesteps. That is, with filters of width $k$, each $\mathbf{z}_t$ depends only on $\mathbf{x}_{t-k+1}$ through $\mathbf{x}_t$. This concept, known as a masked convolution \citep{vandenOord2016}, is implemented by padding the input to the left by the convolution's filter size minus one. We apply additional convolutions with separate filter banks to obtain sequences of vectors for the elementwise gates that are needed for the pooling function. While the candidate vectors are passed through a $\tanh$ nonlinearity, the gates use an elementwise sigmoid. If the pooling function requires a forget gate $\mathbf{f}_t$ and an output gate $\mathbf{o}_t$ at each timestep, the full set of computations in the convolutional component is then: \begin{align} \begin{split}\label{conv} \mathbf{Z}&=\tanh(\mathbf{W}_z*\mathbf{X})\\ \mathbf{F}&=\sigmoid(\mathbf{W}_f*\mathbf{X})\\ \mathbf{O}&=\sigmoid(\mathbf{W}_o*\mathbf{X}), \end{split} \end{align} where $\mathbf{W}_z$,$\mathbf{W}_f$, and $\mathbf{W}_o$, each in $\mathbb{R}^{k\times n\times m}$, are the convolutional filter banks and $*$ denotes a masked convolution along the timestep dimension. Note that if the filter width is 2, these equations reduce to the LSTM-like \begin{align} \begin{split}\label{lstm-like} \mathbf{z}_t&=\tanh(\mathbf{W}^1_z\mathbf{x}_{t-1}+\mathbf{W}^2_z\mathbf{x}_t)\\ \mathbf{f}_t&=\sigmoid(\mathbf{W}^1_f\mathbf{x}_{t-1}+\mathbf{W}^2_f\mathbf{x}_t)\\ \mathbf{o}_t&=\sigmoid(\mathbf{W}^1_o\mathbf{x}_{t-1}+\mathbf{W}^2_o\mathbf{x}_t). 
\end{split} \end{align} Convolution filters of larger width effectively compute higher $n$-gram features at each timestep; thus larger widths are especially important for character-level tasks. Suitable functions for the pooling subcomponent can be constructed from the familiar elementwise gates of the traditional LSTM cell. We seek a function controlled by gates that can mix states across timesteps, but which acts independently on each channel of the state vector. The simplest option, which \cite{Balduzzi2016} term ``dynamic average pooling'', uses only a forget gate: \begin{align} \begin{split}\label{f-pool} \mathbf{h}_t&=\mathbf{f}_t\odot \mathbf{h}_{t-1}+(1-\mathbf{f}_t)\odot \mathbf{z}_t, \end{split} \intertext{where $\odot$ denotes elementwise multiplication. The function may also include an output gate:} \begin{split}\label{fo-pool} \mathbf{c}_t&=\mathbf{f}_t\odot \mathbf{c}_{t-1}+(1-\mathbf{f}_t)\odot \mathbf{z}_t\\ \mathbf{h}_t&=\mathbf{o}_t\odot \mathbf{c}_t. \end{split} \intertext{Or the recurrence relation may include an independent input and forget gate:} \begin{split}\label{ifo-pool} \mathbf{c}_t&=\mathbf{f}_t\odot \mathbf{c}_{t-1}+\mathbf{i}_t\odot \mathbf{z}_t\\ \mathbf{h}_t&=\mathbf{o}_t\odot \mathbf{c}_t. \end{split}\end{align} We term these three options \emph{f}-pooling, \emph{fo}-pooling, and \emph{ifo}-pooling respectively; in each case we initialize $\mathbf{h}$ or $\mathbf{c}$ to zero. Although the recurrent parts of these functions must be calculated for each timestep in sequence, their simplicity and parallelism along feature dimensions means that, in practice, evaluating them over even long sequences requires a negligible amount of computation time. A single QRNN layer thus performs an input-dependent pooling, followed by a gated linear combination of convolutional features. As with convolutional neural networks, two or more QRNN layers should be stacked to create a model with the capacity to approximate more complex functions. \subsection{Variants} Motivated by several common natural language tasks, and the long history of work on related architectures, we introduce several extensions to the stacked QRNN described above. Notably, many extensions to both recurrent and convolutional models can be applied directly to the QRNN as it combines elements of both model types. \textbf{Regularization} \label{sec:qrnn_reg} An important extension to the stacked QRNN is a robust regularization scheme inspired by recent work in regularizing LSTMs. The need for an effective regularization method for LSTMs, and dropout's relative lack of efficacy when applied to recurrent connections, led to the development of recurrent dropout schemes, including variational inference--based dropout \citep{Gal2015} and zoneout \citep{Krueger2016}. These schemes extend dropout to the recurrent setting by taking advantage of the repeating structure of recurrent networks, providing more powerful and less destructive regularization. Variational inference--based dropout locks the dropout mask used for the recurrent connections across timesteps, so a single RNN pass uses a single stochastic subset of the recurrent weights. Zoneout stochastically chooses a new subset of channels to ``zone out'' at each timestep; for these channels the network copies states from one timestep to the next without modification. As QRNNs lack recurrent weights, the variational inference approach does not apply. 
Thus we extended zoneout to the QRNN architecture by modifying the pooling function to keep the previous pooling state for a stochastic subset of channels. Conveniently, this is equivalent to stochastically setting a subset of the QRNN's $f$ gate channels to 1, or applying dropout on $1-f$: \begin{align} \begin{split}\label{qrnn_dropout} \mathbf{F}&=1-\text{dropout}(1-\sigma(\mathbf{W}_f*\mathbf{X})))\\ \end{split} \end{align} Thus the pooling function itself need not be modified at all. We note that when using an off-the-shelf dropout layer in this context, it is important to remove automatic rescaling functionality from the implementation if it is present. In many experiments, we also apply ordinary dropout between layers, including between word embeddings and the first QRNN layer. \textbf{Densely-Connected Layers} We can also extend the QRNN architecture using techniques introduced for convolutional networks. For sequence classification tasks, we found it helpful to use skip-connections between every QRNN layer, a technique termed ``dense convolution'' by \cite{Huang2016}. Where traditional feed-forward or convolutional networks have connections only between subsequent layers, a ``DenseNet'' with $L$ layers has feed-forward or convolutional connections between every pair of layers, for a total of $L(L-1)$. This can improve gradient flow and convergence properties, especially in deeper networks, although it requires a parameter count that is quadratic in the number of layers. When applying this technique to the QRNN, we include connections between the input embeddings and every QRNN layer and between every pair of QRNN layers. This is equivalent to concatenating each QRNN layer's input to its output along the channel dimension before feeding the state into the next layer. The output of the last layer alone is then used as the overall encoding result. \textbf{Encoder--Decoder Models} \label{sec:seq2seq} To demonstrate the generality of QRNNs, we extend the model architecture to sequence-to-sequence tasks, such as machine translation, by using a QRNN as encoder and a modified QRNN, enhanced with attention, as decoder. The motivation for modifying the decoder is that simply feeding the last encoder hidden state (the output of the encoder's pooling layer) into the decoder's recurrent pooling layer, analogously to conventional recurrent encoder--decoder architectures, would not allow the encoder state to affect the gate or update values that are provided to the decoder's pooling layer. This would substantially limit the representational power of the decoder. Instead, the output of each decoder QRNN layer's convolution functions is supplemented at every timestep with the final encoder hidden state. This is accomplished by adding the result of the convolution for layer $\ell$ (e.g., $\mathbf{W}_z^\ell*\mathbf{X}^\ell$, in $\mathbb{R}^{T\times m}$) with broadcasting to a linearly projected copy of layer $\ell$'s last encoder state (e.g., $\mathbf{V}_z^\ell\mathbf{\tilde{h}}_T^\ell$, in $\mathbb{R}^m$): \begin{align} \begin{split}\label{decoder-conv} \mathbf{Z}^\ell&=\tanh(\mathbf{W}_z^\ell*\mathbf{X}^\ell+\mathbf{V}_z^\ell\mathbf{\tilde{h}}^\ell_T)\\ \mathbf{F}^\ell&=\sigmoid(\mathbf{W}_f^\ell*\mathbf{X}^\ell+\mathbf{V}_f^\ell\mathbf{\tilde{h}}^\ell_T)\\ \mathbf{O}^\ell&=\sigmoid(\mathbf{W}_o^\ell*\mathbf{X}^\ell+\mathbf{V}_o^\ell\mathbf{\tilde{h}}^\ell_T), \end{split} \end{align} where the tilde denotes that $\mathbf{\tilde{h}}$ is an encoder variable. 
Encoder--decoder models which operate on long sequences are made significantly more powerful with the addition of soft attention \citep{Bahdanau2015}, which removes the need for the entire input representation to fit into a fixed-length encoding vector. In our experiments, we computed an attentional sum of the encoder's last layer's hidden states. We used the dot products of these encoder hidden states with the decoder's last layer's un-gated hidden states, applying a $\softmax$ along the encoder timesteps, to weight the encoder states into an attentional sum $\mathbf{k}_t$ for each decoder timestep. This context, and the decoder state, are then fed into a linear layer followed by the output gate: \begin{align} \begin{split}\label{attn} \alpha_{st}&=\softmax_\text{all s}(\mathbf{c}^L_t\cdot\mathbf{\tilde{h}}^L_s)\\ \mathbf{k}_t&=\sum_s\alpha_{st}\mathbf{\tilde{h}}^L_s\\ \mathbf{h}^L_t&=\mathbf{o}_t\odot(\mathbf{W}_k\mathbf{k}_t+\mathbf{W}_c \mathbf{c}^L_t), \end{split} \end{align} where $L$ is the last layer. While the first step of this attention procedure is quadratic in the sequence length, in practice it takes significantly less computation time than the model's linear and convolutional layers due to the simple and highly parallel dot-product scoring function. \section{Experiments} We evaluate the performance of the QRNN on three different natural language tasks: document-level sentiment classification, language modeling, and character-based neural machine translation. Our QRNN models outperform LSTM-based models of equal hidden size on all three tasks while dramatically improving computation speed. Experiments were implemented in Chainer \citep{Tokui2015}. \subsection{Sentiment Classification} We evaluate the QRNN architecture on a popular document-level sentiment classification benchmark, the IMDb movie review dataset \citep{Maas2011}. The dataset consists of a balanced sample of 25,000 positive and 25,000 negative reviews, divided into equal-size train and test sets, with an average document length of 231 words \citep{Wang2012}. We compare only to other results that do not make use of additional unlabeled data (thus excluding e.g., \citet{Miyato2016}). Our best performance on a held-out development set was achieved using a four-layer densely-connected QRNN with 256 units per layer and word vectors initialized using 300-dimensional cased GloVe embeddings \citep{Pennington2014}. Dropout of 0.3 was applied between layers, and we used $L^2$ regularization of $4\times 10^{-6}$. Optimization was performed on minibatches of 24 examples using RMSprop \citep{Tieleman2012} with learning rate of $0.001$, $\alpha=0.9$, and $\epsilon=10^{-8}$. Small batch sizes and long sequence lengths provide an ideal situation for demonstrating the QRNN's performance advantages over traditional recurrent architectures. We observed a speedup of 3.2x on IMDb train time per epoch compared to the optimized LSTM implementation provided in NVIDIA's cuDNN library. For specific batch sizes and sequence lengths, a 16x speed gain is possible. Figure \ref{fig:QRNNspeed} provides extensive speed comparisons. In Figure \ref{fig:IMDBviz}, we visualize the hidden state vectors $\mathbf{c}^L_t$ of the final QRNN layer on part of an example from the IMDb dataset. Even without any post-processing, changes in the hidden state are visible and interpretable in regards to the input. 
This is a consequence of the elementwise nature of the recurrent pooling function, which delays direct interaction between different channels of the hidden state until the computation of the next QRNN layer. \subsection{Language Modeling} We replicate the language modeling experiment of \citet{Zaremba2014} and \citet{Gal2015} to benchmark the QRNN architecture for natural language sequence prediction. The experiment uses a standard preprocessed version of the Penn Treebank (PTB) by \citet{Mikolov2010}. We implemented a gated QRNN model with medium hidden size: 2 layers with 640 units in each layer. Both QRNN layers use a convolutional filter width $k$ of two timesteps. While the ``medium'' models used in other work \citep{Zaremba2014,Gal2015} consist of 650 units in each layer, it was more computationally convenient to use a multiple of 32. As the Penn Treebank is a relatively small dataset, preventing overfitting is of considerable importance and a major focus of recent research. It is not obvious in advance which of the many RNN regularization schemes would perform well when applied to the QRNN. Our tests showed encouraging results from zoneout applied to the QRNN's recurrent pooling layer, implemented as described in Section \ref{sec:qrnn_reg}. The experimental settings largely followed the ``medium'' setup of \citet{Zaremba2014}. Optimization was performed by stochastic gradient descent (SGD) without momentum. The learning rate was set at 1 for six epochs, then decayed by 0.95 for each subsequent epoch, for a total of 72 epochs. We additionally used $L^2$ regularization of $2\times 10^{-4}$ and rescaled gradients with norm above 10. Zoneout was applied by performing dropout with ratio 0.1 on the forget gates of the QRNN, without rescaling the output of the dropout function. Batches consist of 20 examples, each 105 timesteps. Comparing our results on the gated QRNN with zoneout to the results of LSTMs with both ordinary and variational dropout in Table \ref{table:PTBresults}, we see that the QRNN is highly competitive. The QRNN without zoneout strongly outperforms both our medium LSTM and the medium LSTM of \citet{Zaremba2014} which do not use recurrent dropout and is even competitive with variational LSTMs. This may be due to the limited computational capacity that the QRNN's pooling layer has relative to the LSTM's recurrent weights, providing structural regularization over the recurrence. Without zoneout, early stopping based upon validation loss was required as the QRNN would begin overfitting. By applying a small amount of zoneout ($p=0.1$), no early stopping is required and the QRNN achieves competitive levels of perplexity to the variational LSTM of \citet{Gal2015}, which had variational inference based dropout of 0.2 applied recurrently. Their best performing variation also used Monte Carlo (MC) dropout averaging at test time of 1000 different masks, making it computationally more expensive to run. When training on the PTB dataset with an NVIDIA K40 GPU, we found that the QRNN is substantially faster than a standard LSTM, even when comparing against the optimized cuDNN LSTM. In Figure \ref{fig:QRNNspeed} we provide a breakdown of the time taken for Chainer's default LSTM, the cuDNN LSTM, and QRNN to perform a full forward and backward pass on a single batch during training of the RNN LM on PTB. For both LSTM implementations, running time was dominated by the RNN computations, even with the highly optimized cuDNN implementation. 
For the QRNN implementation, however, the ``RNN'' layers are no longer the bottleneck. Indeed, there are diminishing returns from further optimization of the QRNN itself as the softmax and optimization overhead take equal or greater time. Note that the softmax, over a vocabulary size of only 10,000 words, is relatively small; for tasks with larger vocabularies, the softmax would likely dominate computation time. It is also important to note that the cuDNN library's RNN primitives do not natively support any form of recurrent dropout. That is, running an LSTM that uses a state-of-the-art regularization scheme at cuDNN-like speeds would likely require an entirely custom kernel. \subsection{Character-level Neural Machine Translation} We evaluate the sequence-to-sequence QRNN architecture described in \ref{sec:seq2seq} on a challenging neural machine translation task, IWSLT German--English spoken-domain translation, applying fully character-level segmentation. This dataset consists of 209,772 sentence pairs of parallel training data from transcribed TED and TEDx presentations, with a mean sentence length of 103 characters for German and 93 for English. We remove training sentences with more than 300 characters in English or German, and use a unified vocabulary of 187 Unicode code points. Our best performance on a development set (TED.tst2013) was achieved using a four-layer encoder--decoder QRNN with 320 units per layer, no dropout or $L^2$ regularization, and gradient rescaling to a maximum magnitude of 5. Inputs were supplied to the encoder reversed, while the encoder convolutions were not masked. The first encoder layer used convolutional filter width $k=6$, while the other encoder layers used $k=2$. Optimization was performed for 10 epochs on minibatches of 16 examples using Adam \citep{kingma2014adam} with $\alpha=0.001$, $\beta_1=0.9$, $\beta_2=0.999$, and $\epsilon=10^{-8}$. Decoding was performed using beam search with beam width 8 and length normalization $\alpha=0.6$. The modified log-probability ranking criterion is provided in the appendix. Results using this architecture were compared to an equal-sized four-layer encoder--decoder LSTM with attention, applying dropout of 0.2. We again optimized using Adam; other hyperparameters were equal to their values for the QRNN and the same beam search procedure was applied. Table \ref{table:MTresults} shows that the QRNN outperformed the character-level LSTM, almost matching the performance of a word-level attentional baseline. \section{Related Work} Exploring alternatives to traditional RNNs for sequence tasks is a major area of current research. Quasi-recurrent neural networks are related to several such recently described models, especially the strongly-typed recurrent neural networks (T-RNN) introduced by \cite{Balduzzi2016}. While the motivation and constraints described in that work are different, \cite{Balduzzi2016}'s concepts of ``learnware'' and ``firmware'' parallel our discussion of convolution-like and pooling-like subcomponents. As the use of a fully connected layer for recurrent connections violates the constraint of ``strong typing'', all strongly-typed RNN architectures (including the T-RNN, T-GRU, and T-LSTM) are also quasi-recurrent. However, some QRNN models (including those with attention or skip-connections) are not ``strongly typed''. In particular, a T-RNN differs from a QRNN as described in this paper with filter size 1 and \emph{f}-pooling only in the absence of an activation function on $\mathbf{z}$. 
Similarly, T-GRUs and T-LSTMs differ from QRNNs with filter size 2 and \emph{fo}- or \emph{ifo}-pooling respectively in that they lack $\tanh$ on $\mathbf{z}$ and use $\tanh$ rather than sigmoid on $\mathbf{o}$. The QRNN is also related to work in hybrid convolutional--recurrent models. \citet{Zhou2015b} apply CNNs at the word level to generate $n$-gram features used by an LSTM for text classification. \citet{Xiao2016} also tackle text classification by applying convolutions at the character level, with a stride to reduce sequence length, then feeding these features into a bidirectional LSTM. A similar approach was taken by \citet{Lee2016} for character-level machine translation. Their model's encoder uses a convolutional layer followed by max-pooling to reduce sequence length, a four-layer highway network, and a bidirectional GRU. The parallelism of the convolutional, pooling, and highway layers allows training speed comparable to subword-level models without hard-coded text segmentation. The QRNN encoder--decoder model shares the favorable parallelism and path-length properties exhibited by the ByteNet \citep{Kalchbrenner2016}, an architecture for character-level machine translation based on residual convolutions over binary trees. Their model was constructed to achieve three desired properties: parallelism, linear-time computational complexity, and short paths between any pair of words in order to better propagate gradient signals. \section{Conclusion} Intuitively, many aspects of the semantics of long sequences are context-invariant and can be computed in parallel (e.g., convolutionally), but some aspects require long-distance context and must be computed recurrently. Many existing neural network architectures either fail to take advantage of the contextual information or fail to take advantage of the parallelism. QRNNs exploit both parallelism and context, exhibiting advantages from both convolutional and recurrent neural networks. QRNNs have better predictive accuracy than LSTM-based models of equal hidden size, even though they use fewer parameters and run substantially faster. Our experiments show that the speed and accuracy advantages remain consistent across tasks and at both word and character levels. Extensions to both CNNs and RNNs are often directly applicable to the QRNN, while the model's hidden states are more interpretable than those of other recurrent architectures as its channels maintain their independence across timesteps. We believe that QRNNs can serve as a building block for long-sequence tasks that were previously impractical with traditional RNNs. \bibliographystyle{iclr2017_conference} \setcounter{figure}{0} \renewcommand{\thefigure}{A\arabic{figure}} \clearpage \section*{Appendix} \subsection*{Beam search ranking criterion} The modified log-probability ranking criterion we used in beam search for translation experiments is: \begin{align} \begin{split}\label{length-bonus} \log(P_{\rm cand})&=\frac{T+\alpha}{T}\hdots\frac{T_{\rm trg}+\alpha}{T_{\rm trg}}\sum_{i=1}^T \log(p(w_i|w_1\hdots w_{i-1})), \end{split} \end{align} where $\alpha$ is a length normalization parameter \citep{Wu2016}, $w_i$ is the $i$\textsuperscript{th} output character, and $T_{\rm trg}$ is a ``target length'' equal to the source sentence length plus five characters. 
This reduces at $\alpha=0$ to ordinary beam search with probabilities: \begin{align} \begin{split}\label{alpha-zero} \log(P_{\rm cand})&=\sum_{i=1}^T \log(p(w_i|w_1\hdots w_{i-1})), \end{split} \end{align} and at $\alpha=1$ to beam search with probabilities normalized by length (up to the target length): \begin{align} \begin{split}\label{alpha-one} \log(P_{\rm cand})&\sim \frac{1}{T}\sum_{i=1}^T \log(p(w_i|w_1\hdots w_{i-1})). \end{split} \end{align} Conveniently, this ranking criterion can be computed at intermediate beam-search timesteps, obviating the need to apply a separate reranking on complete hypotheses. \end{document}
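A minimal Python sketch of this ranking criterion (our reading of the formula above, with the length bonus taken as the running product of $(t+\alpha)/t$ for $t$ from the current hypothesis length $T$ up to $T_{\rm trg}$; the function name and signature are ours):

def ranking_score(log_probs, target_len, alpha):
    """Length-bonused log-probability used to rank beam hypotheses.

    log_probs:  per-character log p(w_i | w_1 .. w_{i-1}) of the hypothesis
    target_len: source sentence length plus five characters (T_trg)
    alpha:      length-normalization parameter
    """
    T = len(log_probs)
    bonus = 1.0
    for t in range(T, target_len + 1):   # empty product (bonus stays 1) once T > T_trg
        bonus *= (t + alpha) / t
    return bonus * sum(log_probs)

# alpha = 0 leaves the plain sum of log-probabilities; alpha = 1 telescopes the
# bonus to (target_len + 1) / T, i.e. normalization by length up to the target.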
Understanding Image and Text Simultaneously: a Dual Vision-Language Machine Comprehension Task
1612.07833
Table 4: The impact of model sizes on MCIC-COCO accuracy for the FFNN model.
[ "dim", "hidden-1", "hidden-2", "Dev", "Test" ]
[ [ "FFNN", "FFNN", "FFNN", "FFNN", "FFNN" ], [ "64", "64", "16", "[BOLD] 56.5", "[BOLD] 53.9 ±0.5" ], [ "256", "64", "16", "56.3", "55.1 ±0.5" ], [ "256", "256", "64", "55.8", "54.3 ±0.5" ], [ "512", "512", "128", "54.1", "52.5 ±0.5" ], [ "1024", "1024", "256", "52.2", "51.3 ±0.5" ], [ "2048", "2048", "512", "50.7", "50.7 ±0.5" ], [ "Vec2seq+FFNN (with default [ITALIC] λgen=1.0)", "Vec2seq+FFNN (with default [ITALIC] λgen=1.0)", "Vec2seq+FFNN (with default [ITALIC] λgen=1.0)", "Vec2seq+FFNN (with default [ITALIC] λgen=1.0)", "Vec2seq+FFNN (with default [ITALIC] λgen=1.0)" ], [ "64", "64", "16", "55.3", "54.0 ±0.5" ], [ "256", "64", "16", "60.5", "59.0 ±0.5" ], [ "256", "256", "64", "61.2", "58.8 ±0.5" ], [ "512", "512", "128", "61.6", "59.6 ±0.5" ], [ "1024", "1024", "256", "62.5", "60.8 ±0.5" ], [ "2048", "2048", "512", "[BOLD] 63.4", "[BOLD] 60.8 ±0.5" ] ]
For the FFNN models, contrary to expectations, bigger network sizes lead to decreasing accuracy. On the other hand, for Vec2seq+FFNN models, accuracy increases with increased size in model parameters, up until the embedding dimension of the RNN model matches the embedding dimension of the Inception model, at 2048.
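The flattened fields above pair up positionally: each inner list is one row under the listed column names, and a row whose cells all repeat the same string marks a section header. A short sketch of rebuilding the table (hypothetical helper, written against the two lists as shown; only the first data rows are spelled out):

columns = ["dim", "hidden-1", "hidden-2", "Dev", "Test"]
content = [
    ["FFNN"] * 5,
    ["64", "64", "16", "[BOLD] 56.5", "[BOLD] 53.9 ±0.5"],
    ["256", "64", "16", "56.3", "55.1 ±0.5"],
    # ... remaining rows exactly as listed in table_content_values ...
]

for row in content:
    if len(set(row)) == 1:        # repeated cell marks a section-header row
        print(f"== {row[0]} ==")
    else:
        print(" | ".join(f"{c}: {v}" for c, v in zip(columns, row)))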
\documentclass[10pt,twocolumn,letterpaper]{article} \usepackage[ruled,vlined]{algorithm2e} \DeclareMathOperator{\Ob}{\mathbf{O}} \DeclareMathOperator{\ab}{\mathbf{a}} \DeclareMathOperator{\cbb}{\mathbf{c}} \DeclareMathOperator{\cpb}{\mathbf{c}^\prime} \DeclareMathOperator{\ib}{\mathbf{i}} \DeclareMathOperator{\tb}{\mathbf{t}} \DeclareMathOperator{\tpb}{\mathbf{t}^\prime} \DeclareMathOperator{\ub}{\mathbf{u}} \DeclareMathOperator{\wb}{\mathbf{w}} \DeclareMathOperator{\thetab}{\mathbf{\Theta}} \DeclareMathOperator*{\argmax}{\mathop{\mathrm{argmax}}} \newcommand{\cbr}[1]{\left\{#1\right\}} \def\sm{\small} \def\pairci{\langle\mbox{\it image}, \mbox{\it\{caption(s)\} }\rangle} \def\spaircpc{\langle \cpb, \cbb \rangle} \def\spairci{\langle \ib, \cbb \rangle} \def\spaircti{\langle \ib, \cpb \rangle} \def\spaircpi{\langle \ib, \cpb \rangle} \def\mcic{\textsc{MC}$_{\mbox{\scriptsize IC}}\;$} \def\mciccoco{\textsc{MCIC-COCO}$\;$} \def\ffnnb{FFNN$_{\mbox{\scriptsize 2-class}}^{\mbox{\scriptsize argmax 1..5}}\;$} \def\seqff{Vec2seq+FFNN$\;$} \cvprfinalcopy % *** Uncomment this line for the final submission \def\cvprPaperID{3121} % *** Enter the CVPR Paper ID here \def\httilde{\mbox{\tt\raisebox{-.5ex}{\symbol{126}}}} \begin{document} \title{Understanding Image and Text Simultaneously: a Dual Vision-Language Machine Comprehension Task} \author{Nan Ding \\ Google\\ {\tt\small dingnan@google.com} \and Sebastian Goodman \\ Google\\ {\tt\small seabass@google.com}\\ \and Fei Sha \\ Google\\ {\tt\small fsha@google.com}\\ \and Radu Soricut \\ Google\\ {\tt\small rsoricut@google.com} } \maketitle \begin{abstract} We introduce a new multi-modal task for computer systems, posed as a combined vision-language comprehension challenge: identifying the most suitable \emph{text} describing a scene, given several similar options. Accomplishing the task entails demonstrating comprehension beyond just recognizing ``keywords'' (or key-phrases) and their corresponding visual concepts. Instead, it requires an alignment between the representations of the two modalities that achieves a visually-grounded ``understanding'' of various linguistic elements and their dependencies. This new task also admits an easy-to-compute and well-studied metric: the accuracy in detecting the true target among the decoys. The paper makes several contributions: an effective and extensible mechanism for generating decoys from (human-created) image captions; an instance of applying this mechanism, yielding a large-scale machine comprehension dataset (based on the COCO images and captions) that we make publicly available; human evaluation results on this dataset, informing a performance upper-bound; and several baseline and competitive learning approaches that illustrate the utility of the proposed task and dataset in advancing both image and language comprehension. We also show that, in a multi-task learning setting, the performance on the proposed task is positively correlated with the end-to-end task of image captioning. \end{abstract} \section{Introduction} There has been a great deal of interest in multi-modal artificial intelligence research recently, bringing together the fields of Computer Vision and Natural Language Processing. This interest has been fueled in part by the availability of many large-scale image datasets with textual annotations. Several vision+language tasks have been proposed around these datasets~\cite{hodosh13framing,karpathy2014deep,coco,antol15vqa}. 
Image Captioning~\cite{hodosh13framing,donahue2014long,karpathy2014deep,fang2014captions,kiros2014unifying,vinyals2014show,mao15mrnn,xu2015show} and Visual Question Answering~\cite{malinoski14qa,malinowski15neural,tu14video,antol15vqa,yu15madlib,wu16external,ren15visual,gao15machine,yang16san,zhu16visual7w,lin16leverage} have in particular attracted a lot of attention. The performances on these tasks have been steadily improving, owing much to the wide use of deep learning architectures~\cite{bengio2009book}. A central theme underlying these efforts is the use of natural language to identify how much visual information is perceived and understood by a computer system. Presumably, a system that understands a visual scene well enough ought to be able to describe what the scene is about (thus ``captioning'') or provide correct and visually-grounded answers when queried (thus ``question-answering''). In this paper, we argue for directly measuring how well the semantic representations of the visual and linguistic modalities align (in some abstract semantic space). For instance, given an image and two captions -- a correct one and an incorrect yet-cunningly-similar one -- can we both qualitatively and quantitatively measure the extent to which humans can dismiss the incorrect one but computer systems blunder? Arguably, the degree of the modal alignment is a strong indicator of task-specific performance on any vision+language task. Consequentially, computer systems that can learn to maximize and exploit such alignment should outperform those that do not. We take a two-pronged approach for addressing this issue. First, we introduce a new and challenging Dual Machine Comprehension (DMC) task, in which a computer system must identify the most suitable textual description from several options: one being the target and the others being ``adversarialy''-chosen decoys. All options are free-form, coherent, and fluent sentences with \emph{high degrees of semantic similarity} (hence, they are ``cunningly similar''). A successful computer system has to demonstrate comprehension beyond just recognizing ``keywords'' (or key phrases) and their corresponding visual concepts; they must arrive at a coinciding and visually-grounded understanding of various linguistic elements and their dependencies. What makes the DMC task even more appealing is that it admits an easy-to-compute and well-studied performance metric: the accuracy in detecting the true target among the decoys. Second, we illustrate how solving the DMC task benefits related vision+language tasks. To this end, we render the DMC task as a classification problem, and incorporate it in a multi-task learning framework for end-to-end training of joint objectives. 
Our work makes the following contributions: (1) an effective and extensible algorithm for generating decoys from human-created image captions (Section~\ref{sec:creation}); (2) an instantiation of applying this algorithm to the COCO dataset~\cite{coco}, resulting in a large-scale dual machine-comprehension dataset that we make publicly available (Section~\ref{sec:mcic-coco}); (3) a human evaluation on this dataset, which provides an upper-bound on performance (Section~\ref{sec:human_eval}); (4) a benchmark study of baseline and competitive learning approaches (Section~\ref{sec:results}), which underperform humans by a substantial gap (about 20\% absolute); and (5) a novel multi-task learning model that simultaneously learns to solve the DMC task and the Image Captioning task (Sections~\ref{sec:seq+ffnn} and \ref{sec:lambda_gen}). Our empirical study shows that performance on the DMC task positively correlates with performance on the Image Captioning task. Therefore, besides acting as a standalone benchmark, the new DMC task can be useful in improving other complex vision+language tasks. Both suggest the DMC task as a fruitful direction for future research. \section{Related work} Image understanding is a long-standing challenge in computer vision. There has recently been a great deal of interest in bringing together vision and language understanding. Particularly relevant to our work are image captioning (IC) and visual question-answering (VQA). Both have instigated a large body of publications, a detailed exposition of which is beyond the scope of this paper. Interested readers should refer to two recent surveys~\cite{bernardi16survey,wu16survey}. In IC tasks, systems attempt to generate a fluent and correct sentence describing an input image. IC systems are usually evaluated on how well the generated descriptions align with human-created captions (ground-truth). The language generation model of an IC system plays a crucial role; it is often trained such that the probabilities of the ground-truth captions are maximized (MLE training), though more advanced methods based on techniques borrowed from Reinforcement Learning have been proposed~\cite{mixer15}. To provide visual grounding, image features are extracted and injected into the language model. Note that language generation models need to both decipher the information encoded in the visual features, and model natural language generation. In VQA tasks, the aim is to answer an input question correctly with respect to a given input image. In many variations of this task, answers are limited to single words or a binary response (``yes'' or ``no'')~\cite{antol15vqa}. The Visual7W dataset~\cite{zhu16visual7w} contains anaswers in a richer format such as phrases, but limits questions to ``wh-''style (what, where, who, etc). The Visual Genome dataset~\cite{krishnavisualgenome}, on the other hand, can potentially define more complex questions and answers due to its extensive textual annotations. Our DMC task is related but significantly different. In our task, systems attempt to discriminate the best caption for an input image from a set of captions --- all but one are decoys. Arguably, it is a form of VQA task, where the same default (thus uninformative) question is asked: \emph{Which of the following sentences best describes this image?} However, unlike current VQA tasks, choosing the correct answer in our task entails a deeper ``understanding'' of the available answers. 
Thus, to perform well, a computer system needs to understand both complex scenes (visual understanding) and complex sentences (language understanding), \emph{and} be able to reconcile them. The DMC task admits a simple classification-based evaluation metric: the accuracy of selecting the true target. This is a clear advantage over the IC tasks, which often rely on imperfect metrics such as BLEU~\cite{papineni-etal:2002}, ROUGE~\cite{lin-och:2004}, METEOR~\cite{meteor}, CIDEr~\cite{cider}, or SPICE~\cite{spice}. Related to our proposal is the work in~\cite{hodosh13framing}, which frames image captioning as a ranking problem. While both share the idea of selecting captions from a large set, our framework has some important and distinctive components. First, we devise an algorithm for smart selection of candidate decoys, with the goal of selecting those that are sufficiently similar to the true targets to be challenging, and yet still be reliably identifiable by human raters. Second, we have conducted a thorough human evaluation in order to establish a performance ceiling, while also quantifying the level to which current learning systems underperform. Lastly, we show that there exists a positive correlation between the performance on the DMC task and the performance on related vision+language tasks by proposing and experimenting with a multi-task learning model. Our work is also substantially different from their more recent work~\cite{hodosh16eval}, where only one decoy is considered and its generation is either random, or focusing on visual concept similarity (``switching people or scenes'') instead of our focus on both linguistic surface and paragraph vector embedding similarity. \section{The Dual Machine Comprehension Task} \subsection{Design overview} We propose a new multi-modal machine comprehension task to examine how well visual and textual semantic understanding are aligned. Given an image, human evaluators or machines must accurately identify the best sentence describing the scene from several decoy sentences. Accuracy on this task is defined as the percentage that the true targets are identified. It seems straightforward to construct a dataset for this task, as there are several existing datasets which are composed of images and their (multiple) ground-truth captions, including the popular COCO dataset~\cite{coco}. Thus, for any given image, it appears that one just needs to use the captions corresponding to other images as decoys. However, this na\"{i}ve approach could be overly simplistic as it is provides no control over the properties of the decoys. Specifically, our desideratum is to recruit \emph{challenging} decoys that are sufficiently similar to the targets. However, for a small number of decoys, e.g. 4-5, randomly selected captions could be significantly different from the target. The resulting dataset would be too ``easy'' to shed any insight on the task. Since we are also interested in human performance on this task, it is thus impractical to increase the number of decoys to raise the difficulty level of the task at the expense of demanding humans to examine tediously and unreliably a large number of decoys. In short, we need an \emph{automatic procedure to reliably create difficult sets of decoy captions} that are sufficiently similar to the targets. We describe such a procedure in the following. While it focuses on identifying decoy captions, the main idea is potentially adaptable to other settings. 
The algorithm is flexible in that the ``difficulty" of the dataset can be controlled to some extent through the algorithm's parameters. \subsection{Algorithm to create an MC-IC dataset} \label{sec:creation} The main idea behind our algorithm is to carefully define a ``good decoy''. The algorithm exploits recent advances in paragraph vector (PV) models~\cite{le-mikolov:2014}, while also using linguistic surface analysis to define similarity between two sentences. Due to space limits, we omit a detailed introduction of the PV model. It suffices to note that the model outputs a continuously-valued embedding for a sentence, a paragraph, or even a document. The pseudo-code is given in Algorithm~\ref{aMCIC} (the name \textsc{MC-IC} stands for ``Machine-Comprehension for Image \& Captions''). As input, the algorithm takes a set $C$ of $\pairci$ pairs\footnote{On the order of at least hundreds of thousands of examples; smaller sets result in less challenging datasets.}, as those extracted from a variety of publicly-available corpora, including the COCO dataset~\cite{coco}. The output of the algorithm is the \mcic set. \begin{algorithm}[t] \caption{\textsc{MC-IC}($C$, $N$, $\mbox{\it Score}$)} \label{aMCIC} \SetAlgoLined \KwResult{Dataset \mcic} $\textsf{PV} \gets \textsc{Optimize-PV}(C)$ \\ $\lambda \gets \textsc{Optimize-Score}(\textsf{PV}, C, \mbox{\it Score})$ \\ \mcic$\gets \emptyset$ \\ $nr\_decoys = 4$ \\ \For{$\langle \ib_i, \cbb_i \rangle \in C$}{ $A\gets []$ \\ $T_{\cbb_i} \gets \textsf{PV}(\cbb_i)[1..N]$ \\ \For{$\cbb_d\in T_{\cbb_i}$}{ $score \gets \mbox{\it Score}(\textsf{PV}, \lambda, \cbb_d, \cbb_i)$ \\ \If{$score > 0$}{ $A.\mbox{\bf append}(\langle score, \cbb_d\rangle)$ } } \If{$|A| \geq nr\_decoys$}{ $R\gets \mbox{\bf descending-sort}(A)$ \\ \For{$l \in [1..nr\_decoys]$}{ $\langle score, \cbb_d\rangle\gets R[l]$ \\ \mcic$\gets$ \mcic$\cup\{(\langle \ib_i, \cbb_d\rangle, \mbox{\bf false})\}$ \\ } \mcic$\gets$ \mcic$\cup\{(\langle \ib_i, \cbb_i \rangle, \mbox{\bf true})\}$ \\ } } \end{algorithm} Concretely, the \textsc{MC-IC} Algorithm has three main arguments: a dataset $C = \{ \langle \ib_i, \cbb_i \rangle | 1 \leq i \leq m\}$ where $\ib_i$ is an image and $\cbb_i$ is its ground-truth caption\footnote{For an image with multiple ground-truth captions, we split it to multiple instances with the same image for each one of the ground-truth captions; the train/dev/test splits are done such that they contain disjoint {\em image} sets, as opposed to disjoint {\em instance} sets.}; an integer $N$ which controls the size of $\cbb_i$'s neighborhood in the embedding space defined by the paragraph vector model \textsf{PV}; and a function \textsf{Score} which is used to score the $N$ items in each such neighborhood. The first two steps of the algorithm tune several hyperparameters. The first step finds optimal settings for the \textsf{PV} model given the dataset $C$. The second finds a weight parameter $\lambda$ given \textsf{PV}, dataset $C$, and the \textsf{Score} function. These hyperparameters are dataset-specific. Details are discussed in the next section. The main body of the algorithm, the outer $\textbf{for}$ loop, generates a set of $nr\_decoys$ (4 here) decoys for each ground-truth caption. It accomplishes this by first extracting $N$ candidates from the \textsf{PV} neighborhood of the ground-truth caption, excluding those that belong to the same image. In the inner $\textbf{for}$ loop, it computes the similarity of each candidate to the ground-truth and stores them in a list $A$. 
If enough candidates are generated, the list is sorted in descending order of score. The top $nr\_decoys$ captions are marked as ``decoys'' (\ie \textbf{false}), while the ground-truth caption is marked as ``target'' (\ie \textbf{true}). The score function $\textsf{Score}(\textsf{PV}, \lambda, \cpb, \cbb)$ is a crucial component of the decoy selection mechanism. Its definition leverages our linguistic intuition by combining linguistic surface similarity, $\textsf{sim}_{\textsc{surf}}(\cpb, \cbb)$, with the similarity suggested by the embedding model, $\mbox{\textsf{sim}}_{\textsf{PV}}(\cpb, \cbb)$: \begin{equation} \textsf{Score}\!=\! \left\{\!\! \arraycolsep=1.5pt \begin{array}{ll} 0 & \! \mbox{if \textsf{sim}}_{\textsc{surf}}\!\ge\!L \\ \lambda\, \mbox{\textsf{sim}}_{\textsf{PV}}\!+\!(1\!-\!\lambda)\, \mbox{\textsf{sim}}_{\textsc{surf}} & \text{otherwise} \end{array} \right. \label{eq:score} \end{equation} where the common argument $(\cpb, \cbb)$ is omitted. The higher the similarity score, the more likely that $\cpb$ is a good decoy for $\cbb$. Note that if the surface similarity is above the threshold $L$, the function returns 0, flagging that the two captions are too similar to be used as a pair of target and decoy. In this work, $\mbox{\textsf{sim}}_{\textsc{surf}}$ is computed as the BLEU score between the inputs~\cite{papineni-etal:2002} (with the brevity penalty set to 1). The embedding similarity, $\mbox{\textsf{sim}}_{\textsf{PV}}$, is computed as the cosine similarity between the two in the PV embedding space. \subsection{The \mciccoco dataset} \label{sec:mcic-coco} We applied the \textsc{MC-IC} Algorithm to the COCO dataset~\cite{coco} to generate a dataset for the visual-language dual machine comprehension task. The dataset is called \mciccoco and it is made publicly available\footnote{\tt http://www.github.com/google/mcic-coco}. We describe the details of this dataset below. We set the neighborhood size at $N=500$, and the threshold at $L=0.5$ (see Eq.~\ref{eq:score}). As the COCO dataset has a large body of images (thus captions) focusing on a few categories (such as sports activities), this threshold is important in discarding significantly similar captions to be decoys -- otherwise, even human annotators will experience difficulty in selecting the ground-truth captions. The hyperparameters of the \textsf{PV} model, {\tt dim} (embedding dimension) and {\tt epochs} (number of training epochs), are optimized in the $\textsc{Optimize-PV}$ step of the \textsc{MC-IC} Algorithm. The main idea is to learn embeddings such that ground-truth captions from the same image have similar embeddings. Concretely, the optimization step is a grid-search over the hyper-parameters of the PV-DBOW model~\cite{le-mikolov:2014}, which we train using a softmax loss. Since there are multiple ground-truth captions associated with each image, the dataset is denoted by $C = \{ \langle \ib_{r_c}, \cbb_{r_c} \rangle | 1 \leq r \leq n, 1 \leq c \leq s_r \}$, where $r$ is the index for each unique image ($\ib_{r_c} \equiv \ib_r$), $n$ is the total number images and $s_r > 1$ is the number of unique captions for image $r$. The total number of data examples $m = \sum_{r=1}^n s_r$. Here the hyper-parameters are searched on a grid to minimize ``multiple ground-truth score'' rank (mgs-rank): the average rank (under the cosine-distance score) between $\cbb_{r_c}$ and $\{ \cbb_{r_l} | 1 \leq l \leq s_r, l\not= c\}$. 
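A minimal sketch of the scoring function in Eq.~\ref{eq:score} (our illustration; \texttt{bleu} and \texttt{pv} are stand-ins for a BLEU scorer with the brevity penalty set to 1 and for the trained \textsf{PV} lookup, neither of which is shown here):
\begin{verbatim}
import numpy as np

def score(pv, bleu, lam, cand, truth, L=0.5):
    """Decoy score: zero out candidates that are too close on the surface,
    otherwise mix paragraph-vector and surface (BLEU) similarity."""
    sim_surf = bleu(cand, truth)
    if sim_surf >= L:
        return 0.0                 # too similar to the target to be a decoy
    u, v = pv(cand), pv(truth)
    sim_pv = float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))
    return lam * sim_pv + (1.0 - lam) * sim_surf
\end{verbatim}
Candidates are then ranked by this score and the top four are kept as decoys, as in Algorithm~\ref{aMCIC}.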
The lower the mgs-rank, the better the resulting paragraph vector model is at modeling multiple ground-truths for a given image as being similar. As such, our grid-search over the \mciccoco dev dataset yields a minimum mgs-rank at {\tt dim}=1024 and {\tt epochs}=5. Similarly, the $\textsc{Optimize-Score}(\textsf{PV}, \textsf{Score})$ step is a grid-search over the $\lambda$ parameter of the \textsf{Score} function, given a paragraph vector embedding model \textsf{PV} and a dataset $C$ of captions and images, as before. A well-chosen $\lambda$ will ensure the multiple ground-truth captions for the same image will be measured with high degree of similarity with the \textsf{Score} function. The $\lambda\in [0,1]$ parameter is searched on a grid to minimize the ``weighted multiple ground-truths score'' rank (wmgs-rank): the average rank (under the \textsf{Score}) between $\cbb_{r_c}$ and $\{ \cbb_{r_l} | 1 \leq l \leq s_r, l\not= c\}$, relative to the top $N$-best closest-cosine neighbors in \textsf{PV}. For example, if given five ground-truths for image $\ib_r$, and when considering $\cbb_{r_1}$, ground-truths $\cbb_{r_2}$ to $\cbb_{r_5}$ are ranking at \#4, \#10, \#16, and \#22 (in top-500 closest-cosine neighbors in \textsf{PV}), then wmgs-rank$(\cbb_{r_1})=13$ (the average of these ranks). Our grid-search over the \mciccoco dev dataset yields a minimum wmgs-rank at $\lambda$=0.3. The resulting \mciccoco dataset has 574,315 instances that are in the format of $\{i: (\langle \ib_i, \cbb_i^j \rangle , \mbox{label}_i^j), j = 1 \ldots 5\}$ where $\mbox{label}_i^j\in \{\mbox{\bf true}, \mbox{\bf false}\}$. For each such instance, there is one and only one $j$ such that the label is \textbf{true}. We have created a train/dev/test split such that all of the instances for the same image occur in the same split. Table~\ref{table:mcic_splits} reports the basic statistics for the dataset. \subsection{Human performance on \mciccoco} \label{sec:human_eval} \noindent \textbf{Setup}\ To measure how well humans can perform on the DMC task, we randomly drew 1,000 instances from the \mciccoco dev set and submitted those instances to human ``raters''\footnote{Raters are vetted, screened and tested before working on any tasks; requirements include native-language proficiency level.} via a crowd-sourcing platform. Three independent responses from 3 different rates were gathered for each instance, for a total of 3,000 responses. To ensure diversity, raters were prohibited from evaluating more than six instances or from responding to the same task instance twice. In total, 807 distinct raters were employed. Raters were shown one instance at a time. They were shown the image and the five caption choices (ground-truth and four decoys, in randomized order) and were instructed to choose the best caption for the image. Before starting evaluation, the raters were trained with sample instances from the \emph{train} dataset, disjoint from the \emph{dev} dataset on which their performance data were collected. The training process presents an image and five sentences, of which the ground-truth caption is highlighted. In addition, specific instructions and clarification were given to the raters on how to choose the best caption for the image. In Figure~\ref{fig:rater-training}, we present three instances on how the rater instructions were presented for rater training. 
\noindent \textbf{Quantitative results}\ We assessed human performance in two metrics: (1) Percentage of correct rater responses (1-human system): \textbf{81.1\%} (2432 out of 3000); (2) Percentage of instances with at least $50\%$ (\ie 2) correct responses (3-human system): \textbf{82.8\%} (828 out of 1000). Table~\ref{table:human_eval_instances} gives a detailed breakdown on the statistics related to the inter-rater (dis)agreement. The first row, with accuracy at 67.3\%, suggests that this is the level at which the correct answer is obvious (\ie, percentage of ``easy'' instances). The second row, at 82.8\%, indicates that this is the performance ceiling in terms of accuracy that can be expected for the \mciccoco dataset; at the same time, it suggests that the difference between 67.3\% and 82.8\% (i.e., about 15\% of instances) is caused by ``difficult'' instances. Finally, the third row, at 93.1\%, indicates that the level of ``unanswerable'' instances is somewhere in the 10\%-15\% range (combining the increase from 82.8\% to 93.1\% and the remaining 6.9\% that no one gets right). We will investigate those instances in detail in the future. The COCO dataset has a significant number of captions that fit more than one image in the dataset, given the biased concentration on certain categories. Thus, we suspect that even with our threshold-check (cf. the introduction of $L$ in Eq.~\ref{eq:score}), our procedure might have failed to filter out some impossible-to-distinguish decoys. \noindent \textbf{Qualitative examples}\ We present in Figure~\ref{fig:examples} several example instances from the \mciccoco dataset. The first example illustrates how certain aspects of VQA are subsumed by the DMC task: in order to correctly choose answer 3, a system needs to implicitly answer questions like ``how many people are in the image?'' (answer: three, thus choices 4. and 5. are wrong), and ``what are the people doing?'' (answer: riding horses, thus choices 1. and 2. are wrong). The second example illustrates the extent to which a successful computer system needs to be able to differentiate between ``standing'' and ``rolling'' in a visually-grounded way, presumably via a pose model~\cite{yao-li:2010} combined with a translation model between poses and their verbal correspondents. Last but not least, the third examples illustrates a difficult case, which led to human annotator disagreement in our annotation process (both choice 3. and 5. were selected by different annotators). \section{Learning Methods} \label{sLearning} We describe several learning methods for the dual machine comprehension (DMC) task with the \mcic dataset. We start with linear models which will be used as baselines. We then present several neural-network based models. In particular, we describe a novel, hybrid neural network model that combines the feedforward architecture and the seq2seq architecture~\cite{sutskever-etal:2014} for multi-task learning of the DMC task and the image captioning task. This new model achieves the best performance in both tasks. \subsection{Linear models as baselines} \noindent \textbf{Regression}\ To examine how well the two embeddings are aligned in ``semantic understanding space'', a simple approach is to assume that the learners do not have access to the decoys. Instead, by accessing the ground-truth captions only, the models learn a linear regressor from the image embeddings to the target captions' embeddings (``forward regression''), or from the captions to the images (``backward regression''). 
With the former approach, referred as \textsf{Baseline-I2C}, we check whether the predicted caption for any given image is closest to its true caption. With the latter, referred as \textsf{Baseline-C2I}, we check whether the predicted image embedding by the ground-truth caption is the closest among predicted ones by decoy captions to the real image embeddings. \noindent \textbf{Linear classifier}\ Our next approach \textsf{Baseline-LinM} is a linear classifier learned to discriminate true targets from the decoys. Specifically, we learn a linear discriminant function $f(\ib, \cbb; \thetab) = \ib^{\top}\thetab \cbb$ where $\thetab$ is a matrix measuring the compatibility between two types of embeddings, cf. ~\cite{frome2013devise}. The loss function is then given by \begin{equation} L(\thetab) = \sum_i [ \max_{j \ne j^*} f(\ib_i, \cbb_i^j; \thetab) - f(\ib_i, \cbb_i^{j^*}; \thetab) ]_+ \end{equation} where $[\ ]_{+}$ is the hinge function and $j$ indexes over all the available decoys and $i$ indexes over all training instances. The optimization tries to increase the gap between the target $\cbb_i^{j^*}$ and the worst ``offending'' decoy. We use stochastic (sub)gradient methods to optimize $\thetab$, and select the best model in terms of accuracy on the \mciccoco development set. \subsection{Feedforward Neural Network (FFNN) models} \label{sec:ffnn} To present our neural-network--based models, we use the following notations. Each training instance pair is a tuple $\langle \ib_i, \cbb_i^j\rangle$, where $\ib$ denotes the image, and $\cbb_i^j$ denotes the caption options, which can either be the target or the decoys. We use a binary variable $y_{ijk} \in \{0,1\}$ to denote whether $j$-th caption of the instance $i$ is labeled as $k$, and $\sum_k y_{ijk} = 1$. We first employ the standard feedforward neural-network models to solve the DMC task on the \mciccoco dataset. For each instance pair $\langle \ib_i, \cbb_i^j\rangle$, the input to the neural network is an embedding tuple $\langle \text{DNN}(\ib_i; \Gamma), \text{Emb}(\cbb_i^j; \Omega) \rangle$, where $\Gamma$ denotes the parameters of a deep convolutional neural network $\text{DNN}$. $\text{DNN}$ takes an image and outputs an image embedding vector. $\Omega$ is the embedding matrix, and $\text{Emb}(.)$ denotes the mapping from a list of word IDs to a list of embedding vectors using $\Omega$. The loss function for our FFNN is given by: \begin{dmath} L(\Gamma, \Omega,\ub)\!=\! \sum_{i, j, k} y_{ijk} \log \; \text{FN}_k(\text{DNN}(\ib_i; \Gamma), \text{Emb}(\cbb_i^j; \Omega); \ub) \label{eq:closs} \end{dmath} \noindent where $\text{FN}_k$ denotes the $k$-th output of a feedforward neural network, and $\sum_k \text{FN}_k(.) = 1$. Our architecture uses a two hidden-layer fully connected network with Rectified Linear hidden units, and a softmax layer on top. The formula in Eq.~\ref{eq:closs} is generic with respect to the number of classes. In particular, we consider a 2-class--classifier ($k\in\{0, 1\}$, 1 for 'yes', this is a correct answer; 0 for 'no', this is an incorrect answer), applied independently on all the $\langle \ib_i, \cbb_i^j\rangle$ pairs and apply one FFNN-based binary classifier for each; the final prediction is the caption with the highest 'yes' probability among all instance pairs belonging to instance $i$. \subsection{Vec2seq + FFNN Model} \label{sec:seq+ffnn} We describe here a hybrid neural-network model that combines a recurrent neural-network with a feedforward one. 
We encode the image into a single-cell RNN encoder, and the caption into an RNN decoder. Because the first sequence only contains one cell, we call this model a vector-to-sequence (Vec2seq) model as a special case of Seq2seq model as in ~\cite{sutskever-etal:2014,bahdanau-etal:2015}. The output of each unit cell of a Vec2seq model (both on the encoding side and the decoding side) can be fed into an FFNN architecture for binary classification. See Figure~\ref{agmc_diagram} for an illustration of the Vec2seq + FFNN model architecture. \paragraph{Multi-task learning} In addition to the classification loss (Eq.~\ref{eq:closs}), we also include a loss for generating an output sequence $\cbb_i^j$ based on an input $\ib_i$ image. We define a binary variable $z_{ijlv} \in \{0,1\}$ to indicate whether the $l$th word of $\cbb_i^j$ is equal to word $v$. $\Ob^d_{ijl}$ denotes the $l$-th output of the decoder of instance pair $\langle \ib_i, \cbb_i^j\rangle$, $\Ob^e_{ij}$ denotes the output of the encoder, and $\Ob^d_{ij:}$ denotes the concatenation of decoder outputs. With these definitions, the loss function for the Vec2seq + FFNN model is: \begin{dmath} L(\thetab, \wb, \ub) = \sum_{i, j, k} y_{ijk} \log \; \text{FN}_k(\Ob^e_{ij}(\ib_i, \cbb_i^j; \thetab), \Ob^d_{ij:}(\ib_i, \cbb_i^j; \thetab); \ub) + \lambda_{gen} \sum_{i,j, l, v} y_{ij1} z_{ijlv} \log \;\text{softmax}_v(\Ob^d_{ijl}(\ib_i, \cbb_i^j; \thetab); \wb) \label{eq:mloss} \end{dmath} \noindent where $\sum_v \text{softmax}_v(.) = 1$; $\thetab$ are the parameters of the Vec2seq model, which include the parameters within each unit cell, as well as the elements in the embedding matrices for images and target sequences; $\wb$ are the output projection parameters that transform the output space of the decoder to the vocabulary space. $\ub$ are the parameters of the FFNN model (Eq.~\ref{eq:closs}); $\lambda_{gen}$ is the weight assigned to the sequence-to-sequence generation loss. Only the true target candidates (the ones with $y_{ij1} = 1$) are included in this loss, as we do not want the decoy target options to affect this computation. The Vec2seq model we use here is an instantiation of the attention-enhanced models proposed in~\cite{bahdanau-etal:2015,chen-etal:2016}. However, our current model does not support location-wise attention, as in the Show-Attend-and-Tell~\cite{xu-etal:2016} model. In this sense, our model is an extension of the Show-and-Tell model with a single attention state representing the entire image, used as image memory representation for all decoder decisions. We apply Gated Recurrent Unit (GRU) as the unit cell~\cite{cho-etal:2014}. We also compare the influence on performance of the $\lambda_{gen}$ parameter. \section{Experiments} \label{sec:results} \subsection{Experimental Setup} \noindent \textbf{Baseline models}\ For the baseline models, we use the 2048-dimensional outputs of Google-Inception-v3~\cite{inception-v3} (pre-trained on ImageNet ILSSVR 2012) to represent the images, and 1024-dimensional paragraph-vector embeddings (section~\ref{sec:creation}) to represent captions. To reduce computation time, both are reduced to 256-dimensional vectors using random projections. \noindent \textbf{Neural-nets based models}\ The experiments with these models are done using the Tensorflow package~\cite{tensorflow2015-whitepaper}. The hyper-parameter choices are decided using the hold-out development portion of the \mciccoco set. 
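As a minimal sketch of the multi-task loss $L(\thetab, \wb, \ub)$ in Eq.~\ref{eq:mloss} (written with the conventional negative sign for a loss to be minimized; array names and shapes are our own, not the TensorFlow implementation):
\begin{verbatim}
import numpy as np

def multitask_loss(p_class, y, p_words, z, is_target, lam_gen):
    """Classification term plus lam_gen-weighted caption-generation term.

    p_class:   (pairs, 2)     FFNN class probabilities per <image, caption> pair
    y:         (pairs, 2)     one-hot correctness labels y_ijk
    p_words:   (pairs, T, V)  decoder softmax over the vocabulary at each step
    z:         (pairs, T, V)  one-hot word indicators z_ijlv
    is_target: (pairs,)       1 for the ground-truth caption, 0 for decoys
    """
    cls = -np.sum(y * np.log(p_class))
    gen = -np.sum(is_target[:, None, None] * z * np.log(p_words))
    return cls + lam_gen * gen
\end{verbatim}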
For modeling the input tokens, we use a vocabulary size of 8,855 types, selected as the most frequent tokens over the captions from the COCO training set (words occurring at least 5 times). The models are optimized using ADAGRAD with an initial learning rate of 0.01, and clipped gradients (maximum norm 4). We run the training procedures for $3,000,000$ steps, with a mini-batch size of 20. We use 40 workers for computing the updates, and 10 parameter servers for model storing and (asynchronous and distributed) updating. We use the following notations to refer to the neural network models: \ffnnb refers to the version of feedforward neural network architecture with a 2-class--classifier ('yes' or 'no' for answer correctness), over which an $\argmax$ function computes a 5-way decision (i.e., the choice with the highest 'yes' probability); we henceforth refer to this model simply as FFNN. The \seqff refers to the hybrid model described in Section~\ref{sec:seq+ffnn}, combining Vec2seq and \ffnnb. The RNN part of the model uses a two-hidden--layer GRU unit-cell~\cite{cho-etal:2014} configuration, while the FFNN part uses a two-hidden--layer architecture. The $\lambda_{gen}$ hyper-parameter from the loss-function $L(\thetab, \wb, \ub)$ (Eq.~\ref{eq:mloss}) is by default set to 1.0 (except for Section~\ref{sec:lambda_gen} where we directly measure its effect on performance). \noindent \textbf{Evaluation metrics}\ The metrics we use to measure performance come in two flavors. First, the accuracy in detecting (the index of) the true target among the decoys provides a direct way of measuring the performance level on the comprehension task. We use this metric as the main indicator of comprehension performance. Second, because our \seqff models are multi-task models, they can also generate new captions given the input image. The performance level for the generation task is measured using the standard scripts measuring ROUGE-L~\cite{lin-och:2004} and CIDEr~\cite{cider}, using as reference the available captions from the COCO data (around 5 for most of the images). Code for these metrics is available as part of the COCO evaluation toolkit~\footnote{\tt https://github.com/tylin/coco-caption}. As usual, both the hypothesis strings and the reference strings are preprocessed: remove all the non-alphabetic characters; transform all letters to lowercase, and tokenize using white space; replace all words occurring less than 5 times with an unknown token $\langle\mbox{UNK}\rangle$ (total vocabulary of 8,855 types); truncate to the first 30 tokens. \subsection{Results} \label{sDMCResults} Table~\ref{table:baselines} summarizes our main results on the comprehension task. We report the accuracies (and their standard deviations) for random choice, baselines, and neural network-based models. Interestingly, the \textsf{Baseline-I2C} model performs at the level of random choice, and much worse than the \textsf{Baseline-C2I model}. This discrepancy reflects the inherent difficulty in vision-Language tasks: for each image, there are several possible equally good descriptions, thus a linear mapping from the image embeddings to the captions might not be enough -- statistically, the \emph{linear} model will just predict the mean of those captions. However, for the reverse direction where the captions are the independent variables, the learned model does not have to capture the variability in image embeddings corresponding to the different but equally good captions -- there is only one such image embedding. 
Nonlinear neural networks overcome these modeling limitations. The results clearly indicate their superiority over the baselines. The \seqff model obtains the best results, with accuracies of 60.5\% (dev) and 59.0\% (test); the accuracy numbers indicate that the \seqff architecture is superior to the non-recursive fully-connected FFNN architecture (at 55.1\% accuracy on test). We show next the impact on performance of the embedding dimension and neural-network sizes, for both the feedforward and the recurrent architectures. \subsection{Analysis: embedding dimension and neural-network sizes} \label{sec:impact_size} In this section, we compare neural networks models of different sizes. Specifically, we compare embedding dimensions of $\cbr{64, 256, 512, 1024, 2048}$, and two hidden-layer architectures with sizes of $\cbr{(64, 16), (256, 64), (512, 128), (1024, 256), (2048, 512)}$. The results in Table~\ref{table:ffnn_sizes} illustrate an interesting behavior for the neural-network architectures. For the FFNN models, contrary to expectations, bigger network sizes leads to decreasing accuracy. On the other hand, for \seqff models, accuracy increases with increased size in model parameters, up until the embedding dimension of the RNN model matches the embedding dimension of the Inception model, at 2048. At accuracy levels of 63.4\% (dev) and 60.8\% (test), this performance establishes a high-bar for a computer model performance on the DMC task using the \mciccoco dataset. According to the estimate from Table~\ref{table:human_eval_instances}, this level of performance is still \emph{significantly} below the 82.8\% accuracy achievable by humans, which makes \mciccoco a challenging testbed for future models of Vision-Language machine comprehension. \subsection{Multi-task learning for DMC and Image Captioning} \label{sec:lambda_gen} In this section, we compare models with different values of $\lambda_{gen}$ in Eq.~\ref{eq:mloss}. This parameter allows for a natural progression from learning for the DMC task only ($\lambda_{gen} = 0$) to focusing on the image captioning loss ($\lambda_{gen} \rightarrow +\infty$). In between the two extremes, we have a multi-task learning objective for jointly learning related tasks. The results in Table~\ref{table:lambda_gen} illustrate one of the main points of this paper. That is, the ability to perform the comprehension task (as measured by the accuracy metric) positively correlates with the ability to perform other tasks that require machine comprehension, such as caption generation. At $\lambda_{gen} = 4$, the \seqff model not only has a high accuracy of detecting the ground-truth option, but it also generates its own captions given the input image, with an accuracy measured on \mciccoco at 0.9890 (dev) and 0.9380 (test) CIDEr scores. On the other hand, at an accuracy level of about 59\% (on test, at $\lambda_{gen} = 0.1$), the generation performance is at only 0.9010 (dev) and 0.8650 (test) CIDEr scores. We note that there is an inherent trade-off between prediction accuracy and generation performance, as seen for $\lambda_{gen}$ values above 4.0. This agrees with the intuition that training a \seqff model using a loss $L(\thetab, \wb, \ub)$ with a larger $\lambda_{gen}$ means that the ground-truth detection loss (the first term of the loss in Eq.\ref{eq:mloss}) may get overwhelmed by the word-generation loss (the second term). 
However, our empirical results suggest that there is value in training models with a multi-task setup, in which both the comprehension side as well as the generation side are carefully tuned to maximize performance. \section{Discussion} We have proposed and described in detail a new multi-modal machine comprehension task (DMC), combining the challenges of understanding visual scenes and complex language constructs simultaneously. The underlying hypothesis for this work is that computer systems that can be shown to perform increasingly well on this task will do so by constructing a visually-grounded understanding of various linguistic elements and their dependencies. This type of work can therefore benefit research in both machine visual understanding and language comprehension. The \seqff architecture that we propose for addressing this combined challenge is a generic multi-task model. It can be trained end-to-end to display both the ability to choose the most likely text associated with an image (thus enabling a direct measure of its ``comprehension'' performance), as well as the ability to generate a complex description of that image (thus enabling a direct measure of its performance in an end-to-end complex and meaningful task). The empirical results we present validate the underlying hypothesis of our work, by showing that we can measure the decisions made by such a computer system and validate that improvements in comprehension and generation happen in tandem. The experiments presented in this work are done training our systems in an end-to-end fashion, starting directly from raw pixels. We hypothesize that our framework can be fruitfully used to show that incorporating specialized vision systems (such as object detection, scene recognition, pose detection, etc.) is beneficial. More precisely, not only it can lead to a direct and measurable impact on a computer system's ability to perform image understanding, but it can express that understanding in an end-to-end complex task. \begin{thebibliography}{10} \bibitem{tensorflow2015-whitepaper} M.~Abadi, A.~Agarwal, P.~Barham, E.~Brevdo, Z.~Chen, C.~Citro, G.~Corrado, A.~Davis, J.~Dean, M.~Devin, S.~Ghemawat, I.~Goodfellow, A.~Harp, G.~Irving, M.~Isard, Y.~Jia, R.~Jozefowicz, L.~Kaiser, M.~Kudlur, J.~Levenberg, D.~Man\'{e}, R.~Monga, S.~Moore, D.~Murray, C.~Olah, M.~Schuster, J.~Shlens, B.~Steiner, I.~Sutskever, K.~Talwar, P.~Tucker, V.~Vanhoucke, V.~Vasudevan, F.~Vi\'{e}gas, O.~Vinyals, P.~Warden, M.~Wattenberg, M.~Wicke, Y.~Yu, and X.~Zheng. \newblock {TensorFlow}: Large-scale machine learning on heterogeneous systems, 2015. \newblock Software available from tensorflow.org. \bibitem{spice} P.~Anderson, B.~Fernando, M.~Johnson, and S.~Gould. \newblock {SPICE:} semantic propositional image caption evaluation. \newblock {\em CoRR}, abs/1607.08822, 2016. \bibitem{antol15vqa} S.~Antol, A.~Agrawal, J.~Lu, M.~Mitchell, D.~Batra, C.~L. Zitnick, and D.~Parikh. \newblock {VQA}: Visual question answering. \newblock In {\em International Conference on Computer Vision (ICCV)}, 2015. \bibitem{bahdanau-etal:2015} D.~Bahdanau, K.~Cho, and Y.~Bengio. \newblock Neural machine translation by jointly learning to align and translate. \newblock In {\em Proceedings of ICLR}, 2015. \bibitem{meteor} S.~Banerjee and A.~Lavie. \newblock {METEOR}: An automatic metric for {MT} evaluation with improved correlation with human judgments. 
\newblock In {\em Proceedings of the {ACL} {W}orkshop on intrinsic and extrinsic evaluation measures for machine translation and/or summarization}, 2005. \bibitem{bengio2009book} Y.~Bengio. \newblock Learning deep architectures for ai. \newblock {\em Found. Trends Mach. Learn.}, 2(1):1--127, Jan. 2009. \bibitem{bernardi16survey} R.~Bernardi, R.~Cakici, D.~Elliott, A.~Erdem, E.~Erdem, N.~Ikizler-Cinbis, F.~Keller, A.~Muscat, and B.~Plank. \newblock Automatic description generation from images: A survey of models, datasets, and evaluation measures. \newblock {\em JAIR}, 55, 2016. \bibitem{chen-etal:2016} D.~Chen, J.~Bolton, and C.~D. Manning. \newblock {A Thorough Examination of the CNN/Daily Mail Reading Comprehension Task}. \newblock In {\em Proceedings of ACL}, 2016. \bibitem{cho-etal:2014} K.~Cho, B.~van Merrienboer, {\c{C}}.~G{\"{u}}l{\c{c}}ehre, D.~Bahdanau, F.~Bougares, H.~Schwenk, and Y.~Bengio. \newblock Learning phrase representations using {RNN} encoder-decoder for statistical machine translation. \newblock In {\em Proceedings of {EMNLP}, October 25-29, 2014, Doha, Qatar}, pages 1724--1734, 2014. \bibitem{donahue2014long} J.~Donahue, L.~A. Hendricks, S.~Guadarrama, M.~Rohrbach, S.~Venugopalan, K.~Saenko, and T.~Darrell. \newblock Long-term recurrent convolutional networks for visual recognition and description. \newblock In {\em Proc. of IEEE Conference on Computer Vision and Pattern Recognition (CVPR)}, 2014. \bibitem{fang2014captions} H.~Fang, S.~Gupta, F.~Iandola, R.~Srivastava, L.~Deng, P.~Doll{\'a}r, J.~Gao, X.~He, M.~Mitchell, J.~Platt, et~al. \newblock From captions to visual concepts and back. \newblock In {\em Proc. of IEEE Conference on Computer Vision and Pattern Recognition (CVPR)}, 2015. \bibitem{frome2013devise} A.~Frome, G.~S. Corrado, J.~Shlens, S.~Bengio, J.~Dean, T.~Mikolov, et~al. \newblock Devise: A deep visual-semantic embedding model. \newblock In {\em Advances in Neural Information Processing Systems (NIPS)}, 2013. \bibitem{gao15machine} H.~Gao, J.~Mao, J.~Zhou, Z.~Huang, L.~Wang, and W.~Xu. \newblock Are you talking to a machine? dataset and methods for multilingual image question answering. \newblock In {\em NIPS}, 2015. \bibitem{hodosh16eval} M.~Hodosh and J.~Hockenmaier. \newblock Focused evaluation for image description with binary forced-choice tasks. \newblock In {\em Proc. 5th Vision and Language Workshop}, 2016. \bibitem{hodosh13framing} M.~Hodosh, P.~Young, and J.~Hockenmaier. \newblock Framing image description as a ranking task: Data, models and evaluation metrics. \newblock {\em JAIR}, 2013. \bibitem{karpathy2014deep} A.~Karpathy and L.~Fei-Fei. \newblock Deep visual-semantic alignments for generating image descriptions. \newblock In {\em Proc. of IEEE Conference on Computer Vision and Pattern Recognition (CVPR)}, 2015. \bibitem{kiros2014unifying} R.~Kiros, R.~Salakhutdinov, and R.~S. Zemel. \newblock Unifying visual-semantic embeddings with multimodal neural language models. \newblock {\em Transactions of the Association for Computational Linguistics}, 2015. \bibitem{krishnavisualgenome} R.~Krishna, Y.~Zhu, O.~Groth, J.~Johnson, K.~Hata, J.~Kravitz, S.~Chen, Y.~Kalantidis, L.-J. Li, D.~A. Shamma, M.~Bernstein, and L.~Fei-Fei. \newblock {Visual Genome}: Connecting language and vision using crowdsourced dense image annotations. \newblock 2016. \bibitem{le-mikolov:2014} Q.~Le and T.~Mikolov. \newblock Distributed representations of sentences and documents. 
\newblock In {\em Proceedings of the 31st International Conference on Machine Learning}, Beijing, China, 2014. \bibitem{lin-och:2004} C.-Y. Lin and F.~J. Och. \newblock Automatic evaluation of machine translation quality using longest common subsequence and skip-bigram statistics. \newblock In {\em Proceedings of ACL}, 2004. \bibitem{coco} T.~Lin, M.~Maire, S.~J. Belongie, L.~D. Bourdev, R.~B. Girshick, J.~Hays, P.~Perona, D.~Ramanan, P.~Doll{\'{a}}r, and C.~L. Zitnick. \newblock Microsoft {COCO:} common objects in context. \newblock {\em CoRR}, abs/1405.0312, 2014. \bibitem{lin16leverage} X.~Lin and D.~Parikh. \newblock Leveraging visual question answering for image-caption ranking. \newblock {\em CoRR}, abs/1605.01379, 2016. \bibitem{malinoski14qa} M.~Malinowski and M.~Fritz. \newblock A multi-world approach to question answering about real-world scenes based on uncertain input. \newblock In {\em NIPS}, 2014. \bibitem{malinowski15neural} M.~Malinowski, M.~Rohrbach, and M.~Fritz. \newblock Ask your neurons: A neural-based approach to answering questions about images. \newblock In {\em ICCV}, 2015. \bibitem{mao15mrnn} J.~Mao, W.~Xu, Y.~Yang, J.~Wang, and A.~Yuille. \newblock Deep captioning with multimodal recurrent neural networks ({mRNN}). \newblock In {\em Proc. Int. Conf. Learn. Representations}, 2015. \bibitem{papineni-etal:2002} K.~Papineni, S.~Roukos, T.~Ward, and W.-J. Zhu. \newblock Bleu: A method for automatic evaluation of machine translation. \newblock In {\em Proceedings of ACL}, pages 311--318, 2002. \bibitem{mixer15} M.~Ranzato, S.~Chopra, M.~Auli, and W.~Zaremba. \newblock Sequence level training with recurrent neural networks. \newblock {\em CoRR}, abs/1511.06732, 2015. \bibitem{ren15visual} M.~Ren, R.~Kiros, and R.~Zemel. \newblock Image question answering: A visual semantic embedding model and a new dataset. \newblock In {\em NIPS}, 2015. \bibitem{sutskever-etal:2014} I.~Sutskever, O.~Vinyals, and Q.~V.~V. Le. \newblock Sequence to sequence learning with neural networks. \newblock In {\em Advances in Neural Information Processing Systems 27}, pages 3104--3112. Curran Associates, Inc., 2014. \bibitem{inception-v3} C.~Szegedy, V.~Vanhoucke, S.~Ioffe, J.~Shlens, and Z.~Wojna. \newblock Rethinking the inception architecture for computer vision. \newblock volume abs/1512.00567, 2015. \bibitem{tu14video} K.~Tu, M.~Meng, M.~W. Lee, T.~E. Choe, and S.~C. Zhu. \newblock Joint video and text parsing for understanding events and answering queries. \newblock {\em IEEE MultiMedia}, 2014. \bibitem{cider} R.~Vedantam, C.~Lawrence~Zitnick, and D.~Parikh. \newblock Cider: Consensus-based image description evaluation. \newblock In {\em The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)}, June 2015. \bibitem{vinyals2014show} O.~Vinyals, A.~Toshev, S.~Bengio, and D.~Erhan. \newblock Show and tell: A neural image caption generator. \newblock In {\em Proc. of IEEE Conference on Computer Vision and Pattern Recognition (CVPR)}, 2015. \bibitem{wu16survey} Q.~Wu, D.~Teney, P.~Wang, C.~Shen, A.~R. Dick, and A.~van~den Hengel. \newblock Visual question answering: {A} survey of methods and datasets. \newblock {\em CoRR}, abs/1607.05910, 2016. \bibitem{wu16external} Q.~Wu, P.~Wang, C.~Shen, A.~Dick, and A.~van~den Hengel. \newblock Ask me anything: Free-form visual question answering based on knowledge from external sources. \newblock In {\em CVPR}, 2016. \bibitem{xu-etal:2016} K.~Xu, J.~Ba, R.~Kiros, K.~Cho, A.~C. Courville, R.~Salakhutdinov, R.~S. Zemel, and Y.~Bengio. 
\newblock Show, attend and tell: Neural image caption generation with visual attention. \newblock {\em CoRR}, abs/1502.03044, 2015. \bibitem{xu2015show} K.~Xu, J.~Ba, R.~Kiros, A.~Courville, R.~Salakhutdinov, R.~Zemel, and Y.~Bengio. \newblock Show, attend and tell: Neural image caption generation with visual attention. \newblock In {\em Proc. of the 32nd International Conference on Machine Learning (ICML)}, 2015. \bibitem{yang16san} Z.~Yang, X.~He, J.~Gao, L.~Deng, and A.~Smola. \newblock Stacked attention networks for image question answering. \newblock In {\em CVPR}, 2016. \bibitem{yao-li:2010} B.~Yao and F.-F. Li. \newblock Modeling mutual context of object and human pose in human-object interaction activities. \newblock In {\em Proceedings of the 2010 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)}, 2010. \bibitem{yu15madlib} L.~Yu, E.~Park, A.~C. Berg, and T.~L. Berg. \newblock Visual madlibs: Fill-in-theblank description generation and question answering. \newblock In {\em ICCV}, 2015. \bibitem{zhu16visual7w} Y.~Zhu, O.~Groth, M.~Bernstein, and L.~Fei-Fei. \newblock Visual7w: Grounded question answering in images. \newblock In {\em CVPR}, 2016. \end{thebibliography} \end{document}
Understanding Image and Text Simultaneously: a Dual Vision-Language Machine Comprehension Task
1612.07833
Table 2: Human performance on the DMC task with the MCIC-COCO dataset. Bold denotes the performance ceiling.
[ "Correct responses", "# instances", "Accuracy%" ]
[ [ "3 out of 3", "673", "67.3" ], [ "at least 2 out of 3", "828", "[BOLD] 82.8" ], [ "at least 1 out of 3", "931", "93.1" ], [ "0 out of 3", "69", "0.0" ] ]
The first row, with accuracy at 67.3%, suggests that this is the level at which the correct answer is obvious (i.e., the percentage of “easy” instances). The second row, at 82.8%, indicates that this is the performance ceiling in terms of accuracy that can be expected for the MCIC-COCO dataset; at the same time, it suggests that the difference between 67.3% and 82.8% (i.e., about 15% of instances) is caused by “difficult” instances. Finally, the third row, at 93.1%, indicates that the level of “unanswerable” instances is somewhere in the 10%-15% range (combining the increase from 82.8% to 93.1% and the remaining 6.9% that no one gets right). At accuracy levels of 63.4% (dev) and 60.8% (test), this performance establishes a high bar for computer model performance on the DMC task using the MCIC-COCO dataset.
\documentclass[10pt,twocolumn,letterpaper]{article} \usepackage[ruled,vlined]{algorithm2e} \DeclareMathOperator{\Ob}{\mathbf{O}} \DeclareMathOperator{\ab}{\mathbf{a}} \DeclareMathOperator{\cbb}{\mathbf{c}} \DeclareMathOperator{\cpb}{\mathbf{c}^\prime} \DeclareMathOperator{\ib}{\mathbf{i}} \DeclareMathOperator{\tb}{\mathbf{t}} \DeclareMathOperator{\tpb}{\mathbf{t}^\prime} \DeclareMathOperator{\ub}{\mathbf{u}} \DeclareMathOperator{\wb}{\mathbf{w}} \DeclareMathOperator{\thetab}{\mathbf{\Theta}} \DeclareMathOperator*{\argmax}{\mathop{\mathrm{argmax}}} \newcommand{\cbr}[1]{\left\{#1\right\}} \def\sm{\small} \def\pairci{\langle\mbox{\it image}, \mbox{\it\{caption(s)\} }\rangle} \def\spaircpc{\langle \cpb, \cbb \rangle} \def\spairci{\langle \ib, \cbb \rangle} \def\spaircti{\langle \ib, \cpb \rangle} \def\spaircpi{\langle \ib, \cpb \rangle} \def\mcic{\textsc{MC}$_{\mbox{\scriptsize IC}}\;$} \def\mciccoco{\textsc{MCIC-COCO}$\;$} \def\ffnnb{FFNN$_{\mbox{\scriptsize 2-class}}^{\mbox{\scriptsize argmax 1..5}}\;$} \def\seqff{Vec2seq+FFNN$\;$} \cvprfinalcopy % *** Uncomment this line for the final submission \def\cvprPaperID{3121} % *** Enter the CVPR Paper ID here \def\httilde{\mbox{\tt\raisebox{-.5ex}{\symbol{126}}}} \begin{document} \title{Understanding Image and Text Simultaneously: a Dual Vision-Language Machine Comprehension Task} \author{Nan Ding \\ Google\\ {\tt\small dingnan@google.com} \and Sebastian Goodman \\ Google\\ {\tt\small seabass@google.com}\\ \and Fei Sha \\ Google\\ {\tt\small fsha@google.com}\\ \and Radu Soricut \\ Google\\ {\tt\small rsoricut@google.com} } \maketitle \begin{abstract} We introduce a new multi-modal task for computer systems, posed as a combined vision-language comprehension challenge: identifying the most suitable \emph{text} describing a scene, given several similar options. Accomplishing the task entails demonstrating comprehension beyond just recognizing ``keywords'' (or key-phrases) and their corresponding visual concepts. Instead, it requires an alignment between the representations of the two modalities that achieves a visually-grounded ``understanding'' of various linguistic elements and their dependencies. This new task also admits an easy-to-compute and well-studied metric: the accuracy in detecting the true target among the decoys. The paper makes several contributions: an effective and extensible mechanism for generating decoys from (human-created) image captions; an instance of applying this mechanism, yielding a large-scale machine comprehension dataset (based on the COCO images and captions) that we make publicly available; human evaluation results on this dataset, informing a performance upper-bound; and several baseline and competitive learning approaches that illustrate the utility of the proposed task and dataset in advancing both image and language comprehension. We also show that, in a multi-task learning setting, the performance on the proposed task is positively correlated with the end-to-end task of image captioning. \end{abstract} \section{Introduction} There has been a great deal of interest in multi-modal artificial intelligence research recently, bringing together the fields of Computer Vision and Natural Language Processing. This interest has been fueled in part by the availability of many large-scale image datasets with textual annotations. Several vision+language tasks have been proposed around these datasets~\cite{hodosh13framing,karpathy2014deep,coco,antol15vqa}. 
Image Captioning~\cite{hodosh13framing,donahue2014long,karpathy2014deep,fang2014captions,kiros2014unifying,vinyals2014show,mao15mrnn,xu2015show} and Visual Question Answering~\cite{malinoski14qa,malinowski15neural,tu14video,antol15vqa,yu15madlib,wu16external,ren15visual,gao15machine,yang16san,zhu16visual7w,lin16leverage} have in particular attracted a lot of attention. The performances on these tasks have been steadily improving, owing much to the wide use of deep learning architectures~\cite{bengio2009book}. A central theme underlying these efforts is the use of natural language to identify how much visual information is perceived and understood by a computer system. Presumably, a system that understands a visual scene well enough ought to be able to describe what the scene is about (thus ``captioning'') or provide correct and visually-grounded answers when queried (thus ``question-answering''). In this paper, we argue for directly measuring how well the semantic representations of the visual and linguistic modalities align (in some abstract semantic space). For instance, given an image and two captions -- a correct one and an incorrect yet-cunningly-similar one -- can we both qualitatively and quantitatively measure the extent to which humans can dismiss the incorrect one but computer systems blunder? Arguably, the degree of the modal alignment is a strong indicator of task-specific performance on any vision+language task. Consequentially, computer systems that can learn to maximize and exploit such alignment should outperform those that do not. We take a two-pronged approach for addressing this issue. First, we introduce a new and challenging Dual Machine Comprehension (DMC) task, in which a computer system must identify the most suitable textual description from several options: one being the target and the others being ``adversarialy''-chosen decoys. All options are free-form, coherent, and fluent sentences with \emph{high degrees of semantic similarity} (hence, they are ``cunningly similar''). A successful computer system has to demonstrate comprehension beyond just recognizing ``keywords'' (or key phrases) and their corresponding visual concepts; they must arrive at a coinciding and visually-grounded understanding of various linguistic elements and their dependencies. What makes the DMC task even more appealing is that it admits an easy-to-compute and well-studied performance metric: the accuracy in detecting the true target among the decoys. Second, we illustrate how solving the DMC task benefits related vision+language tasks. To this end, we render the DMC task as a classification problem, and incorporate it in a multi-task learning framework for end-to-end training of joint objectives. 
Our work makes the following contributions: (1) an effective and extensible algorithm for generating decoys from human-created image captions (Section~\ref{sec:creation}); (2) an instantiation of applying this algorithm to the COCO dataset~\cite{coco}, resulting in a large-scale dual machine-comprehension dataset that we make publicly available (Section~\ref{sec:mcic-coco}); (3) a human evaluation on this dataset, which provides an upper-bound on performance (Section~\ref{sec:human_eval}); (4) a benchmark study of baseline and competitive learning approaches (Section~\ref{sec:results}), which underperform humans by a substantial gap (about 20\% absolute); and (5) a novel multi-task learning model that simultaneously learns to solve the DMC task and the Image Captioning task (Sections~\ref{sec:seq+ffnn} and \ref{sec:lambda_gen}). Our empirical study shows that performance on the DMC task positively correlates with performance on the Image Captioning task. Therefore, besides acting as a standalone benchmark, the new DMC task can be useful in improving other complex vision+language tasks. Both suggest the DMC task as a fruitful direction for future research. \section{Related work} Image understanding is a long-standing challenge in computer vision. There has recently been a great deal of interest in bringing together vision and language understanding. Particularly relevant to our work are image captioning (IC) and visual question-answering (VQA). Both have instigated a large body of publications, a detailed exposition of which is beyond the scope of this paper. Interested readers should refer to two recent surveys~\cite{bernardi16survey,wu16survey}. In IC tasks, systems attempt to generate a fluent and correct sentence describing an input image. IC systems are usually evaluated on how well the generated descriptions align with human-created captions (ground-truth). The language generation model of an IC system plays a crucial role; it is often trained such that the probabilities of the ground-truth captions are maximized (MLE training), though more advanced methods based on techniques borrowed from Reinforcement Learning have been proposed~\cite{mixer15}. To provide visual grounding, image features are extracted and injected into the language model. Note that language generation models need to both decipher the information encoded in the visual features, and model natural language generation. In VQA tasks, the aim is to answer an input question correctly with respect to a given input image. In many variations of this task, answers are limited to single words or a binary response (``yes'' or ``no'')~\cite{antol15vqa}. The Visual7W dataset~\cite{zhu16visual7w} contains anaswers in a richer format such as phrases, but limits questions to ``wh-''style (what, where, who, etc). The Visual Genome dataset~\cite{krishnavisualgenome}, on the other hand, can potentially define more complex questions and answers due to its extensive textual annotations. Our DMC task is related but significantly different. In our task, systems attempt to discriminate the best caption for an input image from a set of captions --- all but one are decoys. Arguably, it is a form of VQA task, where the same default (thus uninformative) question is asked: \emph{Which of the following sentences best describes this image?} However, unlike current VQA tasks, choosing the correct answer in our task entails a deeper ``understanding'' of the available answers. 
Thus, to perform well, a computer system needs to understand both complex scenes (visual understanding) and complex sentences (language understanding), \emph{and} be able to reconcile them. The DMC task admits a simple classification-based evaluation metric: the accuracy of selecting the true target. This is a clear advantage over the IC tasks, which often rely on imperfect metrics such as BLEU~\cite{papineni-etal:2002}, ROUGE~\cite{lin-och:2004}, METEOR~\cite{meteor}, CIDEr~\cite{cider}, or SPICE~\cite{spice}. Related to our proposal is the work in~\cite{hodosh13framing}, which frames image captioning as a ranking problem. While both share the idea of selecting captions from a large set, our framework has some important and distinctive components. First, we devise an algorithm for smart selection of candidate decoys, with the goal of selecting those that are sufficiently similar to the true targets to be challenging, and yet still be reliably identifiable by human raters. Second, we have conducted a thorough human evaluation in order to establish a performance ceiling, while also quantifying the level to which current learning systems underperform. Lastly, we show that there exists a positive correlation between the performance on the DMC task and the performance on related vision+language tasks by proposing and experimenting with a multi-task learning model. Our work is also substantially different from their more recent work~\cite{hodosh16eval}, where only one decoy is considered and its generation is either random, or focusing on visual concept similarity (``switching people or scenes'') instead of our focus on both linguistic surface and paragraph vector embedding similarity. \section{The Dual Machine Comprehension Task} \subsection{Design overview} We propose a new multi-modal machine comprehension task to examine how well visual and textual semantic understanding are aligned. Given an image, human evaluators or machines must accurately identify the best sentence describing the scene from several decoy sentences. Accuracy on this task is defined as the percentage that the true targets are identified. It seems straightforward to construct a dataset for this task, as there are several existing datasets which are composed of images and their (multiple) ground-truth captions, including the popular COCO dataset~\cite{coco}. Thus, for any given image, it appears that one just needs to use the captions corresponding to other images as decoys. However, this na\"{i}ve approach could be overly simplistic as it is provides no control over the properties of the decoys. Specifically, our desideratum is to recruit \emph{challenging} decoys that are sufficiently similar to the targets. However, for a small number of decoys, e.g. 4-5, randomly selected captions could be significantly different from the target. The resulting dataset would be too ``easy'' to shed any insight on the task. Since we are also interested in human performance on this task, it is thus impractical to increase the number of decoys to raise the difficulty level of the task at the expense of demanding humans to examine tediously and unreliably a large number of decoys. In short, we need an \emph{automatic procedure to reliably create difficult sets of decoy captions} that are sufficiently similar to the targets. We describe such a procedure in the following. While it focuses on identifying decoy captions, the main idea is potentially adaptable to other settings. 
The algorithm is flexible in that the ``difficulty" of the dataset can be controlled to some extent through the algorithm's parameters. \subsection{Algorithm to create an MC-IC dataset} \label{sec:creation} The main idea behind our algorithm is to carefully define a ``good decoy''. The algorithm exploits recent advances in paragraph vector (PV) models~\cite{le-mikolov:2014}, while also using linguistic surface analysis to define similarity between two sentences. Due to space limits, we omit a detailed introduction of the PV model. It suffices to note that the model outputs a continuously-valued embedding for a sentence, a paragraph, or even a document. The pseudo-code is given in Algorithm~\ref{aMCIC} (the name \textsc{MC-IC} stands for ``Machine-Comprehension for Image \& Captions''). As input, the algorithm takes a set $C$ of $\pairci$ pairs\footnote{On the order of at least hundreds of thousands of examples; smaller sets result in less challenging datasets.}, as those extracted from a variety of publicly-available corpora, including the COCO dataset~\cite{coco}. The output of the algorithm is the \mcic set. \begin{algorithm}[t] \caption{\textsc{MC-IC}($C$, $N$, $\mbox{\it Score}$)} \label{aMCIC} \SetAlgoLined \KwResult{Dataset \mcic} $\textsf{PV} \gets \textsc{Optimize-PV}(C)$ \\ $\lambda \gets \textsc{Optimize-Score}(\textsf{PV}, C, \mbox{\it Score})$ \\ \mcic$\gets \emptyset$ \\ $nr\_decoys = 4$ \\ \For{$\langle \ib_i, \cbb_i \rangle \in C$}{ $A\gets []$ \\ $T_{\cbb_i} \gets \textsf{PV}(\cbb_i)[1..N]$ \\ \For{$\cbb_d\in T_{\cbb_i}$}{ $score \gets \mbox{\it Score}(\textsf{PV}, \lambda, \cbb_d, \cbb_i)$ \\ \If{$score > 0$}{ $A.\mbox{\bf append}(\langle score, \cbb_d\rangle)$ } } \If{$|A| \geq nr\_decoys$}{ $R\gets \mbox{\bf descending-sort}(A)$ \\ \For{$l \in [1..nr\_decoys]$}{ $\langle score, \cbb_d\rangle\gets R[l]$ \\ \mcic$\gets$ \mcic$\cup\{(\langle \ib_i, \cbb_d\rangle, \mbox{\bf false})\}$ \\ } \mcic$\gets$ \mcic$\cup\{(\langle \ib_i, \cbb_i \rangle, \mbox{\bf true})\}$ \\ } } \end{algorithm} Concretely, the \textsc{MC-IC} Algorithm has three main arguments: a dataset $C = \{ \langle \ib_i, \cbb_i \rangle | 1 \leq i \leq m\}$ where $\ib_i$ is an image and $\cbb_i$ is its ground-truth caption\footnote{For an image with multiple ground-truth captions, we split it to multiple instances with the same image for each one of the ground-truth captions; the train/dev/test splits are done such that they contain disjoint {\em image} sets, as opposed to disjoint {\em instance} sets.}; an integer $N$ which controls the size of $\cbb_i$'s neighborhood in the embedding space defined by the paragraph vector model \textsf{PV}; and a function \textsf{Score} which is used to score the $N$ items in each such neighborhood. The first two steps of the algorithm tune several hyperparameters. The first step finds optimal settings for the \textsf{PV} model given the dataset $C$. The second finds a weight parameter $\lambda$ given \textsf{PV}, dataset $C$, and the \textsf{Score} function. These hyperparameters are dataset-specific. Details are discussed in the next section. The main body of the algorithm, the outer $\textbf{for}$ loop, generates a set of $nr\_decoys$ (4 here) decoys for each ground-truth caption. It accomplishes this by first extracting $N$ candidates from the \textsf{PV} neighborhood of the ground-truth caption, excluding those that belong to the same image. In the inner $\textbf{for}$ loop, it computes the similarity of each candidate to the ground-truth and stores them in a list $A$. 
If enough candidates are generated, the list is sorted in descending order of score. The top $nr\_decoys$ captions are marked as ``decoys'' (\ie \textbf{false}), while the ground-truth caption is marked as ``target'' (\ie \textbf{true}). The score function $\textsf{Score}(\textsf{PV}, \lambda, \cpb, \cbb)$ is a crucial component of the decoy selection mechanism. Its definition leverages our linguistic intuition by combining linguistic surface similarity, $\textsf{sim}_{\textsc{surf}}(\cpb, \cbb)$, with the similarity suggested by the embedding model, $\mbox{\textsf{sim}}_{\textsf{PV}}(\cpb, \cbb)$: \begin{equation} \textsf{Score}\!=\! \left\{\!\! \arraycolsep=1.5pt \begin{array}{ll} 0 & \! \mbox{if \textsf{sim}}_{\textsc{surf}}\!\ge\!L \\ \lambda\, \mbox{\textsf{sim}}_{\textsf{PV}}\!+\!(1\!-\!\lambda)\, \mbox{\textsf{sim}}_{\textsc{surf}} & \text{otherwise} \end{array} \right. \label{eq:score} \end{equation} where the common argument $(\cpb, \cbb)$ is omitted. The higher the similarity score, the more likely that $\cpb$ is a good decoy for $\cbb$. Note that if the surface similarity is above the threshold $L$, the function returns 0, flagging that the two captions are too similar to be used as a pair of target and decoy. In this work, $\mbox{\textsf{sim}}_{\textsc{surf}}$ is computed as the BLEU score between the inputs~\cite{papineni-etal:2002} (with the brevity penalty set to 1). The embedding similarity, $\mbox{\textsf{sim}}_{\textsf{PV}}$, is computed as the cosine similarity between the two in the PV embedding space. \subsection{The \mciccoco dataset} \label{sec:mcic-coco} We applied the \textsc{MC-IC} Algorithm to the COCO dataset~\cite{coco} to generate a dataset for the visual-language dual machine comprehension task. The dataset is called \mciccoco and it is made publicly available\footnote{\tt http://www.github.com/google/mcic-coco}. We describe the details of this dataset below. We set the neighborhood size at $N=500$, and the threshold at $L=0.5$ (see Eq.~\ref{eq:score}). As the COCO dataset has a large body of images (thus captions) focusing on a few categories (such as sports activities), this threshold is important in discarding significantly similar captions to be decoys -- otherwise, even human annotators will experience difficulty in selecting the ground-truth captions. The hyperparameters of the \textsf{PV} model, {\tt dim} (embedding dimension) and {\tt epochs} (number of training epochs), are optimized in the $\textsc{Optimize-PV}$ step of the \textsc{MC-IC} Algorithm. The main idea is to learn embeddings such that ground-truth captions from the same image have similar embeddings. Concretely, the optimization step is a grid-search over the hyper-parameters of the PV-DBOW model~\cite{le-mikolov:2014}, which we train using a softmax loss. Since there are multiple ground-truth captions associated with each image, the dataset is denoted by $C = \{ \langle \ib_{r_c}, \cbb_{r_c} \rangle | 1 \leq r \leq n, 1 \leq c \leq s_r \}$, where $r$ is the index for each unique image ($\ib_{r_c} \equiv \ib_r$), $n$ is the total number images and $s_r > 1$ is the number of unique captions for image $r$. The total number of data examples $m = \sum_{r=1}^n s_r$. Here the hyper-parameters are searched on a grid to minimize ``multiple ground-truth score'' rank (mgs-rank): the average rank (under the cosine-distance score) between $\cbb_{r_c}$ and $\{ \cbb_{r_l} | 1 \leq l \leq s_r, l\not= c\}$. 
The lower the mgs-rank, the better the resulting paragraph vector model is at modeling multiple ground-truths for a given image as being similar. As such, our grid-search over the \mciccoco dev dataset yields a minimum mgs-rank at {\tt dim}=1024 and {\tt epochs}=5. Similarly, the $\textsc{Optimize-Score}(\textsf{PV}, \textsf{Score})$ step is a grid-search over the $\lambda$ parameter of the \textsf{Score} function, given a paragraph vector embedding model \textsf{PV} and a dataset $C$ of captions and images, as before. A well-chosen $\lambda$ will ensure the multiple ground-truth captions for the same image will be measured with high degree of similarity with the \textsf{Score} function. The $\lambda\in [0,1]$ parameter is searched on a grid to minimize the ``weighted multiple ground-truths score'' rank (wmgs-rank): the average rank (under the \textsf{Score}) between $\cbb_{r_c}$ and $\{ \cbb_{r_l} | 1 \leq l \leq s_r, l\not= c\}$, relative to the top $N$-best closest-cosine neighbors in \textsf{PV}. For example, if given five ground-truths for image $\ib_r$, and when considering $\cbb_{r_1}$, ground-truths $\cbb_{r_2}$ to $\cbb_{r_5}$ are ranking at \#4, \#10, \#16, and \#22 (in top-500 closest-cosine neighbors in \textsf{PV}), then wmgs-rank$(\cbb_{r_1})=13$ (the average of these ranks). Our grid-search over the \mciccoco dev dataset yields a minimum wmgs-rank at $\lambda$=0.3. The resulting \mciccoco dataset has 574,315 instances that are in the format of $\{i: (\langle \ib_i, \cbb_i^j \rangle , \mbox{label}_i^j), j = 1 \ldots 5\}$ where $\mbox{label}_i^j\in \{\mbox{\bf true}, \mbox{\bf false}\}$. For each such instance, there is one and only one $j$ such that the label is \textbf{true}. We have created a train/dev/test split such that all of the instances for the same image occur in the same split. Table~\ref{table:mcic_splits} reports the basic statistics for the dataset. \subsection{Human performance on \mciccoco} \label{sec:human_eval} \noindent \textbf{Setup}\ To measure how well humans can perform on the DMC task, we randomly drew 1,000 instances from the \mciccoco dev set and submitted those instances to human ``raters''\footnote{Raters are vetted, screened and tested before working on any tasks; requirements include native-language proficiency level.} via a crowd-sourcing platform. Three independent responses from 3 different rates were gathered for each instance, for a total of 3,000 responses. To ensure diversity, raters were prohibited from evaluating more than six instances or from responding to the same task instance twice. In total, 807 distinct raters were employed. Raters were shown one instance at a time. They were shown the image and the five caption choices (ground-truth and four decoys, in randomized order) and were instructed to choose the best caption for the image. Before starting evaluation, the raters were trained with sample instances from the \emph{train} dataset, disjoint from the \emph{dev} dataset on which their performance data were collected. The training process presents an image and five sentences, of which the ground-truth caption is highlighted. In addition, specific instructions and clarification were given to the raters on how to choose the best caption for the image. In Figure~\ref{fig:rater-training}, we present three instances on how the rater instructions were presented for rater training. 
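For concreteness, before turning to the quantitative results, the decoy-scoring rule of Eq.~\ref{eq:score} and the selection step of Algorithm~\ref{aMCIC} that produced the instances shown to raters can be sketched in code. This is a minimal illustration under stated assumptions, not the released data-generation pipeline: \texttt{pv\_embed} (a paragraph-vector lookup) and \texttt{sentence\_bleu} (sentence-level BLEU with the brevity penalty fixed to 1) are hypothetical helpers supplied by the caller.
\begin{verbatim}
# Sketch only: `pv_embed` and `sentence_bleu` are assumed helpers, not the
# authors' code; BLEU is computed with the brevity penalty fixed to 1.
import numpy as np

LAMBDA = 0.3    # weight found by Optimize-Score on the dev split
L_SURF = 0.5    # surface-similarity threshold L in the scoring rule above

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def score(candidate, truth, pv_embed, sentence_bleu):
    """Decoy score of `candidate` w.r.t. the ground-truth caption."""
    sim_surf = sentence_bleu(candidate, truth)   # linguistic surface similarity
    if sim_surf >= L_SURF:                       # too close to the target: reject
        return 0.0
    sim_pv = cosine(pv_embed(candidate), pv_embed(truth))
    return LAMBDA * sim_pv + (1.0 - LAMBDA) * sim_surf

def pick_decoys(truth, neighbors, pv_embed, sentence_bleu, nr_decoys=4):
    """Keep the highest-scoring captions from the PV neighborhood of `truth`.
    `neighbors` is assumed to already exclude captions of the same image."""
    scored = [(score(c, truth, pv_embed, sentence_bleu), c) for c in neighbors]
    scored = [sc for sc in scored if sc[0] > 0]
    if len(scored) < nr_decoys:
        return None                              # instance is dropped
    ranked = sorted(scored, key=lambda sc: sc[0], reverse=True)
    return [c for _, c in ranked[:nr_decoys]]
\end{verbatim}
With $\lambda=0.3$ (found by the grid search on the dev split) and $L=0.5$, near-duplicate candidates are rejected outright, and the remaining ones are ranked by a mixture of embedding and surface similarity.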
\noindent \textbf{Quantitative results}\ We assessed human performance in two metrics: (1) Percentage of correct rater responses (1-human system): \textbf{81.1\%} (2432 out of 3000); (2) Percentage of instances with at least $50\%$ (\ie 2) correct responses (3-human system): \textbf{82.8\%} (828 out of 1000). Table~\ref{table:human_eval_instances} gives a detailed breakdown on the statistics related to the inter-rater (dis)agreement. The first row, with accuracy at 67.3\%, suggests that this is the level at which the correct answer is obvious (\ie, percentage of ``easy'' instances). The second row, at 82.8\%, indicates that this is the performance ceiling in terms of accuracy that can be expected for the \mciccoco dataset; at the same time, it suggests that the difference between 67.3\% and 82.8\% (i.e., about 15\% of instances) is caused by ``difficult'' instances. Finally, the third row, at 93.1\%, indicates that the level of ``unanswerable'' instances is somewhere in the 10\%-15\% range (combining the increase from 82.8\% to 93.1\% and the remaining 6.9\% that no one gets right). We will investigate those instances in detail in the future. The COCO dataset has a significant number of captions that fit more than one image in the dataset, given the biased concentration on certain categories. Thus, we suspect that even with our threshold-check (cf. the introduction of $L$ in Eq.~\ref{eq:score}), our procedure might have failed to filter out some impossible-to-distinguish decoys. \noindent \textbf{Qualitative examples}\ We present in Figure~\ref{fig:examples} several example instances from the \mciccoco dataset. The first example illustrates how certain aspects of VQA are subsumed by the DMC task: in order to correctly choose answer 3, a system needs to implicitly answer questions like ``how many people are in the image?'' (answer: three, thus choices 4. and 5. are wrong), and ``what are the people doing?'' (answer: riding horses, thus choices 1. and 2. are wrong). The second example illustrates the extent to which a successful computer system needs to be able to differentiate between ``standing'' and ``rolling'' in a visually-grounded way, presumably via a pose model~\cite{yao-li:2010} combined with a translation model between poses and their verbal correspondents. Last but not least, the third examples illustrates a difficult case, which led to human annotator disagreement in our annotation process (both choice 3. and 5. were selected by different annotators). \section{Learning Methods} \label{sLearning} We describe several learning methods for the dual machine comprehension (DMC) task with the \mcic dataset. We start with linear models which will be used as baselines. We then present several neural-network based models. In particular, we describe a novel, hybrid neural network model that combines the feedforward architecture and the seq2seq architecture~\cite{sutskever-etal:2014} for multi-task learning of the DMC task and the image captioning task. This new model achieves the best performance in both tasks. \subsection{Linear models as baselines} \noindent \textbf{Regression}\ To examine how well the two embeddings are aligned in ``semantic understanding space'', a simple approach is to assume that the learners do not have access to the decoys. Instead, by accessing the ground-truth captions only, the models learn a linear regressor from the image embeddings to the target captions' embeddings (``forward regression''), or from the captions to the images (``backward regression''). 
With the former approach, referred as \textsf{Baseline-I2C}, we check whether the predicted caption for any given image is closest to its true caption. With the latter, referred as \textsf{Baseline-C2I}, we check whether the predicted image embedding by the ground-truth caption is the closest among predicted ones by decoy captions to the real image embeddings. \noindent \textbf{Linear classifier}\ Our next approach \textsf{Baseline-LinM} is a linear classifier learned to discriminate true targets from the decoys. Specifically, we learn a linear discriminant function $f(\ib, \cbb; \thetab) = \ib^{\top}\thetab \cbb$ where $\thetab$ is a matrix measuring the compatibility between two types of embeddings, cf. ~\cite{frome2013devise}. The loss function is then given by \begin{equation} L(\thetab) = \sum_i [ \max_{j \ne j^*} f(\ib_i, \cbb_i^j; \thetab) - f(\ib_i, \cbb_i^{j^*}; \thetab) ]_+ \end{equation} where $[\ ]_{+}$ is the hinge function and $j$ indexes over all the available decoys and $i$ indexes over all training instances. The optimization tries to increase the gap between the target $\cbb_i^{j^*}$ and the worst ``offending'' decoy. We use stochastic (sub)gradient methods to optimize $\thetab$, and select the best model in terms of accuracy on the \mciccoco development set. \subsection{Feedforward Neural Network (FFNN) models} \label{sec:ffnn} To present our neural-network--based models, we use the following notations. Each training instance pair is a tuple $\langle \ib_i, \cbb_i^j\rangle$, where $\ib$ denotes the image, and $\cbb_i^j$ denotes the caption options, which can either be the target or the decoys. We use a binary variable $y_{ijk} \in \{0,1\}$ to denote whether $j$-th caption of the instance $i$ is labeled as $k$, and $\sum_k y_{ijk} = 1$. We first employ the standard feedforward neural-network models to solve the DMC task on the \mciccoco dataset. For each instance pair $\langle \ib_i, \cbb_i^j\rangle$, the input to the neural network is an embedding tuple $\langle \text{DNN}(\ib_i; \Gamma), \text{Emb}(\cbb_i^j; \Omega) \rangle$, where $\Gamma$ denotes the parameters of a deep convolutional neural network $\text{DNN}$. $\text{DNN}$ takes an image and outputs an image embedding vector. $\Omega$ is the embedding matrix, and $\text{Emb}(.)$ denotes the mapping from a list of word IDs to a list of embedding vectors using $\Omega$. The loss function for our FFNN is given by: \begin{dmath} L(\Gamma, \Omega,\ub)\!=\! \sum_{i, j, k} y_{ijk} \log \; \text{FN}_k(\text{DNN}(\ib_i; \Gamma), \text{Emb}(\cbb_i^j; \Omega); \ub) \label{eq:closs} \end{dmath} \noindent where $\text{FN}_k$ denotes the $k$-th output of a feedforward neural network, and $\sum_k \text{FN}_k(.) = 1$. Our architecture uses a two hidden-layer fully connected network with Rectified Linear hidden units, and a softmax layer on top. The formula in Eq.~\ref{eq:closs} is generic with respect to the number of classes. In particular, we consider a 2-class--classifier ($k\in\{0, 1\}$, 1 for 'yes', this is a correct answer; 0 for 'no', this is an incorrect answer), applied independently on all the $\langle \ib_i, \cbb_i^j\rangle$ pairs and apply one FFNN-based binary classifier for each; the final prediction is the caption with the highest 'yes' probability among all instance pairs belonging to instance $i$. \subsection{Vec2seq + FFNN Model} \label{sec:seq+ffnn} We describe here a hybrid neural-network model that combines a recurrent neural-network with a feedforward one. 
We encode the image into a single-cell RNN encoder, and the caption into an RNN decoder. Because the first sequence only contains one cell, we call this model a vector-to-sequence (Vec2seq) model as a special case of Seq2seq model as in ~\cite{sutskever-etal:2014,bahdanau-etal:2015}. The output of each unit cell of a Vec2seq model (both on the encoding side and the decoding side) can be fed into an FFNN architecture for binary classification. See Figure~\ref{agmc_diagram} for an illustration of the Vec2seq + FFNN model architecture. \paragraph{Multi-task learning} In addition to the classification loss (Eq.~\ref{eq:closs}), we also include a loss for generating an output sequence $\cbb_i^j$ based on an input $\ib_i$ image. We define a binary variable $z_{ijlv} \in \{0,1\}$ to indicate whether the $l$th word of $\cbb_i^j$ is equal to word $v$. $\Ob^d_{ijl}$ denotes the $l$-th output of the decoder of instance pair $\langle \ib_i, \cbb_i^j\rangle$, $\Ob^e_{ij}$ denotes the output of the encoder, and $\Ob^d_{ij:}$ denotes the concatenation of decoder outputs. With these definitions, the loss function for the Vec2seq + FFNN model is: \begin{dmath} L(\thetab, \wb, \ub) = \sum_{i, j, k} y_{ijk} \log \; \text{FN}_k(\Ob^e_{ij}(\ib_i, \cbb_i^j; \thetab), \Ob^d_{ij:}(\ib_i, \cbb_i^j; \thetab); \ub) + \lambda_{gen} \sum_{i,j, l, v} y_{ij1} z_{ijlv} \log \;\text{softmax}_v(\Ob^d_{ijl}(\ib_i, \cbb_i^j; \thetab); \wb) \label{eq:mloss} \end{dmath} \noindent where $\sum_v \text{softmax}_v(.) = 1$; $\thetab$ are the parameters of the Vec2seq model, which include the parameters within each unit cell, as well as the elements in the embedding matrices for images and target sequences; $\wb$ are the output projection parameters that transform the output space of the decoder to the vocabulary space. $\ub$ are the parameters of the FFNN model (Eq.~\ref{eq:closs}); $\lambda_{gen}$ is the weight assigned to the sequence-to-sequence generation loss. Only the true target candidates (the ones with $y_{ij1} = 1$) are included in this loss, as we do not want the decoy target options to affect this computation. The Vec2seq model we use here is an instantiation of the attention-enhanced models proposed in~\cite{bahdanau-etal:2015,chen-etal:2016}. However, our current model does not support location-wise attention, as in the Show-Attend-and-Tell~\cite{xu-etal:2016} model. In this sense, our model is an extension of the Show-and-Tell model with a single attention state representing the entire image, used as image memory representation for all decoder decisions. We apply Gated Recurrent Unit (GRU) as the unit cell~\cite{cho-etal:2014}. We also compare the influence on performance of the $\lambda_{gen}$ parameter. \section{Experiments} \label{sec:results} \subsection{Experimental Setup} \noindent \textbf{Baseline models}\ For the baseline models, we use the 2048-dimensional outputs of Google-Inception-v3~\cite{inception-v3} (pre-trained on ImageNet ILSSVR 2012) to represent the images, and 1024-dimensional paragraph-vector embeddings (section~\ref{sec:creation}) to represent captions. To reduce computation time, both are reduced to 256-dimensional vectors using random projections. \noindent \textbf{Neural-nets based models}\ The experiments with these models are done using the Tensorflow package~\cite{tensorflow2015-whitepaper}. The hyper-parameter choices are decided using the hold-out development portion of the \mciccoco set. 
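For intuition on how these models are trained, the per-instance combination of the two terms in Eq.~\ref{eq:mloss} can be sketched as follows. This is a schematic, framework-agnostic sketch written as a quantity to minimize, not the TensorFlow implementation used in the experiments; the classifier's `yes' probabilities and the decoder's per-token log-probabilities of the true caption (\texttt{yes\_probs} and \texttt{token\_logprobs}, hypothetical names) are assumed given.
\begin{verbatim}
# Schematic per-instance combination of the two terms of the multi-task loss
# above; `yes_probs` (one 'yes' probability per candidate caption) and
# `token_logprobs` (decoder log-probabilities of the gold tokens of the
# *true* caption) are assumed model outputs, not a real API.
import numpy as np

def instance_loss(yes_probs, true_index, token_logprobs, lambda_gen=1.0):
    eps = 1e-12
    # Classification term: the true caption should be classified 'yes',
    # every decoy caption 'no'.
    cls = -np.log(yes_probs[true_index] + eps)
    cls -= np.sum(np.log(1.0 - np.delete(yes_probs, true_index) + eps))
    # Generation term: negative log-likelihood of the true caption only;
    # decoy captions do not contribute to this term.
    gen = -np.sum(token_logprobs)
    return cls + lambda_gen * gen
\end{verbatim}
Setting $\lambda_{gen}=0$ recovers pure comprehension training, while larger values shift the emphasis towards caption generation; this trade-off is examined in Section~\ref{sec:lambda_gen}.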
For modeling the input tokens, we use a vocabulary size of 8,855 types, selected as the most frequent tokens over the captions from the COCO training set (words occurring at least 5 times). The models are optimized using ADAGRAD with an initial learning rate of 0.01, and clipped gradients (maximum norm 4). We run the training procedures for $3,000,000$ steps, with a mini-batch size of 20. We use 40 workers for computing the updates, and 10 parameter servers for model storing and (asynchronous and distributed) updating. We use the following notations to refer to the neural network models: \ffnnb refers to the version of feedforward neural network architecture with a 2-class--classifier ('yes' or 'no' for answer correctness), over which an $\argmax$ function computes a 5-way decision (i.e., the choice with the highest 'yes' probability); we henceforth refer to this model simply as FFNN. The \seqff refers to the hybrid model described in Section~\ref{sec:seq+ffnn}, combining Vec2seq and \ffnnb. The RNN part of the model uses a two-hidden--layer GRU unit-cell~\cite{cho-etal:2014} configuration, while the FFNN part uses a two-hidden--layer architecture. The $\lambda_{gen}$ hyper-parameter from the loss-function $L(\thetab, \wb, \ub)$ (Eq.~\ref{eq:mloss}) is by default set to 1.0 (except for Section~\ref{sec:lambda_gen} where we directly measure its effect on performance). \noindent \textbf{Evaluation metrics}\ The metrics we use to measure performance come in two flavors. First, the accuracy in detecting (the index of) the true target among the decoys provides a direct way of measuring the performance level on the comprehension task. We use this metric as the main indicator of comprehension performance. Second, because our \seqff models are multi-task models, they can also generate new captions given the input image. The performance level for the generation task is measured using the standard scripts measuring ROUGE-L~\cite{lin-och:2004} and CIDEr~\cite{cider}, using as reference the available captions from the COCO data (around 5 for most of the images). Code for these metrics is available as part of the COCO evaluation toolkit~\footnote{\tt https://github.com/tylin/coco-caption}. As usual, both the hypothesis strings and the reference strings are preprocessed: remove all the non-alphabetic characters; transform all letters to lowercase, and tokenize using white space; replace all words occurring less than 5 times with an unknown token $\langle\mbox{UNK}\rangle$ (total vocabulary of 8,855 types); truncate to the first 30 tokens. \subsection{Results} \label{sDMCResults} Table~\ref{table:baselines} summarizes our main results on the comprehension task. We report the accuracies (and their standard deviations) for random choice, baselines, and neural network-based models. Interestingly, the \textsf{Baseline-I2C} model performs at the level of random choice, and much worse than the \textsf{Baseline-C2I model}. This discrepancy reflects the inherent difficulty in vision-Language tasks: for each image, there are several possible equally good descriptions, thus a linear mapping from the image embeddings to the captions might not be enough -- statistically, the \emph{linear} model will just predict the mean of those captions. However, for the reverse direction where the captions are the independent variables, the learned model does not have to capture the variability in image embeddings corresponding to the different but equally good captions -- there is only one such image embedding. 
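As a point of reference for the numbers reported below, the 5-way decision rule of the \ffnnb models and the accuracy metric defined above reduce to the following small sketch; \texttt{yes\_probs} and \texttt{true\_index} are hypothetical names for the classifier outputs and the gold label.
\begin{verbatim}
# Minimal sketch of the argmax-over-'yes' decision and the accuracy metric;
# `yes_probs` and `true_index` are assumed inputs (classifier outputs and
# gold label), not part of the paper's code.
import numpy as np

def predict(yes_probs):
    """Pick the candidate caption with the highest 'yes' probability."""
    return int(np.argmax(yes_probs))

def accuracy(instances):
    """`instances`: list of (yes_probs, true_index) pairs, one per image."""
    hits = sum(predict(p) == t for p, t in instances)
    return hits / len(instances)

# One 5-candidate instance whose third caption (index 2) is the target:
print(accuracy([(np.array([0.10, 0.20, 0.90, 0.30, 0.05]), 2)]))  # 1.0
\end{verbatim}
With five candidates per instance, random choice corresponds to 20\% accuracy, the reference point for the baselines above.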
Nonlinear neural networks overcome these modeling limitations. The results clearly indicate their superiority over the baselines. The \seqff model obtains the best results, with accuracies of 60.5\% (dev) and 59.0\% (test); the accuracy numbers indicate that the \seqff architecture is superior to the non-recursive fully-connected FFNN architecture (at 55.1\% accuracy on test). We show next the impact on performance of the embedding dimension and neural-network sizes, for both the feedforward and the recurrent architectures. \subsection{Analysis: embedding dimension and neural-network sizes} \label{sec:impact_size} In this section, we compare neural networks models of different sizes. Specifically, we compare embedding dimensions of $\cbr{64, 256, 512, 1024, 2048}$, and two hidden-layer architectures with sizes of $\cbr{(64, 16), (256, 64), (512, 128), (1024, 256), (2048, 512)}$. The results in Table~\ref{table:ffnn_sizes} illustrate an interesting behavior for the neural-network architectures. For the FFNN models, contrary to expectations, bigger network sizes leads to decreasing accuracy. On the other hand, for \seqff models, accuracy increases with increased size in model parameters, up until the embedding dimension of the RNN model matches the embedding dimension of the Inception model, at 2048. At accuracy levels of 63.4\% (dev) and 60.8\% (test), this performance establishes a high-bar for a computer model performance on the DMC task using the \mciccoco dataset. According to the estimate from Table~\ref{table:human_eval_instances}, this level of performance is still \emph{significantly} below the 82.8\% accuracy achievable by humans, which makes \mciccoco a challenging testbed for future models of Vision-Language machine comprehension. \subsection{Multi-task learning for DMC and Image Captioning} \label{sec:lambda_gen} In this section, we compare models with different values of $\lambda_{gen}$ in Eq.~\ref{eq:mloss}. This parameter allows for a natural progression from learning for the DMC task only ($\lambda_{gen} = 0$) to focusing on the image captioning loss ($\lambda_{gen} \rightarrow +\infty$). In between the two extremes, we have a multi-task learning objective for jointly learning related tasks. The results in Table~\ref{table:lambda_gen} illustrate one of the main points of this paper. That is, the ability to perform the comprehension task (as measured by the accuracy metric) positively correlates with the ability to perform other tasks that require machine comprehension, such as caption generation. At $\lambda_{gen} = 4$, the \seqff model not only has a high accuracy of detecting the ground-truth option, but it also generates its own captions given the input image, with an accuracy measured on \mciccoco at 0.9890 (dev) and 0.9380 (test) CIDEr scores. On the other hand, at an accuracy level of about 59\% (on test, at $\lambda_{gen} = 0.1$), the generation performance is at only 0.9010 (dev) and 0.8650 (test) CIDEr scores. We note that there is an inherent trade-off between prediction accuracy and generation performance, as seen for $\lambda_{gen}$ values above 4.0. This agrees with the intuition that training a \seqff model using a loss $L(\thetab, \wb, \ub)$ with a larger $\lambda_{gen}$ means that the ground-truth detection loss (the first term of the loss in Eq.\ref{eq:mloss}) may get overwhelmed by the word-generation loss (the second term). 
However, our empirical results suggest that there is value in training models with a multi-task setup, in which both the comprehension side as well as the generation side are carefully tuned to maximize performance. \section{Discussion} We have proposed and described in detail a new multi-modal machine comprehension task (DMC), combining the challenges of understanding visual scenes and complex language constructs simultaneously. The underlying hypothesis for this work is that computer systems that can be shown to perform increasingly well on this task will do so by constructing a visually-grounded understanding of various linguistic elements and their dependencies. This type of work can therefore benefit research in both machine visual understanding and language comprehension. The \seqff architecture that we propose for addressing this combined challenge is a generic multi-task model. It can be trained end-to-end to display both the ability to choose the most likely text associated with an image (thus enabling a direct measure of its ``comprehension'' performance), as well as the ability to generate a complex description of that image (thus enabling a direct measure of its performance in an end-to-end complex and meaningful task). The empirical results we present validate the underlying hypothesis of our work, by showing that we can measure the decisions made by such a computer system and validate that improvements in comprehension and generation happen in tandem. The experiments presented in this work are done training our systems in an end-to-end fashion, starting directly from raw pixels. We hypothesize that our framework can be fruitfully used to show that incorporating specialized vision systems (such as object detection, scene recognition, pose detection, etc.) is beneficial. More precisely, not only it can lead to a direct and measurable impact on a computer system's ability to perform image understanding, but it can express that understanding in an end-to-end complex task. \begin{thebibliography}{10} \bibitem{tensorflow2015-whitepaper} M.~Abadi, A.~Agarwal, P.~Barham, E.~Brevdo, Z.~Chen, C.~Citro, G.~Corrado, A.~Davis, J.~Dean, M.~Devin, S.~Ghemawat, I.~Goodfellow, A.~Harp, G.~Irving, M.~Isard, Y.~Jia, R.~Jozefowicz, L.~Kaiser, M.~Kudlur, J.~Levenberg, D.~Man\'{e}, R.~Monga, S.~Moore, D.~Murray, C.~Olah, M.~Schuster, J.~Shlens, B.~Steiner, I.~Sutskever, K.~Talwar, P.~Tucker, V.~Vanhoucke, V.~Vasudevan, F.~Vi\'{e}gas, O.~Vinyals, P.~Warden, M.~Wattenberg, M.~Wicke, Y.~Yu, and X.~Zheng. \newblock {TensorFlow}: Large-scale machine learning on heterogeneous systems, 2015. \newblock Software available from tensorflow.org. \bibitem{spice} P.~Anderson, B.~Fernando, M.~Johnson, and S.~Gould. \newblock {SPICE:} semantic propositional image caption evaluation. \newblock {\em CoRR}, abs/1607.08822, 2016. \bibitem{antol15vqa} S.~Antol, A.~Agrawal, J.~Lu, M.~Mitchell, D.~Batra, C.~L. Zitnick, and D.~Parikh. \newblock {VQA}: Visual question answering. \newblock In {\em International Conference on Computer Vision (ICCV)}, 2015. \bibitem{bahdanau-etal:2015} D.~Bahdanau, K.~Cho, and Y.~Bengio. \newblock Neural machine translation by jointly learning to align and translate. \newblock In {\em Proceedings of ICLR}, 2015. \bibitem{meteor} S.~Banerjee and A.~Lavie. \newblock {METEOR}: An automatic metric for {MT} evaluation with improved correlation with human judgments. 
\newblock In {\em Proceedings of the {ACL} {W}orkshop on intrinsic and extrinsic evaluation measures for machine translation and/or summarization}, 2005. \bibitem{bengio2009book} Y.~Bengio. \newblock Learning deep architectures for ai. \newblock {\em Found. Trends Mach. Learn.}, 2(1):1--127, Jan. 2009. \bibitem{bernardi16survey} R.~Bernardi, R.~Cakici, D.~Elliott, A.~Erdem, E.~Erdem, N.~Ikizler-Cinbis, F.~Keller, A.~Muscat, and B.~Plank. \newblock Automatic description generation from images: A survey of models, datasets, and evaluation measures. \newblock {\em JAIR}, 55, 2016. \bibitem{chen-etal:2016} D.~Chen, J.~Bolton, and C.~D. Manning. \newblock {A Thorough Examination of the CNN/Daily Mail Reading Comprehension Task}. \newblock In {\em Proceedings of ACL}, 2016. \bibitem{cho-etal:2014} K.~Cho, B.~van Merrienboer, {\c{C}}.~G{\"{u}}l{\c{c}}ehre, D.~Bahdanau, F.~Bougares, H.~Schwenk, and Y.~Bengio. \newblock Learning phrase representations using {RNN} encoder-decoder for statistical machine translation. \newblock In {\em Proceedings of {EMNLP}, October 25-29, 2014, Doha, Qatar}, pages 1724--1734, 2014. \bibitem{donahue2014long} J.~Donahue, L.~A. Hendricks, S.~Guadarrama, M.~Rohrbach, S.~Venugopalan, K.~Saenko, and T.~Darrell. \newblock Long-term recurrent convolutional networks for visual recognition and description. \newblock In {\em Proc. of IEEE Conference on Computer Vision and Pattern Recognition (CVPR)}, 2014. \bibitem{fang2014captions} H.~Fang, S.~Gupta, F.~Iandola, R.~Srivastava, L.~Deng, P.~Doll{\'a}r, J.~Gao, X.~He, M.~Mitchell, J.~Platt, et~al. \newblock From captions to visual concepts and back. \newblock In {\em Proc. of IEEE Conference on Computer Vision and Pattern Recognition (CVPR)}, 2015. \bibitem{frome2013devise} A.~Frome, G.~S. Corrado, J.~Shlens, S.~Bengio, J.~Dean, T.~Mikolov, et~al. \newblock Devise: A deep visual-semantic embedding model. \newblock In {\em Advances in Neural Information Processing Systems (NIPS)}, 2013. \bibitem{gao15machine} H.~Gao, J.~Mao, J.~Zhou, Z.~Huang, L.~Wang, and W.~Xu. \newblock Are you talking to a machine? dataset and methods for multilingual image question answering. \newblock In {\em NIPS}, 2015. \bibitem{hodosh16eval} M.~Hodosh and J.~Hockenmaier. \newblock Focused evaluation for image description with binary forced-choice tasks. \newblock In {\em Proc. 5th Vision and Language Workshop}, 2016. \bibitem{hodosh13framing} M.~Hodosh, P.~Young, and J.~Hockenmaier. \newblock Framing image description as a ranking task: Data, models and evaluation metrics. \newblock {\em JAIR}, 2013. \bibitem{karpathy2014deep} A.~Karpathy and L.~Fei-Fei. \newblock Deep visual-semantic alignments for generating image descriptions. \newblock In {\em Proc. of IEEE Conference on Computer Vision and Pattern Recognition (CVPR)}, 2015. \bibitem{kiros2014unifying} R.~Kiros, R.~Salakhutdinov, and R.~S. Zemel. \newblock Unifying visual-semantic embeddings with multimodal neural language models. \newblock {\em Transactions of the Association for Computational Linguistics}, 2015. \bibitem{krishnavisualgenome} R.~Krishna, Y.~Zhu, O.~Groth, J.~Johnson, K.~Hata, J.~Kravitz, S.~Chen, Y.~Kalantidis, L.-J. Li, D.~A. Shamma, M.~Bernstein, and L.~Fei-Fei. \newblock {Visual Genome}: Connecting language and vision using crowdsourced dense image annotations. \newblock 2016. \bibitem{le-mikolov:2014} Q.~Le and T.~Mikolov. \newblock Distributed representations of sentences and documents. 
\newblock In {\em Proceedings of the 31st International Conference on Machine Learning}, Beijing, China, 2014. \bibitem{lin-och:2004} C.-Y. Lin and F.~J. Och. \newblock Automatic evaluation of machine translation quality using longest common subsequence and skip-bigram statistics. \newblock In {\em Proceedings of ACL}, 2004. \bibitem{coco} T.~Lin, M.~Maire, S.~J. Belongie, L.~D. Bourdev, R.~B. Girshick, J.~Hays, P.~Perona, D.~Ramanan, P.~Doll{\'{a}}r, and C.~L. Zitnick. \newblock Microsoft {COCO:} common objects in context. \newblock {\em CoRR}, abs/1405.0312, 2014. \bibitem{lin16leverage} X.~Lin and D.~Parikh. \newblock Leveraging visual question answering for image-caption ranking. \newblock {\em CoRR}, abs/1605.01379, 2016. \bibitem{malinoski14qa} M.~Malinowski and M.~Fritz. \newblock A multi-world approach to question answering about real-world scenes based on uncertain input. \newblock In {\em NIPS}, 2014. \bibitem{malinowski15neural} M.~Malinowski, M.~Rohrbach, and M.~Fritz. \newblock Ask your neurons: A neural-based approach to answering questions about images. \newblock In {\em ICCV}, 2015. \bibitem{mao15mrnn} J.~Mao, W.~Xu, Y.~Yang, J.~Wang, and A.~Yuille. \newblock Deep captioning with multimodal recurrent neural networks ({mRNN}). \newblock In {\em Proc. Int. Conf. Learn. Representations}, 2015. \bibitem{papineni-etal:2002} K.~Papineni, S.~Roukos, T.~Ward, and W.-J. Zhu. \newblock Bleu: A method for automatic evaluation of machine translation. \newblock In {\em Proceedings of ACL}, pages 311--318, 2002. \bibitem{mixer15} M.~Ranzato, S.~Chopra, M.~Auli, and W.~Zaremba. \newblock Sequence level training with recurrent neural networks. \newblock {\em CoRR}, abs/1511.06732, 2015. \bibitem{ren15visual} M.~Ren, R.~Kiros, and R.~Zemel. \newblock Image question answering: A visual semantic embedding model and a new dataset. \newblock In {\em NIPS}, 2015. \bibitem{sutskever-etal:2014} I.~Sutskever, O.~Vinyals, and Q.~V.~V. Le. \newblock Sequence to sequence learning with neural networks. \newblock In {\em Advances in Neural Information Processing Systems 27}, pages 3104--3112. Curran Associates, Inc., 2014. \bibitem{inception-v3} C.~Szegedy, V.~Vanhoucke, S.~Ioffe, J.~Shlens, and Z.~Wojna. \newblock Rethinking the inception architecture for computer vision. \newblock volume abs/1512.00567, 2015. \bibitem{tu14video} K.~Tu, M.~Meng, M.~W. Lee, T.~E. Choe, and S.~C. Zhu. \newblock Joint video and text parsing for understanding events and answering queries. \newblock {\em IEEE MultiMedia}, 2014. \bibitem{cider} R.~Vedantam, C.~Lawrence~Zitnick, and D.~Parikh. \newblock Cider: Consensus-based image description evaluation. \newblock In {\em The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)}, June 2015. \bibitem{vinyals2014show} O.~Vinyals, A.~Toshev, S.~Bengio, and D.~Erhan. \newblock Show and tell: A neural image caption generator. \newblock In {\em Proc. of IEEE Conference on Computer Vision and Pattern Recognition (CVPR)}, 2015. \bibitem{wu16survey} Q.~Wu, D.~Teney, P.~Wang, C.~Shen, A.~R. Dick, and A.~van~den Hengel. \newblock Visual question answering: {A} survey of methods and datasets. \newblock {\em CoRR}, abs/1607.05910, 2016. \bibitem{wu16external} Q.~Wu, P.~Wang, C.~Shen, A.~Dick, and A.~van~den Hengel. \newblock Ask me anything: Free-form visual question answering based on knowledge from external sources. \newblock In {\em CVPR}, 2016. \bibitem{xu-etal:2016} K.~Xu, J.~Ba, R.~Kiros, K.~Cho, A.~C. Courville, R.~Salakhutdinov, R.~S. Zemel, and Y.~Bengio. 
\newblock Show, attend and tell: Neural image caption generation with visual attention. \newblock {\em CoRR}, abs/1502.03044, 2015. \bibitem{xu2015show} K.~Xu, J.~Ba, R.~Kiros, A.~Courville, R.~Salakhutdinov, R.~Zemel, and Y.~Bengio. \newblock Show, attend and tell: Neural image caption generation with visual attention. \newblock In {\em Proc. of the 32nd International Conference on Machine Learning (ICML)}, 2015. \bibitem{yang16san} Z.~Yang, X.~He, J.~Gao, L.~Deng, and A.~Smola. \newblock Stacked attention networks for image question answering. \newblock In {\em CVPR}, 2016. \bibitem{yao-li:2010} B.~Yao and F.-F. Li. \newblock Modeling mutual context of object and human pose in human-object interaction activities. \newblock In {\em Proceedings of the 2010 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)}, 2010. \bibitem{yu15madlib} L.~Yu, E.~Park, A.~C. Berg, and T.~L. Berg. \newblock Visual madlibs: Fill-in-theblank description generation and question answering. \newblock In {\em ICCV}, 2015. \bibitem{zhu16visual7w} Y.~Zhu, O.~Groth, M.~Bernstein, and L.~Fei-Fei. \newblock Visual7w: Grounded question answering in images. \newblock In {\em CVPR}, 2016. \end{thebibliography} \end{document}
Understanding Image and Text Simultaneously: a Dual Vision-Language Machine Comprehension Task
1612.07833
Table 5: The impact of λgen on MCIC-COCO accuracy, together with caption-generation performance (ROUGE-L and CIDEr against 5 references). All results are obtained with a Vec2seq+FFNN model (embedding size 2048 and hidden-layer sizes of 2048 and 512).
[ "[ITALIC] λgen", "Acc Dev", "Acc Test", "ROUGE-L Dev", "ROUGE-L Test", "CIDEr Dev", "CIDEr Test" ]
[ [ "0.0", "50.7", "50.7 ±0.5", "-", "-", "-", "-" ], [ "0.1", "61.1", "59.0 ±0.5", "0.517", "0.511", "0.901", "0.865" ], [ "1.0", "[BOLD] 63.4", "[BOLD] 60.8 ±0.5", "0.528", "0.518", "0.972", "0.903" ], [ "2.0", "[BOLD] 63.4", "[BOLD] 61.3 ±0.5", "0.528", "0.519", "0.971", "0.921" ], [ "4.0", "[BOLD] 63.0", "[BOLD] 60.9 ±0.5", "[BOLD] 0.533", "[BOLD] 0.524", "[BOLD] 0.989", "[BOLD] 0.938" ], [ "8.0", "62.1", "60.1 ±0.5", "0.526", "0.520", "0.957", "0.914" ], [ "16.0", "61.8", "59.6 ±0.5", "0.530", "0.519", "0.965", "0.912" ] ]
That is, the ability to perform the comprehension task (as measured by the accuracy metric) positively correlates with the ability to perform other tasks that require machine comprehension, such as caption generation. At λgen=4, the Vec2seq+FFNN model not only achieves high accuracy in detecting the ground-truth option, but it also generates its own captions given the input image, with caption quality measured on MCIC-COCO at 0.9890 (dev) and 0.9380 (test) CIDEr scores. On the other hand, at an accuracy level of about 59% (on test, at λgen=0.1), the generation performance is at only 0.9010 (dev) and 0.8650 (test) CIDEr scores.
\documentclass[10pt,twocolumn,letterpaper]{article} \usepackage[ruled,vlined]{algorithm2e} \DeclareMathOperator{\Ob}{\mathbf{O}} \DeclareMathOperator{\ab}{\mathbf{a}} \DeclareMathOperator{\cbb}{\mathbf{c}} \DeclareMathOperator{\cpb}{\mathbf{c}^\prime} \DeclareMathOperator{\ib}{\mathbf{i}} \DeclareMathOperator{\tb}{\mathbf{t}} \DeclareMathOperator{\tpb}{\mathbf{t}^\prime} \DeclareMathOperator{\ub}{\mathbf{u}} \DeclareMathOperator{\wb}{\mathbf{w}} \DeclareMathOperator{\thetab}{\mathbf{\Theta}} \DeclareMathOperator*{\argmax}{\mathop{\mathrm{argmax}}} \newcommand{\cbr}[1]{\left\{#1\right\}} \def\sm{\small} \def\pairci{\langle\mbox{\it image}, \mbox{\it\{caption(s)\} }\rangle} \def\spaircpc{\langle \cpb, \cbb \rangle} \def\spairci{\langle \ib, \cbb \rangle} \def\spaircti{\langle \ib, \cpb \rangle} \def\spaircpi{\langle \ib, \cpb \rangle} \def\mcic{\textsc{MC}$_{\mbox{\scriptsize IC}}\;$} \def\mciccoco{\textsc{MCIC-COCO}$\;$} \def\ffnnb{FFNN$_{\mbox{\scriptsize 2-class}}^{\mbox{\scriptsize argmax 1..5}}\;$} \def\seqff{Vec2seq+FFNN$\;$} \cvprfinalcopy % *** Uncomment this line for the final submission \def\cvprPaperID{3121} % *** Enter the CVPR Paper ID here \def\httilde{\mbox{\tt\raisebox{-.5ex}{\symbol{126}}}} \begin{document} \title{Understanding Image and Text Simultaneously: a Dual Vision-Language Machine Comprehension Task} \author{Nan Ding \\ Google\\ {\tt\small dingnan@google.com} \and Sebastian Goodman \\ Google\\ {\tt\small seabass@google.com}\\ \and Fei Sha \\ Google\\ {\tt\small fsha@google.com}\\ \and Radu Soricut \\ Google\\ {\tt\small rsoricut@google.com} } \maketitle \begin{abstract} We introduce a new multi-modal task for computer systems, posed as a combined vision-language comprehension challenge: identifying the most suitable \emph{text} describing a scene, given several similar options. Accomplishing the task entails demonstrating comprehension beyond just recognizing ``keywords'' (or key-phrases) and their corresponding visual concepts. Instead, it requires an alignment between the representations of the two modalities that achieves a visually-grounded ``understanding'' of various linguistic elements and their dependencies. This new task also admits an easy-to-compute and well-studied metric: the accuracy in detecting the true target among the decoys. The paper makes several contributions: an effective and extensible mechanism for generating decoys from (human-created) image captions; an instance of applying this mechanism, yielding a large-scale machine comprehension dataset (based on the COCO images and captions) that we make publicly available; human evaluation results on this dataset, informing a performance upper-bound; and several baseline and competitive learning approaches that illustrate the utility of the proposed task and dataset in advancing both image and language comprehension. We also show that, in a multi-task learning setting, the performance on the proposed task is positively correlated with the end-to-end task of image captioning. \end{abstract} \section{Introduction} There has been a great deal of interest in multi-modal artificial intelligence research recently, bringing together the fields of Computer Vision and Natural Language Processing. This interest has been fueled in part by the availability of many large-scale image datasets with textual annotations. Several vision+language tasks have been proposed around these datasets~\cite{hodosh13framing,karpathy2014deep,coco,antol15vqa}. 
Image Captioning~\cite{hodosh13framing,donahue2014long,karpathy2014deep,fang2014captions,kiros2014unifying,vinyals2014show,mao15mrnn,xu2015show} and Visual Question Answering~\cite{malinoski14qa,malinowski15neural,tu14video,antol15vqa,yu15madlib,wu16external,ren15visual,gao15machine,yang16san,zhu16visual7w,lin16leverage} have in particular attracted a lot of attention. The performances on these tasks have been steadily improving, owing much to the wide use of deep learning architectures~\cite{bengio2009book}. A central theme underlying these efforts is the use of natural language to identify how much visual information is perceived and understood by a computer system. Presumably, a system that understands a visual scene well enough ought to be able to describe what the scene is about (thus ``captioning'') or provide correct and visually-grounded answers when queried (thus ``question-answering''). In this paper, we argue for directly measuring how well the semantic representations of the visual and linguistic modalities align (in some abstract semantic space). For instance, given an image and two captions -- a correct one and an incorrect yet-cunningly-similar one -- can we both qualitatively and quantitatively measure the extent to which humans can dismiss the incorrect one but computer systems blunder? Arguably, the degree of the modal alignment is a strong indicator of task-specific performance on any vision+language task. Consequentially, computer systems that can learn to maximize and exploit such alignment should outperform those that do not. We take a two-pronged approach for addressing this issue. First, we introduce a new and challenging Dual Machine Comprehension (DMC) task, in which a computer system must identify the most suitable textual description from several options: one being the target and the others being ``adversarialy''-chosen decoys. All options are free-form, coherent, and fluent sentences with \emph{high degrees of semantic similarity} (hence, they are ``cunningly similar''). A successful computer system has to demonstrate comprehension beyond just recognizing ``keywords'' (or key phrases) and their corresponding visual concepts; they must arrive at a coinciding and visually-grounded understanding of various linguistic elements and their dependencies. What makes the DMC task even more appealing is that it admits an easy-to-compute and well-studied performance metric: the accuracy in detecting the true target among the decoys. Second, we illustrate how solving the DMC task benefits related vision+language tasks. To this end, we render the DMC task as a classification problem, and incorporate it in a multi-task learning framework for end-to-end training of joint objectives. 
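To make the proposed metric concrete, the following minimal sketch (ours, not code released with this work; the instance format and \texttt{score\_fn} are illustrative assumptions) computes the accuracy of detecting the true target among the decoys once a model assigns a compatibility score to every image--caption pair:

\begin{verbatim}
import numpy as np

def dmc_accuracy(instances, score_fn):
    """Accuracy of picking the true target among its decoys.

    `instances` is a list of (image, [caption_0, ..., caption_4], target_index)
    tuples and `score_fn(image, caption)` is any learned compatibility score;
    both are placeholders standing in for the real data and model.
    """
    correct = 0
    for image, captions, target_index in instances:
        scores = [score_fn(image, c) for c in captions]
        if int(np.argmax(scores)) == target_index:
            correct += 1
    return correct / len(instances)
\end{verbatim}

Any of the models considered later (linear baselines, FFNN, Vec2seq+FFNN) can be plugged in as \texttt{score\_fn}.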
Our work makes the following contributions: (1) an effective and extensible algorithm for generating decoys from human-created image captions (Section~\ref{sec:creation}); (2) an instantiation of applying this algorithm to the COCO dataset~\cite{coco}, resulting in a large-scale dual machine-comprehension dataset that we make publicly available (Section~\ref{sec:mcic-coco}); (3) a human evaluation on this dataset, which provides an upper-bound on performance (Section~\ref{sec:human_eval}); (4) a benchmark study of baseline and competitive learning approaches (Section~\ref{sec:results}), which underperform humans by a substantial gap (about 20\% absolute); and (5) a novel multi-task learning model that simultaneously learns to solve the DMC task and the Image Captioning task (Sections~\ref{sec:seq+ffnn} and \ref{sec:lambda_gen}). Our empirical study shows that performance on the DMC task positively correlates with performance on the Image Captioning task. Therefore, besides acting as a standalone benchmark, the new DMC task can be useful in improving other complex vision+language tasks. Both suggest the DMC task as a fruitful direction for future research. \section{Related work} Image understanding is a long-standing challenge in computer vision. There has recently been a great deal of interest in bringing together vision and language understanding. Particularly relevant to our work are image captioning (IC) and visual question-answering (VQA). Both have instigated a large body of publications, a detailed exposition of which is beyond the scope of this paper. Interested readers should refer to two recent surveys~\cite{bernardi16survey,wu16survey}. In IC tasks, systems attempt to generate a fluent and correct sentence describing an input image. IC systems are usually evaluated on how well the generated descriptions align with human-created captions (ground-truth). The language generation model of an IC system plays a crucial role; it is often trained such that the probabilities of the ground-truth captions are maximized (MLE training), though more advanced methods based on techniques borrowed from Reinforcement Learning have been proposed~\cite{mixer15}. To provide visual grounding, image features are extracted and injected into the language model. Note that language generation models need to both decipher the information encoded in the visual features, and model natural language generation. In VQA tasks, the aim is to answer an input question correctly with respect to a given input image. In many variations of this task, answers are limited to single words or a binary response (``yes'' or ``no'')~\cite{antol15vqa}. The Visual7W dataset~\cite{zhu16visual7w} contains anaswers in a richer format such as phrases, but limits questions to ``wh-''style (what, where, who, etc). The Visual Genome dataset~\cite{krishnavisualgenome}, on the other hand, can potentially define more complex questions and answers due to its extensive textual annotations. Our DMC task is related but significantly different. In our task, systems attempt to discriminate the best caption for an input image from a set of captions --- all but one are decoys. Arguably, it is a form of VQA task, where the same default (thus uninformative) question is asked: \emph{Which of the following sentences best describes this image?} However, unlike current VQA tasks, choosing the correct answer in our task entails a deeper ``understanding'' of the available answers. 
Thus, to perform well, a computer system needs to understand both complex scenes (visual understanding) and complex sentences (language understanding), \emph{and} be able to reconcile them. The DMC task admits a simple classification-based evaluation metric: the accuracy of selecting the true target. This is a clear advantage over the IC tasks, which often rely on imperfect metrics such as BLEU~\cite{papineni-etal:2002}, ROUGE~\cite{lin-och:2004}, METEOR~\cite{meteor}, CIDEr~\cite{cider}, or SPICE~\cite{spice}. Related to our proposal is the work in~\cite{hodosh13framing}, which frames image captioning as a ranking problem. While both share the idea of selecting captions from a large set, our framework has some important and distinctive components. First, we devise an algorithm for smart selection of candidate decoys, with the goal of selecting those that are sufficiently similar to the true targets to be challenging, and yet still be reliably identifiable by human raters. Second, we have conducted a thorough human evaluation in order to establish a performance ceiling, while also quantifying the level to which current learning systems underperform. Lastly, we show that there exists a positive correlation between the performance on the DMC task and the performance on related vision+language tasks by proposing and experimenting with a multi-task learning model. Our work is also substantially different from their more recent work~\cite{hodosh16eval}, where only one decoy is considered and its generation is either random, or focusing on visual concept similarity (``switching people or scenes'') instead of our focus on both linguistic surface and paragraph vector embedding similarity. \section{The Dual Machine Comprehension Task} \subsection{Design overview} We propose a new multi-modal machine comprehension task to examine how well visual and textual semantic understanding are aligned. Given an image, human evaluators or machines must accurately identify the best sentence describing the scene from several decoy sentences. Accuracy on this task is defined as the percentage that the true targets are identified. It seems straightforward to construct a dataset for this task, as there are several existing datasets which are composed of images and their (multiple) ground-truth captions, including the popular COCO dataset~\cite{coco}. Thus, for any given image, it appears that one just needs to use the captions corresponding to other images as decoys. However, this na\"{i}ve approach could be overly simplistic as it is provides no control over the properties of the decoys. Specifically, our desideratum is to recruit \emph{challenging} decoys that are sufficiently similar to the targets. However, for a small number of decoys, e.g. 4-5, randomly selected captions could be significantly different from the target. The resulting dataset would be too ``easy'' to shed any insight on the task. Since we are also interested in human performance on this task, it is thus impractical to increase the number of decoys to raise the difficulty level of the task at the expense of demanding humans to examine tediously and unreliably a large number of decoys. In short, we need an \emph{automatic procedure to reliably create difficult sets of decoy captions} that are sufficiently similar to the targets. We describe such a procedure in the following. While it focuses on identifying decoy captions, the main idea is potentially adaptable to other settings. 
The algorithm is flexible in that the ``difficulty" of the dataset can be controlled to some extent through the algorithm's parameters. \subsection{Algorithm to create an MC-IC dataset} \label{sec:creation} The main idea behind our algorithm is to carefully define a ``good decoy''. The algorithm exploits recent advances in paragraph vector (PV) models~\cite{le-mikolov:2014}, while also using linguistic surface analysis to define similarity between two sentences. Due to space limits, we omit a detailed introduction of the PV model. It suffices to note that the model outputs a continuously-valued embedding for a sentence, a paragraph, or even a document. The pseudo-code is given in Algorithm~\ref{aMCIC} (the name \textsc{MC-IC} stands for ``Machine-Comprehension for Image \& Captions''). As input, the algorithm takes a set $C$ of $\pairci$ pairs\footnote{On the order of at least hundreds of thousands of examples; smaller sets result in less challenging datasets.}, as those extracted from a variety of publicly-available corpora, including the COCO dataset~\cite{coco}. The output of the algorithm is the \mcic set. \begin{algorithm}[t] \caption{\textsc{MC-IC}($C$, $N$, $\mbox{\it Score}$)} \label{aMCIC} \SetAlgoLined \KwResult{Dataset \mcic} $\textsf{PV} \gets \textsc{Optimize-PV}(C)$ \\ $\lambda \gets \textsc{Optimize-Score}(\textsf{PV}, C, \mbox{\it Score})$ \\ \mcic$\gets \emptyset$ \\ $nr\_decoys = 4$ \\ \For{$\langle \ib_i, \cbb_i \rangle \in C$}{ $A\gets []$ \\ $T_{\cbb_i} \gets \textsf{PV}(\cbb_i)[1..N]$ \\ \For{$\cbb_d\in T_{\cbb_i}$}{ $score \gets \mbox{\it Score}(\textsf{PV}, \lambda, \cbb_d, \cbb_i)$ \\ \If{$score > 0$}{ $A.\mbox{\bf append}(\langle score, \cbb_d\rangle)$ } } \If{$|A| \geq nr\_decoys$}{ $R\gets \mbox{\bf descending-sort}(A)$ \\ \For{$l \in [1..nr\_decoys]$}{ $\langle score, \cbb_d\rangle\gets R[l]$ \\ \mcic$\gets$ \mcic$\cup\{(\langle \ib_i, \cbb_d\rangle, \mbox{\bf false})\}$ \\ } \mcic$\gets$ \mcic$\cup\{(\langle \ib_i, \cbb_i \rangle, \mbox{\bf true})\}$ \\ } } \end{algorithm} Concretely, the \textsc{MC-IC} Algorithm has three main arguments: a dataset $C = \{ \langle \ib_i, \cbb_i \rangle | 1 \leq i \leq m\}$ where $\ib_i$ is an image and $\cbb_i$ is its ground-truth caption\footnote{For an image with multiple ground-truth captions, we split it to multiple instances with the same image for each one of the ground-truth captions; the train/dev/test splits are done such that they contain disjoint {\em image} sets, as opposed to disjoint {\em instance} sets.}; an integer $N$ which controls the size of $\cbb_i$'s neighborhood in the embedding space defined by the paragraph vector model \textsf{PV}; and a function \textsf{Score} which is used to score the $N$ items in each such neighborhood. The first two steps of the algorithm tune several hyperparameters. The first step finds optimal settings for the \textsf{PV} model given the dataset $C$. The second finds a weight parameter $\lambda$ given \textsf{PV}, dataset $C$, and the \textsf{Score} function. These hyperparameters are dataset-specific. Details are discussed in the next section. The main body of the algorithm, the outer $\textbf{for}$ loop, generates a set of $nr\_decoys$ (4 here) decoys for each ground-truth caption. It accomplishes this by first extracting $N$ candidates from the \textsf{PV} neighborhood of the ground-truth caption, excluding those that belong to the same image. In the inner $\textbf{for}$ loop, it computes the similarity of each candidate to the ground-truth and stores them in a list $A$. 
If enough candidates are generated, the list is sorted in descending order of score. The top $nr\_decoys$ captions are marked as ``decoys'' (\ie \textbf{false}), while the ground-truth caption is marked as ``target'' (\ie \textbf{true}). The score function $\textsf{Score}(\textsf{PV}, \lambda, \cpb, \cbb)$ is a crucial component of the decoy selection mechanism. Its definition leverages our linguistic intuition by combining linguistic surface similarity, $\textsf{sim}_{\textsc{surf}}(\cpb, \cbb)$, with the similarity suggested by the embedding model, $\mbox{\textsf{sim}}_{\textsf{PV}}(\cpb, \cbb)$: \begin{equation} \textsf{Score}\!=\! \left\{\!\! \arraycolsep=1.5pt \begin{array}{ll} 0 & \! \mbox{if \textsf{sim}}_{\textsc{surf}}\!\ge\!L \\ \lambda\, \mbox{\textsf{sim}}_{\textsf{PV}}\!+\!(1\!-\!\lambda)\, \mbox{\textsf{sim}}_{\textsc{surf}} & \text{otherwise} \end{array} \right. \label{eq:score} \end{equation} where the common argument $(\cpb, \cbb)$ is omitted. The higher the similarity score, the more likely that $\cpb$ is a good decoy for $\cbb$. Note that if the surface similarity is above the threshold $L$, the function returns 0, flagging that the two captions are too similar to be used as a pair of target and decoy. In this work, $\mbox{\textsf{sim}}_{\textsc{surf}}$ is computed as the BLEU score between the inputs~\cite{papineni-etal:2002} (with the brevity penalty set to 1). The embedding similarity, $\mbox{\textsf{sim}}_{\textsf{PV}}$, is computed as the cosine similarity between the two in the PV embedding space. \subsection{The \mciccoco dataset} \label{sec:mcic-coco} We applied the \textsc{MC-IC} Algorithm to the COCO dataset~\cite{coco} to generate a dataset for the visual-language dual machine comprehension task. The dataset is called \mciccoco and it is made publicly available\footnote{\tt http://www.github.com/google/mcic-coco}. We describe the details of this dataset below. We set the neighborhood size at $N=500$, and the threshold at $L=0.5$ (see Eq.~\ref{eq:score}). As the COCO dataset has a large body of images (thus captions) focusing on a few categories (such as sports activities), this threshold is important in discarding significantly similar captions to be decoys -- otherwise, even human annotators will experience difficulty in selecting the ground-truth captions. The hyperparameters of the \textsf{PV} model, {\tt dim} (embedding dimension) and {\tt epochs} (number of training epochs), are optimized in the $\textsc{Optimize-PV}$ step of the \textsc{MC-IC} Algorithm. The main idea is to learn embeddings such that ground-truth captions from the same image have similar embeddings. Concretely, the optimization step is a grid-search over the hyper-parameters of the PV-DBOW model~\cite{le-mikolov:2014}, which we train using a softmax loss. Since there are multiple ground-truth captions associated with each image, the dataset is denoted by $C = \{ \langle \ib_{r_c}, \cbb_{r_c} \rangle | 1 \leq r \leq n, 1 \leq c \leq s_r \}$, where $r$ is the index for each unique image ($\ib_{r_c} \equiv \ib_r$), $n$ is the total number images and $s_r > 1$ is the number of unique captions for image $r$. The total number of data examples $m = \sum_{r=1}^n s_r$. Here the hyper-parameters are searched on a grid to minimize ``multiple ground-truth score'' rank (mgs-rank): the average rank (under the cosine-distance score) between $\cbb_{r_c}$ and $\{ \cbb_{r_l} | 1 \leq l \leq s_r, l\not= c\}$. 
The lower the mgs-rank, the better the resulting paragraph vector model is at modeling multiple ground-truths for a given image as being similar. As such, our grid-search over the \mciccoco dev dataset yields a minimum mgs-rank at {\tt dim}=1024 and {\tt epochs}=5. Similarly, the $\textsc{Optimize-Score}(\textsf{PV}, \textsf{Score})$ step is a grid-search over the $\lambda$ parameter of the \textsf{Score} function, given a paragraph vector embedding model \textsf{PV} and a dataset $C$ of captions and images, as before. A well-chosen $\lambda$ will ensure the multiple ground-truth captions for the same image will be measured with high degree of similarity with the \textsf{Score} function. The $\lambda\in [0,1]$ parameter is searched on a grid to minimize the ``weighted multiple ground-truths score'' rank (wmgs-rank): the average rank (under the \textsf{Score}) between $\cbb_{r_c}$ and $\{ \cbb_{r_l} | 1 \leq l \leq s_r, l\not= c\}$, relative to the top $N$-best closest-cosine neighbors in \textsf{PV}. For example, if given five ground-truths for image $\ib_r$, and when considering $\cbb_{r_1}$, ground-truths $\cbb_{r_2}$ to $\cbb_{r_5}$ are ranking at \#4, \#10, \#16, and \#22 (in top-500 closest-cosine neighbors in \textsf{PV}), then wmgs-rank$(\cbb_{r_1})=13$ (the average of these ranks). Our grid-search over the \mciccoco dev dataset yields a minimum wmgs-rank at $\lambda$=0.3. The resulting \mciccoco dataset has 574,315 instances that are in the format of $\{i: (\langle \ib_i, \cbb_i^j \rangle , \mbox{label}_i^j), j = 1 \ldots 5\}$ where $\mbox{label}_i^j\in \{\mbox{\bf true}, \mbox{\bf false}\}$. For each such instance, there is one and only one $j$ such that the label is \textbf{true}. We have created a train/dev/test split such that all of the instances for the same image occur in the same split. Table~\ref{table:mcic_splits} reports the basic statistics for the dataset. \subsection{Human performance on \mciccoco} \label{sec:human_eval} \noindent \textbf{Setup}\ To measure how well humans can perform on the DMC task, we randomly drew 1,000 instances from the \mciccoco dev set and submitted those instances to human ``raters''\footnote{Raters are vetted, screened and tested before working on any tasks; requirements include native-language proficiency level.} via a crowd-sourcing platform. Three independent responses from 3 different rates were gathered for each instance, for a total of 3,000 responses. To ensure diversity, raters were prohibited from evaluating more than six instances or from responding to the same task instance twice. In total, 807 distinct raters were employed. Raters were shown one instance at a time. They were shown the image and the five caption choices (ground-truth and four decoys, in randomized order) and were instructed to choose the best caption for the image. Before starting evaluation, the raters were trained with sample instances from the \emph{train} dataset, disjoint from the \emph{dev} dataset on which their performance data were collected. The training process presents an image and five sentences, of which the ground-truth caption is highlighted. In addition, specific instructions and clarification were given to the raters on how to choose the best caption for the image. In Figure~\ref{fig:rater-training}, we present three instances on how the rater instructions were presented for rater training. 
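For reference, the decoy-scoring rule of Eq.~\ref{eq:score} that produced the candidate sets shown to the raters can be sketched as follows (a minimal sketch of ours, not the original implementation; the BLEU-style and paragraph-vector similarities are assumed to be precomputed by stand-in routines), using the tuned values $\lambda=0.3$ and $L=0.5$:

\begin{verbatim}
import numpy as np

def score(candidate_pv, truth_pv, surf_sim, lam=0.3, L=0.5):
    """Decoy score of Eq. (1): zero if the surface similarity is too high,
    otherwise a convex combination of embedding and surface similarity.

    candidate_pv / truth_pv: paragraph-vector embeddings (np.ndarray);
    surf_sim: BLEU-style surface similarity in [0, 1] (brevity penalty 1).
    """
    if surf_sim >= L:                 # too close to the ground truth
        return 0.0
    cos = float(np.dot(candidate_pv, truth_pv) /
                (np.linalg.norm(candidate_pv) * np.linalg.norm(truth_pv)))
    return lam * cos + (1.0 - lam) * surf_sim
\end{verbatim}

For each ground-truth caption, the four highest-scoring candidates among its $N=500$ nearest PV neighbours are kept as decoys.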
\noindent \textbf{Quantitative results}\ We assessed human performance in two metrics: (1) Percentage of correct rater responses (1-human system): \textbf{81.1\%} (2432 out of 3000); (2) Percentage of instances with at least $50\%$ (\ie 2) correct responses (3-human system): \textbf{82.8\%} (828 out of 1000). Table~\ref{table:human_eval_instances} gives a detailed breakdown on the statistics related to the inter-rater (dis)agreement. The first row, with accuracy at 67.3\%, suggests that this is the level at which the correct answer is obvious (\ie, percentage of ``easy'' instances). The second row, at 82.8\%, indicates that this is the performance ceiling in terms of accuracy that can be expected for the \mciccoco dataset; at the same time, it suggests that the difference between 67.3\% and 82.8\% (i.e., about 15\% of instances) is caused by ``difficult'' instances. Finally, the third row, at 93.1\%, indicates that the level of ``unanswerable'' instances is somewhere in the 10\%-15\% range (combining the increase from 82.8\% to 93.1\% and the remaining 6.9\% that no one gets right). We will investigate those instances in detail in the future. The COCO dataset has a significant number of captions that fit more than one image in the dataset, given the biased concentration on certain categories. Thus, we suspect that even with our threshold-check (cf. the introduction of $L$ in Eq.~\ref{eq:score}), our procedure might have failed to filter out some impossible-to-distinguish decoys. \noindent \textbf{Qualitative examples}\ We present in Figure~\ref{fig:examples} several example instances from the \mciccoco dataset. The first example illustrates how certain aspects of VQA are subsumed by the DMC task: in order to correctly choose answer 3, a system needs to implicitly answer questions like ``how many people are in the image?'' (answer: three, thus choices 4. and 5. are wrong), and ``what are the people doing?'' (answer: riding horses, thus choices 1. and 2. are wrong). The second example illustrates the extent to which a successful computer system needs to be able to differentiate between ``standing'' and ``rolling'' in a visually-grounded way, presumably via a pose model~\cite{yao-li:2010} combined with a translation model between poses and their verbal correspondents. Last but not least, the third examples illustrates a difficult case, which led to human annotator disagreement in our annotation process (both choice 3. and 5. were selected by different annotators). \section{Learning Methods} \label{sLearning} We describe several learning methods for the dual machine comprehension (DMC) task with the \mcic dataset. We start with linear models which will be used as baselines. We then present several neural-network based models. In particular, we describe a novel, hybrid neural network model that combines the feedforward architecture and the seq2seq architecture~\cite{sutskever-etal:2014} for multi-task learning of the DMC task and the image captioning task. This new model achieves the best performance in both tasks. \subsection{Linear models as baselines} \noindent \textbf{Regression}\ To examine how well the two embeddings are aligned in ``semantic understanding space'', a simple approach is to assume that the learners do not have access to the decoys. Instead, by accessing the ground-truth captions only, the models learn a linear regressor from the image embeddings to the target captions' embeddings (``forward regression''), or from the captions to the images (``backward regression''). 
With the former approach, referred as \textsf{Baseline-I2C}, we check whether the predicted caption for any given image is closest to its true caption. With the latter, referred as \textsf{Baseline-C2I}, we check whether the predicted image embedding by the ground-truth caption is the closest among predicted ones by decoy captions to the real image embeddings. \noindent \textbf{Linear classifier}\ Our next approach \textsf{Baseline-LinM} is a linear classifier learned to discriminate true targets from the decoys. Specifically, we learn a linear discriminant function $f(\ib, \cbb; \thetab) = \ib^{\top}\thetab \cbb$ where $\thetab$ is a matrix measuring the compatibility between two types of embeddings, cf. ~\cite{frome2013devise}. The loss function is then given by \begin{equation} L(\thetab) = \sum_i [ \max_{j \ne j^*} f(\ib_i, \cbb_i^j; \thetab) - f(\ib_i, \cbb_i^{j^*}; \thetab) ]_+ \end{equation} where $[\ ]_{+}$ is the hinge function and $j$ indexes over all the available decoys and $i$ indexes over all training instances. The optimization tries to increase the gap between the target $\cbb_i^{j^*}$ and the worst ``offending'' decoy. We use stochastic (sub)gradient methods to optimize $\thetab$, and select the best model in terms of accuracy on the \mciccoco development set. \subsection{Feedforward Neural Network (FFNN) models} \label{sec:ffnn} To present our neural-network--based models, we use the following notations. Each training instance pair is a tuple $\langle \ib_i, \cbb_i^j\rangle$, where $\ib$ denotes the image, and $\cbb_i^j$ denotes the caption options, which can either be the target or the decoys. We use a binary variable $y_{ijk} \in \{0,1\}$ to denote whether $j$-th caption of the instance $i$ is labeled as $k$, and $\sum_k y_{ijk} = 1$. We first employ the standard feedforward neural-network models to solve the DMC task on the \mciccoco dataset. For each instance pair $\langle \ib_i, \cbb_i^j\rangle$, the input to the neural network is an embedding tuple $\langle \text{DNN}(\ib_i; \Gamma), \text{Emb}(\cbb_i^j; \Omega) \rangle$, where $\Gamma$ denotes the parameters of a deep convolutional neural network $\text{DNN}$. $\text{DNN}$ takes an image and outputs an image embedding vector. $\Omega$ is the embedding matrix, and $\text{Emb}(.)$ denotes the mapping from a list of word IDs to a list of embedding vectors using $\Omega$. The loss function for our FFNN is given by: \begin{dmath} L(\Gamma, \Omega,\ub)\!=\! \sum_{i, j, k} y_{ijk} \log \; \text{FN}_k(\text{DNN}(\ib_i; \Gamma), \text{Emb}(\cbb_i^j; \Omega); \ub) \label{eq:closs} \end{dmath} \noindent where $\text{FN}_k$ denotes the $k$-th output of a feedforward neural network, and $\sum_k \text{FN}_k(.) = 1$. Our architecture uses a two hidden-layer fully connected network with Rectified Linear hidden units, and a softmax layer on top. The formula in Eq.~\ref{eq:closs} is generic with respect to the number of classes. In particular, we consider a 2-class--classifier ($k\in\{0, 1\}$, 1 for 'yes', this is a correct answer; 0 for 'no', this is an incorrect answer), applied independently on all the $\langle \ib_i, \cbb_i^j\rangle$ pairs and apply one FFNN-based binary classifier for each; the final prediction is the caption with the highest 'yes' probability among all instance pairs belonging to instance $i$. \subsection{Vec2seq + FFNN Model} \label{sec:seq+ffnn} We describe here a hybrid neural-network model that combines a recurrent neural-network with a feedforward one. 
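Before detailing the hybrid model, the scoring head just described can be sketched as follows (a minimal numpy sketch of ours; layer widths and the parameter container are illustrative assumptions). It implements the two--hidden-layer ReLU network of Eq.~\ref{eq:closs} with a 2-way softmax and the argmax-over-candidates decision rule, which the hybrid model below reuses on top of its encoder and decoder outputs:

\begin{verbatim}
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def ffnn_yes_prob(img_vec, cap_vec, params):
    """Two-hidden-layer ReLU network with a 2-way softmax ('no'/'yes').
    `params` holds W1, b1, W2, b2, W3, b3; sizes are illustrative."""
    h = relu(np.concatenate([img_vec, cap_vec]) @ params["W1"] + params["b1"])
    h = relu(h @ params["W2"] + params["b2"])
    logits = h @ params["W3"] + params["b3"]          # shape (2,)
    e = np.exp(logits - logits.max())
    return (e / e.sum())[1]                           # P('yes')

def predict(img_vec, cap_vecs, params):
    """5-way decision: the candidate with the highest 'yes' probability."""
    return int(np.argmax([ffnn_yes_prob(img_vec, c, params)
                          for c in cap_vecs]))
\end{verbatim}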
We encode the image into a single-cell RNN encoder, and the caption into an RNN decoder. Because the first sequence only contains one cell, we call this model a vector-to-sequence (Vec2seq) model as a special case of Seq2seq model as in ~\cite{sutskever-etal:2014,bahdanau-etal:2015}. The output of each unit cell of a Vec2seq model (both on the encoding side and the decoding side) can be fed into an FFNN architecture for binary classification. See Figure~\ref{agmc_diagram} for an illustration of the Vec2seq + FFNN model architecture. \paragraph{Multi-task learning} In addition to the classification loss (Eq.~\ref{eq:closs}), we also include a loss for generating an output sequence $\cbb_i^j$ based on an input $\ib_i$ image. We define a binary variable $z_{ijlv} \in \{0,1\}$ to indicate whether the $l$th word of $\cbb_i^j$ is equal to word $v$. $\Ob^d_{ijl}$ denotes the $l$-th output of the decoder of instance pair $\langle \ib_i, \cbb_i^j\rangle$, $\Ob^e_{ij}$ denotes the output of the encoder, and $\Ob^d_{ij:}$ denotes the concatenation of decoder outputs. With these definitions, the loss function for the Vec2seq + FFNN model is: \begin{dmath} L(\thetab, \wb, \ub) = \sum_{i, j, k} y_{ijk} \log \; \text{FN}_k(\Ob^e_{ij}(\ib_i, \cbb_i^j; \thetab), \Ob^d_{ij:}(\ib_i, \cbb_i^j; \thetab); \ub) + \lambda_{gen} \sum_{i,j, l, v} y_{ij1} z_{ijlv} \log \;\text{softmax}_v(\Ob^d_{ijl}(\ib_i, \cbb_i^j; \thetab); \wb) \label{eq:mloss} \end{dmath} \noindent where $\sum_v \text{softmax}_v(.) = 1$; $\thetab$ are the parameters of the Vec2seq model, which include the parameters within each unit cell, as well as the elements in the embedding matrices for images and target sequences; $\wb$ are the output projection parameters that transform the output space of the decoder to the vocabulary space. $\ub$ are the parameters of the FFNN model (Eq.~\ref{eq:closs}); $\lambda_{gen}$ is the weight assigned to the sequence-to-sequence generation loss. Only the true target candidates (the ones with $y_{ij1} = 1$) are included in this loss, as we do not want the decoy target options to affect this computation. The Vec2seq model we use here is an instantiation of the attention-enhanced models proposed in~\cite{bahdanau-etal:2015,chen-etal:2016}. However, our current model does not support location-wise attention, as in the Show-Attend-and-Tell~\cite{xu-etal:2016} model. In this sense, our model is an extension of the Show-and-Tell model with a single attention state representing the entire image, used as image memory representation for all decoder decisions. We apply Gated Recurrent Unit (GRU) as the unit cell~\cite{cho-etal:2014}. We also compare the influence on performance of the $\lambda_{gen}$ parameter. \section{Experiments} \label{sec:results} \subsection{Experimental Setup} \noindent \textbf{Baseline models}\ For the baseline models, we use the 2048-dimensional outputs of Google-Inception-v3~\cite{inception-v3} (pre-trained on ImageNet ILSSVR 2012) to represent the images, and 1024-dimensional paragraph-vector embeddings (section~\ref{sec:creation}) to represent captions. To reduce computation time, both are reduced to 256-dimensional vectors using random projections. \noindent \textbf{Neural-nets based models}\ The experiments with these models are done using the Tensorflow package~\cite{tensorflow2015-whitepaper}. The hyper-parameter choices are decided using the hold-out development portion of the \mciccoco set. 
For modeling the input tokens, we use a vocabulary size of 8,855 types, selected as the most frequent tokens over the captions from the COCO training set (words occurring at least 5 times). The models are optimized using ADAGRAD with an initial learning rate of 0.01, and clipped gradients (maximum norm 4). We run the training procedures for $3,000,000$ steps, with a mini-batch size of 20. We use 40 workers for computing the updates, and 10 parameter servers for model storing and (asynchronous and distributed) updating. We use the following notations to refer to the neural network models: \ffnnb refers to the version of feedforward neural network architecture with a 2-class--classifier ('yes' or 'no' for answer correctness), over which an $\argmax$ function computes a 5-way decision (i.e., the choice with the highest 'yes' probability); we henceforth refer to this model simply as FFNN. The \seqff refers to the hybrid model described in Section~\ref{sec:seq+ffnn}, combining Vec2seq and \ffnnb. The RNN part of the model uses a two-hidden--layer GRU unit-cell~\cite{cho-etal:2014} configuration, while the FFNN part uses a two-hidden--layer architecture. The $\lambda_{gen}$ hyper-parameter from the loss-function $L(\thetab, \wb, \ub)$ (Eq.~\ref{eq:mloss}) is by default set to 1.0 (except for Section~\ref{sec:lambda_gen} where we directly measure its effect on performance). \noindent \textbf{Evaluation metrics}\ The metrics we use to measure performance come in two flavors. First, the accuracy in detecting (the index of) the true target among the decoys provides a direct way of measuring the performance level on the comprehension task. We use this metric as the main indicator of comprehension performance. Second, because our \seqff models are multi-task models, they can also generate new captions given the input image. The performance level for the generation task is measured using the standard scripts measuring ROUGE-L~\cite{lin-och:2004} and CIDEr~\cite{cider}, using as reference the available captions from the COCO data (around 5 for most of the images). Code for these metrics is available as part of the COCO evaluation toolkit~\footnote{\tt https://github.com/tylin/coco-caption}. As usual, both the hypothesis strings and the reference strings are preprocessed: remove all the non-alphabetic characters; transform all letters to lowercase, and tokenize using white space; replace all words occurring less than 5 times with an unknown token $\langle\mbox{UNK}\rangle$ (total vocabulary of 8,855 types); truncate to the first 30 tokens. \subsection{Results} \label{sDMCResults} Table~\ref{table:baselines} summarizes our main results on the comprehension task. We report the accuracies (and their standard deviations) for random choice, baselines, and neural network-based models. Interestingly, the \textsf{Baseline-I2C} model performs at the level of random choice, and much worse than the \textsf{Baseline-C2I model}. This discrepancy reflects the inherent difficulty in vision-Language tasks: for each image, there are several possible equally good descriptions, thus a linear mapping from the image embeddings to the captions might not be enough -- statistically, the \emph{linear} model will just predict the mean of those captions. However, for the reverse direction where the captions are the independent variables, the learned model does not have to capture the variability in image embeddings corresponding to the different but equally good captions -- there is only one such image embedding. 
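This limitation is easy to verify directly: with ordinary least squares, repeating the same image embedding with several different target captions yields a predictor that outputs their average. A tiny synthetic illustration (ours):

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
img = rng.normal(size=8)                      # one image embedding
caps = rng.normal(size=(5, 8))                # five "equally good" captions

# Least-squares fit of caps ~ W img, with the same input repeated 5 times.
X = np.tile(img, (5, 1))
W, *_ = np.linalg.lstsq(X, caps, rcond=None)
pred = img @ W

print(np.allclose(pred, caps.mean(axis=0)))   # True: the prediction is the mean
\end{verbatim}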
Nonlinear neural networks overcome these modeling limitations. The results clearly indicate their superiority over the baselines. The \seqff model obtains the best results, with accuracies of 60.5\% (dev) and 59.0\% (test); the accuracy numbers indicate that the \seqff architecture is superior to the non-recursive fully-connected FFNN architecture (at 55.1\% accuracy on test). We show next the impact on performance of the embedding dimension and neural-network sizes, for both the feedforward and the recurrent architectures. \subsection{Analysis: embedding dimension and neural-network sizes} \label{sec:impact_size} In this section, we compare neural networks models of different sizes. Specifically, we compare embedding dimensions of $\cbr{64, 256, 512, 1024, 2048}$, and two hidden-layer architectures with sizes of $\cbr{(64, 16), (256, 64), (512, 128), (1024, 256), (2048, 512)}$. The results in Table~\ref{table:ffnn_sizes} illustrate an interesting behavior for the neural-network architectures. For the FFNN models, contrary to expectations, bigger network sizes leads to decreasing accuracy. On the other hand, for \seqff models, accuracy increases with increased size in model parameters, up until the embedding dimension of the RNN model matches the embedding dimension of the Inception model, at 2048. At accuracy levels of 63.4\% (dev) and 60.8\% (test), this performance establishes a high-bar for a computer model performance on the DMC task using the \mciccoco dataset. According to the estimate from Table~\ref{table:human_eval_instances}, this level of performance is still \emph{significantly} below the 82.8\% accuracy achievable by humans, which makes \mciccoco a challenging testbed for future models of Vision-Language machine comprehension. \subsection{Multi-task learning for DMC and Image Captioning} \label{sec:lambda_gen} In this section, we compare models with different values of $\lambda_{gen}$ in Eq.~\ref{eq:mloss}. This parameter allows for a natural progression from learning for the DMC task only ($\lambda_{gen} = 0$) to focusing on the image captioning loss ($\lambda_{gen} \rightarrow +\infty$). In between the two extremes, we have a multi-task learning objective for jointly learning related tasks. The results in Table~\ref{table:lambda_gen} illustrate one of the main points of this paper. That is, the ability to perform the comprehension task (as measured by the accuracy metric) positively correlates with the ability to perform other tasks that require machine comprehension, such as caption generation. At $\lambda_{gen} = 4$, the \seqff model not only has a high accuracy of detecting the ground-truth option, but it also generates its own captions given the input image, with an accuracy measured on \mciccoco at 0.9890 (dev) and 0.9380 (test) CIDEr scores. On the other hand, at an accuracy level of about 59\% (on test, at $\lambda_{gen} = 0.1$), the generation performance is at only 0.9010 (dev) and 0.8650 (test) CIDEr scores. We note that there is an inherent trade-off between prediction accuracy and generation performance, as seen for $\lambda_{gen}$ values above 4.0. This agrees with the intuition that training a \seqff model using a loss $L(\thetab, \wb, \ub)$ with a larger $\lambda_{gen}$ means that the ground-truth detection loss (the first term of the loss in Eq.\ref{eq:mloss}) may get overwhelmed by the word-generation loss (the second term). 
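The trade-off follows directly from how the two terms of Eq.~\ref{eq:mloss} are combined. A schematic per-instance version (ours; written as a loss to be minimised, with the model outputs passed in as placeholder arrays) looks as follows:

\begin{verbatim}
import numpy as np

def multitask_loss(yes_probs, target_idx, word_log_probs, target_word_ids,
                   lambda_gen=1.0):
    """Schematic per-instance objective of Eq. (5), sign-flipped to a loss.

    yes_probs:       P('yes') for each of the 5 candidate captions;
    target_idx:      index of the ground-truth caption;
    word_log_probs:  decoder log-probabilities over the vocabulary at each
                     position of the ground-truth caption only, shape (T, V);
    target_word_ids: the ground-truth word ids at those T positions.
    """
    # Classification term: correct 'yes'/'no' label for every candidate.
    cls = -np.log(yes_probs[target_idx])
    cls += -np.sum(np.log(1.0 - np.delete(yes_probs, target_idx)))
    # Generation term: per-word cross-entropy, ground-truth caption only.
    gen = -np.sum(word_log_probs[np.arange(len(target_word_ids)),
                                 target_word_ids])
    return cls + lambda_gen * gen
\end{verbatim}

For large $\lambda_{gen}$ the word-level term dominates the gradient, which is exactly the effect discussed above.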
However, our empirical results suggest that there is value in training models with a multi-task setup, in which both the comprehension side as well as the generation side are carefully tuned to maximize performance. \section{Discussion} We have proposed and described in detail a new multi-modal machine comprehension task (DMC), combining the challenges of understanding visual scenes and complex language constructs simultaneously. The underlying hypothesis for this work is that computer systems that can be shown to perform increasingly well on this task will do so by constructing a visually-grounded understanding of various linguistic elements and their dependencies. This type of work can therefore benefit research in both machine visual understanding and language comprehension. The \seqff architecture that we propose for addressing this combined challenge is a generic multi-task model. It can be trained end-to-end to display both the ability to choose the most likely text associated with an image (thus enabling a direct measure of its ``comprehension'' performance), as well as the ability to generate a complex description of that image (thus enabling a direct measure of its performance in an end-to-end complex and meaningful task). The empirical results we present validate the underlying hypothesis of our work, by showing that we can measure the decisions made by such a computer system and validate that improvements in comprehension and generation happen in tandem. The experiments presented in this work are done training our systems in an end-to-end fashion, starting directly from raw pixels. We hypothesize that our framework can be fruitfully used to show that incorporating specialized vision systems (such as object detection, scene recognition, pose detection, etc.) is beneficial. More precisely, not only it can lead to a direct and measurable impact on a computer system's ability to perform image understanding, but it can express that understanding in an end-to-end complex task. \begin{thebibliography}{10} \bibitem{tensorflow2015-whitepaper} M.~Abadi, A.~Agarwal, P.~Barham, E.~Brevdo, Z.~Chen, C.~Citro, G.~Corrado, A.~Davis, J.~Dean, M.~Devin, S.~Ghemawat, I.~Goodfellow, A.~Harp, G.~Irving, M.~Isard, Y.~Jia, R.~Jozefowicz, L.~Kaiser, M.~Kudlur, J.~Levenberg, D.~Man\'{e}, R.~Monga, S.~Moore, D.~Murray, C.~Olah, M.~Schuster, J.~Shlens, B.~Steiner, I.~Sutskever, K.~Talwar, P.~Tucker, V.~Vanhoucke, V.~Vasudevan, F.~Vi\'{e}gas, O.~Vinyals, P.~Warden, M.~Wattenberg, M.~Wicke, Y.~Yu, and X.~Zheng. \newblock {TensorFlow}: Large-scale machine learning on heterogeneous systems, 2015. \newblock Software available from tensorflow.org. \bibitem{spice} P.~Anderson, B.~Fernando, M.~Johnson, and S.~Gould. \newblock {SPICE:} semantic propositional image caption evaluation. \newblock {\em CoRR}, abs/1607.08822, 2016. \bibitem{antol15vqa} S.~Antol, A.~Agrawal, J.~Lu, M.~Mitchell, D.~Batra, C.~L. Zitnick, and D.~Parikh. \newblock {VQA}: Visual question answering. \newblock In {\em International Conference on Computer Vision (ICCV)}, 2015. \bibitem{bahdanau-etal:2015} D.~Bahdanau, K.~Cho, and Y.~Bengio. \newblock Neural machine translation by jointly learning to align and translate. \newblock In {\em Proceedings of ICLR}, 2015. \bibitem{meteor} S.~Banerjee and A.~Lavie. \newblock {METEOR}: An automatic metric for {MT} evaluation with improved correlation with human judgments. 
\newblock In {\em Proceedings of the {ACL} {W}orkshop on intrinsic and extrinsic evaluation measures for machine translation and/or summarization}, 2005. \bibitem{bengio2009book} Y.~Bengio. \newblock Learning deep architectures for ai. \newblock {\em Found. Trends Mach. Learn.}, 2(1):1--127, Jan. 2009. \bibitem{bernardi16survey} R.~Bernardi, R.~Cakici, D.~Elliott, A.~Erdem, E.~Erdem, N.~Ikizler-Cinbis, F.~Keller, A.~Muscat, and B.~Plank. \newblock Automatic description generation from images: A survey of models, datasets, and evaluation measures. \newblock {\em JAIR}, 55, 2016. \bibitem{chen-etal:2016} D.~Chen, J.~Bolton, and C.~D. Manning. \newblock {A Thorough Examination of the CNN/Daily Mail Reading Comprehension Task}. \newblock In {\em Proceedings of ACL}, 2016. \bibitem{cho-etal:2014} K.~Cho, B.~van Merrienboer, {\c{C}}.~G{\"{u}}l{\c{c}}ehre, D.~Bahdanau, F.~Bougares, H.~Schwenk, and Y.~Bengio. \newblock Learning phrase representations using {RNN} encoder-decoder for statistical machine translation. \newblock In {\em Proceedings of {EMNLP}, October 25-29, 2014, Doha, Qatar}, pages 1724--1734, 2014. \bibitem{donahue2014long} J.~Donahue, L.~A. Hendricks, S.~Guadarrama, M.~Rohrbach, S.~Venugopalan, K.~Saenko, and T.~Darrell. \newblock Long-term recurrent convolutional networks for visual recognition and description. \newblock In {\em Proc. of IEEE Conference on Computer Vision and Pattern Recognition (CVPR)}, 2014. \bibitem{fang2014captions} H.~Fang, S.~Gupta, F.~Iandola, R.~Srivastava, L.~Deng, P.~Doll{\'a}r, J.~Gao, X.~He, M.~Mitchell, J.~Platt, et~al. \newblock From captions to visual concepts and back. \newblock In {\em Proc. of IEEE Conference on Computer Vision and Pattern Recognition (CVPR)}, 2015. \bibitem{frome2013devise} A.~Frome, G.~S. Corrado, J.~Shlens, S.~Bengio, J.~Dean, T.~Mikolov, et~al. \newblock Devise: A deep visual-semantic embedding model. \newblock In {\em Advances in Neural Information Processing Systems (NIPS)}, 2013. \bibitem{gao15machine} H.~Gao, J.~Mao, J.~Zhou, Z.~Huang, L.~Wang, and W.~Xu. \newblock Are you talking to a machine? dataset and methods for multilingual image question answering. \newblock In {\em NIPS}, 2015. \bibitem{hodosh16eval} M.~Hodosh and J.~Hockenmaier. \newblock Focused evaluation for image description with binary forced-choice tasks. \newblock In {\em Proc. 5th Vision and Language Workshop}, 2016. \bibitem{hodosh13framing} M.~Hodosh, P.~Young, and J.~Hockenmaier. \newblock Framing image description as a ranking task: Data, models and evaluation metrics. \newblock {\em JAIR}, 2013. \bibitem{karpathy2014deep} A.~Karpathy and L.~Fei-Fei. \newblock Deep visual-semantic alignments for generating image descriptions. \newblock In {\em Proc. of IEEE Conference on Computer Vision and Pattern Recognition (CVPR)}, 2015. \bibitem{kiros2014unifying} R.~Kiros, R.~Salakhutdinov, and R.~S. Zemel. \newblock Unifying visual-semantic embeddings with multimodal neural language models. \newblock {\em Transactions of the Association for Computational Linguistics}, 2015. \bibitem{krishnavisualgenome} R.~Krishna, Y.~Zhu, O.~Groth, J.~Johnson, K.~Hata, J.~Kravitz, S.~Chen, Y.~Kalantidis, L.-J. Li, D.~A. Shamma, M.~Bernstein, and L.~Fei-Fei. \newblock {Visual Genome}: Connecting language and vision using crowdsourced dense image annotations. \newblock 2016. \bibitem{le-mikolov:2014} Q.~Le and T.~Mikolov. \newblock Distributed representations of sentences and documents. 
\newblock In {\em Proceedings of the 31st International Conference on Machine Learning}, Beijing, China, 2014. \bibitem{lin-och:2004} C.-Y. Lin and F.~J. Och. \newblock Automatic evaluation of machine translation quality using longest common subsequence and skip-bigram statistics. \newblock In {\em Proceedings of ACL}, 2004. \bibitem{coco} T.~Lin, M.~Maire, S.~J. Belongie, L.~D. Bourdev, R.~B. Girshick, J.~Hays, P.~Perona, D.~Ramanan, P.~Doll{\'{a}}r, and C.~L. Zitnick. \newblock Microsoft {COCO:} common objects in context. \newblock {\em CoRR}, abs/1405.0312, 2014. \bibitem{lin16leverage} X.~Lin and D.~Parikh. \newblock Leveraging visual question answering for image-caption ranking. \newblock {\em CoRR}, abs/1605.01379, 2016. \bibitem{malinoski14qa} M.~Malinowski and M.~Fritz. \newblock A multi-world approach to question answering about real-world scenes based on uncertain input. \newblock In {\em NIPS}, 2014. \bibitem{malinowski15neural} M.~Malinowski, M.~Rohrbach, and M.~Fritz. \newblock Ask your neurons: A neural-based approach to answering questions about images. \newblock In {\em ICCV}, 2015. \bibitem{mao15mrnn} J.~Mao, W.~Xu, Y.~Yang, J.~Wang, and A.~Yuille. \newblock Deep captioning with multimodal recurrent neural networks ({mRNN}). \newblock In {\em Proc. Int. Conf. Learn. Representations}, 2015. \bibitem{papineni-etal:2002} K.~Papineni, S.~Roukos, T.~Ward, and W.-J. Zhu. \newblock Bleu: A method for automatic evaluation of machine translation. \newblock In {\em Proceedings of ACL}, pages 311--318, 2002. \bibitem{mixer15} M.~Ranzato, S.~Chopra, M.~Auli, and W.~Zaremba. \newblock Sequence level training with recurrent neural networks. \newblock {\em CoRR}, abs/1511.06732, 2015. \bibitem{ren15visual} M.~Ren, R.~Kiros, and R.~Zemel. \newblock Image question answering: A visual semantic embedding model and a new dataset. \newblock In {\em NIPS}, 2015. \bibitem{sutskever-etal:2014} I.~Sutskever, O.~Vinyals, and Q.~V.~V. Le. \newblock Sequence to sequence learning with neural networks. \newblock In {\em Advances in Neural Information Processing Systems 27}, pages 3104--3112. Curran Associates, Inc., 2014. \bibitem{inception-v3} C.~Szegedy, V.~Vanhoucke, S.~Ioffe, J.~Shlens, and Z.~Wojna. \newblock Rethinking the inception architecture for computer vision. \newblock volume abs/1512.00567, 2015. \bibitem{tu14video} K.~Tu, M.~Meng, M.~W. Lee, T.~E. Choe, and S.~C. Zhu. \newblock Joint video and text parsing for understanding events and answering queries. \newblock {\em IEEE MultiMedia}, 2014. \bibitem{cider} R.~Vedantam, C.~Lawrence~Zitnick, and D.~Parikh. \newblock Cider: Consensus-based image description evaluation. \newblock In {\em The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)}, June 2015. \bibitem{vinyals2014show} O.~Vinyals, A.~Toshev, S.~Bengio, and D.~Erhan. \newblock Show and tell: A neural image caption generator. \newblock In {\em Proc. of IEEE Conference on Computer Vision and Pattern Recognition (CVPR)}, 2015. \bibitem{wu16survey} Q.~Wu, D.~Teney, P.~Wang, C.~Shen, A.~R. Dick, and A.~van~den Hengel. \newblock Visual question answering: {A} survey of methods and datasets. \newblock {\em CoRR}, abs/1607.05910, 2016. \bibitem{wu16external} Q.~Wu, P.~Wang, C.~Shen, A.~Dick, and A.~van~den Hengel. \newblock Ask me anything: Free-form visual question answering based on knowledge from external sources. \newblock In {\em CVPR}, 2016. \bibitem{xu-etal:2016} K.~Xu, J.~Ba, R.~Kiros, K.~Cho, A.~C. Courville, R.~Salakhutdinov, R.~S. Zemel, and Y.~Bengio. 
\newblock Show, attend and tell: Neural image caption generation with visual attention. \newblock {\em CoRR}, abs/1502.03044, 2015. \bibitem{xu2015show} K.~Xu, J.~Ba, R.~Kiros, A.~Courville, R.~Salakhutdinov, R.~Zemel, and Y.~Bengio. \newblock Show, attend and tell: Neural image caption generation with visual attention. \newblock In {\em Proc. of the 32nd International Conference on Machine Learning (ICML)}, 2015. \bibitem{yang16san} Z.~Yang, X.~He, J.~Gao, L.~Deng, and A.~Smola. \newblock Stacked attention networks for image question answering. \newblock In {\em CVPR}, 2016. \bibitem{yao-li:2010} B.~Yao and F.-F. Li. \newblock Modeling mutual context of object and human pose in human-object interaction activities. \newblock In {\em Proceedings of the 2010 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)}, 2010. \bibitem{yu15madlib} L.~Yu, E.~Park, A.~C. Berg, and T.~L. Berg. \newblock Visual madlibs: Fill-in-theblank description generation and question answering. \newblock In {\em ICCV}, 2015. \bibitem{zhu16visual7w} Y.~Zhu, O.~Groth, M.~Bernstein, and L.~Fei-Fei. \newblock Visual7w: Grounded question answering in images. \newblock In {\em CVPR}, 2016. \end{thebibliography} \end{document}
A recurrent neural network without chaos
1612.06212
Table 3: Experiments on Penn Treebank with dropout.
[ "Model", "Size", "Training", "Val. perp.", "Test perp." ]
[ [ "Vanilla RNN", "20M parameters", "Jozefowicz et al. ( 2015 )", "103.0", "97.7" ], [ "GRU", "20M parameters", "Jozefowicz et al. ( 2015 )", "95.5", "91.7" ], [ "LSTM", "20M parameters", "Jozefowicz et al. ( 2015 )", "83.3", "78.8" ], [ "LSTM (2 layers)", "20M parameters", "Trained by us", "78.4", "74.3" ], [ "CFN (2 layers)", "20M parameters", "Trained by us", "79.7", "74.9" ], [ "LSTM (2 layers)", "50M parameters", "Trained by us", "75.9", "71.8" ], [ "CFN (2 layers)", "50M parameters", "Trained by us", "77.0", "72.2" ] ]
Experiments with Dropout. The dropout rates p and q are chosen as follows: for the experiments with 20M parameters, we set p=55% and q=45% for the CFN and p=60% and q=40% for the LSTM; for the experiments with 50M parameters, we set p=65% and q=55% for the CFN and p=70% and q=50% for the LSTM.
\documentclass{article} % For LaTeX2e \usepackage[named]{algorithm} \newtheorem{remark}{Remark} \newtheorem{theorem}{Theorem} \newtheorem{definition}{Definition} \newtheorem{lemma}{Lemma} \newtheorem{proposition}{Proposition} \newtheorem{claim}{Claim} \newtheorem{corollary}{Corollary} \newcommand{\cX}{\mathcal{X}} \newcommand{\N}{\mathbb{N}} \newcommand{\R}{\mathbb{R}} \newcommand{\fu}{\mathfrak{u}} \newcommand{\fv}{\mathfrak{v}} \newcommand{\veps}{\varepsilon} \newcommand{\lsm}[1]{ \mathrm{logsoftmax}\left(#1\right) } \title{A recurrent neural network without chaos } \author{Thomas Laurent \\ Department of Mathematics\\ Loyola Marymount University\\ Los Angeles, CA 90045, USA \\ \texttt{tlaurent@lmu.edu} \\ \And James von Brecht \\ Department of Mathematics\\ California State University, Long Beach\\ Long Beach, CA 90840, USA \\ \texttt{james.vonbrecht@csulb.edu} \\ } \newcommand{\fix}{\marginpar{FIX}} \newcommand{\new}{\marginpar{NEW}} \begin{document} \maketitle \begin{abstract} We introduce an exceptionally simple gated recurrent neural network (RNN) that achieves performance comparable to well-known gated architectures, such as LSTMs and GRUs, on the word-level language modeling task. We prove that our model has simple, predicable and non-chaotic dynamics. This stands in stark contrast to more standard gated architectures, whose underlying dynamical systems exhibit chaotic behavior. \end{abstract} \section{Introduction} Gated recurrent neural networks, such as the Long Short Term Memory network (LSTM) introduced by \cite{hochreiter1997long} and the Gated Recurrent Unit (GRU) proposed by \cite{cho2014learning}, prove highly effective for machine learning tasks that involve sequential data. We propose an exceptionally simple variant of these gated architectures. The basic model takes the form \begin{equation} \label{cfn1} h_{t} = \theta_{t} \odot \tanh(h_{t-1} ) + \eta_{t} \odot \tanh(W x_{t} ) , \end{equation} where $\odot$ stands for the Hadamard product. The horizontal/forget gate (i.e. $\theta_{t}$) and the vertical/input gate (i.e. $\eta_{t}$) take the usual form used in most gated RNN architectures. Specifically %(i.e. the $f_t,i_t$ and $o_t$ gates in LSTM or the $\theta_{t}$ and $r_{t}$ gates in GRU) \begin{equation} \label{cfn2} \theta_{t} := \sigma\left( U_{\theta} h_{t-1} + V_{\theta} x_{t} + b_{\theta} \right) \quad \text{and} \quad \eta_{t} := \sigma\left( U_{\eta} h_{t-1} + V_{\eta} x_{t} + b_{\eta} \right) \end{equation} where $\sigma(x):=(1 + \mathrm{e}^{-x} )^{-1}$ denotes the logistic sigmoid function. The network \eqref{cfn1}--\eqref{cfn2} has quite intuitive dynamics. Suppose the data $x_{t}$ present the model with a sequence \begin{equation}\label{eq:IR} (Wx_t)(i)= \begin{cases} 10 & \text{if $t =T$ } \\ 0 & \text{otherwise}, \end{cases} \end{equation} where $(Wx_t)(i)$ stands for the $i^{{\rm th} }$ component of the vector $Wx_t$. In other words we consider an input sequence $x_{t}$ for which the learned $i^{ {\rm th} }$ feature $(Wx_t)(i)$ remains off except at time $T$. When initialized from $h_0 = 0$, the corresponding response of the network to this ``impulse" in the $i^{ {\rm th} }$ feature is \begin{equation} h_t(i) \approx \begin{cases} \label{response} 0 & \text{if } t < T \\ \eta_T & \text{if } t =T \\ \alpha_t & \text{if } t >T \\ \end{cases} \end{equation} with $\alpha_t$ a sequence that relaxes toward zero. The forget gate $\theta_t$ control the rate of this relaxation. 
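For concreteness, the update \eqref{cfn1}--\eqref{cfn2} and the impulse response just described can be simulated with a few lines of numpy (a minimal sketch of ours; the dimensions and random weights are arbitrary and not trained parameters):

\begin{verbatim}
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cfn_step(h_prev, x, p):
    """One step of the CFN update (1)-(2); p holds the weight matrices."""
    theta = sigmoid(p["U_theta"] @ h_prev + p["V_theta"] @ x + p["b_theta"])
    eta   = sigmoid(p["U_eta"]   @ h_prev + p["V_eta"]   @ x + p["b_eta"])
    return theta * np.tanh(h_prev) + eta * np.tanh(p["W"] @ x)

# Impulse response: feature i of W x_t fires once at t = T, after which the
# corresponding hidden unit relaxes toward zero at a rate set by theta.
d, T, steps = 4, 5, 20
rng = np.random.default_rng(0)
p = {k: 0.1 * rng.normal(size=(d, d)) for k in
     ["U_theta", "V_theta", "U_eta", "V_eta", "W"]}
p["b_theta"] = np.zeros(d); p["b_eta"] = np.zeros(d)
h = np.zeros(d)
for t in range(steps):
    x = np.zeros(d)
    if t == T:  # choose x so that (W x)(0) = 10 and the other features are 0
        x = np.linalg.solve(p["W"], 10.0 * np.eye(d)[0])
    h = cfn_step(h, x, p)
    print(t, round(float(h[0]), 4))
\end{verbatim}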
Thus $h_{t}(i)$ activates when presented with a strong $i^{ {\rm th} }$ feature, and then relaxes toward zero until the data present the network once again with strong $i^{ {\rm th} }$ feature. Overall this leads to a dynamically simple model, in which the activation patterns in the hidden states of the network have a clear cause and predictable subsequent behavior. Dynamics of this sort do not occur in other RNN models. Instead, the three most popular recurrent neural network architectures, namely the vanilla RNN, the LSTM and the GRU, have complex, irregular, and unpredictable dynamics. Even in the absence of input data, these networks can give rise to chaotic dynamical systems. In other words, when presented with null input data the activation patterns in their hidden states do not necessarily follow a predictable path. The proposed network \eqref{cfn1}--\eqref{cfn2} has rather dull and minimalist dynamics in comparison; its only attractor is the zero state, and so it stands at the polar-opposite end of the spectrum from chaotic systems. Perhaps surprisingly, at least in the light of this comparison, the proposed network \eqref{cfn1} performs as well as LSTMs and GRUs on the word level language modeling task. We therefore conclude that the ability of an RNN to form chaotic temporal dynamics, in the sense we describe in Section 2, cannot explain its success on word-level language modeling tasks. In the next section, we review the phenomenon of chaos in RNNs via both synthetic examples and trained models. We also prove a precise, quantified description of the dynamical picture \eqref{eq:IR}--\eqref{response} for the proposed network. In particular, we show that the dynamical system induced by the proposed network is never chaotic, and for this reason we refer to it as a Chaos-Free Network (CFN). The final section provides a series of experiments that demonstrate that CFN achieve results comparable to LSTM on the word-level language modeling task. All together, these observations show that an architecture as simple as \eqref{cfn1}--\eqref{cfn2} can achieve performance comparable to the more dynamically complex LSTM. \section{Chaos in Recurrent Neural Networks} The study of RNNs from a dynamical systems point-of-view has brought fruitful insights into generic features of RNNs \citep{sussillo2013opening, pascanu2013difficulty}. We shall pursue a brief investigation of CFN, LSTM and GRU networks using this formalism, as it allows us to identify key distinctions between them. Recall that for a given mapping $\Phi : \R^{d} \mapsto \R^{d},$ a given initial time $t_{0} \in \mathbb{N}$ and a given initial state $\fu_{0} \in \R^{d},$ a simple repeated iteration of the mapping $\Phi$ \begin{align*} \fu_{t+1} &= \Phi( \fu_{t} ) \quad t > t_0,\\ \fu_{t_0} &= \fu_{0} \quad \quad \;\, t = t_0, \end{align*} defines a \emph{discrete-time dynamical system}. The index $t \in \N$ represents the current time, while the point $\fu_{t} \in \R^{d}$ represents the current state of the system. The set of all visited states $ \mathcal{O}^{+}(\fu_0) := \{ \fu_{t_0} , \fu_{t_0+1} , \ldots , \fu_{t_0 + n } , \ldots \} $ defines the \emph{forward trajectory} or \emph{forward orbit} through $\fu_{0}$. An {\it attractor} for the dynamical system is a set that is invariant (any trajectory that starts in the set remains in the set) and that attracts all trajectories that start sufficiently close to it. 
The attractors of chaotic dynamical systems are often fractal sets, and for this reason they are referred to as {\it strange attractors}. Most RNNs generically take the functional form \begin{equation} \label{eq:extforce} \fu_{t} = \Psi( \fu_{t-1} , W_{1} x_{t} , W_{2} x_{t} , \ldots , W_{k}x_{t} ), \end{equation} where $x_{t}$ denotes the $t^{ {\rm th}}$ input data point. For example, in the case of the CFN \eqref{cfn1}--\eqref{cfn2}, we have $W_1=W$, $W_2=V_{\theta}$ and $W_3=V_{\eta}$. To gain insight into the underlying design of the architecture of an RNN, it proves usefull to consider how trajectories %$\fu_{t_0}$, $\fu_{t_0+1}$, $\fu_{t_0+2}$, \ldots, behave when they are not influenced by any external input. This lead us to consider the dynamical system \begin{equation}\label{eq:gendyn} \fu_{t} = \Phi(\fu_{t-1}) \qquad \Phi( \fu ) := \Psi( \fu ,0 ,0 , \ldots ,0 ), \end{equation} which we refer to as the {\it dynamical system induced} by the recurrent neural network. The time-invariant system \eqref{eq:gendyn} is much more tractable than \eqref{eq:extforce}, and it offers a mean to investigate the inner working of a given architecture; it separates the influence of input data $x_{t},$ which can produce essentially any possible response, from the model itself. Studying trajectories that are not influenced by external data will give us an indication on the ability of a given RNN to generate complex and sophisticated trajectories by its own. As we shall see shortly, the dynamical system induced by a CFN has excessively simple and predictable trajectories: all of them converge to the zero state. In other words, its only attractor is the zero state. This is in sharp contrast with the dynamical systems induced by LSTM or GRU, who can exhibit chaotic behaviors and have {\it strange attractors}. % (see Figure \ref{fig:att}). The learned parameters $W_{j}$ in \eqref{eq:extforce} describe how data influence the evolution of hidden states at each time step. From a modeling perspective, \eqref{eq:gendyn} would occur in the scenario where a trained RNN has learned a weak coupling between a specific data point $x_{t_0}$ and the hidden state at that time, in the sense that the data influence is small and so all $W_{j} x_{t_0} \approx 0$ nearly vanish. The hidden state then transitions according to $ \fu_{t_0} \approx \Psi( \fu_{t_0-1} , 0, 0 , \ldots , 0) = \Phi(\fu_{t_0-1}). $ We refer to \cite{bertschinger2004real} for a study of the chaotic behavior of a simplified vanilla RNN with a specific statistical model, namely an i.i.d. Bernoulli process, for the input data as well as a specific statistical model, namely i.i.d. Gaussian, for the weights of the recurrence matrix. \subsection{Chaotic behavior of LSTM and GRU in the absence of input data} In this subsection we briefly show that LSTM and GRU, {\it in the absence of input data}, can lead to dynamical systems $\fu_t = \Phi(\fu_{t-1})$ that are chaotic in the classical sense of the term \citep{strogatz2014nonlinear}. 
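Concretely, the induced dynamical system \eqref{eq:gendyn} is obtained by freezing all data terms at zero. The sketch below (a toy vanilla RNN cell with placeholder weights, not a trained model) shows the construction: define the full step $\Psi$, then iterate $\Phi(\fu)=\Psi(\fu,0,\ldots,0)$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
d = 3
U = rng.normal(0, 1.5, (d, d))               # recurrence weights (placeholder values)
W = rng.normal(0, 1.0, (d, d))               # input weights (placeholder values)

def step(u, x):                              # u_t = Psi(u_{t-1}, W x_t): a vanilla RNN cell
    return np.tanh(U @ u + W @ x)

induced = lambda u: step(u, np.zeros(d))     # Phi(u) := Psi(u, 0, ..., 0)

u = rng.uniform(-1, 1, d)
for _ in range(1000):
    u = induced(u)
print(u)                                     # the data-free trajectory after 1000 steps
\end{verbatim}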
Figure \ref{fig:att} depicts the strange attractor of the dynamical system: \begin{align}\label{eq:lstms} &\fu_{t} = \begin{bmatrix} h_{t} \\ c_{t} \end{bmatrix} \quad \fu \mapsto \Phi(\fu) = \begin{bmatrix} o \odot \tanh\left( f \odot c + i \odot g \right) \\ f \odot c + i \odot g \end{bmatrix}\\ &i := \sigma( W_{i} h + b_{i} ) \quad f := \sigma( W_{f} h + b_{f} ) \quad o := \sigma( W_{o} h + b_{o} ) \quad g := \tanh( W_{g} h + b_{g} ) \end{align} induced by a two-unit LSTM with weight matrices \begin{equation} W_{i} = \begin{bmatrix} -1 & -4 \\ -3 & -2 \end{bmatrix}\quad W_{o} = \begin{bmatrix} \;\;\,4 & \;\;\,1\\ -9 &-7 \end{bmatrix}\quad W_{f} = \begin{bmatrix} -2 &\;\;\, 6\\ \;\;\,0 & -6 \end{bmatrix}\quad W_{g} = \begin{bmatrix} -1 & -6\\ \;\;\, 6 & -9 \end{bmatrix} \label{eq:lstms2} \end{equation} and zero bias for the model parameters. These weights were randomly generated from a normal distribution with standard deviation 5 and then rounded to the nearest integer. Figure \ref{fig:att}(a) was obtained by choosing an initial state $\fu_0=(h_0,c_0)$ uniformly at random in $[0,1]^2\times [0,1]^2$ and plotting the h-component of the iterates $\fu_t=(h_t,c_t)$ for $t$ between $10^3$ and $10^5$ (so this figure should be regarded as a two dimensional projection of a four dimensional attractor, which explain its tangled appearance). Most trajectories starting in $[0,1]^2\times [0,1]^2$ converge toward the depicted attractor. The resemblance between this attractor and classical strange attractors such as the H\'enon attractor is striking (see Figure \ref{fig:attbis} in the appendix for a depiction of the H\'enon attractor). Successive zooms on the branch of the LSTM attractor from Figure \ref{fig:att}(a) reveal its fractal nature. Figure \ref{fig:att}(b) is an enlargement of the red box in Figure \ref{fig:att}(a), and Figure \ref{fig:att}(c) is an enlargement of the magenta box in Figure \ref{fig:att}(b). We see that the structure repeats itself as we zoom in. The most practical consequence of chaos is that the long-term behavior of their forward orbits can exhibit a high degree of sensitivity to the initial states $\fu_{0}$. Figure \ref{fig:fill} provides an example of such behavior for the dynamical system \eqref{eq:lstms}--\eqref{eq:lstms2}. An initial condition $\fu_0$ was drawn uniformly at random in $[0,1]^2\times [0,1]^2$. We then computed $100,000$ small amplitude perturbations $\hat{\fu}_0$ of $\fu_0$ by adding a small random number drawn uniformly from $[-10^{-7},10^{-7}]$ to each component. We then iterated \eqref{eq:lstms}--\eqref{eq:lstms2} for $200$ steps and plotted the h-component of the final state $\hat\fu_{200}$ for each of the $100,000$ trials on Figure \ref{fig:fill}(a). The collection of these $100,000$ final states essentially fills out the entire attractor, despite the fact that their initial conditions are highly localized (i.e. at distance of no more than $10^{-7}$) around a fixed point. In other words, the time $t=200$ map of the dynamical system will map a small neighborhood around a fixed initial condition $\fu_0$ to the entire attractor. Figure \ref{fig:fill}(b) additionally illustrates this sensitivity to initial conditions for points on the attractor itself. We take an initial condition $\fu_0$ on the attractor and perturb it by $10^{-7}$ to a nearby initial condition $\hat{\fu}_0$. We then plot the distance $\|\hat{\fu}_t-\fu_t\|$ between the two corresponding trajectories for the first 200 time steps. 
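The sensitivity experiment just described fits in a few lines; the following sketch (not the authors' code) iterates the zero-input LSTM map \eqref{eq:lstms} with the integer weights \eqref{eq:lstms2} and tracks the distance between two trajectories started $10^{-7}$ apart.
\begin{verbatim}
import numpy as np

sig = lambda z: 1.0 / (1.0 + np.exp(-z))
Wi = np.array([[-1., -4.], [-3., -2.]])
Wo = np.array([[ 4.,  1.], [-9., -7.]])
Wf = np.array([[-2.,  6.], [ 0., -6.]])
Wg = np.array([[-1., -6.], [ 6., -9.]])

def lstm_map(h, c):                          # Phi for the 2-unit LSTM, zero input, zero bias
    i, f = sig(Wi @ h), sig(Wf @ h)
    o, g = sig(Wo @ h), np.tanh(Wg @ h)
    c_new = f * c + i * g
    return o * np.tanh(c_new), c_new

rng = np.random.default_rng(0)
h, c = rng.uniform(0, 1, 2), rng.uniform(0, 1, 2)
hp = h + rng.uniform(-1e-7, 1e-7, 2)         # perturbed copy of the initial state
cp = c + rng.uniform(-1e-7, 1e-7, 2)

for t in range(1, 201):
    h, c = lstm_map(h, c)
    hp, cp = lstm_map(hp, cp)
    if t % 40 == 0:
        gap = np.linalg.norm(np.concatenate([h - hp, c - cp]))
        print(t, gap)                # the gap typically grows by many orders of magnitude
\end{verbatim}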
After an initial phase of agreement, the trajectories strongly diverge. The synthetic example \eqref{eq:lstms}--\eqref{eq:lstms2} illustrates the potentially chaotic nature of the LSTM architecture. We now show that chaotic behavior occurs for \emph{trained} models as well, and not just for synthetically generated instances. We take the parameter values of an LSTM with $228$ hidden units trained on the Penn Treebank corpus without dropout (c.f. the experimental section for the precise procedure). We then set all data inputs $x_{t}$ to zero and run the corresponding induced dynamical system. Two trajectories starting from nearby initial conditions $\fu_0$ and $\hat{\fu}_0$ were computed (as before $\hat{\fu}_0$ was obtained by adding to each components of $\fu_0$ a small random number drawn uniformly from $[-10^{-7},10^{-7}]$). Figure \ref{fig:trained}(a) plots the first component h(1) of the hidden state for both trajectories over the first 1600 time steps. After an initial phase of agreement, the forward trajectories $\mathcal{O}^+(\fu_0)$ and $\mathcal{O}^+(\hat{\fu}_0)$ strongly diverge. We also see that both trajectories exhibit the typical aperiodic behavior that characterizes chaotic systems. If the inputs $x_{t}$ do not vanish, but come from actual word-level data, then the behavior is very different. The LSTM is now no longer an autonomous system whose dynamics are driven by its hidden states, but a time dependent system whose dynamics are mostly driven by the external inputs. Figure \ref{fig:trained}(b) shows the first component $h(1)$ of the hidden states of two trajectories that start with initial conditions $\fu_0$ and $\hat{\fu}_0$ that are far apart. The sensitivity to initial condition disappears, and instead the trajectories converge toward each other after about $70$ steps. The memory of this initial difference is lost. Overall these experiments indicate that a trained LSTM, when it is not driven by external inputs, can be chaotic. In the presence of input data, the LSTM becomes a forced system whose dynamics are dominated by external forcing. Like LSTM networks, GRU can also lead to dynamical systems that are chaotic and they can also have strange attractors. The depiction of such an attractor, in the case of a two-unit GRU, is provided in Figure \ref{fig:GRU} of the appendix. \subsection{Chaos-free behavior of the CFN} The dynamical behavior of the CFN is dramatically different from that of the LSTM. In this subsection we start by showing that the hidden states of the CFN activate and relax toward zero in a predictable fashion in response to input data. On one hand, this shows that the CFN cannot produce non-trivial dynamics without some influence from data. On the other, this leads to an interpretable model; any non-trivial activations in the hidden states of a CFN have a clear cause emanating from data-driven activation. This follows from a precise, quantified description of the intuitive picture \eqref{eq:IR}--\eqref{response} sketched in the introduction. We begin with the following simple estimate that sheds light on how the hidden states of the CFN activate and then relax toward the origin. 
\begin{lemma} \label{estimate} For any $T,k>0$ we have $$ |h_{T+k}(i)| \le \Theta^k \; |h_T(i)| + \frac{H}{1-\Theta} \left( \max_{T\le t \le T+k} \left|(Wx_t)(i)\right| \right) $$ where $\Theta$ and $H$ are the maximum values of the $i^{ {\rm th} }$ components of the $\theta$ and $\eta$ gate in the time interval $[T,T+k]$, that is: $$ \Theta = \max_{T\le t \le T+k} \theta_t(i) \quad \text{and} \quad H = \max_{T\le t \le T+k} \eta_t(i). $$ \end{lemma} This estimate shows that if during a time interval $[T_1,T_2]$ one of \begin{enumerate} \item[(i)] the embedded inputs $Wx_t$ have weak $i^{{\rm th} }$ feature (i.e. $\max_{T\le t \le T+k} \left|(Wx_t)(i)\right|$ is small), \item[(ii)] or the input gates $\eta_t$ have their $i^{ {\rm th} }$ component close to zero (i.e. $H$ is small), \end{enumerate} occurs then the $i^{ {\rm th} }$ component of the hidden state $h_t$ will relaxes toward zero at a rate that depends on the value of the $i^{ {\rm th} }$ component the the forget gate. Overall this leads to the following simple picture: $h_{t}(i)$ activates when presented with an embedded input $Wx_t$ with strong $i^{ {\rm th} }$ feature, and then relaxes toward zero until the data present the network once again with strong $i^{{\rm th}}$ feature. The strength of the activation and the decay rate are controlled by the $i^{ {\rm th} }$ component of the input and forget gates. The proof of Lemma \ref{estimate} is elementary --- \vspace{-.3cm} \begin{proof}[Proof of Lemma \ref{estimate}] Using the non-expansivity of the hyperbolic tangent, i.e. $|\tanh(x)| \leq |x|$, and the triangle inequality, we obtain from \eqref{cfn1} $$ |h_{t}(i)| \leq \Theta \; |h_{t-1}(i)| + H \; \max_{T\le t \le T+k} \left|(Wx_t)(i)\right| $$ whenever $t$ is in the interval $[T,T+k]$. Iterating this inequality and summing the geometric series then gives $$ |h_{T+k}(i)| \leq \Theta^k |h_{T}(i)| + \left(\frac{1 - \Theta^k}{ 1- \Theta } \right)\; H \; \max_{T\le t \le T+k} \left|(Wx_t)(i)\right| $$ from which we easily conclude. \end{proof} We now turn toward the analysis of the long-term behavior of the the dynamical system \begin{align}\label{eq:cfn} \fu_t = h_t, \qquad &\fu \mapsto \Phi(\fu) := \sigma\left( U_{\theta} \fu + b_{\theta} \right) \odot \tanh( \fu ). \end{align} induced by a CFN. The following lemma shows that the only attractor of this dynamical system is the zero state. \begin{lemma} \label{zero} Starting from any initial state $\fu_{0}$, the trajectory $\mathcal{O}^{+}(\fu_0)$ will eventually converge to the zero state. That is, $\lim_{t \to +\infty} \fu_t=0$ regardless of the the initial state $ \fu_{0}$. \end{lemma} \vspace{-.3cm} \begin{proof} From the definition of $\Phi$ we clearly have that the sequence defined by $\fu_{t+1}=\Phi(\fu_t)$ satisfies $-1<\fu_t(i)<1$ for all $t$ and all $i$. Since the sequence $\fu_t$ is bounded, so is the sequence ${\bf v}_t:=U_{\theta} \fu_t + b_{\theta}$. That is there exists a finite $C>0$ such that $ (U_{\theta} \fu_t)(i) + b_{\theta}(i)<C $ for all $t$ and $i$. Using the non-expansivity of the hyperbolic tangent, we then obtain that $ |\fu_{t}(i)| \le \sigma(C) |\fu_{t-1}(i)| $, for all $t$ and all $i$. We conclude by noting that $0<\sigma(C)<1$. 
\end{proof} Lemma \ref{zero} remains true for a multi-layer CFN, that is, a CFN in which the first layer is defined by \eqref{cfn1} and the subsequent layers $2 \leq \ell \leq L$ are defined by: \begin{equation*} \label{cfnmulti} h^{(\ell)}_{t} = \theta^{(\ell)}_{t} \odot \tanh(h^{(\ell)}_{t-1} ) + \eta^{(\ell)}_{t} \odot \tanh(W^{(\ell)} h^{(\ell-1)}_{t} ). \end{equation*} Assume that $Wx_t=0$ for all $t>T$, then an extension of the arguments contained in the proof of the two previous lemmas shows that \begin{equation} \label{multi} |h^{ (\ell) }_{T+k}| \leq C(1+k)^{(\ell-1)} \Theta^k \end{equation} where $0<\Theta<1$ is the maximal values for the input gates involved in layer $1$ to $\ell$ of the network, and $C>0$ is some constant depending only on the norms $\|W^{(j)}\|_{\infty}$ of the matrices and the sizes $|h^{(j)}_{T}|$ of the initial conditions at all previous $1 \leq j \leq \ell$ levels. Estimate \eqref{multi} shows that Lemma \ref{zero} remains true for multi-layer architectures. Inequality \eqref{multi} shows that higher levels (i.e. larger $\ell$) decay more slowly, and remain non-trivial, while earlier levels (i.e. smaller $\ell$) decay more quickly. We illustrate this behavior computationally with a simple experiment. We take a 2-layer, 224-unit CFN network trained on Penn Treebank and feed it the following input data: The first 1000 inputs $x_t$ are the first 1000 words of the test set of Penn Treebank; All subsequent inputs are zero. In other words, $x_t=0$ if $t>1000$. For each of the two layers we then select the 10 units that decay the slowest after $t>1000$ and plot them on Figure \ref{fig:relaxing_rate}. The first layer retains information for about 10 to 20 time steps, whereas the second layer retains information for about 100 steps. This experiment conforms with the analysis \eqref{multi}, and indicates that adding a third or fourth layer would potentially allow a multi-layer CFN architecture to retain information for even longer periods. \section{Experiments} In this section we show that despite its simplicity, the CFN network achieves performance comparable to the much more complex LSTM network on the word level language modeling task. We use two datasets for these experiments, namely the Penn Treebank corpus \citep{marcus1993building} and the Text8 corpus \citep{mikolov2014learning}. We consider both one-layer and two-layer CFNs and LSTMs for our experiments. We train both CFN and LSTM networks in a similar fashion and always compare models that use the same number of parameters. We compare their performance with and without dropout, and show that in both cases they obtain similar results. We also provide results published in \cite{mikolov2014learning}, \cite{empirical_exploration_ICML15} and \cite{sukhbaatar2015end} for the sake of comparison. 
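Before turning to the implementation details, the per-layer decay suggested by estimate \eqref{multi} can be checked with a toy simulation. The sketch below (random placeholder weights, far smaller than the trained 224-unit model) runs a two-layer CFN on data for 100 steps, then feeds zeros and prints the magnitude of each layer's hidden state as it decays.
\begin{verbatim}
import numpy as np

sig = lambda z: 1.0 / (1.0 + np.exp(-z))
rng = np.random.default_rng(0)
d = 8                                        # toy width; the trained model uses 224 units

def make_layer():
    p = {k: rng.normal(0, 0.3, (d, d)) for k in ("Ut", "Vt", "Ue", "Ve", "W")}
    p["bt"], p["be"] = np.ones(d), -np.ones(d)
    return p

def cfn_layer(p, h_prev, inp):
    theta = sig(p["Ut"] @ h_prev + p["Vt"] @ inp + p["bt"])
    eta   = sig(p["Ue"] @ h_prev + p["Ve"] @ inp + p["be"])
    return theta * np.tanh(h_prev) + eta * np.tanh(p["W"] @ inp)

L1, L2 = make_layer(), make_layer()
h1, h2 = np.zeros(d), np.zeros(d)
for t in range(300):
    x = rng.normal(0, 1, d) if t < 100 else np.zeros(d)   # data for 100 steps, then zeros
    h1 = cfn_layer(L1, h1, x)
    h2 = cfn_layer(L2, h2, h1)
    if t in (99, 120, 160, 240):
        print(t, float(np.abs(h1).max()), float(np.abs(h2).max()))
# Once x_t = 0, the second layer typically decays more slowly than the first.
\end{verbatim}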
For concreteness, the exact implementation for the two-layer architecture of our model is \begin{align} \label{bobo} h^{(0)}_t& = W^{(0)} x_t \nonumber \\ \hat h^{(0)}_t&=\text{Drop}(h^{(0)}_{t},p) \nonumber \\ h^{(1)}_{t} &= \theta^{(1)}_{t} \odot \tanh(h^{(1)}_{t-1} ) + \eta^{(1)}_{t} \odot \tanh(W^{(1)} \hat h^{(0)}_{t} ) \nonumber \\ \hat h^{(1)}_t&=\text{Drop}(h^{(1)}_{t},p) \nonumber \\ h^{(2)}_{t} &= \theta^{(2)}_{t} \odot \tanh(h^{(2)}_{t-1} ) + \eta^{(2)}_{t} \odot \tanh(W^{(2)} \hat h^{(1)}_{t} ) \nonumber \\ \hat h^{(2)}_t&=\text{Drop}(h^{(2)}_{t},p) \nonumber \\ y_t\;\;&=\text{LogSoftmax}(W^{(3)} \hat h^{(2)}_t+b) \end{align} where $\text{Drop}(z,p)$ denotes the dropout operator with a probability $p$ of setting components in $z$ to zero. We compute the gates according to \begin{align} &\theta^{(\ell)}_{t} := \sigma\left( U^{(\ell)}_{\theta} \tilde h^{(\ell)}_{t-1} + V^{(\ell)}_{\theta} \tilde h^{(\ell-1)}_{t} + b_{\theta} \right) \;\; \text{and} \; \; \eta^{(\ell)}_{t} := \sigma\left( U^{(\ell)}_{\eta} \tilde h^{(\ell)}_{t-1} + V^{(\ell)}_{\eta} \tilde h^{(\ell-1)}_{t} + b_{\eta} \right) \\ & \text{where } \quad \tilde h^{(\ell)}_{t-1}=\text{Drop}(h^{(\ell)}_{t-1},q) \quad \text{and} \quad \tilde h^{(\ell-1)}_{t}=\text{Drop}(h^{(\ell-1)}_{t},q),\label{bibi} \end{align} and thus the model has two dropout hyperparameters. The parameter $p$ controls the amount of dropout between layers; the parameter $q$ controls the amount of dropout inside each gate. We use a similar dropout strategy for the LSTM, in that all sigmoid gates $f,o$ and $i$ receive the same amount $q$ of dropout. To train the CFN and LSTM networks, we use a simple online steepest descent algorithm. We update the weights $w$ via \begin{equation} w^{(k+1)} = w^{(k)} - \text{lr} \cdot \vec p \quad \text{ where } \quad \vec p = \frac{\nabla_w L}{\|\nabla_w L\|_2} \label{sngd}, \end{equation} and $\nabla_w L$ denotes the approximate gradient of the loss with respect to the weights as estimated from a certain number of presented examples. We use the usual backpropagation through time approximation when estimating the gradient: we unroll the net $T$ steps in the past and neglect longer dependencies. In all experiments, the CFN and LSTM networks are unrolled for $T=35$ steps and we take minibatches of size $20$. In the case of an exact gradient, the update \eqref{sngd} simply corresponds to making a step of length $\text{lr}$ in the direction of steepest descent. As all search directions $\vec{p}$ have Euclidean norm $\| \vec{p} \|_{2} = 1$, we perform no gradient clipping during training. We initialize all the weights in the CFN, except for the bias of the gates, uniformly at random in $[-0.07,0.07]$. We initialize the bias $b_{\theta}$ and $b_\eta$ of the gates to $1$ and $-1,$ respectively, so that at the beginning of the training $$ \theta_t \approx \sigma(1)\approx 0.73 \quad \text{ and } \quad \eta_t \approx \sigma(-1) \approx 0.23. $$ We initialize the weights of the LSTM in exactly the same way; the bias for the forget and input gate are initialized to $1$ and $-1$, and all the other weights are initialized uniformly in $[-0.07,0.07]$. This initialization scheme favors the flow of information in the horizontal direction. The importance of a careful initialization of the forget gate was first pointed out in \cite{gers2000learning} and further emphasized in \cite{empirical_exploration_ICML15}. Finally, we initialize all hidden states to zero for both models. 
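A compact sketch of the training update \eqref{sngd} and of the initialisation just described (again a sketch, not the actual training script):
\begin{verbatim}
import numpy as np

def normalized_sgd_step(weights, grads, lr):
    """w <- w - lr * g / ||g||_2, with the norm taken over all parameters jointly."""
    flat = np.concatenate([g.ravel() for g in grads])
    norm = np.linalg.norm(flat) + 1e-12      # guard against an all-zero gradient
    return [w - lr * g / norm for w, g in zip(weights, grads)]

# Initialisation described in the text: uniform weights in [-0.07, 0.07],
# gate biases b_theta = 1 and b_eta = -1 (so theta ~ sigma(1), eta ~ sigma(-1) at start).
rng = np.random.default_rng(0)
d = 224
def init(*shape):
    return rng.uniform(-0.07, 0.07, shape)
params = [init(d, d), init(d, d), np.ones(d), -np.ones(d)]  # W, U_theta, b_theta, b_eta

# One illustrative update with a made-up gradient:
grads = [np.ones_like(p) for p in params]
params = normalized_sgd_step(params, grads, lr=5.0)
\end{verbatim}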
{\bf Dataset Construction.} The Penn Treebank Corpus has 1 million words and a vocabulary size of 10,000. We used the code from \cite{zaremba2014recurrent} to construct and split the dataset into a training set (929K words), a validation set (73K words) and a test set (82K words). The Text8 corpus has 100 million characters and a vocabulary size of 44,000. We used the script from \cite{mikolov2014learning} to construct and split the dataset into a training set (first 99M characters) and a development set (last 1M characters). {\bf Experiments without Dropout.} Tables \ref{PTB5} and \ref{text8} provide a comparison of various recurrent network architectures without dropout evaluated on the Penn Treebank corpus and the Text8 corpus. The last two rows of each table provide results for LSTM and CFN networks trained and initialized in the manner described above. We have tried both one and two layer architectures, and reported only the best result. The learning rate schedules used for each network are described in the appendix. We also report results published in \cite{empirical_exploration_ICML15} were a vanilla RNN, a GRU and an LSTM network were trained on Penn Treebank, each of them having 5 million parameters (only the test perplexity was reported). % Note that the LSTM network that we trained with the online steepest descent algorithm reaches lower perplexity than what was reported in \cite{empirical_exploration_ICML15} (105 vs 110). It is unclear wheter it is due to the training algorithm or simply to better hyperparameters tuning. Finally we report results published in \cite{mikolov2014learning} and \cite{sukhbaatar2015end} where various networks are trained on Text8. Of these four networks, only the LSTM network from \cite{mikolov2014learning} has the same number of parameters than the CFN and LSTM networks we trained (46.4M parameters). The vanilla RNN, Structurally Constrained Recurrent Network (SCRN) and End-To-End Memory Network (MemN2N) all have 500 units, but less than 46.4M parameters. We nonetheless indicate their performance in Table \ref{text8} to provide some context. {\bf Experiments with Dropout.} Table \ref{PTB20} provides a comparison of various recurrent network architectures with dropout evaluated on the Penn Treebank corpus. The first three rows report results published in \citep{empirical_exploration_ICML15} and the last four rows provide results for LSTM and CFN networks trained and initialized with the strategy previously described. The dropout rate $p$ and $q$ are chosen as follows: For the experiments with 20M parameters, we set $p=55\%$ and $q=45\%$ for the CFN and $p=60\%$ and $q=40\%$ for the LSTM; For the experiments with 50M parameters, we set $p=65\%$ and $q=55\%$ for the CFN and $p=70\%$ and $q=50\%$ for the LSTM. \section{Conclusion} Despite its simple dynamics, the CFN obtains results that compare well against LSTM networks and GRUs on word-level language modeling. This indicates that it might be possible, in general, to build RNNs that perform well while avoiding the intricate, uninterpretable and potentially chaotic dynamics that can occur in LSTMs and GRUs. Of course, it remains to be seen if dynamically simple RNNs such as the proposed CFN can perform well on a wide variety of tasks, potentially requiring longer term dependencies than the one needed for word level language modeling. 
The experiments presented in Section 2 indicate a plausible path forward --- activations in the higher layers of a multi-layer CFN decay at a slower rate than the activations in the lower layers. In theory, complexity and long-term dependencies can therefore be captured using a more ``feed-forward'' approach (i.e. stacking layers) rather than relying on the intricate and hard to interpret dynamics of an LSTM or a GRU. Overall, the CFN is a simple model and it therefore has the potential of being mathematically well-understood. In particular, Section 2 reveals that the dynamics of its hidden states are inherently more interpretable than those of an LSTM. The mathematical analysis here provides a few key insights into the network, in both the presence and absence of input data, but obviously more work is needed before a complete picture can emerge. We hope that this investigation opens up new avenues of inquiry, and that such an understanding will drive subsequent improvements. \bibliographystyle{iclr2017_conference} \newpage \section*{Appendix} {\bf Strange attractor of the H\'enon map.} For the sake of comparison, we provide in Figure \ref{fig:attbis} a depiction of a well-known strange attractor (the H\'enon attractor) arising from a discrete-time dynamical system. We generate these pictures by reproducing the numerical experiments from \cite{henon1976two}. The discrete dynamical system considered here is the two dimensional map $$ x_{t+1}=y_{t}+1-ax^2_{t}, \quad y_{t+1}=bx_{t}, $$ with parameters set to $a= 1.4$ and $b = 0.3$. We obtain Figure \ref{fig:attbis}(a) by choosing the initial state $(x_0,y_0)=(0,0)$ and plotting the iterates $(x_{t}, y_{t})$ for $t$ between $10^3$ and $10^5$. All trajectories starting close to the origin at time $t=0$ converge toward the depicted attractor. Successive zooms on the branch of the attractor reveal its fractal nature. The structure repeats in a fashion remarkably similar to the 2-unit LSTM in Section 2. {\bf Strange attractor of a 2-unit GRU.} As with LSTMs, the GRU gated architecture can induce a chaotic dynamical system. Figure \ref{fig:GRU} depicts the strange attractor of the dynamical system \begin{align*}%\label{eq:grus} \fu_t = h_t, \qquad &\fu \mapsto \Phi(\fu) := (1 - z) \odot \fu + z \odot \tanh\left( U(r \odot \fu) \right) \\ &z := \sigma\left( W_{z} \fu + b_{z} \right) \quad r := \sigma\left( W_{r} \fu + b_{r} \right) \nonumber, \end{align*} induced by a two-dimensional GRU, with weight matrices $$ W_{z} = \begin{bmatrix} 0 & 1 \\ 1 & 1 \end{bmatrix} \quad W_{r} = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} \quad U = \begin{bmatrix} -5 & -8 \\ \;\,\,8 & \;\,\,5 \end{bmatrix} $$ and zero bias for the model parameters. Here also successive zooms on the branch of the attractor reveal its fractal nature. As in the LSTM, the forward trajectories of this dynamical system exhibit a high degree of sensitivity to initial states. \newpage {\bf Network sizes and learning rate schedules used in the experiments.} In the Penn Treebank experiment without dropout (Table 1), the CFN network has two hidden layers of 224 units each for a total of 5 million parameters. The LSTM has one hidden layer with 228 units for a total of 5 million parameters as well. We also tried a two-layer LSTM with 5 million parameters but the result was worse (test perplexity of 110.6) and we did not report it in the table. %We used a simple and aggressive learning rate schedule for both architectures: $\text{lr}$ was divided by 3 at the end of each epoch. 
The initial learning rate for the CFN and LSTM networks were $\text{lr}_0=5$ and $\text{lr}_0=7$ respectively. For the Text8 experiments (Table 2), the LSTM has two hidden layers with 481 hidden units for a total 46.4 million parameters. We also tried a one-layer LSTM with 46.4 million parameters but the result was worse (perplexity of 140.8). The CFN has two hidden layers with 495 units each, for a total of 46.4 million parameters as well. %We kept the same aggressive learning rate than for the Penn Treebank experiments (i.e., divide lr by 3 at each epoch) and experimented with various initial leaning rate. For both experiments without dropout (Table 1 and 2), we used a simple and aggressive learning rate schedule: at each epoch, lr is divided by 3. For the CFN the initial learning rate was chosen to be $\text{lr}_0=5.5$ for PTB and $\text{lr}_0=5$ for Text8. For the LSTM we chose $\text{lr}_0=7$ for PTB and $\text{lr}_0=5$ for Text8. In the Penn Treebank experiment with dropout (Table 3), the CFN with 20M parameters has two hidden layers of 731 units each and the LSTM with 20M parameters trained by us has two hidden layers of 655 units each. We also tried a one-layer LSTM with 20M parameters and it led to similar but slightly worse results than the two-layer architecture. For both network, the learning rate was divided by $1.1$ each time the validation perplexity did not decrease by at least $1\%$. The initial learning rate were chosen to be $\text{lr}_0=7$ for the CFN and $\text{lr}_0=5$ for the LSTM. \end{document}
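The H\'enon map experiment recalled in the appendix above can be reproduced with a few lines; a minimal sketch (plotting with matplotlib left out) follows.
\begin{verbatim}
import numpy as np

a, b = 1.4, 0.3                          # classical Henon parameters, as in the appendix
x, y = 0.0, 0.0
points = []
for t in range(100_000):
    x, y = y + 1.0 - a * x * x, b * x
    if t >= 1_000:                       # discard the transient, keep t in [10^3, 10^5]
        points.append((x, y))
pts = np.array(points)
print(pts.min(axis=0), pts.max(axis=0))  # the orbit stays on the bounded strange attractor
# Plotting pts[:, 0] against pts[:, 1] reproduces the attractor picture.
\end{verbatim}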
Attention-over-Attention Neural Networks for Reading Comprehension
1607.04423
Table 4: Detailed results of 5-best re-ranking on CBTest NE/CN datasets. Each row includes all of the features from previous rows. LM_global denotes the global LM, LM_local denotes the local LM, LM_wc denotes the word-class LM.
[ "[EMPTY]", "CBTest NE Valid", "CBTest NE Test", "CBTest CN Valid", "CBTest CN Test" ]
[ [ "AoA Reader", "77.8", "72.0", "72.2", "69.4" ], [ "+Global LM", "78.3", "72.6", "73.9", "71.2" ], [ "+Local LM", "79.4", "73.8", "74.7", "71.7" ], [ "+Word-class LM", "79.6", "74.0", "75.7", "73.1" ] ]
Generally speaking, in the NE category the performance is mainly boosted by the LM_local feature. The CN category, on the contrary, benefits from LM_global and LM_wc rather than LM_local.
\documentclass[11pt,a4paper]{article} \usepackage[hyperref]{acl2017} \aclfinalcopy % Uncomment this line for the final submission \def\aclpaperid{18} % Enter the acl Paper ID here \newcommand\BibTeX{B{\sc ib}\TeX} \title{Attention-over-Attention Neural Networks for Reading Comprehension} \author{Yiming Cui$^\dag$, Zhipeng Chen$^\dag$, Si Wei$^\dag$, Shijin Wang$^\dag$, Ting Liu$^\ddag$ \and Guoping Hu$^\dag$\\ {$^\dag$Joint Laboratory of HIT and iFLYTEK, iFLYTEK Research, Beijing, China}\\ {$^\ddag$Research Center for Social Computing and Information Retrieval,}\\ {Harbin Institute of Technology, Harbin, China}\\ {$^\dag$\tt\{ymcui,zpchen,siwei,sjwang3,gphu\}@iflytek.com}\\ {$^\ddag$\tt tliu@ir.hit.edu.cn}\\ } \date{} \begin{document} \maketitle \begin{abstract} Cloze-style reading comprehension is a representative problem in mining relationship between document and query. In this paper, we present a simple but novel model called {\em attention-over-attention} reader for better solving cloze-style reading comprehension task. The proposed model aims to place another attention mechanism over the document-level attention and induces ``attended attention'' for final answer predictions. One advantage of our model is that it is simpler than related works while giving excellent performance. In addition to the primary model, we also propose an N-best re-ranking strategy to double check the validity of the candidates and further improve the performance. Experimental results show that the proposed methods significantly outperform various state-of-the-art systems by a large margin in public datasets, such as CNN and Children's Book Test. \end{abstract} \section{Introduction}\label{introduction} To read and comprehend the human languages are challenging tasks for the machines, which requires that the understanding of natural languages and the ability to do reasoning over various clues. Reading comprehension is a general problem in the real world, which aims to read and comprehend a given article or context, and answer the questions based on it. Recently, the cloze-style reading comprehension problem has become a popular task in the community. The cloze-style query \cite{taylor-etal-1953} is a problem that to fill in an appropriate word in the given sentences while taking the context information into account. To teach the machine to do cloze-style reading comprehensions, large-scale training data is necessary for learning relationships between the given document and query. To create large-scale training data for neural networks, \newcite{hermann-etal-2015} released the CNN/Daily Mail news dataset, where the document is formed by the news articles and the queries are extracted from the summary of the news. \newcite{hill-etal-2015} released the Children's Book Test dataset afterwards, where the training samples are generated from consecutive 20 sentences from books, and the query is formed by 21st sentence. Following these datasets, a vast variety of neural network approaches have been proposed \citep{kadlec-etal-2016,cui-etal-2016,chen-etal-2016,dhingra-etal-2016,sordoni-etal-2016,trischler-etal-2016,seo-etal-2016,xiong-etal-2016}, and most of them stem from the attention-based neural network \cite{bahdanau-etal-2014}, which has become a stereotype in most of the NLP tasks and is well-known by its capability of learning the ``importance'' distribution over the inputs. In this paper, we present a novel neural network architecture, called {\em attention-over-attention} model. 
As we can understand the meaning literally, our model aims to place another attention mechanism over the existing document-level attention. Unlike the previous works, that are using heuristic merging functions \cite{cui-etal-2016}, or setting various pre-defined non-trainable terms \cite{trischler-etal-2016}, our model could automatically generate an ``attended attention'' over various document-level attentions, and make a mutual look not only from {\em query-to-document} but also {\em document-to-query}, which will benefit from the interactive information. To sum up, the main contributions of our work are listed as follows. \begin{itemize} \item To our knowledge, this is the first time that the mechanism of nesting another attention over the existing attentions is proposed, i.e. {\em attention-over-attention} mechanism. \item Unlike the previous works on introducing complex architectures or many non-trainable hyper-parameters to the model, our model is much more simple but outperforms various state-of-the-art systems by a large margin. \item We also propose an N-best re-ranking strategy to re-score the candidates in various aspects and further improve the performance. \end{itemize} The following of the paper will be organized as follows. In Section \ref{rc-task}, we will give a brief introduction to the cloze-style reading comprehension task as well as related public datasets. Then the proposed attention-over-attention reader will be presented in detail in Section \ref{nn-for-rc} and N-best re-ranking strategy in Section \ref{reranking}. The experimental results and analysis will be given in Section \ref{experiments} and Section \ref{analysis}. Related work will be discussed in Section \ref{related-work}. Finally, we will give a conclusion of this paper and envisions on future work. \section{Cloze-style Reading Comprehension}\label{rc-task} In this section, we will give a brief introduction to the cloze-style reading comprehension task at the beginning. And then, several existing public datasets will be described in detail. \subsection{Task Description} Formally, a general Cloze-style reading comprehension problem can be illustrated as a triple: \begin{equation} \nonumber \langle \mathcal D, \mathcal Q, \mathcal A \rangle \end{equation} The triple consists of a document $\mathcal D$, a query $\mathcal Q$ and the answer to the query $\mathcal A$. Note that the answer is usually a {\em single} word in the document, which requires the human to exploit context information in both document and query. The type of the answer word varies from predicting a preposition given a fixed collocation to identifying a named entity from a factual illustration. \subsection{Existing Public Datasets} Large-scale training data is essential for training neural networks. Several public datasets for the cloze-style reading comprehension has been released. Here, we introduce two representative and widely-used datasets. \subsubsection*{$\bullet$~~ CNN / Daily Mail} \newcite{hermann-etal-2015} have firstly published two datasets: CNN and Daily Mail news data \footnote{The pre-processed CNN and Daily Mail datasets are available at \url{http://cs.nyu.edu/~kcho/DMQA/}}. They construct these datasets with web-crawled CNN and Daily Mail news data. One of the characteristics of these datasets is that the news article is often associated with a summary. 
So they first regard the main body of the news article as the {\em Document}, and the {\em Query} is formed by the summary of the article, where one entity word is replaced by a special placeholder to indicate the missing word. The replaced entity word will be the {\em Answer} of the {\em Query}. Apart from releasing the dataset, they also proposed a methodology that anonymizes the named entity tokens in the data, and these tokens are also re-shuffle in each sample. The motivation is that the news articles are containing limited named entities, which are usually celebrities, and the world knowledge can be learned from the dataset. So this methodology aims to exploit general relationships between anonymized named entities within a single document rather than the common knowledge. The following research on these datasets showed that the entity word anonymization is not as effective as expected \citep{chen-etal-2016}. \subsubsection*{$\bullet$~~ Children's Book Test} There was also a dataset called the Children's Book Test (CBTest) released by \newcite{hill-etal-2015}, which is built on the children's book story through Project Gutenberg \footnote{The CBTest datasets are available at \url{http://www.thespermwhale.com/jaseweston/babi/CBTest.tgz}}. Different from the CNN/Daily Mail datasets, there is no summary available in the children's book. So they proposed another way to extract query from the original data. The document is composed of 20 consecutive sentences in the story, and the 21st sentence is regarded as the query, where one word is blanked with a special placeholder. In the CBTest datasets, there are four types of sub-datasets available which are classified by the part-of-speech and named entity tag of the answer word, containing Named Entities (NE), Common Nouns (CN), Verbs and Prepositions. In their studies, they have found that the answering of verbs and prepositions are relatively less dependent on the content of document, and the humans can even do preposition blank-filling without the presence of the document. The studies shown by \newcite{hill-etal-2015}, answering verbs and prepositions are less dependent with the presence of document. Thus, most of the related works are focusing on solving NE and CN types. \section{Attention-over-Attention Reader}\label{nn-for-rc} In this section, we will give a detailed introduction to the proposed Attention-over-Attention Reader (AoA Reader). Our model is primarily motivated by Kadlec et al., \shortcite{kadlec-etal-2016}, which aims to directly estimate the answer from the document-level attention instead of calculating blended representations of the document. As previous studies by \newcite{cui-etal-2016} showed that the further investigation of query representation is necessary, and it should be paid more attention to utilizing the information of query. In this paper, we propose a novel work that placing another attention over the primary attentions, to indicate the ``importance'' of each attentions. Now, we will give a formal description of our proposed model. When a cloze-style training triple $\langle \mathcal D, \mathcal Q, \mathcal A \rangle$ is given, the proposed model will be constructed in the following steps. \subsubsection*{$\bullet$~~ Contextual Embedding} We first transform every word in the document $\mathcal D$ and query $\mathcal Q$ into one-hot representations and then convert them into continuous representations with a shared embedding matrix $W_e$. 
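A toy NumPy sketch of this contextual-embedding step (a single GRU in each direction with placeholder weights, not the authors' implementation) may help fix the shapes: each position ends up with a $2d$-dimensional representation formed by concatenating the forward and backward states.
\begin{verbatim}
import numpy as np

sig = lambda z: 1.0 / (1.0 + np.exp(-z))
rng = np.random.default_rng(0)
V, E, d = 50, 8, 6                       # toy vocabulary, embedding and GRU sizes
W_e = rng.normal(0, 0.1, (V, E))         # shared embedding matrix

def make_gru():
    return {k: rng.normal(0, 0.3, (d, E if k[0] == "W" else d)) for k in
            ("Wz", "Uz", "Wr", "Ur", "Wh", "Uh")}

def gru_step(p, h, x):
    z = sig(p["Wz"] @ x + p["Uz"] @ h)
    r = sig(p["Wr"] @ x + p["Ur"] @ h)
    h_tilde = np.tanh(p["Wh"] @ x + p["Uh"] @ (r * h))
    return (1 - z) * h + z * h_tilde

fwd, bwd = make_gru(), make_gru()

def contextual(ids):                     # h_s(x) = [forward GRU ; backward GRU]
    xs = [W_e[i] for i in ids]           # e(x) = W_e . x  (embedding lookup)
    hf, hb = np.zeros(d), np.zeros(d)
    fs, bs = [], []
    for x in xs:
        hf = gru_step(fwd, hf, x); fs.append(hf)
    for x in reversed(xs):
        hb = gru_step(bwd, hb, x); bs.append(hb)
    return np.stack([np.concatenate([f, b]) for f, b in zip(fs, reversed(bs))])

h_doc   = contextual([3, 17, 42, 7, 19])     # shape |D| x 2d
h_query = contextual([5, 42, 9])             # shape |Q| x 2d
print(h_doc.shape, h_query.shape)
\end{verbatim}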
By sharing word embedding, both the document and query can participate in the learning of embedding and both of them will benefit from this mechanism. After that, we use two bi-directional RNNs to get contextual representations of the document and query individually, where the representation of each word is formed by concatenating the forward and backward hidden states. After making a trade-off between model performance and training complexity, we choose the Gated Recurrent Unit (GRU) \cite{cho-etal-2014} as recurrent unit implementation. \begin{gather} e(x) = W_e \cdot x,~where~~x\in \mathcal D , \mathcal Q \\ \overrightarrow{h_s(x)} = \overrightarrow{GRU}(e(x)) \\ \overleftarrow{h_s(x)} = \overleftarrow{GRU}(e(x)) \\ h_s(x) = [\overrightarrow{h_s(x)}; \overleftarrow{h_s(x)}] \end{gather} We take $h_{doc}\in\mathbb{R}^{|\mathcal D|*2d}$ and $h_{query}\in\mathbb{R}^{|\mathcal Q|*2d}$ to denote the contextual representations of document and query, where $d$ is the dimension of GRU (one-way). \subsubsection*{$\bullet$~~ Pair-wise Matching Score} After obtaining the contextual embeddings of the document $h_{doc}$ and query $h_{query}$, we calculate a pair-wise matching matrix, which indicates the pair-wise matching degree of one document word and one query word. Formally, when given $i$th word of the document and $j$th word of query, we can compute a matching score by their dot product. \begin{equation} M(i,j) = h_{doc}(i)^{T} \cdot h_{query}(j) \end{equation} In this way, we can calculate every pair-wise matching score between each document and query word, forming a matrix $M\in\mathbb{R}^{|\mathcal D|*|\mathcal Q|}$, where the value of $i$th row and $j$th column is filled by $M(i,j)$. \subsubsection*{$\bullet$~~ Individual Attentions} After getting the pair-wise matching matrix $M$, we apply a column-wise softmax function to get probability distributions in each column, where each column is an individual document-level attention when considering a single query word. We denote $\alpha(t)\in\mathbb{R}^{|\mathcal D|}$ as the document-level attention regarding query word at time $t$, which can be seen as a {\em query-to-document} attention. \newcommand\D{\displaystyle} \begin{gather} \alpha(t) = softmax(M(1,t),...,M(|\mathcal D|,t)) \\ \alpha = [\alpha(1), \alpha(2), ..., \alpha(|\mathcal Q|)] \end{gather} \subsubsection*{$\bullet$~~ Attention-over-Attention} Different from \newcite{cui-etal-2016}, instead of using naive heuristics (such as {\em summing} or {\em averaging}) to combine these individual attentions into a final attention, we introduce another attention mechanism to automatically decide the importance of each individual attention. First, we calculate a reversed attention, that is, for every document word at time $t$, we calculate the ``importance'' distribution on the query, to indicate which query words are more important given a single document word. We apply a row-wise softmax function to the pair-wise matching matrix $M$ to get query-level attentions. We denote $\beta(t)\in\mathbb{R}^{|\mathcal Q|}$ as the query-level attention regarding document word at time $t$, which can be seen as a {\em document-to-query} attention. \begin{equation} \beta(t) = softmax(M(t,1),...,M(t,|\mathcal Q|)) \end{equation} So far, we have obtained both {\em query-to-document} attention $\alpha$ and {\em document-to-query} attention $\beta$. Our motivation is to exploit mutual information between the document and query. 
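The pair-wise matching matrix and the two softmax directions, together with the averaging, attention-over-attention and sum-attention steps described in the following paragraphs, reduce to a few array operations. The sketch below uses random contextual representations and toy sizes; it is an illustration of these operations, not the authors' code.
\begin{verbatim}
import numpy as np

def softmax(z, axis):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
D, Q, two_d = 7, 4, 6                    # |D|, |Q|, 2*d (toy sizes)
h_doc   = rng.normal(size=(D, two_d))    # contextual document representations
h_query = rng.normal(size=(Q, two_d))    # contextual query representations
doc_ids = np.array([3, 5, 3, 9, 1, 5, 3])    # word id at each document position
V = 12                                   # toy vocabulary size

M     = h_doc @ h_query.T                # pair-wise matching matrix, |D| x |Q|
alpha = softmax(M, axis=0)               # column-wise: query-to-document attentions
beta  = softmax(M, axis=1).mean(axis=0)  # row-wise, then averaged over document words
s     = alpha @ beta                     # attended document-level attention, |D|

P = np.zeros(V)                          # sum attention: merge scores of repeated words
np.add.at(P, doc_ids, s)
print(P.argmax(), P.sum())               # predicted word id; scores sum to 1
\end{verbatim}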
However, most of the previous works are only relying on {\em query-to-document} attention, that is, only calculate one document-level attention when considering the whole query. Then we average all the $\beta(t)$ to get an averaged query-level attention $\beta$. Note that, we do not apply another softmax to the $\beta$, because averaging individual attentions do not break the normalizing condition. \begin{equation} \beta = \frac{1}{n}\sum\limits_{t=1}^{|\mathcal D|}\beta(t) \end{equation} Finally, we calculate dot product of $\alpha$ and $\beta$ to get the ``attended document-level attention'' $s\in\mathbb{R}^{|\mathcal D|}$, i.e. the {\em attention-over-attention} mechanism. Intuitively, this operation is calculating a weighted sum of each individual document-level attention $\alpha(t)$ when looking at query word at time $t$. In this way, the contributions by each query word can be learned explicitly, and the final decision (document-level attention) is made through the voted result by the importance of each query word. \begin{equation} s = \alpha^{T} \beta \end{equation} \subsubsection*{$\bullet$~~ Final Predictions} Following \newcite{kadlec-etal-2016}, we use {\em sum attention} mechanism to get aggregated results. Note that the final output should be reflected in the vocabulary space $V$, rather than document-level attention $|\mathcal D|$, which will make a significant difference in the performance, though \newcite{kadlec-etal-2016} did not illustrate this clearly. \begin{equation} P(w|\mathcal D, \mathcal Q) = \sum_{i \in I(w,\mathcal D)} s_i ,~~w \in V \end{equation} where $I(w,\mathcal D)$ indicate the positions that word $w$ appears in the document $\mathcal D$. As the training objectives, we seek to maximize the log-likelihood of the correct answer. \begin{equation} \mathcal{L} = \sum_{i} \log(p(x))~~, x\in\mathcal{A}\end{equation} The proposed neural network architecture is depicted in Figure \ref{nn-arch}. Note that, as our model mainly adds limited steps of calculations to the AS Reader \cite{kadlec-etal-2016} and does not employ any additional weights, the computational complexity is similar to the AS Reader. \section{N-best Re-ranking Strategy}\label{reranking} Intuitively, when we do cloze-style reading comprehensions, we often refill the candidate into the blank of the query to double-check its appropriateness, fluency and grammar to see if the candidate we choose is the most suitable one. If we do find some problems in the candidate we choose, we will choose the second possible candidate and do some checking again. To mimic the process of double-checking, we propose to use N-best re-ranking strategy after generating answers from our neural networks. The procedure can be illustrated as follows. \subsubsection*{$\bullet$~~ N-best Decoding} Instead of only picking the candidate that has the highest possibility as answer, we can also extract follow-up candidates in the decoding process, which forms an N-best list. \subsubsection*{$\bullet$~~ Refill Candidate into Query} As a characteristic of the cloze-style problem, each candidate can be refilled into the blank of the query to form a complete sentence. This allows us to check the candidate according to its context. \subsubsection*{$\bullet$~~ Feature Scoring} The candidate sentences can be scored in many aspects. In this paper, we exploit three features to score the N-best list. \begin{itemize} \item Global N-gram LM: This is a fundamental metric in scoring sentence, which aims to evaluate its fluency. 
This model is trained on the document part of training data. \item Local N-gram LM: Different from global LM, the local LM aims to explore the information with the given document, so the statistics are obtained from the test-time document. It should be noted that the local LM is trained sample-by-sample, it is not trained on the entire test set, which is not legal in the real test case. This model is useful when there are many unknown words in the test sample. \item Word-class LM: Similar to global LM, the word-class LM is also trained on the document part of training data, but the words are converted to its word class ID. The word class can be obtained by using clustering methods. In this paper, we simply utilized the {\em mkcls} tool for generating 1000 word classes \cite{och-1999}. \end{itemize} \subsubsection*{$\bullet$~~ Weight Tuning} To tune the weights among these features, we adopt the K-best MIRA algorithm \cite{cherry-foster:2012:NAACL-HLT} to automatically optimize the weights on the validation set, which is widely used in statistical machine translation tuning procedure. \subsubsection*{$\bullet$~~ Re-scoring and Re-ranking} After getting the weights of each feature, we calculate the weighted sum of each feature in the N-best sentences and then choose the candidate that has the lowest cost as the final answer. \section{Experiments}\label{experiments} \subsection{Experimental Setups} The general settings of our neural network model are listed below in detail. \begin{itemize} \item Embedding Layer: The embedding weights are randomly initialized with the uniformed distribution in the interval $[-0.05,0.05]$. For regularization purpose, we adopted $l_2$-regularization to 0.0001 and dropout rate of 0.1 \cite{srivastava-etal-2014}. Also, it should be noted that we do not exploit any pre-trained embedding models. \item Hidden Layer: Internal weights of GRUs are initialized with random orthogonal matrices \cite{saxe2013exact}. \item Optimization: We adopted ADAM optimizer for weight updating \cite{kingma2014adam}, with an initial learning rate of 0.001. As the GRU units still suffer from the gradient exploding issues, we set the gradient clipping threshold to 5 \cite{pascanu-etal-2013}. We used batched training strategy of 32 samples. \end{itemize} Dimensions of embedding and hidden layer for each task are listed in Table \ref{dim-stats}. In re-ranking step, we generate 5-best list from the baseline neural network model, as we did not observe a significant variance when changing the N-best list size. All language model features are trained on the training proportion of each dataset, with 8-gram word-based setting and Kneser-Ney smoothing \cite{kneser-1995} trained by SRILM toolkit \cite{stolcke-2002}. The results are reported with the best model, which is selected by the performance of validation set. The ensemble model is made up of four best models, which are trained using different random seed. Implementation is done with Theano \cite{theano2016} and Keras \cite{chollet2015keras}, and all models are trained on Tesla K40 GPU. \subsection{Overall Results} Our experiments are carried out on public datasets: CNN news datasets \cite{hermann-etal-2015} and CBTest NE/CN datasets \cite{hill-etal-2015}. The statistics of these datasets are listed in Table \ref{cbt-stats}, and the experimental results are given in Table \ref{public-result}. 
As we can see that, our AoA Reader outperforms state-of-the-art systems by a large margin, where 2.3\% and 2.0\% absolute improvements over EpiReader in CBTest NE and CN test sets, which demonstrate the effectiveness of our model. Also by adding additional features in the re-ranking step, there is another significant boost 2.0\% to 3.7\% over AoA Reader in CBTest NE/CN test sets. We have also found that our single model could stay on par with the previous best ensemble system, and even we have an absolute improvement of 0.9\% beyond the best ensemble model (Iterative Attention) in the CBTest NE validation set. When it comes to ensemble model, our AoA Reader also shows significant improvements over previous best ensemble models by a large margin and set up a new state-of-the-art system. To investigate the effectiveness of employing {\em attention-over-attention} mechanism, we also compared our model to CAS Reader, which used pre-defined merging heuristics, such as {\em sum} or {\em avg} etc. Instead of using pre-defined merging heuristics, and letting the model explicitly learn the weights between individual attentions results in a significant boost in the performance, where 4.1\% and 3.7\% improvements can be made in CNN validation and test set against CAS Reader. \subsection{Effectiveness of Re-ranking Strategy} As we have seen that the re-ranking approach is effective in cloze-style reading comprehension task, we will give a detailed ablations in this section to show the contributions by each feature. To have a thorough investigation in the re-ranking step, we listed the detailed improvements while adding each feature mentioned in Section \ref{reranking}. From the results in Table \ref{rerank-cbt}, we found that the NE and CN category both benefit a lot from the re-ranking features, but the proportions are quite different. Generally speaking, in NE category, the performance is mainly boosted by the $LM_{local}$ feature. However, on the contrary, the CN category benefits from $LM_{global}$ and $LM_{wc}$ rather than the $LM_{local}$. Also, we listed the weights of each feature in Table \ref{weights-cbt}. The $LM_{global}$ and $LM_{wc}$ are all trained by training set, which can be seen as {\em Global Feature}. However, the $LM_{local}$ is only trained within the respective document part of test sample, which can be seen as {\em Local Feature}. \begin{equation} \eta = \frac{LM_{global} + LM_{wc}}{LM_{local}} \end{equation} We calculated the ratio between the global and local features and found that the NE category is much more dependent on local features than CN category. Because it is much more likely to meet a new named entity than a common noun in the test phase, so adding the local LM provides much more information than that of common noun. However, on the contrary, answering common noun requires less local information, which can be learned in the training data relatively. \section{Quantitative Analysis}\label{analysis} In this section, we will give a quantitative analysis to our AoA Reader. The following analyses are carried out on CBTest NE dataset. First, we investigate the relations between the length of the document and corresponding accuracy. The result is depicted in Figure \ref{length-acc}. As we can see that the AoA Reader shows consistent improvements over AS Reader on the different length of the document. Especially, when the length of document exceeds 700, the improvements become larger, indicating that the AoA Reader is more capable of handling long documents. 
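As a schematic illustration of the weighted re-ranking analysed above, the sketch below scores a hypothetical 5-best list with made-up feature values and weights (the real weights are tuned with K-best MIRA on the validation set) and picks the candidate with the best combined score.
\begin{verbatim}
import numpy as np

# Hypothetical 5-best candidates with (neural log-prob, LM_global, LM_local, LM_wc)
# features; all values and weights below are illustrative, not taken from the paper.
candidates = {"mother": (-0.9, -42.1, -18.3, -30.2),
              "king":   (-1.1, -40.7, -15.9, -29.8),
              "castle": (-1.4, -44.0, -20.1, -31.5),
              "dragon": (-1.8, -41.2, -16.4, -30.9),
              "forest": (-2.2, -43.5, -21.0, -32.4)}
weights = np.array([1.0, 0.13, 0.24, 0.07])

def rerank(cands, w):
    scores = {c: float(np.dot(w, f)) for c, f in cands.items()}
    return max(scores, key=scores.get), scores

best, scores = rerank(candidates, weights)
print(best)   # candidate with the highest weighted score (log-prob style: higher is better)
\end{verbatim}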
Furthermore, we also investigate if the model tends to choose a high-frequency candidate than a lower one, which is shown in Figure \ref{rank-acc}. Not surprisingly, we found that both models do a good job when the correct answer appears more frequent in the document than the other candidates. This is because that the correct answer that has the highest frequency among the candidates takes up over 40\% of the test set (1071 out of 2500). But interestingly we have also found that, when the frequency rank of correct answer exceeds 7 (less frequent among candidates), these models also give a relatively high performance. Empirically, we think that these models tend to choose extreme cases in terms of candidate frequency (either too high or too low). One possible reason is that it is hard for the model to choose a candidate that has a neutral frequency as the correct answer, because of its ambiguity (neutral choices are hard to made). \section{Related Work}\label{related-work} Cloze-style reading comprehension tasks have been widely investigated in recent studies. We will take a brief revisit to the related works. \newcite{hermann-etal-2015} have proposed a method for obtaining large quantities of $\langle \mathcal D, \mathcal Q, \mathcal A \rangle$ triples through news articles and its summary. Along with the release of cloze-style reading comprehension dataset, they also proposed an attention-based neural network to handle this task. Experimental results showed that the proposed neural network is effective than traditional baselines. \newcite{hill-etal-2015} released another dataset, which stems from the children's books. Different from \newcite{hermann-etal-2015}'s work, the document and query are all generated from the raw story without any summary, which is much more general than previous work. To handle the reading comprehension task, they proposed a window-based memory network, and self-supervision heuristics is also applied to learn hard-attention. Unlike previous works, that using blended representations of document and query to estimate the answer, \newcite{kadlec-etal-2016} proposed a simple model that directly pick the answer from the document, which is motivated by the Pointer Network \cite{vinyals-etal-2015}. A restriction of this model is that the answer should be a single word and appear in the document. Results on various public datasets showed that the proposed model is effective than previous works. \citet{liu-etal-2016} proposed to exploit reading comprehension models to other tasks. They first applied the reading comprehension model into Chinese zero pronoun resolution task with automatically generated large-scale pseudo training data. The experimental results on OntoNotes 5.0 data showed that their method significantly outperforms various state-of-the-art systems. Our work is primarily inspired by \newcite{cui-etal-2016} and \newcite{kadlec-etal-2016} , where the latter model is widely applied to many follow-up works \cite{sordoni-etal-2016,trischler-etal-2016,cui-etal-2016}. Unlike the CAS Reader \cite{cui-etal-2016}, we do not assume any heuristics to our model, such as using merge functions: $sum$, $avg$ etc. We used a mechanism called ``attention-over-attention'' to explicitly calculate the weights between different individual document-level attentions, and get the final attention by computing the weighted sum of them. 
Also, we find that our model is typically general and simple than the recently proposed model, and brings significant improvements over these cutting edge systems. \section{Conclusion}\label{conclusion} We present a novel neural architecture, called attention-over-attention reader, to tackle the cloze-style reading comprehension task. The proposed AoA Reader aims to compute the attentions not only for the document but also the query side, which will benefit from the mutual information. Then a weighted sum of attention is carried out to get an attended attention over the document for the final predictions. Among several public datasets, our model could give consistent and significant improvements over various state-of-the-art systems by a large margin. The future work will be carried out in the following aspects. We believe that our model is general and may apply to other tasks as well, so firstly we are going to fully investigate the usage of this architecture in other tasks. Also, we are interested to see that if the machine really ``comprehend'' our language by utilizing neural networks approaches, but not only serve as a ``document-level'' language model. In this context, we are planning to investigate the problems that need comprehensive reasoning over several sentences. \section*{Acknowledgments} We would like to thank all three anonymous reviewers for their thorough reviewing and providing thoughtful comments to improve our paper. This work was supported by the National 863 Leading Technology Research Project via grant 2015AA015409. \bibliographystyle{acl_natbib} \end{document}
Attention-over-Attention Neural Networks for Reading Comprehension
1607.04423
Table 3: Embedding and hidden layer dimensions for each task.
[ "[EMPTY]", "Embed. # units", "Hidden # units" ]
[ [ "CNN News", "384", "256" ], [ "CBTest NE", "384", "384" ], [ "CBTest CN", "384", "256" ] ]
In the re-ranking step, we generate a 5-best list from the baseline neural network model, as we did not observe significant variance when changing the N-best list size. All language model features are trained on the training portion of each dataset, with an 8-gram word-based setting and Kneser-Ney smoothing (Kneser and Ney, 1995). The results are reported with the best model, which is selected by its performance on the validation set. The ensemble model is made up of the four best models, which are trained using different random seeds.
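The text above does not state how the four models are combined; a common approach, assumed here purely for illustration, is to average their per-candidate probability distributions. A minimal NumPy sketch (hypothetical, not from the authors' code):

```python
import numpy as np

# Hypothetical combination of the four models: average their per-candidate
# probability distributions and take the argmax.
def ensemble_predict(prob_dists):
    """prob_dists: list of arrays of shape (num_candidates,), each summing to 1."""
    avg = np.mean(np.stack(prob_dists, axis=0), axis=0)
    return int(np.argmax(avg)), avg

# Example: four models (different random seeds) scoring three candidate answers.
single_model_outputs = [
    np.array([0.6, 0.3, 0.1]),
    np.array([0.5, 0.4, 0.1]),
    np.array([0.7, 0.2, 0.1]),
    np.array([0.4, 0.5, 0.1]),
]
best_index, avg_probs = ensemble_predict(single_model_outputs)
```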
\documentclass[11pt,a4paper]{article} \usepackage[hyperref]{acl2017} \aclfinalcopy % Uncomment this line for the final submission \def\aclpaperid{18} % Enter the acl Paper ID here \newcommand\BibTeX{B{\sc ib}\TeX} \title{Attention-over-Attention Neural Networks for Reading Comprehension} \author{Yiming Cui$^\dag$, Zhipeng Chen$^\dag$, Si Wei$^\dag$, Shijin Wang$^\dag$, Ting Liu$^\ddag$ \and Guoping Hu$^\dag$\\ {$^\dag$Joint Laboratory of HIT and iFLYTEK, iFLYTEK Research, Beijing, China}\\ {$^\ddag$Research Center for Social Computing and Information Retrieval,}\\ {Harbin Institute of Technology, Harbin, China}\\ {$^\dag$\tt\{ymcui,zpchen,siwei,sjwang3,gphu\}@iflytek.com}\\ {$^\ddag$\tt tliu@ir.hit.edu.cn}\\ } \date{} \begin{document} \maketitle \begin{abstract} Cloze-style reading comprehension is a representative problem in mining the relationship between document and query. In this paper, we present a simple but novel model called the {\em attention-over-attention} reader for better solving the cloze-style reading comprehension task. The proposed model places another attention mechanism over the document-level attention and induces an ``attended attention'' for final answer predictions. One advantage of our model is that it is simpler than related works while giving excellent performance. In addition to the primary model, we also propose an N-best re-ranking strategy to double-check the validity of the candidates and further improve the performance. Experimental results show that the proposed methods significantly outperform various state-of-the-art systems by a large margin on public datasets, such as CNN and Children's Book Test. \end{abstract} \section{Introduction}\label{introduction} Reading and comprehending human language are challenging tasks for machines, requiring both the understanding of natural language and the ability to reason over various clues. Reading comprehension is a general problem in the real world, which aims to read and comprehend a given article or context and answer questions based on it. Recently, the cloze-style reading comprehension problem has become a popular task in the community. A cloze-style query \cite{taylor-etal-1953} asks for an appropriate word to fill into the given sentences while taking the context information into account. To teach the machine to do cloze-style reading comprehension, large-scale training data is necessary for learning relationships between the given document and query. To create large-scale training data for neural networks, \newcite{hermann-etal-2015} released the CNN/Daily Mail news datasets, where the documents are formed by the news articles and the queries are extracted from the summaries of the news. \newcite{hill-etal-2015} afterwards released the Children's Book Test dataset, where the training samples are generated from 20 consecutive sentences from books, and the query is formed by the 21st sentence. Following these datasets, a wide variety of neural network approaches have been proposed \citep{kadlec-etal-2016,cui-etal-2016,chen-etal-2016,dhingra-etal-2016,sordoni-etal-2016,trischler-etal-2016,seo-etal-2016,xiong-etal-2016}, and most of them stem from the attention-based neural network \cite{bahdanau-etal-2014}, which has become a standard component in many NLP tasks and is well known for its capability of learning an ``importance'' distribution over the inputs. In this paper, we present a novel neural network architecture, called the {\em attention-over-attention} model.
As the name suggests, our model places another attention mechanism over the existing document-level attention. Unlike previous works, which use heuristic merging functions \cite{cui-etal-2016} or set various pre-defined non-trainable terms \cite{trischler-etal-2016}, our model automatically generates an ``attended attention'' over the various document-level attentions, and takes a mutual look not only from {\em query-to-document} but also from {\em document-to-query}, benefiting from this interactive information. To sum up, the main contributions of our work are listed as follows. \begin{itemize} \item To our knowledge, this is the first time that the mechanism of nesting another attention over the existing attentions is proposed, i.e. the {\em attention-over-attention} mechanism. \item Unlike previous works that introduce complex architectures or many non-trainable hyper-parameters into the model, our model is much simpler yet outperforms various state-of-the-art systems by a large margin. \item We also propose an N-best re-ranking strategy to re-score the candidates in various aspects and further improve the performance. \end{itemize} The rest of the paper is organized as follows. In Section \ref{rc-task}, we give a brief introduction to the cloze-style reading comprehension task as well as related public datasets. The proposed attention-over-attention reader is then presented in detail in Section \ref{nn-for-rc} and the N-best re-ranking strategy in Section \ref{reranking}. The experimental results and analysis are given in Section \ref{experiments} and Section \ref{analysis}. Related work is discussed in Section \ref{related-work}. Finally, we conclude the paper and discuss future work. \section{Cloze-style Reading Comprehension}\label{rc-task} In this section, we first give a brief introduction to the cloze-style reading comprehension task. Then, several existing public datasets are described in detail. \subsection{Task Description} Formally, a general cloze-style reading comprehension problem can be illustrated as a triple: \begin{equation} \nonumber \langle \mathcal D, \mathcal Q, \mathcal A \rangle \end{equation} The triple consists of a document $\mathcal D$, a query $\mathcal Q$ and the answer to the query $\mathcal A$. Note that the answer is usually a {\em single} word in the document, which requires the reader to exploit context information in both the document and the query. The type of the answer word varies from a preposition in a fixed collocation to a named entity in a factual illustration. \subsection{Existing Public Datasets} Large-scale training data is essential for training neural networks. Several public datasets for cloze-style reading comprehension have been released. Here, we introduce two representative and widely-used datasets. \subsubsection*{$\bullet$~~ CNN / Daily Mail} \newcite{hermann-etal-2015} first published two datasets: CNN and Daily Mail news data \footnote{The pre-processed CNN and Daily Mail datasets are available at \url{http://cs.nyu.edu/~kcho/DMQA/}}. They constructed these datasets from web-crawled CNN and Daily Mail news data. One characteristic of these datasets is that each news article is often associated with a summary.
They regard the main body of the news article as the {\em Document}, and the {\em Query} is formed from the summary of the article, where one entity word is replaced by a special placeholder to indicate the missing word. The replaced entity word is the {\em Answer} to the {\em Query}. Apart from releasing the datasets, they also proposed a methodology that anonymizes the named entity tokens in the data, and these tokens are also re-shuffled in each sample. The motivation is that the news articles contain a limited set of named entities, which are usually celebrities, whose world knowledge could otherwise be memorised from the dataset. So this methodology aims to exploit general relationships between anonymized named entities within a single document rather than common knowledge. Follow-up research on these datasets showed that the entity word anonymization is not as effective as expected \citep{chen-etal-2016}. \subsubsection*{$\bullet$~~ Children's Book Test} There is also a dataset called the Children's Book Test (CBTest) released by \newcite{hill-etal-2015}, which is built on children's book stories from Project Gutenberg \footnote{The CBTest datasets are available at \url{http://www.thespermwhale.com/jaseweston/babi/CBTest.tgz}}. Different from the CNN/Daily Mail datasets, there is no summary available for the children's books. So they proposed another way to extract queries from the original data. The document is composed of 20 consecutive sentences in the story, and the 21st sentence is regarded as the query, where one word is blanked out with a special placeholder. In the CBTest datasets, there are four sub-datasets available, classified by the part-of-speech and named entity tag of the answer word: Named Entities (NE), Common Nouns (CN), Verbs and Prepositions. In their studies, \newcite{hill-etal-2015} found that answering verbs and prepositions is relatively less dependent on the content of the document, and humans can even do preposition blank-filling without the presence of the document. Thus, most of the related works focus on solving the NE and CN types. \section{Attention-over-Attention Reader}\label{nn-for-rc} In this section, we give a detailed introduction to the proposed Attention-over-Attention Reader (AoA Reader). Our model is primarily motivated by Kadlec et al. \shortcite{kadlec-etal-2016}, which directly estimates the answer from the document-level attention instead of calculating blended representations of the document. Previous studies by \newcite{cui-etal-2016} showed that further investigation of the query representation is necessary and that more attention should be paid to utilizing the information in the query. In this paper, we propose placing another attention over the primary attentions, to indicate the ``importance'' of each attention. Now, we give a formal description of our proposed model. Given a cloze-style training triple $\langle \mathcal D, \mathcal Q, \mathcal A \rangle$, the proposed model is constructed in the following steps. \subsubsection*{$\bullet$~~ Contextual Embedding} We first transform every word in the document $\mathcal D$ and query $\mathcal Q$ into one-hot representations and then convert them into continuous representations with a shared embedding matrix $W_e$.
By sharing word embedding, both the document and query can participate in the learning of embedding and both of them will benefit from this mechanism. After that, we use two bi-directional RNNs to get contextual representations of the document and query individually, where the representation of each word is formed by concatenating the forward and backward hidden states. After making a trade-off between model performance and training complexity, we choose the Gated Recurrent Unit (GRU) \cite{cho-etal-2014} as recurrent unit implementation. \begin{gather} e(x) = W_e \cdot x,~where~~x\in \mathcal D , \mathcal Q \\ \overrightarrow{h_s(x)} = \overrightarrow{GRU}(e(x)) \\ \overleftarrow{h_s(x)} = \overleftarrow{GRU}(e(x)) \\ h_s(x) = [\overrightarrow{h_s(x)}; \overleftarrow{h_s(x)}] \end{gather} We take $h_{doc}\in\mathbb{R}^{|\mathcal D|*2d}$ and $h_{query}\in\mathbb{R}^{|\mathcal Q|*2d}$ to denote the contextual representations of document and query, where $d$ is the dimension of GRU (one-way). \subsubsection*{$\bullet$~~ Pair-wise Matching Score} After obtaining the contextual embeddings of the document $h_{doc}$ and query $h_{query}$, we calculate a pair-wise matching matrix, which indicates the pair-wise matching degree of one document word and one query word. Formally, when given $i$th word of the document and $j$th word of query, we can compute a matching score by their dot product. \begin{equation} M(i,j) = h_{doc}(i)^{T} \cdot h_{query}(j) \end{equation} In this way, we can calculate every pair-wise matching score between each document and query word, forming a matrix $M\in\mathbb{R}^{|\mathcal D|*|\mathcal Q|}$, where the value of $i$th row and $j$th column is filled by $M(i,j)$. \subsubsection*{$\bullet$~~ Individual Attentions} After getting the pair-wise matching matrix $M$, we apply a column-wise softmax function to get probability distributions in each column, where each column is an individual document-level attention when considering a single query word. We denote $\alpha(t)\in\mathbb{R}^{|\mathcal D|}$ as the document-level attention regarding query word at time $t$, which can be seen as a {\em query-to-document} attention. \newcommand\D{\displaystyle} \begin{gather} \alpha(t) = softmax(M(1,t),...,M(|\mathcal D|,t)) \\ \alpha = [\alpha(1), \alpha(2), ..., \alpha(|\mathcal Q|)] \end{gather} \subsubsection*{$\bullet$~~ Attention-over-Attention} Different from \newcite{cui-etal-2016}, instead of using naive heuristics (such as {\em summing} or {\em averaging}) to combine these individual attentions into a final attention, we introduce another attention mechanism to automatically decide the importance of each individual attention. First, we calculate a reversed attention, that is, for every document word at time $t$, we calculate the ``importance'' distribution on the query, to indicate which query words are more important given a single document word. We apply a row-wise softmax function to the pair-wise matching matrix $M$ to get query-level attentions. We denote $\beta(t)\in\mathbb{R}^{|\mathcal Q|}$ as the query-level attention regarding document word at time $t$, which can be seen as a {\em document-to-query} attention. \begin{equation} \beta(t) = softmax(M(t,1),...,M(t,|\mathcal Q|)) \end{equation} So far, we have obtained both {\em query-to-document} attention $\alpha$ and {\em document-to-query} attention $\beta$. Our motivation is to exploit mutual information between the document and query. 
However, most of the previous works rely only on {\em query-to-document} attention, that is, they calculate only one document-level attention when considering the whole query. We instead average all the $\beta(t)$ to get an averaged query-level attention $\beta$. Note that we do not apply another softmax to $\beta$, because averaging individual attentions does not break the normalization condition. \begin{equation} \beta = \frac{1}{|\mathcal D|}\sum\limits_{t=1}^{|\mathcal D|}\beta(t) \end{equation} Finally, we calculate the dot product of $\alpha$ and $\beta$ to get the ``attended document-level attention'' $s\in\mathbb{R}^{|\mathcal D|}$, i.e. the {\em attention-over-attention} mechanism. Intuitively, this operation calculates a weighted sum of the individual document-level attentions $\alpha(t)$, weighted by the importance of the query word at time $t$. In this way, the contribution of each query word is learned explicitly, and the final decision (document-level attention) is made through a vote weighted by the importance of each query word. \begin{equation} s = \alpha^{T} \beta \end{equation} \subsubsection*{$\bullet$~~ Final Predictions} Following \newcite{kadlec-etal-2016}, we use the {\em sum attention} mechanism to get aggregated results. Note that the final output should be reflected in the vocabulary space $V$, rather than the document-level attention space $|\mathcal D|$, which makes a significant difference in performance, though \newcite{kadlec-etal-2016} did not illustrate this clearly. \begin{equation} P(w|\mathcal D, \mathcal Q) = \sum_{i \in I(w,\mathcal D)} s_i ,~~w \in V \end{equation} where $I(w,\mathcal D)$ indicates the positions at which word $w$ appears in the document $\mathcal D$. As the training objective, we seek to maximize the log-likelihood of the correct answer. \begin{equation} \mathcal{L} = \sum_{i} \log(p(x))~~, x\in\mathcal{A}\end{equation} The proposed neural network architecture is depicted in Figure \ref{nn-arch}. Note that, as our model mainly adds a limited number of calculation steps to the AS Reader \cite{kadlec-etal-2016} and does not employ any additional weights, its computational complexity is similar to that of the AS Reader.
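To make the computations of this section concrete, below is a minimal NumPy sketch of the attention-over-attention step (pair-wise matching, the two softmaxes, the averaged query-level attention and the final sum attention). The bi-GRU outputs are replaced by random stand-ins and the toy document is invented; this is an illustrative sketch, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

# Random stand-ins for the bi-GRU outputs h_doc (|D| x 2d) and h_query (|Q| x 2d).
rng = np.random.RandomState(0)
D_len, Q_len, two_d = 6, 3, 8
h_doc, h_query = rng.randn(D_len, two_d), rng.randn(Q_len, two_d)

M = h_doc @ h_query.T                    # pair-wise matching matrix, |D| x |Q|
alpha = softmax(M, axis=0)               # column-wise: one document-level attention per query word
beta = softmax(M, axis=1).mean(axis=0)   # row-wise softmax, averaged over document words
s = alpha @ beta                         # attended document-level attention (weighted sum of columns)

# Sum attention: map position scores into the vocabulary space.
doc_words = ["mary", "went", "to", "the", "kitchen", "mary"]
scores = {}
for pos, word in enumerate(doc_words):
    scores[word] = scores.get(word, 0.0) + float(s[pos])
prediction = max(scores, key=scores.get)
```

As in the equations above, this step adds no trainable parameters beyond the embedding and GRU weights.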
\section{N-best Re-ranking Strategy}\label{reranking} Intuitively, when we do cloze-style reading comprehension, we often refill the candidate into the blank of the query to double-check its appropriateness, fluency and grammar, to see whether the candidate we chose is the most suitable one. If we do find problems with the chosen candidate, we pick the next most likely candidate and check again. To mimic this process of double-checking, we propose to use an N-best re-ranking strategy after generating answers from our neural networks. The procedure is illustrated as follows. \subsubsection*{$\bullet$~~ N-best Decoding} Instead of only picking the candidate that has the highest probability as the answer, we also extract follow-up candidates in the decoding process, which forms an N-best list. \subsubsection*{$\bullet$~~ Refill Candidate into Query} As a characteristic of the cloze-style problem, each candidate can be refilled into the blank of the query to form a complete sentence. This allows us to check the candidate according to its context. \subsubsection*{$\bullet$~~ Feature Scoring} The candidate sentences can be scored in many aspects. In this paper, we exploit three features to score the N-best list. \begin{itemize} \item Global N-gram LM: This is a fundamental metric for scoring a sentence, which aims to evaluate its fluency. This model is trained on the document part of the training data. \item Local N-gram LM: Different from the global LM, the local LM aims to exploit the information within the given document, so the statistics are obtained from the test-time document. It should be noted that the local LM is trained sample-by-sample; it is not trained on the entire test set, which would not be legitimate in a real test scenario. This model is useful when there are many unknown words in the test sample. \item Word-class LM: Similar to the global LM, the word-class LM is also trained on the document part of the training data, but the words are converted to their word class IDs. The word classes can be obtained with clustering methods. In this paper, we simply utilized the {\em mkcls} tool to generate 1000 word classes \cite{och-1999}. \end{itemize} \subsubsection*{$\bullet$~~ Weight Tuning} To tune the weights among these features, we adopt the K-best MIRA algorithm \cite{cherry-foster:2012:NAACL-HLT} to automatically optimize the weights on the validation set; this algorithm is widely used in statistical machine translation tuning procedures. \subsubsection*{$\bullet$~~ Re-scoring and Re-ranking} After obtaining the weight of each feature, we calculate the weighted sum of the features for each sentence in the N-best list and then choose the candidate that has the lowest cost as the final answer.
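As a minimal illustration of the re-scoring and re-ranking step just described, the sketch below combines the three LM feature costs linearly and returns the lowest-cost candidate. The feature values and weights are invented for the example; in the paper the weights are tuned with K-best MIRA on the validation set, and the neural model's own score is not shown here.

```python
# Illustrative re-scoring sketch; costs and weights are made-up values,
# not the K-best-MIRA-tuned weights reported in the paper.
def rescore(nbest, weights):
    """Each candidate carries feature costs; return the candidate with the lowest weighted cost."""
    def total_cost(candidate):
        return sum(weights[name] * value for name, value in candidate["features"].items())
    return min(nbest, key=total_cost)

nbest = [
    {"answer": "apple",   "features": {"lm_global": 45.1, "lm_local": 12.3, "lm_wc": 30.2}},
    {"answer": "kitchen", "features": {"lm_global": 41.7, "lm_local": 15.8, "lm_wc": 28.9}},
]
weights = {"lm_global": 0.3, "lm_local": 0.5, "lm_wc": 0.2}
best = rescore(nbest, weights)   # candidate with the lowest weighted cost
```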
\section{Experiments}\label{experiments} \subsection{Experimental Setups} The general settings of our neural network model are listed below in detail. \begin{itemize} \item Embedding Layer: The embedding weights are randomly initialized from the uniform distribution in the interval $[-0.05,0.05]$. For regularization purposes, we adopted $l_2$-regularization with a coefficient of 0.0001 and a dropout rate of 0.1 \cite{srivastava-etal-2014}. Also, it should be noted that we do not exploit any pre-trained embedding models. \item Hidden Layer: Internal weights of GRUs are initialized with random orthogonal matrices \cite{saxe2013exact}. \item Optimization: We adopted the ADAM optimizer for weight updating \cite{kingma2014adam}, with an initial learning rate of 0.001. As the GRU units still suffer from gradient exploding issues, we set the gradient clipping threshold to 5 \cite{pascanu-etal-2013}. We used a batched training strategy with 32 samples per batch. \end{itemize} Dimensions of the embedding and hidden layers for each task are listed in Table \ref{dim-stats}. In the re-ranking step, we generate a 5-best list from the baseline neural network model, as we did not observe significant variance when changing the N-best list size. All language model features are trained on the training portion of each dataset, with an 8-gram word-based setting and Kneser-Ney smoothing \cite{kneser-1995}, trained with the SRILM toolkit \cite{stolcke-2002}. The results are reported with the best model, which is selected by its performance on the validation set. The ensemble model is made up of the four best models, which are trained using different random seeds. Implementation is done with Theano \cite{theano2016} and Keras \cite{chollet2015keras}, and all models are trained on a Tesla K40 GPU. \subsection{Overall Results} Our experiments are carried out on public datasets: the CNN news dataset \cite{hermann-etal-2015} and the CBTest NE/CN datasets \cite{hill-etal-2015}. The statistics of these datasets are listed in Table \ref{cbt-stats}, and the experimental results are given in Table \ref{public-result}. As we can see, our AoA Reader outperforms state-of-the-art systems by a large margin, with 2.3\% and 2.0\% absolute improvements over EpiReader on the CBTest NE and CN test sets, which demonstrates the effectiveness of our model. By adding additional features in the re-ranking step, there is another significant boost of 2.0\% to 3.7\% over the AoA Reader on the CBTest NE/CN test sets. We have also found that our single model stays on par with the previous best ensemble system, and we even obtain an absolute improvement of 0.9\% over the best ensemble model (Iterative Attention) on the CBTest NE validation set. When it comes to ensemble models, our AoA Reader also shows significant improvements over the previous best ensemble models by a large margin and sets up a new state-of-the-art system. To investigate the effectiveness of the {\em attention-over-attention} mechanism, we also compared our model to the CAS Reader, which uses pre-defined merging heuristics such as {\em sum} or {\em avg}. Letting the model explicitly learn the weights between individual attentions, instead of using pre-defined merging heuristics, results in a significant boost in performance, with 4.1\% and 3.7\% improvements over the CAS Reader on the CNN validation and test sets. \subsection{Effectiveness of Re-ranking Strategy} As we have seen, the re-ranking approach is effective for the cloze-style reading comprehension task, so in this section we give a detailed ablation to show the contribution of each feature. To investigate the re-ranking step thoroughly, we list the detailed improvements obtained when adding each feature mentioned in Section \ref{reranking}. From the results in Table \ref{rerank-cbt}, we found that the NE and CN categories both benefit a lot from the re-ranking features, but the proportions are quite different. Generally speaking, in the NE category the performance is mainly boosted by the $LM_{local}$ feature. On the contrary, the CN category benefits from $LM_{global}$ and $LM_{wc}$ rather than $LM_{local}$. We also list the weights of each feature in Table \ref{weights-cbt}. $LM_{global}$ and $LM_{wc}$ are both trained on the training set, and can be seen as {\em Global Features}. In contrast, $LM_{local}$ is only trained on the document part of the respective test sample, and can be seen as a {\em Local Feature}. \begin{equation} \eta = \frac{LM_{global} + LM_{wc}}{LM_{local}} \end{equation} We calculated the ratio between the global and local feature weights and found that the NE category is much more dependent on local features than the CN category. Because it is much more likely to meet a new named entity than a new common noun at test time, adding the local LM provides much more information for named entities than for common nouns. On the contrary, answering common nouns requires less local information, as it can be learned relatively well from the training data. \section{Quantitative Analysis}\label{analysis} In this section, we give a quantitative analysis of our AoA Reader. The following analyses are carried out on the CBTest NE dataset. First, we investigate the relation between the length of the document and the corresponding accuracy. The result is depicted in Figure \ref{length-acc}. As we can see, the AoA Reader shows consistent improvements over the AS Reader across different document lengths. Especially when the length of the document exceeds 700, the improvements become larger, indicating that the AoA Reader is more capable of handling long documents.
Furthermore, we also investigate whether the model tends to choose a high-frequency candidate over a lower-frequency one, which is shown in Figure \ref{rank-acc}. Not surprisingly, we found that both models do a good job when the correct answer appears more frequently in the document than the other candidates. This is because the correct answer having the highest frequency among the candidates accounts for over 40\% of the test set (1071 out of 2500). But interestingly, we have also found that when the frequency rank of the correct answer exceeds 7 (i.e. it is less frequent among the candidates), these models still give relatively high performance. Empirically, we think these models tend to choose extreme cases in terms of candidate frequency (either very high or very low). One possible reason is that it is hard for the model to choose a candidate with a neutral frequency as the correct answer, because of its ambiguity (neutral choices are hard to make). \section{Related Work}\label{related-work} Cloze-style reading comprehension tasks have been widely investigated in recent studies. We briefly revisit the related work here. \newcite{hermann-etal-2015} proposed a method for obtaining large quantities of $\langle \mathcal D, \mathcal Q, \mathcal A \rangle$ triples from news articles and their summaries. Along with the release of this cloze-style reading comprehension dataset, they also proposed an attention-based neural network to handle the task. Experimental results showed that the proposed neural network is more effective than traditional baselines. \newcite{hill-etal-2015} released another dataset, which stems from children's books. Different from \newcite{hermann-etal-2015}'s work, the document and query are all generated from the raw story without any summary, which is much more general than the previous work. To handle the reading comprehension task, they proposed a window-based memory network, and self-supervision heuristics are also applied to learn hard attention. Unlike previous works, which use blended representations of document and query to estimate the answer, \newcite{kadlec-etal-2016} proposed a simple model that directly picks the answer from the document, motivated by the Pointer Network \cite{vinyals-etal-2015}. A restriction of this model is that the answer should be a single word that appears in the document. Results on various public datasets showed that the proposed model is more effective than previous works. \citet{liu-etal-2016} proposed to apply reading comprehension models to other tasks. They first applied a reading comprehension model to the Chinese zero pronoun resolution task with automatically generated large-scale pseudo training data. The experimental results on OntoNotes 5.0 data showed that their method significantly outperforms various state-of-the-art systems. Our work is primarily inspired by \newcite{cui-etal-2016} and \newcite{kadlec-etal-2016}, where the latter model is widely applied in many follow-up works \cite{sordoni-etal-2016,trischler-etal-2016,cui-etal-2016}. Unlike the CAS Reader \cite{cui-etal-2016}, we do not assume any heuristics in our model, such as merge functions ($sum$, $avg$, etc.). We use a mechanism called ``attention-over-attention'' to explicitly calculate the weights between different individual document-level attentions, and obtain the final attention by computing their weighted sum.
Also, we find that our model is more general and simpler than recently proposed models, and it brings significant improvements over these cutting-edge systems. \section{Conclusion}\label{conclusion} We present a novel neural architecture, called the attention-over-attention reader, to tackle the cloze-style reading comprehension task. The proposed AoA Reader computes attentions not only on the document side but also on the query side, thereby benefiting from the mutual information between them. A weighted sum of attentions is then carried out to obtain an attended attention over the document for the final predictions. On several public datasets, our model gives consistent and significant improvements over various state-of-the-art systems by a large margin. Future work will be carried out in the following directions. We believe that our model is general and may apply to other tasks as well, so we will first fully investigate the use of this architecture in other tasks. We are also interested in whether the machine really ``comprehends'' our language when using neural network approaches, rather than merely serving as a ``document-level'' language model. In this context, we plan to investigate problems that require comprehensive reasoning over several sentences. \section*{Acknowledgments} We would like to thank the three anonymous reviewers for their thorough reviews and thoughtful comments, which helped improve our paper. This work was supported by the National 863 Leading Technology Research Project via grant 2015AA015409. \bibliographystyle{acl_natbib} \end{document}
Exploring Prediction Uncertainty in Machine Translation Quality Estimation
1606.09600
Table 2: Asymmetric loss experiments results. The first line in each table corresponds to a standard GP while the others are Warped GPs with different warping functions. All models use the Matèrn52 kernel. The optimistic setting corresponds to w=1/3 for AL and w=0.75 for linex. The pessimistic setting uses w=3 for AL and w=−0.75 for linex, except for English-German, where w=−0.25.
[ "[BOLD] English-Spanish", "[BOLD] English-Spanish [BOLD] Optimistic", "[BOLD] English-Spanish [BOLD] Optimistic", "[BOLD] English-Spanish [BOLD] Pessimistic", "[BOLD] English-Spanish [BOLD] Pessimistic" ]
[ [ "[EMPTY]", "AL", "Linex", "AL", "Linex" ], [ "Std GP", "1.187", "0.447", "1.633", "3.009" ], [ "log", "1.060", "0.299", "1.534", "3.327" ], [ "tanh1", "1.050", "0.300", "1.528", "3.251" ], [ "tanh2", "1.054", "0.300", "1.543", "3.335" ], [ "tanh3", "1.053", "0.299", "1.538", "3.322" ], [ "[BOLD] French-English", "[BOLD] French-English", "[BOLD] French-English", "[BOLD] French-English", "[BOLD] French-English" ], [ "[EMPTY]", "[BOLD] Optimistic", "[BOLD] Optimistic", "[BOLD] Pessimistic", "[BOLD] Pessimistic" ], [ "[EMPTY]", "AL", "Linex", "AL", "Linex" ], [ "Std GP", "0.677", "0.127", "0.901", "0.337" ], [ "log", "0.675", "0.161", "0.914", "0.492" ], [ "tanh1", "0.677", "0.124", "0.901", "0.341" ], [ "tanh2", "0.671", "0.121", "0.894", "0.347" ], [ "tanh3", "0.666", "0.120", "0.886", "0.349" ], [ "[BOLD] English-German", "[BOLD] English-German", "[BOLD] English-German", "[BOLD] English-German", "[BOLD] English-German" ], [ "[EMPTY]", "[BOLD] Optimistic", "[BOLD] Optimistic", "[BOLD] Pessimistic", "[BOLD] Pessimistic" ], [ "[EMPTY]", "AL", "Linex", "AL", "Linex" ], [ "Std GP", "1.528", "0.610", "2.120", "0.217" ], [ "log", "1.457", "0.537", "2.049", "0.222" ], [ "tanh1", "1.459", "0.503", "2.064", "0.220" ], [ "tanh2", "1.455", "0.504", "2.045", "0.220" ], [ "tanh3", "1.456", "0.497", "2.042", "0.219" ] ]
In the optimistic scenario the tanh-based warped GP models give consistently better results than standard GPs. The log-based models also gives good results for AL but for linex the results are mixed except for en-es. This is probably again related to the larger sizes of the fr-en and en-de datasets, which allows the tanh-based models to learn richer representations.
\documentclass[11pt]{article} \aclfinalcopy % \def\aclpaperid{142} % \newcommand\BibTeX{B{\sc ib}\TeX} \newcommand{\lucia}{\textcolor{blue}} \newcommand{\x}{\mathbf{x}} \newcommand{\fixme}[1]{{\bf \color{red} [*FIXME* }{\em #1}{\bf ]}} \newcommand{\trevor}[1]{{\bf \color{blue} [*FIXME* }{\em #1}{\bf ]}} \newcommand{\daniel}[1]{{\bf \color{green} [*FIXME* }{\em #1}{\bf ]}} \title{Exploring Prediction Uncertainty in Machine Translation \\ Quality Estimation} \author{Daniel Beck$^\dagger$ ~~~~ Lucia Specia$^\dagger$ ~~~~ Trevor Cohn$^\ddagger$\\ $^\dagger$Department of Computer Science\\ University of Sheffield, United Kingdom\\ $^\ddagger$Computing and Information Systems\\ University of Melbourne, Australia\\ {\tt \{debeck1,l.specia\}@sheffield.ac.uk, t.cohn@unimelb.edu.au} } \date{} \begin{document} \maketitle \begin{abstract} Machine Translation Quality Estimation is a notoriously difficult task, which lessens its usefulness in real-world translation environments. Such scenarios can be improved if quality predictions are accompanied by a measure of uncertainty. However, models in this task are traditionally evaluated only in terms of point estimate metrics, which do not take prediction uncertainty into account. We investigate probabilistic methods for Quality Estimation that can provide well-calibrated uncertainty estimates and evaluate them in terms of their full posterior predictive distributions. We also show how this posterior information can be useful in an asymmetric risk scenario, which aims to capture typical situations in translation workflows.% \end{abstract} \section{Introduction} Quality Estimation (QE) \cite{Blatz2004,Specia2009} models aim at predicting the quality of automatically translated text segments. Traditionally, these models provide point estimates and are evaluated using metrics like Mean Absolute Error (MAE), Root-Mean-Square Error (RMSE) and Pearson's $r$ correlation coefficient. However, in practice QE models are built for use in decision making in large workflows involving Machine Translation (MT). In these settings, relying on point estimates would mean that only very accurate prediction models can be useful in practice. A way to improve decision making based on quality predictions is to explore uncertainty estimates. Consider for example a post-editing scenario where professional translators use MT in an effort to speed-up the translation process. A QE model can be used to determine if an MT segment is good enough for post-editing or should be discarded and translated from scratch. But since QE models are not perfect they can end up allowing bad MT segments to go through for post-editing because of a prediction error. In such a scenario, having an uncertainty estimate for the prediction can provide additional information for the filtering decision. For instance, in order to ensure good user experience for the human translator and maximise translation productivity, an MT segment could be forwarded for post-editing only if a QE model assigns a high quality score with \emph{low uncertainty} (high confidence). Such a decision process is not possible with point estimates only. Good uncertainty estimates can be acquired from well-calibrated probability distributions over the quality predictions. In QE, arguably the most successful probabilistic models are Gaussian Processes (GPs) since they considered the state-of-the-art for regression \cite{Cohn2013,Hensman2013}, especially in the low-data regimes typical for this task. 
We focus our analysis in this paper on GPs since other common models used in QE can only provide point estimates as predictions. Another reason why we focus on probabilistic models is that this lets us employ the ideas proposed by \newcite{Quinonero-Candela2006}, which defined new evaluation metrics that take into account probability distributions over predictions. The remainder of this paper is organised as follows: \begin{itemize} \item In Section \ref{sec:probqe} we further motivate the use of GPs for uncertainty modelling in QE and revisit their underlying theory. We also propose some model extensions previously developed in the GP literature and argue they are more appropriate for the task. \item We intrinsically evaluate our proposed models in terms of their posterior distributions on training and test data in Section \ref{sec:intrinsic}. Specifically, we show that differences in uncertainty modelling are not captured by the usual point estimate metrics commonly used for this task. \item As an example of an application for predictive distributions, in Section \ref{sec:asymmetric} we show how they can be useful in scenarios with asymmetric risk and how the proposed models can provide better performance in this case. \end{itemize} We discuss related work in Section \ref{sec:relwork} and give conclusions and avenues for future work in Section \ref{sec:conc}. While we focus on QE as the application, the methods we explore in this paper can be applied to any text regression task where modelling predictive uncertainty is useful, either in human decision making or by propagating this information for further computational processing. \section{Probabilistic Models for QE} \label{sec:probqe} Traditionally, QE is treated as a regression task with hand-crafted features. Kernel methods are arguably the state-of-the-art in QE since they can easily model non-linearities in the data. Furthermore, the scalability issues that arise in kernel methods do not tend to affect QE in practice since the datasets are usually small, in the order of thousands of instances. The most popular method for QE is Support Vector Regression (SVR), as shown in the multiple instances of the WMT QE shared tasks \cite{Callison-Burch2012,Bojar2013,Bojar2014,Bojar2015}. While SVR models can generate competitive predictions for this task, they lack a probabilistic interpretation, which makes it hard to extract uncertainty estimates from them. Bootstrapping approaches like bagging \cite{Abe1998} can be applied, but this requires setting and optimising hyperparameters like bag size and number of bootstraps. There is also no guarantee that these estimates come from a well-calibrated probabilistic distribution. Gaussian Processes (GPs) \cite{Rasmussen2006} are an alternative kernel-based framework that gives competitive results for point estimates \cite{Cohn2013,Shah2013,Beck2014b}. Unlike SVR, they explicitly model uncertainty in the data and in the predictions. This makes GPs very applicable when well-calibrated uncertainty estimates are required. Furthermore, they are very flexible in terms of modelling decisions, allowing the use of a variety of kernels and likelihoods while providing efficient ways of doing model selection. Therefore, in this work we focus on GPs for probabilistic modelling of QE. In what follows we briefly describe the GP framework for regression. \subsection{Gaussian Process Regression} \label{sec:gpr} Here we follow closely the definition of GPs given by \newcite{Rasmussen2006}.
Let $\mathcal{X} = \{(\x_1, y_1),(\x_2, y_2), \dots, (\x_n, y_n) \}$ be our data, where each $\x \in \mathbb{R}^D$ is a $D$-dimensional input and $y$ is its corresponding response variable. A GP is defined as a stochastic model over the latent function $f$ that generates the data $\mathcal{X}$: \begin{equation*} \label{eq:gp} f(\mathbf{x}) \sim \mathcal{GP} (m(\mathbf{x}), k(\mathbf{x},\mathbf{x'})), \end{equation*} where $m(\x)$ is the \emph{mean} function, which is usually the $0$ constant, and $k(\x,\x')$ is the kernel or \emph{covariance} function, which describes the covariance between values of $f$ at the different locations of $\x$ and $\x'$. The prior is combined with a likelihood via Bayes' rule to obtain a posterior over the latent function: \begin{equation*} \label{eq:fposterior} p(f|\mathcal{X}) = \frac{p(\mathbf{y}|\mathbf{X},f) p(f)}{p(\mathbf{y}|\mathbf{X})} , \end{equation*} where $\mathbf{X}$ and $\mathbf{y}$ are the training inputs and response variables, respectively. For regression, we assume that each $y_i = f(\mathbf{x_i}) + \eta$, where $\eta \sim \mathcal{N}(0,\sigma_n^2)$ is added white noise. Having a Gaussian likelihood results in a closed form solution for the posterior. Training a GP involves the optimisation of model hyperparameters, which is done by maximising the marginal likelihood $p(\mathbf{y}|\mathbf{X})$ via gradient ascent. Predictive posteriors for unseen $\x_*$ are obtained by integrating over the latent function evaluations at $\x_*$. GPs can be extended in many different ways by applying different kernels, likelihoods and modifying the posterior, for instance. In the next Sections, we explain in detail some sensible modelling choices in applying GPs for QE. \subsection{Mat\`{e}rn Kernels} \label{sec:matern-kernels} Choosing an appropriate kernel is a crucial step in defining a GP model (and any other kernel method). A common choice is to employ the exponentiated quadratic (EQ) kernel\footnote{Also known as Radial Basis Function (RBF) kernel.}: \begin{align*} \label{eq:2} k_{\text{EQ}}(\x, \x') &= \sigma_v \; \mathrm{exp}(-\frac{r^2}{2}) \, , \\ \mbox{where~} r^2 &= \sum\limits_{i=1}^D\frac{(x_i - x_i')^2}{l_i^2} \end{align*} is the scaled distance between the two inputs, $\sigma_v$ is a scale hyperparameter and $\mathbf{l}$ is a vector of lengthscales. Most kernel methods tie all lengthscale to a single value, resulting in an isotropic kernel. However, since in GPs hyperparameter optimisation can be done efficiently, it is common to employ one lengthscale per feature, a method called Automatic Relevance Determination (ARD). The EQ kernel allows the modelling of non-linearities between the inputs and the response variables but it makes a strong assumption: it generates smooth, infinitely differentiable functions. This assumption can be too strong for noisy data. An alternative is the Mat\`{e}rn class of kernels, which relax the smoothness assumption by modelling functions which are $\nu$-times differentiable only. Common values for $\nu$ are the half-integers $3/2$ and $5/2$, resulting in the following Mat\`{e}rn kernels: \begin{align*} k_{\text{M32}} &= \sigma_v (1 + \sqrt{3r^2}) \; \mathrm{exp}(-\sqrt{3r^2}) \\ k_{\text{M52}} &= \sigma_v \left(1 + \sqrt{5r^2} + \frac{5r^2}{3}\right) \mathrm{exp}(-\sqrt{5r^2}) \, , \end{align*} where we have omitted the dependence of $k_{\text{M32}}$ and $k_{\text{M52}}$ on the inputs $(\x, \x')$ for brevity. 
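For reference, a small NumPy sketch of the EQ and Mat\`{e}rn 5/2 kernels above with one (ARD) lengthscale per feature; the inputs and hyperparameter values are arbitrary stand-ins, not learned values.

```python
import numpy as np

def scaled_sqdist(x, x2, lengthscales):
    """r^2 = sum_i (x_i - x2_i)^2 / l_i^2 for a single pair of input vectors."""
    d = (x - x2) / lengthscales
    return float(np.sum(d * d))

def k_eq(x, x2, variance, lengthscales):
    r2 = scaled_sqdist(x, x2, lengthscales)
    return variance * np.exp(-0.5 * r2)

def k_matern52(x, x2, variance, lengthscales):
    r2 = scaled_sqdist(x, x2, lengthscales)
    return variance * (1.0 + np.sqrt(5 * r2) + 5 * r2 / 3.0) * np.exp(-np.sqrt(5 * r2))

# Arbitrary 17-dimensional inputs with one (ARD) lengthscale per feature.
rng = np.random.RandomState(1)
x, x2 = rng.randn(17), rng.randn(17)
lengthscales = np.ones(17)
print(k_eq(x, x2, 1.0, lengthscales), k_matern52(x, x2, 1.0, lengthscales))
```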
Higher values for $\nu$ are usually not very useful since the resulting behaviour is hard to distinguish from limit case $\nu \rightarrow \infty$, which retrieves the EQ kernel \cite[Sec. 4.2]{Rasmussen2006}. The relaxed smoothness assumptions from the Mat\`{e}rn kernels makes them promising candidates for QE datasets, which tend to be very noisy. We expect that employing them will result in a better models for this application. \subsection{Warped Gaussian Processes} \label{sec:wgp} The Gaussian likelihood of standard GPs has support over the entire real number line. However, common quality scores are strictly positive values, which means that the Gaussian assumption is not ideal. A usual way to deal with this problem is model the logarithm of the response variables, since this transformation maps strictly positive values to the real line. However, there is no reason to believe this is the best possible mapping: a better idea would be to learn it from the data. Warped GPs \cite{Snelson2004} are an extension of GPs that allows the learning of arbitrary mappings. It does that by placing a monotonic \emph{warping function} over the observations and modelling the warped values inside a standard GP. The posterior distribution is obtained by applying a change of variables: \begin{equation*} p(y_*|\x_*) = \frac{f'(y_*)}{\sqrt{2\pi\sigma_*^2}} \; \mathrm{exp} \left(\frac{f(y_*) - \mu_*}{2\sigma_*}\right), \end{equation*} where $\mu_*$ and $\sigma_*$ are the mean and standard deviation of the latent (warped) response variable and $f$ and $f'$ are the warping function and its derivative. Point predictions from this model depend on the loss function to be minimised. For absolute error, the median is the optimal value while for squared error it is the mean of the posterior. In standard GPs, since the posterior is Gaussian the median and mean coincide but this in general is not the case for a Warped GP posterior. The median can be easily obtained by applying the inverse warping function to the latent median: \begin{equation*} y^{\mathrm{med}}_* = f^{-1}(\mu_*). \end{equation*} While the inverse of the warping function is usually not available in closed form, we can use its gradient to have a numerical estimate. The mean is obtained by integrating $y^*$ over the latent density: \begin{equation*} \mathbb{E}[y_*] = \int f^{-1}(z) \mathcal{N}_z(\mu_*, \sigma^2_*) dz, \end{equation*} where $z$ is the latent variable. This can be easily approximated using Gauss-Hermite quadrature since it is a one dimensional integral over a Gaussian density. The warping function should be flexible enough to allow the learning of complex mappings, but it needs to be monotonic. \newcite{Snelson2004} proposes a parametric form composed of a sum of $\mathrm{tanh}$ functions, similar to a neural network layer: \begin{equation*} \label{eq:warp} f(y) = y + \sum\limits_{i=1}^{I} a_i \; \mathrm{tanh} (b_i (y + c_i)) , \end{equation*} where $I$ is the number of $\mathrm{tanh}$ terms and $\mathbf{a}, \mathbf{b}$ and $\mathbf{c}$ are treated as model hyperparameters and optimised jointly with the kernel and likelihood hyperparameters. Large values for $I$ allow more complex mappings to be learned but raise the risk of overfitting. Warped GPs provide an easy and elegant way to model response variables with non-Gaussian behaviour within the GP framework. In our experiments we explore models employing warping functions with up to $3$ terms, which is the value recommended by \newcite{Snelson2004}. 
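A minimal sketch of the $\mathrm{tanh}$ warping function and of recovering the median prediction $f^{-1}(\mu_*)$ by numerical inversion (bisection is used here for simplicity). The warping parameters below are arbitrary; in the model they are learned jointly with the kernel and likelihood hyperparameters.

```python
import numpy as np

def warp(y, a, b, c):
    """f(y) = y + sum_i a_i * tanh(b_i * (y + c_i)); monotonic for non-negative a_i, b_i."""
    return y + np.sum(a * np.tanh(b * (y + c)))

def warp_inverse(z, a, b, c, lo=-100.0, hi=100.0, iters=100):
    """Numerically invert the monotonic warping function by bisection."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if warp(mid, a, b, c) < z:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Arbitrary (non-learned) parameters for a tanh3 warping, just for illustration.
a = np.array([0.5, 0.3, 0.2])
b = np.array([1.0, 0.5, 2.0])
c = np.array([0.0, -1.0, 1.0])
latent_median = 0.7                              # mu_* from the latent GP posterior
y_median = warp_inverse(latent_median, a, b, c)  # optimal prediction under absolute error
```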
We also report results using the $f(y) = \log(y)$ warping function. \section{Intrinsic Uncertainty Evaluation} \label{sec:intrinsic} Given a set of different probabilistic QE models, we are interested in evaluating the performance of these models while also taking their uncertainty into account, particularly to distinguish among models with seemingly the same or similar performance. A straightforward way to measure the performance of a probabilistic model is to inspect its negative ($\mathrm{log}$) marginal likelihood. This measure, however, does not capture whether a model overfits the training data. We can have a better generalisation measure by calculating the likelihood on \emph{test data} instead. This was proposed in previous work and is called the Negative Log Predictive Density (NLPD) \cite{Quinonero-Candela2006}: \begin{equation*} \label{eq:nlpd} \text{NLPD}(\mathbf{\hat{y}}, \mathbf{y}) = -\frac{1}{n} \sum\limits_{i=1}^n \mathrm{log}\; p(\hat{y}_i = y_i|\x_i). \end{equation*} where $\mathbf{\hat{y}}$ is a set of test predictions, $\mathbf{y}$ is the set of true labels and $n$ is the test set size. This metric has since been largely adopted by the ML community when evaluating GPs and other probabilistic models for regression (see Section \ref{sec:relwork} for some examples). As with other error metrics, lower values are better. Intuitively, if two models produce equally incorrect predictions but they have different uncertainty estimates, NLPD will penalise the overconfident model more than the underconfident one. On the other hand, if predictions are close to the true value then NLPD will penalise the underconfident model instead. In our first set of experiments we evaluate the models proposed in Section \ref{sec:probqe} according to their negative $\mathrm{log}$ likelihood (NLL) and the NLPD on test data. We also report two point estimate metrics on test data: Mean Absolute Error (MAE), the most commonly used evaluation metric in QE, and Pearson's $r$, which was recently proposed by \newcite{Graham2015} as a more robust alternative. \subsection{Experimental Settings} \label{sec:exp} Our experiments comprise datasets containing three different language pairs, where the label to predict is post-editing time: \begin{description} \item[English-Spanish (en-es)] This dataset was used in the WMT14 QE shared task \cite{Bojar2014}. It contains $858$ sentences translated by one MT system and post-edited by a professional translator. \item[French-English (fr-en)] Described in \cite{Specia2011}, this dataset contains $2,525$ sentences translated by one MT system and post-edited by a professional translator. \item[English-German (en-de)] This dataset is part of the WMT16 QE shared task\footnote{\url{www.statmt.org/wmt16}}. It was translated by one MT system; for consistency, we use a subset of $2,828$ instances post-edited by a single professional translator. \end{description} As part of the process of creating these datasets, post-editing time was logged on a sentence basis for all datasets. Following common practice, we normalise the post-editing time by the length of the machine translated sentence to obtain post-editing {\em rates} and use these as our response variables. Technically, our approach could be used with any other numeric quality labels from the literature, including the commonly used Human Translation Error Rate (HTER) \cite{Snover2006}.
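The NLPD defined at the start of this section can be computed directly from the predictive densities. The sketch below assumes Gaussian predictive distributions (for Warped GPs the density would also include the change-of-variables term) and illustrates how overconfident wrong predictions are penalised more heavily than underconfident ones.

```python
import numpy as np

def nlpd_gaussian(y_true, mu, var):
    """NLPD for Gaussian predictive distributions N(mu_i, var_i) evaluated at the true labels."""
    y_true, mu, var = map(np.asarray, (y_true, mu, var))
    log_dens = -0.5 * np.log(2 * np.pi * var) - 0.5 * (y_true - mu) ** 2 / var
    return -np.mean(log_dens)

# Two equally wrong predictions: overconfident (small variance) vs. underconfident (large variance).
y = np.array([2.0, 3.0])
print(nlpd_gaussian(y, mu=[3.0, 4.0], var=[0.01, 0.01]))   # heavily penalised
print(nlpd_gaussian(y, mu=[3.0, 4.0], var=[1.0, 1.0]))     # penalised far less
```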
Our decision to focus on post-editing time was based on the fact that time is a more complete measure of post-editing effort, capturing not only technical effort like HTER, but also cognitive effort \cite{Koponen2012}. Additionally, time is more directly applicable in real translation environments -- where uncertainty estimates could be useful, as it relates directly to productivity measures. For model building, we use a standard set of $17$ features from the QuEst framework \cite{Specia2015}. These features are used in the strong baseline models provided by the WMT QE shared tasks. While the best performing systems in the shared tasks use larger feature sets, these are mostly resource-intensive and language-dependent, and therefore not equally applicable to all our language pairs. Moreover, our goal is to compare probabilistic QE models from the predictive uncertainty perspective, rather than improving the state-of-the-art in terms of point predictions. We perform $10$-fold cross validation instead of using a single train/test split and report averaged metric scores. The model hyperparameters were optimised by maximising the likelihood on the training data. We perform a two-pass procedure similar to that in \cite{Cohn2013}: first we employ an isotropic kernel and optimise all hyperparameters using $10$ random restarts; then we move to an ARD equivalent kernel and perform a final optimisation step to fine tune feature {\em lengthscales}. Point predictions were fixed as the median of the distribution. \subsection{Results and Discussion} \label{sec:iresults} Table \ref{tab:intrinsic} shows the results obtained for all datasets. The first two columns show an interesting finding in terms of model learning: using a warping function drastically decreases both NLL and NLPD. The main reason behind this is that standard GPs distribute probability mass over negative values, while the warped models do not. For the {\bf fr-en} and {\bf en-de} datasets, NLL and NLPD follow similar trends. This means that we can trust NLL as a measure of uncertainty for these datasets. However, this is not observed in the {\bf en-es} dataset. Since this dataset is considerably smaller than the others, we believe this is evidence of overfitting, thus showing that NLL is not a reliable metric for small datasets. In terms of different warping functions, using the parametric $\mathrm{tanh}$ function with $3$ terms performs better than the $\mathrm{log}$ for the {\bf fr-en} and {\bf en-de} datasets. This is not the case for the {\bf en-es} dataset, where the $\mathrm{log}$ function tends to perform better. We believe that this is again due to the smaller dataset size. The gains from using a Mat\`{e}rn kernel over EQ are less conclusive. While they tend to perform better for {\bf fr-en}, there does not seem to be any difference in the other datasets. % Different kernels can be more appropriate depending on the language pair, but more experiments are needed to verify this, which we leave for future work. The differences in uncertainty modelling are by and large not captured by the point estimate metrics. While MAE does show gains from standard to Warped GPs, it does not reflect the difference found between warping functions for {\bf fr-en}. Pearson's $r$ is also quite inconclusive in this sense, except for some observed gains for {\bf en-es}. This shows that NLPD should indeed be preferred as an evaluation metric when proper prediction uncertainty estimates are required from a QE model.
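For readers wishing to reproduce the two-pass hyperparameter optimisation described in Section 3.1, a rough sketch using the GPy library is given below. The choice of GPy is an assumption (the paper does not name its implementation), and the data arrays are random stand-ins for the 17-dimensional feature matrix and the post-editing rates.

```python
import numpy as np
import GPy  # assumed tooling; the paper does not name its implementation

# Random stand-ins: X is an (n x 17) feature matrix, y an (n x 1) vector of PE rates.
rng = np.random.RandomState(0)
X, y = rng.randn(100, 17), rng.rand(100, 1)

# Pass 1: isotropic Matern 5/2 kernel, optimised with 10 random restarts.
kern_iso = GPy.kern.Matern52(input_dim=17, ARD=False)
model_iso = GPy.models.GPRegression(X, y, kern_iso)
model_iso.optimize_restarts(num_restarts=10, verbose=False)

# Pass 2: ARD kernel initialised from the isotropic solution, then fine-tuned.
kern_ard = GPy.kern.Matern52(input_dim=17, ARD=True)
kern_ard.lengthscale = float(kern_iso.lengthscale) * np.ones(17)
kern_ard.variance = float(kern_iso.variance)
model_ard = GPy.models.GPRegression(X, y, kern_ard)
model_ard.optimize(messages=False)

mu, var = model_ard.predict(X[:5])   # Gaussian predictive means and variances
```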
\subsection{Qualitative Analysis} \label{sec:analysis} To obtain more insights about the performance in uncertainty modelling we inspected the predictive distributions for two sentence pairs in the {\bf fr-en} dataset. We show the distributions for a standard GP and a Warped GP with a $\mathrm{tanh3}$ function in Figure \ref{fig:example1}. In the first case, where both models give accurate predictions, we see that the Warped GP distribution is peaked around the predicted value, as it should be. It also gives more probability mass to positive values, showing that the model is able to learn that the label is non-negative. In the second case we analyse the distributions when both models make inaccurate predictions. We can see that the Warped GP is able to give a broader distribution in this case, while still keeping most of the mass outside the negative range. We also report above each plot in Figure \ref{fig:example1} the NLPD for each prediction. Comparing only the Warped GP predictions, we can see that their values reflect the fact that we prefer sharp distributions when predictions are accurate and broader ones when predictions are not accurate. However, it is interesting to see that the metric also penalises predictions when their distributions are too broad, as it is the case with the standard GPs since they can not discriminate between positive and negative values as well as the Warped GPs. Inspecting the resulting warping functions can bring additional modelling insights. In Figure \ref{fig:warps} we show instances of $\mathrm{tanh3}$ warping functions learned from the three datasets and compare them with the $\mathrm{log}$ warping function. % We can see that the parametric $\mathrm{tanh3}$ model is able to learn non-trivial mappings. For instance, in the {\bf en-es} case % the learned function is roughly logarithmic in the low scales but it switches to a linear mapping after $y = 4$. Notice also the difference in the scales, which means that the optimal model uses a latent Gaussian with a larger variance.% \section{Asymmetric Risk Scenarios} \label{sec:asymmetric} Evaluation metrics for QE, including those used in the WMT QE shared tasks, are assumed to be symmetric, i.e., they penalise over and underestimates equally. This assumption is however too simplistic for many possible applications of QE. For example: \begin{itemize} \item In a {\em post-editing} scenario, a project manager may have translators with limited expertise in post-editing. In this case, automatic translations should not be provided to the translator unless they are highly likely to have very good quality. This can be enforced this by increasing the penalisation weight for underestimates. We call this the \emph{pessimistic} scenario. \item In a {\em gisting} scenario, a company wants to automatically translate their product reviews so that they can be published in a foreign language without human intervention. The company would prefer to publish only the reviews translated well enough, but having more reviews published will increase the chances of selling products. In this case, having better recall is more important and thus only reviews with very poor translation quality should be discarded. We can accomplish this by heavier penalisation on overestimates, a scenario we call \emph{optimistic}. \end{itemize} In this Section we show how these scenarios can be addressed by well-calibrated predictive distributions and by employing {\em asymmetric} loss functions. 
An example of such a function is the asymmetric linear (henceforth, AL) loss, which is a generalisation of the absolute error: \begin{equation*} \label{eq:asymmae} L(\hat{y}, y) = \begin{cases} w(\hat{y} - y) &\text{if } \hat{y} > y\\ y - \hat{y} &\text{if } \hat{y} \le y , \end{cases} \end{equation*} where $w > 0$ is the weight given to overestimates. If $w > 1$ we have the pessimistic scenario, and the optimistic one can be obtained using $0 < w < 1$. For $w = 1$ we retrieve the original absolute error loss. Another asymmetric loss is the linear exponential or {\em linex} loss \cite{Zellner1986}: \begin{equation*} \label{eq:linex} L(\hat{y}, y) = \mathrm{exp}[w(\hat{y} - y)] - (\hat{y} - y) - 1 \end{equation*} where $w \in \mathbb{R}$ is the weight. This loss attempts to keep a linear penalty in lesser risk regions, while imposing an exponential penalty in the higher risk ones. Negative values for $w$ will result in a pessimistic setting, while positive values will result in the optimistic one. For $w = 0$, the loss approximates a squared error loss. Usual values for $w$ tend to be close to $1$ or $-1$ since for higher weights the loss can quickly reach very large scores. Both losses are shown on Figure \ref{fig:losses}. \subsection{Bayes Risk for Asymmetric Losses} \label{sec:risk} The losses introduced above can be incorporated directly into learning algorithms to obtain models for a given scenario. In the context of the AL loss this is called {\em quantile regression} \cite{Koenker2005}, since optimal estimators for this loss are posterior quantiles. However, in a production environment the loss can change over time. For instance, in the gisting scenario discussed above the parameter $w$ could be changed based on feedback from indicators of sales revenue or user experience. If the loss is attached to the underlying learning algorithms, a change in $w$ would require full model retraining, which can be costly. Instead of retraining the model every time there is a different loss, we can train a single probabilistic model and derive Bayes risk estimators for the loss we are interested in. This allows estimates to be obtained without having to retrain models when the loss changes. Additionally, this allows different losses/scenarios to be employed at the same time using the same model.% Minimum Bayes risk estimators for asymmetric losses were proposed by \newcite{Christoffersen1997} and we follow their derivations in our experiments. The best estimator for the AL loss is equivalent to the $\frac{w}{w + 1}$ quantile of the predictive distribution. Note that we retrieve the median when $w = 1$, as expected. The best estimator for the linex loss can be easily derived and results in: \begin{equation*} \hat{y} = \mu_y - \frac{w \sigma^2_y}{2} \end{equation*} where $\mu_y$ and $\sigma^2_y$ are the mean and the variance of the predictive posterior. \subsection{Experimental Settings} \label{sec:exp2} Here we assess the models and datasets used in Section \ref{sec:exp} in terms of their performance in the asymmetric setting. Following the explanation in the previous Section, we do not perform any retraining: we collect the predictions obtained using the 10-fold cross-validation protocol and apply different Bayes estimators corresponding to the asymmetric losses. Evaluation is performed using the same loss employed in the estimator (for instance, when using the linex estimator with $w = 0.75$ we report the results using the linex loss with same $w$) and averaged over the 10 folds. 
To simulate both pessimistic and optimistic scenarios, we use $w \in \{ 3, 1/3 \}$ for the AL loss and $w \in \{-0.75, 0.75\}$ for the linex loss. The only exception is the {\bf en-de} dataset, where we report results for $w \in {-0.25, 0.75}$ for linex\footnote{Using $w = -0.75$ in this case resulted in loss values on the order of $10^7$. In fact, as it will be discussed in the next Section, the results for the linex loss in the pessimistic scenario were inconclusive. However, we report results using a higher $w$ in this case for completeness and to clarify the inconclusive trends we found.}. We also report results only for models using the Mat\`{e}rn52 kernel. While we did experiment with different kernels and weighting schemes\footnote{We also tried $w \in \{1/9, 1/7, 1/5, 5, 7, 9\}$ for the AL loss and $w \in \{-0.5, -0.25, 0.25, 0.5\}$ for the linex loss.} our findings showed similar trends so we omit them for the sake of clarity. \subsection{Results and Discussion} \label{sec:results2} Results are shown on Table \ref{tab:asymm}. In the optimistic scenario the $\mathrm{tanh}$-based warped GP models give consistently better results than standard GPs. The $\mathrm{log}$-based models also gives good results for AL but for linex the results are mixed except for en-es. This is probably again related to the larger sizes of the fr-en and en-de datasets, which allows the $\mathrm{tanh}$-based models to learn richer representations. The pessimistic scenario shows interesting trends. While the results for AL follow a similar pattern when compared to the optimistic setting, the results for linex are consistently worse than the standard GP baseline. A key difference between AL and linex is that the latter depends on the variance of the predictive distribution. Since the warped models tend to have less variance, we believe the estimator is not being ``pushed'' towards the positive tails as much as in the standard GPs. This turns the resulting predictions not conservative enough (i.e. the post-editing time predictions are lower) and this is heavily (exponentially) penalised by the loss. This might be a case where a standard GP is preferred but can also indicate that this loss is biased towards models with high variance, even if it does that by assigning probability mass to nonsensical values (like negative time). We leave further investigation of this phenomenon for future work. \section{Related Work} \label{sec:relwork} Quality Estimation is generally framed as text regression task, similarly to many other applications such as movie revenue forecasting based on reviews \cite{Joshi2010,Bitvai2015} and detection of emotion strength in news headlines \cite{Strapparava2008,Beck2014a} and song lyrics \cite{Mihalcea2012}. In general, these applications are evaluated in terms of their point estimate predictions, arguably because not all of them employ probabilistic models. The NLPD is common and established metric used in the GP literature to evaluate new approaches. Examples include the original work on Warped GPs \cite{Snelson2004}, but also others like \newcite{Lazaro-Gredilla2012} and \newcite{Chalupka2013}. It has also been used to evaluate recent work on uncertainty propagation methods for neural networks \cite{Hernandez-Lobato2015}. Asymmetric loss functions are common in the econometrics literature and were studied by \newcite{Zellner1986} and \newcite{Koenker2005}, among others. 
Besides the AL and the linex, another well studied loss is the asymmetric quadratic, which in turn relates to the concept of {\em expectiles} \cite{Newey1987}. This loss generalises the commonly used squared error loss. In terms of applications, \newcite{Cain1995} gives an example in real estate assessment, where the consequences of under- and over-assessment are usually different depending on the specific scenario. An engineering example is given by \newcite{Zellner1986} in the context of dam construction, where an underestimate of peak water level is much more serious than an overestimate. Such real-world applications guided many developments in this field: we believe that translation and other language processing scenarios which rely on NLP technologies can heavily benefit from these advancements. \section{Conclusions} \label{sec:conc} This work explored new probabilistic models for machine translation QE that allow better uncertainty estimates. We proposed the use of NLPD, which can capture information on the whole predictive distribution, unlike usual point estimate-based metrics. By assessing models using NLPD we can make better informed decisions about which model to employ for different settings. Furthermore, we showed how information in the predictive distribution can be used in asymmetric loss scenarios and how the proposed models can be beneficial in these settings. Uncertainty estimates can be useful in many other settings beyond the ones explored in this work. Active Learning can benefit from variance information in their query methods and it has shown to be useful for QE \cite{Beck2013}. Exploratory analysis is another avenue for future work, where error bars can provide further insights about the task, as shown in recent work \cite{Nguyen2015}. This kind of analysis can be useful for tracking post-editor behaviour and assessing cost estimates for translation projects, for instance. Our main goal in this paper was to raise awareness about how different modelling aspects should be taken into account when building QE models. Decision making can be risky using simple point estimates and we believe that uncertainty information can be beneficial in such scenarios by providing more informed solutions. These ideas are not restricted to QE and we hope to see similar studies in other natural language applications in the future.% \section*{Acknowledgements} Daniel Beck was supported by funding from CNPq/Brazil (No. 237999/2012-9). Lucia Specia was supported by the QT21 project (H2020 No. 645452). Trevor Cohn is the recipient of an Australian Research Council Future Fellowship (project number FT130101105). The authors would like to thank James Hensman for his advice on Warped GPs and the three anonymous reviewers for their comments. \bibliographystyle{acl2016} \end{document}
Exploring Prediction Uncertainty in Machine Translation Quality Estimation
1606.09600
Table 1: Intrinsic evaluation results. The first three rows in each table correspond to standard GP models, while the remaining rows are Warped GP models with different warping functions. The number after the tanh models shows the number of terms in the warping function (see Equation 2.3). All r scores have p<0.05.
[ "[BOLD] English-Spanish - 858 instances", "[BOLD] English-Spanish - 858 instances NLL", "[BOLD] English-Spanish - 858 instances NLPD", "[BOLD] English-Spanish - 858 instances MAE", "[BOLD] English-Spanish - 858 instances [ITALIC] r" ]
[ [ "EQ", "1244.03", "1.632", "0.828", "0.362" ], [ "Mat32", "1237.48", "1.649", "0.862", "0.330" ], [ "Mat52", "1240.76", "1.637", "0.853", "0.340" ], [ "log EQ", "986.14", "1.277", "0.798", "0.368" ], [ "log Mat32", "982.71", "1.271", "0.793", "0.380" ], [ "log Mat52", "982.31", "1.272", "0.794", "0.376" ], [ "tanh1 EQ", "992.19", "1.274", "0.790", "0.375" ], [ "tanh1 Mat32", "991.39", "1.272", "0.790", "0.379" ], [ "tanh1 Mat52", "992.20", "1.274", "0.791", "0.376" ], [ "tanh2 EQ", "982.43", "1.275", "0.792", "0.376" ], [ "tanh2 Mat32", "982.40", "1.281", "0.791", "0.382" ], [ "tanh2 Mat52", "981.86", "1.282", "0.792", "0.278" ], [ "tanh3 EQ", "980.50", "1.282", "0.791", "0.380" ], [ "tanh3 Mat32", "981.20", "1.282", "0.791", "0.380" ], [ "tanh3 Mat52", "980.70", "1.275", "0.790", "0.385" ], [ "[BOLD] French-English - 2525 instances", "[BOLD] French-English - 2525 instances", "[BOLD] French-English - 2525 instances", "[BOLD] French-English - 2525 instances", "[BOLD] French-English - 2525 instances" ], [ "[EMPTY]", "NLL", "NLPD", "MAE", "[ITALIC] r" ], [ "EQ", "2334.17", "1.039", "0.491", "0.322" ], [ "Mat32", "2335.81", "1.040", "0.491", "0.320" ], [ "Mat52", "2344.86", "1.037", "0.490", "0.320" ], [ "log EQ", "1935.71", "0.855", "0.493", "0.314" ], [ "log Mat32", "1949.02", "0.857", "0.493", "0.310" ], [ "log Mat52", "1937.31", "0.855", "0.493", "0.313" ], [ "tanh1 EQ", "1884.82", "0.840", "0.482", "0.322" ], [ "tanh1 Mat32", "1890.34", "0.840", "0.482", "0.317" ], [ "tanh1 Mat52", "1887.41", "0.834", "0.482", "0.320" ], [ "tanh2 EQ", "1762.33", "0.775", "0.483", "0.323" ], [ "tanh2 Mat32", "1717.62", "0.754", "0.483", "0.313" ], [ "tanh2 Mat52", "1748.62", "0.768", "0.486", "0.306" ], [ "tanh3 EQ", "1814.99", "0.803", "0.484", "0.314" ], [ "tanh3 Mat32", "1723.89", "0.760", "0.486", "0.302" ], [ "tanh3 Mat52", "1706.28", "0.751", "0.482", "0.320" ], [ "[BOLD] English-German - 2828 instances", "[BOLD] English-German - 2828 instances", "[BOLD] English-German - 2828 instances", "[BOLD] English-German - 2828 instances", "[BOLD] English-German - 2828 instances" ], [ "[EMPTY]", "NLL", "NLPD", "MAE", "[ITALIC] r" ], [ "EQ", "4852.80", "1.865", "1.103", "0.359" ], [ "Mat32", "4850.27", "1.861", "1.098", "0.369" ], [ "Mat52", "4850.33", "1.861", "1.098", "0.369" ], [ "log EQ", "4053.43", "1.581", "1.063", "0.360" ], [ "log Mat32", "4054.51", "1.580", "1.063", "0.363" ], [ "log Mat52", "4054.39", "1.581", "1.064", "0.363" ], [ "tanh1 EQ", "4116.86", "1.597", "1.068", "0.343" ], [ "tanh1 Mat32", "4113.74", "1.593", "1.064", "0.351" ], [ "tanh1 Mat52", "4112.91", "1.595", "1.068", "0.349" ], [ "tanh2 EQ", "4032.70", "1.570", "1.060", "0.359" ], [ "tanh2 Mat32", "4031.42", "1.570", "1.060", "0.362" ], [ "tanh2 Mat52", "4032.06", "1.570", "1.060", "0.361" ], [ "tanh3 EQ", "4023.72", "1.569", "1.062", "0.359" ], [ "tanh3 Mat32", "4024.64", "1.567", "1.058", "0.364" ], [ "tanh3 Mat52", "4026.07", "1.566", "1.059", "0.365" ] ]
The first two columns show an interesting finding in terms of model learning: using a warping function drastically decreases both NLL and NLPD. The main reason behind this is that standard GPs distribute probability mass over negative values, while the warped models do not. For the fr-en and en-de datasets, NLL and NLPD follow similar trends. This means that we can trust NLL as a measure of uncertainty for these datasets. However, this is not observed in the en-es dataset. Since this dataset is considerably smaller than the others, we believe this is evidence of overfitting, thus showing that NLL is not a reliable metric for small datasets.
\documentclass[11pt]{article} \aclfinalcopy % \def\aclpaperid{142} % \newcommand\BibTeX{B{\sc ib}\TeX} \newcommand{\lucia}{\textcolor{blue}} \newcommand{\x}{\mathbf{x}} \newcommand{\fixme}[1]{{\bf \color{red} [*FIXME* }{\em #1}{\bf ]}} \newcommand{\trevor}[1]{{\bf \color{blue} [*FIXME* }{\em #1}{\bf ]}} \newcommand{\daniel}[1]{{\bf \color{green} [*FIXME* }{\em #1}{\bf ]}} \title{Exploring Prediction Uncertainty in Machine Translation \\ Quality Estimation} \author{Daniel Beck$^\dagger$ ~~~~ Lucia Specia$^\dagger$ ~~~~ Trevor Cohn$^\ddagger$\\ $^\dagger$Department of Computer Science\\ University of Sheffield, United Kingdom\\ $^\ddagger$Computing and Information Systems\\ University of Melbourne, Australia\\ {\tt \{debeck1,l.specia\}@sheffield.ac.uk, t.cohn@unimelb.edu.au} } \date{} \begin{document} \maketitle \begin{abstract} Machine Translation Quality Estimation is a notoriously difficult task, which lessens its usefulness in real-world translation environments. Such scenarios can be improved if quality predictions are accompanied by a measure of uncertainty. However, models in this task are traditionally evaluated only in terms of point estimate metrics, which do not take prediction uncertainty into account. We investigate probabilistic methods for Quality Estimation that can provide well-calibrated uncertainty estimates and evaluate them in terms of their full posterior predictive distributions. We also show how this posterior information can be useful in an asymmetric risk scenario, which aims to capture typical situations in translation workflows.% \end{abstract} \section{Introduction} Quality Estimation (QE) \cite{Blatz2004,Specia2009} models aim at predicting the quality of automatically translated text segments. Traditionally, these models provide point estimates and are evaluated using metrics like Mean Absolute Error (MAE), Root-Mean-Square Error (RMSE) and Pearson's $r$ correlation coefficient. However, in practice QE models are built for use in decision making in large workflows involving Machine Translation (MT). In these settings, relying on point estimates would mean that only very accurate prediction models can be useful in practice. A way to improve decision making based on quality predictions is to explore uncertainty estimates. Consider for example a post-editing scenario where professional translators use MT in an effort to speed-up the translation process. A QE model can be used to determine if an MT segment is good enough for post-editing or should be discarded and translated from scratch. But since QE models are not perfect they can end up allowing bad MT segments to go through for post-editing because of a prediction error. In such a scenario, having an uncertainty estimate for the prediction can provide additional information for the filtering decision. For instance, in order to ensure good user experience for the human translator and maximise translation productivity, an MT segment could be forwarded for post-editing only if a QE model assigns a high quality score with \emph{low uncertainty} (high confidence). Such a decision process is not possible with point estimates only. Good uncertainty estimates can be acquired from well-calibrated probability distributions over the quality predictions. In QE, arguably the most successful probabilistic models are Gaussian Processes (GPs) since they considered the state-of-the-art for regression \cite{Cohn2013,Hensman2013}, especially in the low-data regimes typical for this task. 
We focus our analysis in this paper on GPs since other common models used in QE can only provide point estimates as predictions. Another reason we focus on probabilistic models is that this lets us employ the ideas proposed by \newcite{Quinonero-Candela2006}, which defined new evaluation metrics that take into account probability distributions over predictions. The remainder of this paper is organised as follows: \begin{itemize} \item In Section \ref{sec:probqe} we further motivate the use of GPs for uncertainty modelling in QE and revisit their underlying theory. We also propose some model extensions previously developed in the GP literature and argue they are more appropriate for the task. \item We intrinsically evaluate our proposed models in terms of their posterior distributions on training and test data in Section \ref{sec:intrinsic}. Specifically, we show that differences in uncertainty modelling are not captured by the usual point estimate metrics commonly used for this task. \item As an example of an application for predictive distributions, in Section \ref{sec:asymmetric} we show how they can be useful in scenarios with asymmetric risk and how the proposed models can provide better performance in this case. \end{itemize} We discuss related work in Section \ref{sec:relwork} and give conclusions and avenues for future work in Section \ref{sec:conc}. While we focus on QE as an application, the methods we explore in this paper can be applied to any text regression task where modelling predictive uncertainty is useful, either in human decision making or by propagating this information for further computational processing. \section{Probabilistic Models for QE} \label{sec:probqe} Traditionally, QE is treated as a regression task with hand-crafted features. Kernel methods are arguably the state-of-the-art in QE since they can easily model non-linearities in the data. Furthermore, the scalability issues that arise in kernel methods do not tend to affect QE in practice since the datasets are usually small, on the order of thousands of instances. The most popular method for QE is Support Vector Regression (SVR), as shown in the multiple instances of the WMT QE shared tasks \cite{Callison-Burch2012,Bojar2013,Bojar2014,Bojar2015}. While SVR models can generate competitive predictions for this task, they lack a probabilistic interpretation, which makes it hard to extract uncertainty estimates from them. Bootstrapping approaches like bagging \cite{Abe1998} can be applied, but this requires setting and optimising hyperparameters like bag size and number of bootstraps, and there is no guarantee the resulting estimates come from a well-calibrated probability distribution. Gaussian Processes (GPs) \cite{Rasmussen2006} are an alternative kernel-based framework that gives competitive results for point estimates \cite{Cohn2013,Shah2013,Beck2014b}. Unlike SVR, they explicitly model uncertainty in the data and in the predictions. This makes GPs very applicable when well-calibrated uncertainty estimates are required. Furthermore, they are very flexible in terms of modelling decisions, allowing the use of a variety of kernels and likelihoods while providing efficient ways of doing model selection. Therefore, in this work we focus on GPs for probabilistic modelling of QE. In what follows we briefly describe the GP framework for regression. \subsection{Gaussian Process Regression} \label{sec:gpr} Here we follow closely the definition of GPs given by \newcite{Rasmussen2006}. 
Let $\mathcal{X} = \{(\x_1, y_1),(\x_2, y_2), \dots, (\x_n, y_n) \}$ be our data, where each $\x \in \mathbb{R}^D$ is a $D$-dimensional input and $y$ is its corresponding response variable. A GP is defined as a stochastic model over the latent function $f$ that generates the data $\mathcal{X}$: \begin{equation*} \label{eq:gp} f(\mathbf{x}) \sim \mathcal{GP} (m(\mathbf{x}), k(\mathbf{x},\mathbf{x'})), \end{equation*} where $m(\x)$ is the \emph{mean} function, which is usually the $0$ constant, and $k(\x,\x')$ is the kernel or \emph{covariance} function, which describes the covariance between values of $f$ at the different locations of $\x$ and $\x'$. The prior is combined with a likelihood via Bayes' rule to obtain a posterior over the latent function: \begin{equation*} \label{eq:fposterior} p(f|\mathcal{X}) = \frac{p(\mathbf{y}|\mathbf{X},f) p(f)}{p(\mathbf{y}|\mathbf{X})} , \end{equation*} where $\mathbf{X}$ and $\mathbf{y}$ are the training inputs and response variables, respectively. For regression, we assume that each $y_i = f(\mathbf{x_i}) + \eta$, where $\eta \sim \mathcal{N}(0,\sigma_n^2)$ is added white noise. Having a Gaussian likelihood results in a closed form solution for the posterior. Training a GP involves the optimisation of model hyperparameters, which is done by maximising the marginal likelihood $p(\mathbf{y}|\mathbf{X})$ via gradient ascent. Predictive posteriors for unseen $\x_*$ are obtained by integrating over the latent function evaluations at $\x_*$. GPs can be extended in many different ways by applying different kernels, likelihoods and modifying the posterior, for instance. In the next Sections, we explain in detail some sensible modelling choices in applying GPs for QE. \subsection{Mat\`{e}rn Kernels} \label{sec:matern-kernels} Choosing an appropriate kernel is a crucial step in defining a GP model (and any other kernel method). A common choice is to employ the exponentiated quadratic (EQ) kernel\footnote{Also known as Radial Basis Function (RBF) kernel.}: \begin{align*} \label{eq:2} k_{\text{EQ}}(\x, \x') &= \sigma_v \; \mathrm{exp}(-\frac{r^2}{2}) \, , \\ \mbox{where~} r^2 &= \sum\limits_{i=1}^D\frac{(x_i - x_i')^2}{l_i^2} \end{align*} is the scaled distance between the two inputs, $\sigma_v$ is a scale hyperparameter and $\mathbf{l}$ is a vector of lengthscales. Most kernel methods tie all lengthscale to a single value, resulting in an isotropic kernel. However, since in GPs hyperparameter optimisation can be done efficiently, it is common to employ one lengthscale per feature, a method called Automatic Relevance Determination (ARD). The EQ kernel allows the modelling of non-linearities between the inputs and the response variables but it makes a strong assumption: it generates smooth, infinitely differentiable functions. This assumption can be too strong for noisy data. An alternative is the Mat\`{e}rn class of kernels, which relax the smoothness assumption by modelling functions which are $\nu$-times differentiable only. Common values for $\nu$ are the half-integers $3/2$ and $5/2$, resulting in the following Mat\`{e}rn kernels: \begin{align*} k_{\text{M32}} &= \sigma_v (1 + \sqrt{3r^2}) \; \mathrm{exp}(-\sqrt{3r^2}) \\ k_{\text{M52}} &= \sigma_v \left(1 + \sqrt{5r^2} + \frac{5r^2}{3}\right) \mathrm{exp}(-\sqrt{5r^2}) \, , \end{align*} where we have omitted the dependence of $k_{\text{M32}}$ and $k_{\text{M52}}$ on the inputs $(\x, \x')$ for brevity. 
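To make these equations concrete, the following is a minimal numpy/scipy sketch (ours, not tied to any particular GP toolkit) of the EQ and Mat\`{e}rn kernels with per-feature ARD lengthscales, together with the closed-form GP predictive equations under a Gaussian likelihood; hyperparameter optimisation via the marginal likelihood is omitted.
\begin{verbatim}
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def sq_dist(X, X2, lengthscales):
    # r^2 = sum_i (x_i - x'_i)^2 / l_i^2, one lengthscale per feature (ARD)
    A, B = X / lengthscales, X2 / lengthscales
    return np.maximum(np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :]
                      - 2.0 * A @ B.T, 0.0)

def k_eq(X, X2, var, ls):
    return var * np.exp(-0.5 * sq_dist(X, X2, ls))

def k_matern32(X, X2, var, ls):
    r = np.sqrt(sq_dist(X, X2, ls))
    return var * (1.0 + np.sqrt(3.0) * r) * np.exp(-np.sqrt(3.0) * r)

def k_matern52(X, X2, var, ls):
    r2 = sq_dist(X, X2, ls)
    r = np.sqrt(r2)
    return var * (1.0 + np.sqrt(5.0) * r + 5.0 * r2 / 3.0) * np.exp(-np.sqrt(5.0) * r)

def gp_predict(X, y, Xstar, kernel, var, ls, noise_var):
    # closed-form posterior predictive mean/variance under a Gaussian likelihood
    K = kernel(X, X, var, ls) + noise_var * np.eye(len(X))
    Ks = kernel(X, Xstar, var, ls)
    L = cho_factor(K, lower=True)
    mean = Ks.T @ cho_solve(L, y)
    reduction = np.sum(Ks * cho_solve(L, Ks), axis=0)
    variance = np.diag(kernel(Xstar, Xstar, var, ls)) - reduction + noise_var
    return mean, variance
\end{verbatim}
For a standard GP the resulting predictive distribution at each test point is Gaussian, so its mean and median coincide; this is no longer true for the warped models introduced next.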
Higher values for $\nu$ are usually not very useful since the resulting behaviour is hard to distinguish from the limit case $\nu \rightarrow \infty$, which retrieves the EQ kernel \cite[Sec. 4.2]{Rasmussen2006}. The relaxed smoothness assumptions of the Mat\`{e}rn kernels make them promising candidates for QE datasets, which tend to be very noisy. We expect that employing them will result in better models for this application. \subsection{Warped Gaussian Processes} \label{sec:wgp} The Gaussian likelihood of standard GPs has support over the entire real number line. However, common quality scores are strictly positive values, which means that the Gaussian assumption is not ideal. A usual way to deal with this problem is to model the logarithm of the response variables, since this transformation maps strictly positive values to the real line. However, there is no reason to believe this is the best possible mapping: a better idea would be to learn it from the data. Warped GPs \cite{Snelson2004} are an extension of GPs that allows arbitrary mappings to be learned. This is done by placing a monotonic \emph{warping function} over the observations and modelling the warped values inside a standard GP. The predictive distribution is obtained by applying a change of variables: \begin{equation*} p(y_*|\x_*) = \frac{f'(y_*)}{\sqrt{2\pi\sigma_*^2}} \; \mathrm{exp} \left(-\frac{\left(f(y_*) - \mu_*\right)^2}{2\sigma_*^2}\right), \end{equation*} where $\mu_*$ and $\sigma_*$ are the mean and standard deviation of the latent (warped) response variable and $f$ and $f'$ are the warping function and its derivative. Point predictions from this model depend on the loss function to be minimised. For absolute error, the median is the optimal value, while for squared error it is the mean of the posterior. In standard GPs, since the posterior is Gaussian, the median and mean coincide, but this is in general not the case for a Warped GP posterior. The median can be easily obtained by applying the inverse warping function to the latent median: \begin{equation*} y^{\mathrm{med}}_* = f^{-1}(\mu_*). \end{equation*} While the inverse of the warping function is usually not available in closed form, we can use its gradient to obtain a numerical estimate. The mean is obtained by integrating $y_*$ over the latent density: \begin{equation*} \mathbb{E}[y_*] = \int f^{-1}(z) \mathcal{N}_z(\mu_*, \sigma^2_*) dz, \end{equation*} where $z$ is the latent variable. This can be easily approximated using Gauss-Hermite quadrature since it is a one-dimensional integral over a Gaussian density. The warping function should be flexible enough to allow the learning of complex mappings, but it needs to be monotonic. \newcite{Snelson2004} propose a parametric form composed of a sum of $\mathrm{tanh}$ functions, similar to a neural network layer: \begin{equation*} \label{eq:warp} f(y) = y + \sum\limits_{i=1}^{I} a_i \; \mathrm{tanh} (b_i (y + c_i)) , \end{equation*} where $I$ is the number of $\mathrm{tanh}$ terms and $\mathbf{a}, \mathbf{b}$ and $\mathbf{c}$ are treated as model hyperparameters and optimised jointly with the kernel and likelihood hyperparameters. Large values for $I$ allow more complex mappings to be learned but raise the risk of overfitting. Warped GPs provide an easy and elegant way to model response variables with non-Gaussian behaviour within the GP framework. In our experiments we explore models employing warping functions with up to $3$ terms, which is the value recommended by \newcite{Snelson2004}. 
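As a rough illustration of these quantities (again ours, not the authors' implementation), the sketch below implements the $\mathrm{tanh}$ warping function, the warped predictive log-density, the median via numerical inversion of $f$, and the Gauss-Hermite approximation of the mean; the bracketing interval used for the inversion is an arbitrary assumption.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def warp(y, a, b, c):
    # f(y) = y + sum_i a_i * tanh(b_i * (y + c_i)); monotonic for a_i, b_i > 0
    return y + np.sum(a * np.tanh(b * (y + c)))

def warp_grad(y, a, b, c):
    return 1.0 + np.sum(a * b / np.cosh(b * (y + c)) ** 2)

def warp_inv(z, a, b, c, lo=-1e3, hi=1e3):
    # numerical inverse of the (monotonic) warping function
    return brentq(lambda y: warp(y, a, b, c) - z, lo, hi)

def warped_logpdf(y, mu, sigma, a, b, c):
    # change of variables: log f'(y) + log N(f(y); mu, sigma^2)
    return np.log(warp_grad(y, a, b, c)) + norm.logpdf(warp(y, a, b, c), mu, sigma)

def warped_median(mu, a, b, c):
    # median = f^{-1}(mu)
    return warp_inv(mu, a, b, c)

def warped_mean(mu, sigma, a, b, c, n_points=30):
    # E[y] = int f^{-1}(z) N(z; mu, sigma^2) dz via Gauss-Hermite quadrature
    nodes, weights = np.polynomial.hermite.hermgauss(n_points)
    z = np.sqrt(2.0) * sigma * nodes + mu
    inv = np.array([warp_inv(zi, a, b, c) for zi in z])
    return np.sum(weights * inv) / np.sqrt(np.pi)
\end{verbatim}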
We also report results using the $f(y) = \log(y)$ warping function. \section{Intrinsic Uncertainty Evaluation} \label{sec:intrinsic} Given a set of different probabilistic QE models, we are interested in evaluating the performance of these models while also taking their uncertainty into account, particularly to distinguish among models with seemingly the same or similar performance. A straightforward way to measure the performance of a probabilistic model is to inspect its negative ($\mathrm{log}$) marginal likelihood. This measure, however, does not capture whether a model overfits the training data. We can obtain a better measure of generalisation by calculating the likelihood on \emph{test data} instead. This was proposed in previous work and is called the Negative Log Predictive Density (NLPD) \cite{Quinonero-Candela2006}: \begin{equation*} \label{eq:nlpd} \text{NLPD}(\mathbf{\hat{y}}, \mathbf{y}) = -\frac{1}{n} \sum\limits_{i=1}^n \mathrm{log}\; p(\hat{y}_i = y_i|\x_i), \end{equation*} where $\mathbf{\hat{y}}$ is a set of test predictions, $\mathbf{y}$ is the set of true labels and $n$ is the test set size. This metric has since been largely adopted by the ML community when evaluating GPs and other probabilistic models for regression (see Section \ref{sec:relwork} for some examples). As with other error metrics, lower values are better. Intuitively, if two models produce equally incorrect predictions but have different uncertainty estimates, NLPD will penalise the overconfident model more than the underconfident one. On the other hand, if predictions are close to the true value, NLPD will penalise the underconfident model instead. In our first set of experiments we evaluate the models proposed in Section \ref{sec:probqe} according to their negative $\mathrm{log}$ likelihood (NLL) and the NLPD on test data. We also report two point estimate metrics on test data: Mean Absolute Error (MAE), the most commonly used evaluation metric in QE, and Pearson's $r$, which was recently proposed by \newcite{Graham2015} as a more robust alternative. \subsection{Experimental Settings} \label{sec:exp} Our experiments comprise datasets containing three different language pairs, where the label to predict is post-editing time: \begin{description} \item[English-Spanish (en-es)] This dataset was used in the WMT14 QE shared task \cite{Bojar2014}. It contains $858$ sentences translated by one MT system and post-edited by a professional translator. \item[French-English (fr-en)] Described in \cite{Specia2011}, this dataset contains $2,525$ sentences translated by one MT system and post-edited by a professional translator. \item[English-German (en-de)] This dataset is part of the WMT16 QE shared task\footnote{\url{www.statmt.org/wmt16}}. It was translated by one MT system; for consistency, we use a subset of $2,828$ instances post-edited by a single professional translator. \end{description} As part of the process of creating these datasets, post-editing time was logged on a per-sentence basis for all datasets. Following common practice, we normalise the post-editing time by the length of the machine translated sentence to obtain post-editing {\em rates} and use these as our response variables. Technically, our approach could be used with any other numeric quality labels from the literature, including the commonly used Human Translation Error Rate (HTER) \cite{Snover2006}. 
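In code, the response variable and the NLPD just defined amount to the small sketch below (helper names are ours); the per-point log-densities come from the Gaussian predictive of a standard GP or from the warped density in the earlier sketch.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def postediting_rate(time_seconds, mt_length):
    # response variable: post-editing time normalised by MT sentence length
    return np.asarray(time_seconds, float) / np.asarray(mt_length, float)

def nlpd_gaussian(y_true, mu, sigma):
    # NLPD for a standard GP, whose predictive density at each test point is Gaussian
    return -np.mean(norm.logpdf(y_true, loc=mu, scale=sigma))

def nlpd(logpdf_at_truth):
    # generic NLPD: average negative predictive log-density at the true labels
    return -np.mean(logpdf_at_truth)
\end{verbatim}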
Our decision to focus on post-editing time was based on the fact that time is a more complete measure of post-editing effort, capturing not only technical effort like HTER, but also cognitive effort \cite{Koponen2012}. Additionally, time is more directly applicable in real translation environments, where uncertainty estimates could be useful, as it relates directly to productivity measures. For model building, we use a standard set of $17$ features from the QuEst framework \cite{Specia2015}. These features are used in the strong baseline models provided by the WMT QE shared tasks. While the best performing systems in the shared tasks use larger feature sets, these are mostly resource-intensive and language-dependent, and therefore not equally applicable to all our language pairs. Moreover, our goal is to compare probabilistic QE models through the predictive uncertainty perspective, rather than to improve the state-of-the-art in terms of point predictions. We perform $10$-fold cross validation instead of using a single train/test split and report averaged metric scores. The model hyperparameters were optimised by maximising the likelihood on the training data. We perform a two-pass procedure similar to that in \cite{Cohn2013}: first we employ an isotropic kernel and optimise all hyperparameters using $10$ random restarts; then we move to an ARD equivalent kernel and perform a final optimisation step to fine-tune feature {\em lengthscales}. Point predictions were fixed as the median of the distribution. \subsection{Results and Discussion} \label{sec:iresults} Table \ref{tab:intrinsic} shows the results obtained for all datasets. The first two columns show an interesting finding in terms of model learning: using a warping function drastically decreases both NLL and NLPD. The main reason behind this is that standard GPs distribute probability mass over negative values, while the warped models do not. For the {\bf fr-en} and {\bf en-de} datasets, NLL and NLPD follow similar trends. This means that we can trust NLL as a measure of uncertainty for these datasets. However, this is not observed in the {\bf en-es} dataset. Since this dataset is considerably smaller than the others, we believe this is evidence of overfitting, thus showing that NLL is not a reliable metric for small datasets. In terms of different warping functions, using the parametric $\mathrm{tanh}$ function with $3$ terms performs better than the $\mathrm{log}$ for the {\bf fr-en} and {\bf en-de} datasets. This is not the case for the {\bf en-es} dataset, where the $\mathrm{log}$ function tends to perform better. We believe that this is again due to the smaller dataset size. The gains from using a Mat\`{e}rn kernel over EQ are less conclusive. While they tend to perform better for {\bf fr-en}, there does not seem to be any difference in the other datasets. Different kernels can be more appropriate depending on the language pair, but more experiments are needed to verify this, which we leave for future work. The differences in uncertainty modelling are by and large not captured by the point estimate metrics. While MAE does show gains from standard to Warped GPs, it does not reflect the difference found between warping functions for {\bf fr-en}. Pearson's $r$ is also quite inconclusive in this sense, except for some observed gains for {\bf en-es}. This shows that NLPD should indeed be preferred as an evaluation metric when proper prediction uncertainty estimates are required by a QE model. 
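The evaluation protocol described above can be sketched as follows, where \texttt{fit\_model} and \texttt{predict} are hypothetical placeholders for the GP implementation and its two-pass hyperparameter optimisation, which are not spelled out in code here.
\begin{verbatim}
import numpy as np
from scipy.stats import pearsonr
from sklearn.model_selection import KFold

def cross_validate(X, y, fit_model, predict, n_splits=10, seed=0):
    # fit_model(X_tr, y_tr) -> trained model (incl. hyperparameter optimisation)
    # predict(model, X_te, y_te) -> (median point predictions,
    #                                predictive log-density at the true labels)
    maes, rs, nlpds = [], [], []
    for tr, te in KFold(n_splits, shuffle=True, random_state=seed).split(X):
        model = fit_model(X[tr], y[tr])
        median, logpdf_at_truth = predict(model, X[te], y[te])
        maes.append(np.mean(np.abs(median - y[te])))       # MAE
        rs.append(pearsonr(median, y[te])[0])               # Pearson's r
        nlpds.append(-np.mean(logpdf_at_truth))              # NLPD
    return np.mean(maes), np.mean(rs), np.mean(nlpds)
\end{verbatim}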
\subsection{Qualitative Analysis} \label{sec:analysis} To obtain more insight into the performance in uncertainty modelling, we inspected the predictive distributions for two sentence pairs in the {\bf fr-en} dataset. We show the distributions for a standard GP and a Warped GP with a $\mathrm{tanh3}$ function in Figure \ref{fig:example1}. In the first case, where both models give accurate predictions, we see that the Warped GP distribution is peaked around the predicted value, as it should be. It also gives more probability mass to positive values, showing that the model is able to learn that the label is non-negative. In the second case we analyse the distributions when both models make inaccurate predictions. We can see that the Warped GP is able to give a broader distribution in this case, while still keeping most of the mass outside the negative range. We also report above each plot in Figure \ref{fig:example1} the NLPD for each prediction. Comparing only the Warped GP predictions, we can see that their values reflect the fact that we prefer sharp distributions when predictions are accurate and broader ones when predictions are not accurate. However, it is interesting to see that the metric also penalises predictions when their distributions are too broad, as is the case with the standard GPs, since they cannot discriminate between positive and negative values as well as the Warped GPs. Inspecting the resulting warping functions can bring additional modelling insights. In Figure \ref{fig:warps} we show instances of $\mathrm{tanh3}$ warping functions learned from the three datasets and compare them with the $\mathrm{log}$ warping function. We can see that the parametric $\mathrm{tanh3}$ model is able to learn non-trivial mappings. For instance, in the {\bf en-es} case the learned function is roughly logarithmic in the low scales but switches to a linear mapping after $y = 4$. Notice also the difference in the scales, which means that the optimal model uses a latent Gaussian with a larger variance. \section{Asymmetric Risk Scenarios} \label{sec:asymmetric} Evaluation metrics for QE, including those used in the WMT QE shared tasks, are assumed to be symmetric, i.e., they penalise over- and underestimates equally. This assumption is, however, too simplistic for many possible applications of QE. For example: \begin{itemize} \item In a {\em post-editing} scenario, a project manager may have translators with limited expertise in post-editing. In this case, automatic translations should not be provided to the translator unless they are highly likely to have very good quality. This can be enforced by increasing the penalisation weight for underestimates. We call this the \emph{pessimistic} scenario. \item In a {\em gisting} scenario, a company wants to automatically translate their product reviews so that they can be published in a foreign language without human intervention. The company would prefer to publish only the reviews translated well enough, but having more reviews published will increase the chances of selling products. In this case, having better recall is more important and thus only reviews with very poor translation quality should be discarded. We can accomplish this by heavier penalisation on overestimates, a scenario we call \emph{optimistic}. \end{itemize} In this Section we show how these scenarios can be addressed by well-calibrated predictive distributions and by employing {\em asymmetric} loss functions. 
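Before introducing the losses, the toy decision rules below make the two scenarios concrete by gating segments on a quantile of the predicted post-editing rate; a Gaussian predictive is assumed for simplicity (for a warped model the quantile would be mapped through the inverse warping), and the thresholds, quantiles and function names are illustrative assumptions rather than part of the paper.
\begin{verbatim}
from scipy.stats import norm

def send_for_postediting(mu, sigma, max_rate, q=0.9):
    # pessimistic gate: even a conservative (high) quantile of the predicted
    # post-editing rate must stay below the acceptable threshold
    return norm.ppf(q, loc=mu, scale=sigma) < max_rate

def publish_review(mu, sigma, reject_rate, q=0.1):
    # optimistic gate: discard only if even an optimistic (low) quantile of the
    # predicted rate exceeds the rejection threshold
    return norm.ppf(q, loc=mu, scale=sigma) < reject_rate
\end{verbatim}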
An example of such a function is the asymmetric linear (henceforth, AL) loss, which is a generalisation of the absolute error: \begin{equation*} \label{eq:asymmae} L(\hat{y}, y) = \begin{cases} \hat{y} - y &\text{if } \hat{y} > y\\ w(y - \hat{y}) &\text{if } \hat{y} \le y , \end{cases} \end{equation*} where $w > 0$ is the weight given to underestimates. If $w > 1$ we have the pessimistic scenario, and the optimistic one can be obtained using $0 < w < 1$. For $w = 1$ we retrieve the original absolute error loss. Another asymmetric loss is the linear exponential or {\em linex} loss \cite{Zellner1986}: \begin{equation*} \label{eq:linex} L(\hat{y}, y) = \mathrm{exp}[w(\hat{y} - y)] - w(\hat{y} - y) - 1 , \end{equation*} where $w \in \mathbb{R}$ is the weight. This loss attempts to keep a linear penalty in lesser risk regions, while imposing an exponential penalty in the higher risk ones. Negative values for $w$ will result in a pessimistic setting, while positive values will result in the optimistic one. As $w$ approaches $0$, the loss approaches a (scaled) squared error loss. Usual values for $w$ tend to be close to $1$ or $-1$ since for higher weights the loss can quickly reach very large scores. Both losses are shown in Figure \ref{fig:losses}. \subsection{Bayes Risk for Asymmetric Losses} \label{sec:risk} The losses introduced above can be incorporated directly into learning algorithms to obtain models for a given scenario. In the context of the AL loss this is called {\em quantile regression} \cite{Koenker2005}, since optimal estimators for this loss are posterior quantiles. However, in a production environment the loss can change over time. For instance, in the gisting scenario discussed above the parameter $w$ could be changed based on feedback from indicators of sales revenue or user experience. If the loss is attached to the underlying learning algorithm, a change in $w$ would require full model retraining, which can be costly. Instead of retraining the model every time there is a different loss, we can train a single probabilistic model and derive Bayes risk estimators for the loss we are interested in. This allows estimates to be obtained without having to retrain models when the loss changes. Additionally, this allows different losses/scenarios to be employed at the same time using the same model. Minimum Bayes risk estimators for asymmetric losses were proposed by \newcite{Christoffersen1997} and we follow their derivations in our experiments. The best estimator for the AL loss is equivalent to the $\frac{w}{w + 1}$ quantile of the predictive distribution. Note that we retrieve the median when $w = 1$, as expected. The best estimator for the linex loss can be easily derived and results in: \begin{equation*} \hat{y} = \mu_y - \frac{w \sigma^2_y}{2} , \end{equation*} where $\mu_y$ and $\sigma^2_y$ are the mean and the variance of the predictive posterior. \subsection{Experimental Settings} \label{sec:exp2} Here we assess the models and datasets used in Section \ref{sec:exp} in terms of their performance in the asymmetric setting. Following the explanation in the previous Section, we do not perform any retraining: we collect the predictions obtained using the 10-fold cross-validation protocol and apply different Bayes estimators corresponding to the asymmetric losses. Evaluation is performed using the same loss employed in the estimator (for instance, when using the linex estimator with $w = 0.75$ we report the results using the linex loss with the same $w$) and averaged over the 10 folds. 
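A minimal sketch of the two losses and their Bayes risk estimators as stated above, assuming a Gaussian predictive distribution; for a Warped GP the AL estimator is obtained by mapping the latent Gaussian quantile through the inverse warping function (monotonic maps preserve quantiles), while the linex estimator uses the mean and variance of the warped predictive.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def al_loss(y_hat, y, w):
    # asymmetric linear loss: underestimates weighted by w, overestimates by 1
    d = y_hat - y
    return np.where(d > 0, d, -w * d)

def linex_loss(y_hat, y, w):
    # linear-exponential loss
    d = y_hat - y
    return np.exp(w * d) - w * d - 1.0

def al_estimator(mu, sigma, w):
    # Bayes risk estimator for AL: the w/(w+1) quantile of the predictive distribution
    return norm.ppf(w / (w + 1.0), loc=mu, scale=sigma)

def linex_estimator(mu, sigma2, w):
    # Bayes risk estimator for linex: mu - w * sigma^2 / 2
    return mu - w * sigma2 / 2.0
\end{verbatim}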
To simulate both pessimistic and optimistic scenarios, we use $w \in \{ 3, 1/3 \}$ for the AL loss and $w \in \{-0.75, 0.75\}$ for the linex loss. The only exception is the {\bf en-de} dataset, where we report results for $w \in \{-0.25, 0.75\}$ for linex\footnote{Using $w = -0.75$ in this case resulted in loss values on the order of $10^7$. In fact, as will be discussed in the next Section, the results for the linex loss in the pessimistic scenario were inconclusive. However, we report results using a higher $w$ in this case for completeness and to clarify the inconclusive trends we found.}. We also report results only for models using the Mat\`{e}rn52 kernel. While we did experiment with different kernels and weighting schemes\footnote{We also tried $w \in \{1/9, 1/7, 1/5, 5, 7, 9\}$ for the AL loss and $w \in \{-0.5, -0.25, 0.25, 0.5\}$ for the linex loss.}, our findings showed similar trends, so we omit them for the sake of clarity. \subsection{Results and Discussion} \label{sec:results2} Results are shown in Table \ref{tab:asymm}. In the optimistic scenario the $\mathrm{tanh}$-based warped GP models give consistently better results than standard GPs. The $\mathrm{log}$-based models also give good results for AL, but for linex the results are mixed, except for en-es. This is probably again related to the larger sizes of the fr-en and en-de datasets, which allow the $\mathrm{tanh}$-based models to learn richer representations. The pessimistic scenario shows interesting trends. While the results for AL follow a similar pattern when compared to the optimistic setting, the results for linex are consistently worse than the standard GP baseline. A key difference between AL and linex is that the latter depends on the variance of the predictive distribution. Since the warped models tend to have less variance, we believe the estimator is not being ``pushed'' towards the positive tails as much as in the standard GPs. This makes the resulting predictions insufficiently conservative (i.e., the post-editing time predictions are lower), which is heavily (exponentially) penalised by the loss. This might be a case where a standard GP is preferred, but it can also indicate that this loss is biased towards models with high variance, even when that variance comes from assigning probability mass to nonsensical values (such as negative time). We leave further investigation of this phenomenon for future work. \section{Related Work} \label{sec:relwork} Quality Estimation is generally framed as a text regression task, similarly to many other applications such as movie revenue forecasting based on reviews \cite{Joshi2010,Bitvai2015} and detection of emotion strength in news headlines \cite{Strapparava2008,Beck2014a} and song lyrics \cite{Mihalcea2012}. In general, these applications are evaluated in terms of their point estimate predictions, arguably because not all of them employ probabilistic models. The NLPD is a common and established metric used in the GP literature to evaluate new approaches. Examples include the original work on Warped GPs \cite{Snelson2004}, but also others like \newcite{Lazaro-Gredilla2012} and \newcite{Chalupka2013}. It has also been used to evaluate recent work on uncertainty propagation methods for neural networks \cite{Hernandez-Lobato2015}. Asymmetric loss functions are common in the econometrics literature and were studied by \newcite{Zellner1986} and \newcite{Koenker2005}, among others. 
Besides the AL and the linex, another well studied loss is the asymmetric quadratic, which in turn relates to the concept of {\em expectiles} \cite{Newey1987}. This loss generalises the commonly used squared error loss. In terms of applications, \newcite{Cain1995} gives an example in real estate assessment, where the consequences of under- and over-assessment are usually different depending on the specific scenario. An engineering example is given by \newcite{Zellner1986} in the context of dam construction, where an underestimate of peak water level is much more serious than an overestimate. Such real-world applications guided many developments in this field: we believe that translation and other language processing scenarios which rely on NLP technologies can heavily benefit from these advancements. \section{Conclusions} \label{sec:conc} This work explored new probabilistic models for machine translation QE that allow better uncertainty estimates. We proposed the use of NLPD, which can capture information on the whole predictive distribution, unlike usual point estimate-based metrics. By assessing models using NLPD we can make better informed decisions about which model to employ for different settings. Furthermore, we showed how information in the predictive distribution can be used in asymmetric loss scenarios and how the proposed models can be beneficial in these settings. Uncertainty estimates can be useful in many other settings beyond the ones explored in this work. Active Learning can benefit from variance information in their query methods and it has shown to be useful for QE \cite{Beck2013}. Exploratory analysis is another avenue for future work, where error bars can provide further insights about the task, as shown in recent work \cite{Nguyen2015}. This kind of analysis can be useful for tracking post-editor behaviour and assessing cost estimates for translation projects, for instance. Our main goal in this paper was to raise awareness about how different modelling aspects should be taken into account when building QE models. Decision making can be risky using simple point estimates and we believe that uncertainty information can be beneficial in such scenarios by providing more informed solutions. These ideas are not restricted to QE and we hope to see similar studies in other natural language applications in the future.% \section*{Acknowledgements} Daniel Beck was supported by funding from CNPq/Brazil (No. 237999/2012-9). Lucia Specia was supported by the QT21 project (H2020 No. 645452). Trevor Cohn is the recipient of an Australian Research Council Future Fellowship (project number FT130101105). The authors would like to thank James Hensman for his advice on Warped GPs and the three anonymous reviewers for their comments. \bibliographystyle{acl2016} \end{document}
Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations
1606.01305
Table 1: Validation and test results of different models on the three language modelling tasks. Results are reported for the best-performing settings. Performance on Char-PTB and Text8 is measured in bits-per-character (BPC); Word-PTB is measured in perplexity. For Char-PTB and Text8 all models are 1-layer unless otherwise noted; for Word-PTB all models are 2-layer. Results above the line are from our own implementation and experiments. Models below the line are: NR-dropout (non-recurrent dropout), V-Dropout (variational dropout), RBN (recurrent batchnorm), H-LSTM+LN (HyperLSTM + LayerNorm), 3-HM-LSTM+LN (3-layer Hierarchical Multiscale LSTM + LayerNorm).
[ "[BOLD] Model", "[BOLD] Char-PTB [BOLD] Valid", "[BOLD] Char-PTB [BOLD] Test", "[BOLD] Word-PTB [BOLD] Valid", "[BOLD] Word-PTB [BOLD] Test", "[BOLD] Text8 [BOLD] Valid", "[BOLD] Text8 [BOLD] Test" ]
[ [ "Unregularized LSTM", "1.466", "1.356", "120.7", "114.5", "1.396", "1.408" ], [ "Weight noise", "1.507", "1.344", "–", "–", "1.356", "1.367" ], [ "Norm stabilizer", "1.459", "1.352", "–", "–", "1.382", "1.398" ], [ "Stochastic depth", "1.432", "1.343", "–", "–", "1.337", "1.343" ], [ "Recurrent dropout", "1.396", "1.286", "091.6", "087.0", "1.386", "1.401" ], [ "Zoneout", "1.362", "1.252", "081.4", "077.4", "1.331", "1.336" ], [ "NR-dropout (Zaremba et al., 2014 )", "–", "–", "082.2", "078.4", "–", "–" ], [ "V-dropout (Gal, 2015 )", "–", "–", "–", "0 [BOLD] 73.4", "–", "–" ], [ "RBN (Cooijmans et al., 2016 )", "–", "1.32", "–", "–", "–", "1.36" ], [ "H-LSTM + LN (Ha et al., 2016 )", "1.281", "1.250", "–", "–", "–", "–" ], [ "3-HM-LSTM + LN (Chung et al., 2016 )", "–", "[BOLD] 1.24", "–", "–", "–", "[BOLD] 1.29" ] ]
We focus our investigation on LSTM units, where the dynamics of zoning out states, cells, or both provide interesting insight into zoneout's behaviour. Zoneout on cells with probability 0.5 or zoneout on states with probability 0.05 both outperform the best-performing recurrent dropout (p=0.25). We also evaluated stochastic depth in a recurrent setting (analogous to zoning out an entire timestep), as well as a shared-mask variant of zoneout as used in the pMNIST experiments, where the same mask is used for both cells and hiddens. Neither stochastic depth nor shared-mask zoneout performed as well as separate masks sampled per unit. With no dropout on the non-recurrent connections (i.e. zoneout as the only regularization), we do not achieve competitive results. We did not perform any search over models, and conjecture that the large model size requires regularization of the feed-forward connections. However, adding zoneout (zc=0.25 and zh=0.025) on the recurrent connections to the model optimized for dropout on the non-recurrent connections improves test perplexity from 78.4 to 77.4. We use a single-layer network of 2000 units, initialized orthogonally, with batch size 128, learning rate 0.001, and sequence length 180.
\documentclass{article} % For LaTeX2e \newcommand{\elephant}{Recurrent dropout without memory loss} \newcommand{\pz}{{\phantom{0}}} \usepackage[utf8]{inputenc} % allow utf-8 input \usepackage[T1]{fontenc} % use 8-bit T1 fonts % professional-quality tables % blackboard math symbols % compact symbols for 1/2, etc. % microtypography \usepackage[export]{adjustbox} \graphicspath{{./},{./figures/}} \let\oldcitep\citep \let\oldcitet\citet \title{Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations} \author{David Krueger$^{1,\star}$, Tegan Maharaj$^{2,\star}$, J\'anos Kram\'ar$^{2}$\\ {\bf Mohammad Pezeshki$^{1}$} {\bf Nicolas Ballas$^1$}, {\bf Nan Rosemary Ke$^2$}, {\bf Anirudh Goyal$^1$}\\ {\bf Yoshua Bengio$^{1\dagger}$}, {\bf Aaron Courville$^{1\ddagger}$}, {\bf Christopher Pal$^2$}\\ $^1$ MILA, Universit\'e de Montr\'eal, \texttt{firstname.lastname@umontreal.ca}.\\ $^2$ \'Ecole Polytechnique de Montr\'eal, \texttt{firstname.lastname@polymtl.ca}. \\ $^\star$ Equal contributions. $^\dagger$CIFAR Senior Fellow. $^\ddagger$CIFAR Fellow.\\ } \begin{document} \maketitle \begin{abstract} We propose zoneout, a novel method for regularizing RNNs. At each timestep, zoneout stochastically forces some hidden units to maintain their previous values. Like dropout, zoneout uses random noise to train a pseudo-ensemble, improving generalization. But by preserving instead of dropping hidden units, gradient information and state information are more readily propagated through time, as in feedforward stochastic depth networks. We perform an empirical investigation of various RNN regularizers, and find that zoneout gives significant performance improvements across tasks. We achieve competitive results with relatively simple models in character- and word-level language modelling on the Penn Treebank and Text8 datasets, and combining with recurrent batch normalization \citep{rnn_batchnorm2} yields state-of-the-art results on permuted sequential MNIST. \end{abstract} \section{Introduction} Regularizing neural nets can significantly improve performance, as indicated by the widespread %TM use of early stopping, and success of regularization methods such as dropout and its recurrent variants \citep{hinton2012improving,srivastava2014dropout,zaremba2014recurrent,yarvin}. In this paper, we address the issue of regularization in recurrent neural networks (RNNs) with a novel method called \textbf{zoneout}. RNNs sequentially construct fixed-length representations of arbitrary-length sequences by folding new observations into their hidden state using an input-dependent transition operator. The repeated application of the same transition operator at the different time steps of the sequence, however, can make the dynamics of an RNN sensitive to minor perturbations in the hidden state; the transition dynamics can magnify components of these perturbations exponentially. Zoneout aims to improve RNNs' robustness to perturbations in the hidden state in order to regularize transition dynamics. Like dropout, zoneout injects noise during training. But instead of setting some units' activations to 0 as in dropout, zoneout randomly replaces some units' activations with their activations from the previous timestep. As in dropout, we use the expectation of the random noise at test time. This results in a simple regularization approach which can be applied through time for any RNN architecture, and can be conceptually extended to any model whose state varies over time. 
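As a minimal sketch of this idea for a generic vector of hidden activations (names and shapes are ours), zoneout and dropout differ only in what happens to the selected units during training and in the corresponding test-time expectation:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def zoneout(h_prev, h_new, z, training):
    # z = probability of preserving (zoning out) a unit's previous activation
    if training:
        mask = rng.binomial(1, z, size=h_new.shape)   # 1 -> keep previous value
        return mask * h_prev + (1 - mask) * h_new
    return z * h_prev + (1 - z) * h_new               # expectation at test time

def dropout_on_state(h_new, p, training):
    # ordinary dropout zero-masks units instead of preserving them
    if training:
        mask = rng.binomial(1, 1 - p, size=h_new.shape)
        return mask * h_new
    return (1 - p) * h_new                            # expectation at test time
\end{verbatim}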
Compared with dropout, zoneout is appealing because it preserves information flow forwards and backwards through the network. This helps combat the vanishing gradient problem \citep{hochreiter1991untersuchungen,vanishing_gradient}, as we observe experimentally. We also empirically evaluate zoneout on classification using the permuted sequential MNIST dataset, and on language modelling using the Penn Treebank and Text8 datasets, demonstrating competitive or state of the art performance across tasks. In particular, we show that zoneout performs competitively with other proposed regularization methods for RNNs, including recently-proposed dropout variants. Code for replicating all experiments can be found at: \texttt{http://github.com/teganmaharaj/zoneout} \section{Related work} \subsection{Relationship to dropout} Zoneout can be seen as a selective application of dropout to some of the nodes in a modified computational graph, as shown in Figure~\ref{fig:zoneout_as_dropout}. In zoneout, instead of dropping out (being set to 0), units \emph{zone out} and are set to their previous value ($h_t = h_{t-1}$). Zoneout, like dropout, can be viewed as a way to train a pseudo-ensemble \citep{pseudo_ensembles}, injecting noise using a stochastic ``identity-mask'' rather than a zero-mask. %TODO: MASK/MAP? We conjecture that identity-masking is more appropriate for RNNs, since it makes it easier for the network to preserve information from previous timesteps going forward, and facilitates, rather than hinders, the flow of gradient information going backward, as we demonstrate experimentally. \subsection{Dropout in RNNs} Initially successful applications of dropout in RNNs \citep{pham13, zaremba2014recurrent} only applied %TM apply to applied dropout to feed-forward connections (``up the stack''), and not recurrent connections (``forward through time''), but several recent works \citep{elephant,rnnDrop,yarvin} propose methods that are not limited in this way. \citet{rnn_fast_dropout} successfully apply fast dropout \citep{fast_dropout}, a deterministic approximation of dropout, to RNNs. \citet{elephant} apply \textbf{recurrent dropout} to the {\it updates} to LSTM memory cells (or GRU states), i.e.\ they drop out the input/update gate in LSTM/GRU. Like zoneout, their approach prevents the loss of long-term memories built up in the states/cells of GRUs/LSTMS, but zoneout does this by preserving units' activations \emph{exactly}. This difference is most salient when zoning out the hidden states (not the memory cells) of an LSTM, for which there is no analogue in recurrent dropout. Whereas saturated output gates or output nonlinearities would cause recurrent dropout to suffer from vanishing gradients \citep{vanishing_gradient}, zoned-out units still propagate gradients effectively in this situation. Furthermore, while the recurrent dropout method is specific to LSTMs and GRUs, zoneout generalizes to any model that sequentially builds distributed representations of its input, including vanilla RNNs. Also motivated by preventing memory loss, \citet{rnnDrop} propose \textbf{rnnDrop}. This technique amounts to using the same dropout mask at every timestep, which the authors show results in improved performance on speech recognition in their experiments. \citet{elephant} show, however, that past states' influence vanishes exponentially as a function of dropout probability when taking the expectation at test time in rnnDrop; this is problematic for tasks involving longer-term dependencies. 
\citet{yarvin} propose another technique which uses the same mask at each timestep. Motivated by variational inference, they drop out the rows of weight matrices in the input and output embeddings and LSTM gates, instead of dropping units' activations. The proposed \textbf{variational RNN} technique achieves single-model state-of-the-art test perplexity of $73.4$ on word-level language modelling of Penn Treebank. \subsection{Relationship to Stochastic Depth} Zoneout can also be viewed as a per-unit version of \textbf{stochastic depth} \citep{stochastic_depth}, which randomly drops entire layers of feed-forward residual networks (ResNets \citep{resnet}). This is equivalent to zoning out all of the units of a layer at the same time. In a typical RNN, there is a new input at each timestep, causing issues for a naive implementation of stochastic depth. Zoning out an entire layer in an RNN means the input at the corresponding timestep is completely ignored, whereas zoning out individual units allows the RNN to take each element of its input sequence into account. We also found that using residual connections in recurrent nets led to instability, presumably due to the parameter sharing in RNNs. Concurrent with our work, \citet{swapout} propose zoneout for ResNets, calling it {\bf SkipForward}. In their experiments, zoneout is outperformed by stochastic depth, dropout, and their proposed {\bf Swapout} technique, which randomly drops either or both of the identity or residual connections. Unlike \citet{swapout}, we apply zoneout to RNNs, and find it outperforms stochastic depth and recurrent dropout. \subsection{Selectively updating hidden units} Like zoneout, {\bf clockwork RNNs}~\citep{koutnik2014clockwork} and {\bf hierarchical RNNs}~\citep{hierarchical_rnn} update only some units' activations at every timestep, but their updates are periodic, whereas zoneout's are stochastic. Inspired by clockwork RNNs, we experimented with zoneout variants that target different update rates or schedules for different units, but did not find any performance benefit. \textbf{Hierarchical multiscale LSTMs} \citep{hmrnn} learn update probabilities for different units using the straight-through estimator \citep{straightthrough1, straightthrough2}, and combined with recently-proposed Layer Normalization \citep{layernorm}, achieve competitive results on a variety of tasks. As the authors note, their method can be interpreted as an input-dependent form of adaptive zoneout. In recent work, \citet{hypernets} use a hypernetwork to dynamically rescale the row-weights of a primary LSTM network, achieving state-of-the-art 1.21 BPC on character-level Penn Treebank when combined with layer normalization \citep{layernorm} in a two-layer network. This scaling can be viewed as an adaptive, differentiable version of the variational LSTM \citep{yarvin}, and could similarly be used to create an adaptive, differentiable version of zoneout. Very recent work conditions zoneout probabilities on surprisal (a measure of the discrepancy between the predicted and actual state), and sets a new state of the art on enwik8 \citep{suprisalzoneout}. \section{Zoneout and preliminaries} We now explain zoneout in full detail, and compare with other forms of dropout in RNNs. We start by reviewing recurrent neural networks (RNNs). \subsection{Recurrent Neural Networks} Recurrent neural networks process data $x_1, x_2, \dots, x_T$ sequentially, constructing a corresponding sequence of representations, $h_1, h_2, \dots, h_T$.
Each hidden state is trained (implicitly) to remember and emphasize all task-relevant aspects of the preceding inputs, and to incorporate new inputs via a transition operator, $\mathcal{T}$, which converts the present hidden state and input into a new hidden state: $h_{t} = \mathcal{T} (h_{t-1}, x_t)$. Zoneout modifies these dynamics by mixing the original transition operator $\mathcal{\tilde T}$ with the identity operator (as opposed to the null operator used in dropout), according to a vector of Bernoulli masks, $d_t$: \begin{align*} \mbox{Zoneout:}&&\mathcal{T}&=d_t\odot\mathcal{\tilde T}+(1-d_t)\odot 1%h_t &= (d_t \mathcal{T} + (1-d_t) \mathbf{1}) h_{t-1} &\mbox{Dropout:}&& \mathcal{T}&=d_t\odot\mathcal{\tilde T}+(1-d_t)\odot 0 %h_t &= (d_t \mathcal{T} + (1-d_t) \mathbf{0}) h_{t-1} \end{align*} \subsection{Long short-term memory} In long short-term memory RNNs (LSTMs) \citep{LSTM}, the hidden state is divided into memory cell $c_t$, intended for internal long-term storage, and hidden state $h_t$, used as a transient representation of state at timestep $t$. In the most widely used formulation of an LSTM \citep{LSTM_forgetgate}, $c_t$ and $h_t$ are computed via a set of four ``gates'', including the forget gate, $f_t$, which directly connects $c_t$ to the memories of the previous timestep $c_{t-1}$, via an element-wise multiplication. Large values of the forget gate cause the cell to remember most (not all) of its previous value. The other gates control the flow of information in ($i_t, g_t$) and out ($o_t$) of the cell. Each gate has a weight matrix and bias vector; for example the forget gate has $W_{xf}$, $W_{hf}$, and $b_f$. For brevity, we will write these as $W_x,W_h,b$. An LSTM is defined as follows: \begin{align*} i_t, f_t, o_t &= \sigma(W_{x}x_t+W_{h}h_{t-1}+b)\\ g_t &= \tanh(W_{xg}x_t+W_{hg}h_{t-1}+b_g)\\ c_t &= f_t \odot c_{t-1} + i_t \odot g_t \\ h_t &= o_t \odot \tanh(c_t) \end{align*} A naive application of dropout in LSTMs would zero-mask either or both of the memory cells and hidden states, without changing the computation of the gates ($i,f,o,g$). Dropping memory cells, for example, changes the computation of $c_t$ as follows: \begin{align*} c_t &= d_t\odot (f_t \odot c_{t-1} + i_t \odot g_t) \end{align*} Alternatives abound, however; masks can be applied to any subset of the gates, cells, and states. \citet{elephant}, for instance, zero-mask the input gate: \begin{align*} c_t &= (f_t \odot c_{t-1} + d_t \odot i_t \odot g_t) \end{align*} When the input gate is masked like this, there is no additive contribution from the input or hidden state, and the value of the memory cell simply decays according to the forget gate. In \textbf{zoneout}, the values of the hidden state and memory cell randomly either maintain their previous value or are updated as usual. This introduces stochastic identity connections between subsequent time steps: \begin{align*} c_t &= d^c_t\odot c_{t-1} + (1-d^c_t)\odot \big(f_t \odot c_{t-1} + i_t \odot g_t\big)\\ h_t &= d^h_t\odot h_{t-1} + (1-d^h_t)\odot \big(o_t \odot \tanh\big(f_t \odot c_{t-1} + i_t \odot g_t\big)\big) \end{align*} We usually use different zoneout masks for cells and hiddens. 
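A sketch of one zoneout LSTM step implementing the two equations above is given below (NumPy, illustrative only). We assume a single concatenated weight matrix for the four gates, and the helper names are ours.
\begin{verbatim}
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def zoneout_lstm_step(x_t, h_prev, c_prev, W_x, W_h, b, z_c, z_h, training):
    """One LSTM step with zoneout on the cell (prob. z_c) and the hidden
    state (prob. z_h), using separate masks as in the equations above."""
    n = h_prev.shape[-1]
    pre = x_t @ W_x + h_prev @ W_h + b          # concatenated [i, f, o, g]
    i = sigmoid(pre[..., 0 * n:1 * n])
    f = sigmoid(pre[..., 1 * n:2 * n])
    o = sigmoid(pre[..., 2 * n:3 * n])
    g = np.tanh(pre[..., 3 * n:4 * n])

    c_update = f * c_prev + i * g               # ordinary LSTM cell update
    h_update = o * np.tanh(c_update)            # ordinary LSTM hidden update

    if training:
        d_c = (np.random.rand(*c_prev.shape) < z_c).astype(c_prev.dtype)
        d_h = (np.random.rand(*h_prev.shape) < z_h).astype(h_prev.dtype)
    else:                                       # expectation at test time
        d_c, d_h = z_c, z_h

    c_t = d_c * c_prev + (1.0 - d_c) * c_update
    h_t = d_h * h_prev + (1.0 - d_h) * h_update
    return h_t, c_t
\end{verbatim}
Setting z_c and z_h to zero recovers the standard LSTM step; sharing a single mask for both corresponds to the shared-mask variant used in the pMNIST experiments.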
We also experiment with a variant of recurrent dropout that reuses the input dropout mask to zoneout the corresponding output gates: \begin{align*} c_t &= (f_t \odot c_{t-1} + d_t \odot i_t \odot g_t) \\ h_t &= ((1 - d_t) \odot o_t + d_t \odot o_{t-1})\odot \tanh(c_t) \end{align*} The motivation for this variant is to prevent the network from being forced (by the output gate) to expose a memory cell which has not been updated, and hence may contain misleading information. \section{Experiments and Discussion} We evaluate zoneout's performance on the following tasks: (1) Character-level language modelling on the Penn Treebank corpus \citep{PTB}; (2) Word-level language modelling on the Penn Treebank corpus \citep{PTB}; (3) Character-level language modelling on the Text8 corpus~\citep{Text8}; (4) Classification of hand-written digits on permuted sequential MNIST ($p$MNIST) \citep{IRNN}. We also investigate the gradient flow to past hidden states, using $p$MNIST. \subsection{Penn Treebank Language Modelling Dataset} The Penn Treebank language model corpus contains 1 million words. The model is trained to predict the next word (evaluated on perplexity) or character (evaluated on BPC: bits per character) in a sequence. \footnote{ These metrics are deterministic functions of negative log-likelihood (NLL). Specifically, perplexity is exponentiated NLL, and BPC (entropy) is NLL divided by the natural logarithm of 2. } \subsubsection{Character-level} For the character-level task, we train networks with one layer of 1000 hidden units. We train LSTMs with a learning rate of 0.002 on overlapping sequences of 100 in batches of 32, optimize using Adam, and clip gradients with threshold 1. These settings match those used in \citet{rnn_batchnorm2}. We also train GRUs and tanh-RNNs with the same parameters as above, except sequences are non-overlapping and we use learning rates of 0.001, and 0.0003 for GRUs and tanh-RNNs respectively. Small values (0.1, 0.05) of zoneout significantly improve generalization performance for all three models. % TODO: reference tables/figures. Intriguingly, we find zoneout increases training time for GRU and tanh-RNN, but \emph{decreases} training time for LSTMs. We focus our investigation on LSTM units, where the dynamics of zoning out states, cells, or both provide interesting insight into zoneout's behaviour. Figure~\ref{charchar} shows our exploration of zoneout in LSTMs, for various zoneout probabilities of cells and/or hiddens. Zoneout on cells with probability 0.5 or zoneout on states with probability 0.05 both outperform the best-performing recurrent dropout ($p=0.25$). Combining $z_c=0.5$ and $z_h=0.05$ leads to our best-performing model, which achieves 1.27 BPC, competitive with recent state-of-the-art set by \citet{hypernets}. We compare zoneout to recurrent dropout (for $p \in \{0.05, 0.2, 0.25, 0.5, 0.7\}$), weight noise ($\sigma=0.075$), norm stabilizer ($\beta=50$) \citep{norm_stabilizer}, and explore stochastic depth \citep{stochastic_depth} in a recurrent setting (analogous to zoning out an entire timestep). We also tried a shared-mask variant of zoneout as used in $p$MNIST experiments, where the same mask is used for both cells and hiddens. Neither stochastic depth nor shared-mask zoneout performed as well as separate masks, sampled per unit. Figure~\ref{charchar} shows the best performance achieved with each regularizer, as well as an unregularized LSTM baseline.
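For concreteness, the metric definitions in the footnote above correspond to the following conversions from per-symbol negative log-likelihood measured in nats (a small sketch with illustrative helper names):
\begin{verbatim}
import math

def perplexity(nll_nats_per_word):
    return math.exp(nll_nats_per_word)      # perplexity is exponentiated NLL

def bits_per_character(nll_nats_per_char):
    return nll_nats_per_char / math.log(2)  # NLL in nats divided by ln 2
\end{verbatim}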
Results are reported in Table \ref{tab:all_results}, and learning curves shown in Figure \ref{char_and_text8_results}. Low zoneout probabilities (0.05-0.25) also improve over baseline in GRUs and tanh-RNNs, reducing BPC from 1.53 to 1.41 for GRU and 1.67 to 1.52 for tanh-RNN. Similarly, low zoneout probabilities work best on the hidden states of LSTMs. For memory cells in LSTMs, however, higher probabilities (around 0.5) work well, perhaps because large forget-gate values approximate the effect of cells zoning out. We conjecture that best performance is achieved with zoneout LSTMs because of the stability of having both state and cell. The probability that both will be zoned out is very low, but having one or the other zoned out carries information from the previous timestep forward, while having the other react 'normally' to new information. { }%\FloatBarrier { }%\FloatBarrier \subsubsection{Word-level} For the word-level task, we replicate settings from \citet{zaremba2014recurrent}'s best single-model performance. This network has 2 layers of 1500 units, with weights initialized uniformly [-0.04, +0.04]. The model is trained for 14 epochs with learning rate 1, after which the learning rate is reduced by a factor of 1.15 after each epoch. Gradient norms are clipped at 10. With no dropout on the non-recurrent connections (i.e. zoneout as the only regularization), we do not achieve competitive results. We did not perform any search over models, and conjecture that the large model size requires regularization of the feed-forward connections. Adding zoneout ($z_c=0.25$ and $z_h=0.025$) on the recurrent connections to the model optimized for dropout on the non-recurrent connections however, we are able to improve test perplexity from 78.4 to 77.4. We report the best performance achieved with a given technique in Table~\ref{tab:all_results}. \subsection{Text8} Enwik8 is a corpus made from the first $10^9$ bytes of Wikipedia dumped on Mar. 3, 2006. Text8 is a "clean text" version of this corpus; with html tags removed, numbers spelled out, symbols converted to spaces, all lower-cased. Both datasets were created and are hosted by \citet{Text8}. We use a single-layer network of 2000 units, initialized orthogonally, with batch size 128, learning rate 0.001, and sequence length 180. We optimize with Adam \citep{adam}, clip gradients to a maximum norm of 1 \citep{pascanu2013construct}, and use early stopping, again matching the settings of \citet{rnn_batchnorm2}. Results are reported in Table~\ref{tab:all_results}, and Figure~\ref{char_and_text8_results} shows training and validation learning curves for zoneout ($z_c=0.5, z_h=0.05$) compared to an unregularized LSTM and to recurrent dropout. \subsection{Permuted sequential MNIST} In sequential MNIST, pixels of an image representing a number [0-9] are presented one at a time, left to right, top to bottom. The task is to classify the number shown in the image. In $p$MNIST , the pixels are presented in a (fixed) random order. We compare recurrent dropout and zoneout to an unregularized LSTM baseline. All models have a single layer of 100 units, and are trained for 150 epochs using RMSProp \citep{rmsprop} with a decay rate of 0.5 for the moving average of gradient norms. The learning rate is set to 0.001 and the gradients are clipped to a maximum norm of 1 \citep{pascanu2013construct}. 
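The norm-based gradient clipping used in the setups above can be sketched as follows. This is our simplified stand-in for the cited procedure, with illustrative names.
\begin{verbatim}
import numpy as np

def clip_by_global_norm(grads, max_norm=1.0):
    """Rescale a list of gradient arrays so that their joint L2 norm does
    not exceed max_norm; gradients below the threshold are left unchanged."""
    total_norm = np.sqrt(sum(float(np.sum(g ** 2)) for g in grads))
    if total_norm > max_norm:
        grads = [g * (max_norm / total_norm) for g in grads]
    return grads
\end{verbatim}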
As shown in Figure~\ref{fig:mnist_results} and Table~\ref{tab:mnist_results}, zoneout gives a significant performance boost compared to the LSTM baseline and outperforms recurrent dropout~\citep{elephant}, although recurrent batch normalization~\citep{rnn_batchnorm2} outperforms all three. However, by adding zoneout to the recurrent batch normalized LSTM, we achieve state of the art performance. For this setting, the zoneout mask is shared between cells and states, and the recurrent dropout probability and zoneout probabilities are both set to 0.15. \FloatBarrier \subsection{Gradient flow} We investigate the hypothesis that identity connections introduced by zoneout facilitate gradient flow to earlier timesteps. Vanishing gradients are a perennial issue in RNNs. As effective as many techniques are for mitigating vanishing gradients (notably the LSTM architecture \cite{LSTM}), we can always imagine a longer sequence to train on, or a longer-term dependence we want to capture. We compare gradient flow in an unregularized LSTM to zoning out (stochastic identity-mapping) and dropping out (stochastic zero-mapping) the recurrent connections after one epoch of training on $p$MNIST. We compute the average gradient norms $\|{\frac{\partial L}{\partial c_t}}\|$ of loss $L$ with respect to cell activations $c_t$ at each timestep $t$, and for each method, normalize the average gradient norms by the sum of average gradient norms for all timesteps. Figure~\ref{fig:grads_norm} shows that zoneout propagates gradient information to early timesteps much more effectively than dropout on the recurrent connections, and even more effectively than an unregularized LSTM. The same effect was observed for hidden states $h_t$.%; we describe and plot only $c_t$ for brevity. \FloatBarrier \FloatBarrier \section{Conclusion} We have introduced zoneout, a novel and simple regularizer for RNNs, which stochastically preserves hidden units' activations. Zoneout improves performance across tasks, outperforming many alternative regularizers to achieve results competitive with state of the art on the Penn Treebank and Text8 datasets, and state of the art results on $p$MNIST. While searching over zoneout probabilities allows us to tune zoneout to each task, low zoneout probabilities (0.05 - 0.2) on states reliably improve performance of existing models. % on these tasks. We perform no hyperparameter search to achieve these results, simply using settings from the previous state of the art. Results on $p$MNIST and word-level Penn Treebank suggest that zoneout works well in combination with other regularizers, such as recurrent batch normalization, and dropout on feedforward/embedding layers. We conjecture that the benefits of zoneout arise from two main factors: (1) Introducing stochasticity makes the network more robust to changes in the hidden state; (2) The identity connections improve the flow of information forward and backward through the network. \subsubsection*{Acknowledgments} We are grateful to Hugo Larochelle, Jan Chorowski, and students at MILA, especially \c{C}a\u{g}lar G\"ul\c{c}ehre, Marcin Moczulski, Chiheb Trabelsi, and Christopher Beckham, for helpful feedback and discussions. We thank the developers of Theano \citep{Theano}, Fuel, and Blocks \citep{Blocks}. We acknowledge the computing resources provided by ComputeCanada and CalculQuebec. We also thank IBM and Samsung for their support. We would also like to acknowledge the work of Pranav Shyam on learning RNN hierarchies.
This research was developed with funding from the Defense Advanced Research Projects Agency (DARPA) and the Air Force Research Laboratory (AFRL). The views, opinions and/or findings expressed are those of the authors and should not be interpreted as representing the official views or policies of the Department of Defense or the U.S. Government. \bibliographystyle{iclr2017_conference} \newpage \section{Appendix} \subsection{Static identity connections experiment} This experiment was suggested by AnonReviewer2 during the ICLR review process with the goal of disentangling the effects zoneout has (1) through noise injection in the training process and (2) through identity connections. Based on these results, we observe that noise injection is essential for obtaining the regularization benefits of zoneout. In this experiment, one zoneout mask is sampled at the beginning of training, and used for all examples. This means the identity connections introduced are static across training examples (but still different for each timestep). Using static identity connections resulted in slightly lower {\it training} (but not validation) error than zoneout, but worse performance than an unregularized LSTM on both train and validation sets, as shown in Figure ~\ref{fig:static_mask}. \end{document}
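The static-mask control in the appendix above can be sketched as follows (NumPy, illustrative names). Here masks are generated lazily per timestep index, but once created they are fixed across all training examples, matching the description of identity connections that are static across examples yet different per timestep.
\begin{verbatim}
import numpy as np

class StaticZoneoutMasks:
    """One fixed zoneout mask per timestep, reused for every training
    example, as in the static identity connections experiment."""
    def __init__(self, num_units, z_prob, seed=0):
        self.num_units = num_units
        self.z_prob = z_prob
        self.rng = np.random.RandomState(seed)
        self.masks = {}                      # timestep index -> fixed mask

    def mask(self, t):
        if t not in self.masks:
            self.masks[t] = (self.rng.rand(self.num_units)
                             < self.z_prob).astype(np.float32)
        return self.masks[t]
\end{verbatim}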
Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations
1606.01305
Table 2: Error rates on the pMNIST digit classification task. Zoneout outperforms recurrent dropout, and sets state of the art when combined with recurrent batch normalization.
[ "[BOLD] Model", "[BOLD] Valid", "[BOLD] Test" ]
[ [ "Unregularized LSTM", "0.092", "0.102" ], [ "Recurrent dropout [ITALIC] p=0.5", "0.083", "0.075" ], [ "Zoneout [ITALIC] zc= [ITALIC] zh=0.15", "0.063", "0.069" ], [ "Recurrent batchnorm", "-", "0.046" ], [ "Recurrent batchnorm & Zoneout [ITALIC] zc= [ITALIC] zh=0.15", "0.045", "[BOLD] 0.041" ] ]
However, by adding zoneout to the recurrent batch normalized LSTM, we achieve state of the art performance. For this setting, the zoneout mask is shared between cells and states, and the recurrent dropout probability and zoneout probabilities are both set to 0.15.
NewsQA: A Machine Comprehension Dataset
1611.09830
Table 2: Reasoning mechanisms needed to answer questions. For each we show an example question with the sentence that contains the answer span. Words relevant to the reasoning type are in bold. The corresponding proportion in the human-evaluated subset of both NewsQA and SQuAD (1,000 samples each) is also given.
[ "Reasoning", "Example", "Proportion (%) [ITALIC] NewsQA", "Proportion (%) [ITALIC] SQuAD" ]
[ [ "Word Matching", "Q: [BOLD] When were the [BOLD] findings published? S: Both sets of research [BOLD] findings were published Thursday…", "32.7", "39.8" ], [ "Paraphrasing", "Q: [BOLD] Who is the [BOLD] struggle between in Rwanda? S: The [BOLD] struggle pits ethnic Tutsis, supported by Rwanda, [BOLD] against ethnic Hutu, backed by Congo.", "27.0", "34.3" ], [ "Inference", "Q: [BOLD] Who drew [BOLD] inspiration from [BOLD] presidents? S: [BOLD] Rudy Ruiz says the lives of US [BOLD] presidents can make them [BOLD] positive role models for students.", "13.2", "8.6" ], [ "Synthesis", "Q: [BOLD] Where is [BOLD] Brittanee Drexel from? S: The mother of a 17-year-old [BOLD] Rochester, [BOLD] New York high school student … says she did not give her daughter permission to go on the trip. [BOLD] Brittanee Marie [BOLD] Drexel’s mom says…", "20.7", "11.9" ], [ "Ambiguous/Insufficient", "Q: [BOLD] Whose mother is [BOLD] moving to the White House? S: … [BOLD] Barack Obama’s mother-in-law, Marian Robinson, will [BOLD] join the Obamas at the [BOLD] family’s private quarters at 1600 Pennsylvania Avenue. [Michelle is never mentioned]", "6.4", "5.4" ] ]
Some examples fall into more than one category, in which case we defaulted to the more challenging type. We can see from the table that word matching, the easiest type, makes up the largest subset in both datasets (32.7% for NewsQA and 39.8% for SQuAD). Paraphrasing constitutes a larger proportion in SQuAD than in NewsQA (34.3% vs 27.0%), possibly a result from the explicit encouragement of lexical variety in SQuAD question sourcing. However, NewsQA significantly outnumbers SQuAD on the distribution of the more difficult forms of reasoning: synthesis and inference make up a combined 33.9% of the data in contrast to 20.5% in SQuAD.
\documentclass{article} \usepackage[utf8]{inputenc} % allow utf-8 input \usepackage[T1]{fontenc} % use 8-bit T1 fonts % hyperlinks % simple URL typesetting % professional-quality tables % blackboard math symbols % microtypography \usepackage[centertags]{amsmath} \def\x{\times} \def\S{\mathbf{S}} \def\H{\mathbf{H}} \renewcommand{\b}[1]{\mathbf{#1}} \newcommand{\eye}{\mathbf{I}} \newcommand{\CC}{\mathcal{C}} \newcommand{\DD}{\mathcal{D}} \newcommand{\FF}{\mathcal{F}} \newcommand{\GG}{\mathcal{G}} \newcommand{\HH}{\mathcal{H}} \newcommand{\II}{\mathcal{I}} \newcommand{\KK}{\mathcal{K}} \newcommand{\LL}{\mathcal{L}} \newcommand{\MM}{\mathcal{M}} \newcommand{\NN}{\mathcal{N}} \newcommand{\OO}{\mathcal{O}} \newcommand{\PP}{\mathcal{P}} \newcommand{\QQ}{\mathcal{Q}} \newcommand{\RR}{\mathcal{R}} \newcommand{\SSS}{\mathcal{S}} \newcommand{\TT}{\mathcal{T}} \newcommand{\VV}{\mathcal{V}} \newcommand{\WW}{\mathcal{W}} \newcommand{\XX}{\mathcal{X}} \newcommand{\YY}{\mathcal{Y}} \newcommand{\ZZ}{\mathcal{Z}} \DeclareMathOperator*{\argmin}{arg\,min} \DeclareMathOperator*{\argmax}{arg\,max} \DeclareMathOperator*{\expect}{\mathbb{E}} \DeclareMathOperator*{\minimize}{minimize} \DeclareMathOperator*{\maximize}{maximize} \DeclareMathOperator*{\xent}{xent} \DeclareMathOperator*{\ent}{ent} \DeclareMathOperator*{\softmax}{softmax} \DeclareMathOperator*{\KL}{KL} \DeclareMathOperator*{\lrelu}{lrelu} \DeclareMathOperator*{\relu}{relu} \DeclareMathOperator*{\conv}{conv} \newcommand*\equalcontr[1][\value{footnote}]{\footnotemark[#1]} \def\hevalnewsqaf1{0.694}\xspace \def\hevalnewsqaem{0.465}\xspace \def\hevalsquad{0.807}\xspace \def\hcgap{0.198}\xspace \def\aibest{0.496}\xspace \title{NewsQA: A Machine Comprehension Dataset} \author{Adam Trischler\thanks{These three authors contributed equally.}\qquad\qquad Tong Wang\equalcontr\qquad\qquad Xingdi Yuan\equalcontr\qquad\qquad Justin Harris\\\ \vspace{-2mm}\\{\bf Alessandro Sordoni\qquad\qquad Philip Bachman\qquad\qquad Kaheer Suleman} \\\ \\ {\tt \{adam.trischler, tong.wang, eric.yuan, justin.harris,}\\ {\tt \ alessandro.sordoni, phil.bachman, k.suleman\}@maluuba.com} \\ Maluuba Research \\ Montr\'{e}al, Qu\'{e}bec, Canada } \newcommand{\fix}{\marginpar{FIX}} \newcommand{\new}{\marginpar{NEW}} \begin{document} \maketitle \begin{abstract} We present \emph{NewsQA}, a challenging machine comprehension dataset of over 100,000 human-generated question-answer pairs. Crowdworkers supply questions and answers based on a set of over 10,000 news articles from CNN, with answers consisting of spans of text from the corresponding articles. We collect this dataset through a four-stage process designed to solicit exploratory questions that require reasoning. A thorough analysis confirms that \emph{NewsQA} demands abilities beyond simple word matching and recognizing textual entailment. We measure human performance on the dataset and compare it to several strong neural models. The performance gap between humans and machines (\hcgap~in F1) indicates that significant progress can be made on \emph{NewsQA} through future research. The dataset is freely available at \url{https://datasets.maluuba.com/NewsQA}. \end{abstract} \section{Introduction} Almost all human knowledge is recorded in the medium of text. As such, comprehension of written language by machines, at a near-human level, would enable a broad class of artificial intelligence applications. In human students we evaluate reading comprehension by posing questions based on a text passage and then assessing a student's answers. 
Such comprehension tests are appealing because they are objectively gradable and may measure a range of important abilities, from basic understanding to causal reasoning to inference~\citep{richardson2013}. To teach literacy to machines, the research community has taken a similar approach with machine comprehension (MC). Recent years have seen the release of a host of MC datasets. Generally, these consist of (document, question, answer) triples to be used in a supervised learning framework. Existing datasets vary in size, difficulty, and collection methodology; however, as pointed out by~\citet{squad}, most suffer from one of two shortcomings: those that are designed explicitly to test comprehension~\citep{richardson2013} are too small for training data-intensive deep learning models, while those that are sufficiently large for deep learning~\citep{hermann2015,hill2015,kadlecdata} are generated synthetically, yielding questions that are not posed in natural language and that may not test comprehension directly~\citep{chenCNN}. More recently, \citet{squad} sought to overcome these deficiencies with their crowdsourced dataset, \emph{SQuAD}. Here we present a challenging new largescale dataset for machine comprehension: \emph{NewsQA}. \emph{NewsQA} contains 119,633 natural language questions posed by crowdworkers on 12,744 news articles from CNN. Answers to these questions consist of spans of text within the corresponding article highlighted also by crowdworkers. To build \emph{NewsQA} we utilized a four-stage collection process designed to encourage exploratory, curiosity-based questions that reflect human information seeking. CNN articles were chosen as the source material because they have been used in the past~\citep{hermann2015} and, in our view, machine comprehension systems are particularly suited to high-volume, rapidly changing information sources like news. As~\citet{trischler2016},~\citet{chenCNN}, and others have argued, it is important for datasets to be sufficiently challenging to teach models the abilities we wish them to learn. Thus, in line with~\citet{richardson2013}, our goal with \emph{NewsQA} was to construct a corpus of questions that necessitates reasoning-like behaviors -- for example, synthesis of information across different parts of an article. We designed our collection methodology explicitly to capture such questions. The challenging characteristics of \emph{NewsQA} that distinguish it from most previous comprehension tasks are as follows: \begin{enumerate} \item Answers are spans of arbitrary length within an article, rather than single words or entities. \item Some questions have no answer in the corresponding article (the \emph{null} span). \item There are no candidate answers from which to choose. \item Our collection process encourages lexical and syntactic divergence between questions and answers. \item A significant proportion of questions requires reasoning beyond simple word- and context-matching (as shown in our analysis). \end{enumerate} Some of these characteristics are present also in \emph{SQuAD}, the MC dataset most similar to \emph{NewsQA}. However, we demonstrate through several metrics that \emph{NewsQA} offers a greater challenge to existing models. In this paper we describe the collection methodology for \emph{NewsQA}, provide a variety of statistics to characterize it and contrast it with previous datasets, and assess its difficulty. In particular, we measure human performance and compare it to that of two strong neural-network baselines. 
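The human and model comparisons reported here use span-level exact match and F1. A common SQuAD-style way to compute these scores is sketched below; this is our illustration, omits answer-normalization details, and is not necessarily identical to the evaluation script behind the numbers in this paper.
\begin{verbatim}
from collections import Counter

def exact_match(prediction, ground_truth):
    return float(prediction.strip().lower() == ground_truth.strip().lower())

def f1_score(prediction, ground_truth):
    """Token-level F1 between a predicted and a gold answer string."""
    pred_tokens = prediction.lower().split()
    gold_tokens = ground_truth.lower().split()
    if not pred_tokens or not gold_tokens:
        return float(pred_tokens == gold_tokens)
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
\end{verbatim}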
Humans significantly outperform powerful question-answering models. This suggests there is room for improvement through further advances in machine comprehension research. \section{Related Datasets} \label{sec:related} \emph{NewsQA} follows in the tradition of several recent comprehension datasets. These vary in size, difficulty, and collection methodology, and each has its own distinguishing characteristics. We agree with~\citet{kadlecdata} who have said ``models could certainly benefit from as diverse a collection of datasets as possible.'' We discuss this collection below. \subsection{MCTest} {\it MCTest}~\citep{richardson2013} is a crowdsourced collection of 660 elementary-level children's stories with associated questions and answers. The stories are fictional, to ensure that the answer must be found in the text itself, and carefully limited to what a young child can understand. Each question comes with a set of 4 candidate answers that range from single words to full explanatory sentences. The questions are designed to require rudimentary reasoning and synthesis of information across sentences, making the dataset quite challenging. This is compounded by the dataset's size, which limits the training of expressive statistical models. Nevertheless, recent comprehension models have performed well on {\it MCTest}~\citep{sachan2015,wangMC}, including a highly structured neural model~\citep{trischler2016}. These models all rely on access to the small set of candidate answers, a crutch that \emph{NewsQA} does not provide. \subsection{CNN/Daily Mail} The \emph{CNN/Daily Mail} corpus~\citep{hermann2015} consists of news articles scraped from those outlets with corresponding cloze-style questions. Cloze questions are constructed synthetically by deleting a single entity from abstractive summary points that accompany each article (written presumably by human authors). As such, determining the correct answer relies mostly on recognizing textual entailment between the article and the question. The named entities within an article are identified and anonymized in a preprocessing step and constitute the set of candidate answers; contrast this with \emph{NewsQA} in which answers often include longer phrases and no candidates are given. Because the cloze process is automatic, it is straightforward to collect a significant amount of data to support deep-learning approaches: \emph{CNN/Daily Mail} contains about 1.4 million question-answer pairs. However,~\citet{chenCNN} demonstrated that the task requires only limited reasoning and, in fact, performance of the strongest models~\citep{kadlec2016,epireader,iaa} nearly matches that of humans. \subsection{Children's Book Test} The \emph{Children's Book Test} (\emph{CBT})~\citep{hill2015} was collected using a process similar to that of \emph{CNN/Daily Mail}. Text passages are 20-sentence excerpts from children's books available through Project Gutenberg; questions are generated by deleting a single word in the next ({\it i.e.},~21st) sentence. Consequently, \emph{CBT} evaluates word prediction based on context. It is a comprehension task insofar as comprehension is likely necessary for this prediction, but comprehension may be insufficient and other mechanisms may be more important. \subsection{BookTest} \citet{kadlecdata} convincingly argue that, because existing datasets are not large enough, we have yet to reach the full capacity of existing comprehension models. As a remedy they present \emph{BookTest}. 
This is an extension to the named-entity and common-noun strata of \emph{CBT} that increases their size by over 60 times. \citet{kadlecdata} demonstrate that training on the augmented dataset yields a model~\citep{kadlec2016} that matches human performance on \emph{CBT}. This is impressive and suggests that much is to be gained from more data, but we repeat our concerns about the relevance of story prediction as a comprehension task. We also wish to encourage more efficient learning from less data. \subsection{SQuAD} The comprehension dataset most closely related to \emph{NewsQA} is \emph{SQuAD}~\citep{squad}. It consists of natural language questions posed by crowdworkers on paragraphs from high-PageRank Wikipedia articles. As in \emph{NewsQA}, each answer consists of a span of text from the related paragraph and no candidates are provided. Despite the effort of manual labelling, \emph{SQuAD}'s size is significant and amenable to deep learning approaches: 107,785 question-answer pairs based on 536 articles. Although \emph{SQuAD} is a more realistic and more challenging comprehension task than the other largescale MC datasets, machine performance has rapidly improved towards that of humans in recent months. The \emph{SQuAD} authors measured human accuracy at 0.905 in F1 (we measured human F1 at \hevalsquad~using a different methodology); at the time of writing, the strongest published model to date achieves 0.778 F1~\citep{wang2016multi}. This suggests that new, more difficult alternatives like \emph{NewsQA} could further push the development of more intelligent MC systems. \section{Collection methodology} \label{sec:method} We collected \emph{NewsQA} through a four-stage process: article curation, question sourcing, answer sourcing, and validation. We also applied a post-processing step with answer agreement consolidation and span merging to enhance the usability of the dataset. These steps are detailed below. \subsection{Article curation} We retrieve articles from CNN using the script created by~\citet{hermann2015} for \emph{CNN/Daily Mail}. From the returned set of 90,266 articles, we select 12,744 uniformly at random. These cover a wide range of topics that includes politics, economics, and current events. Articles are partitioned at random into a training set (90\%), a development set (5\%), and a test set (5\%). \subsection{Question sourcing} It was important to us to collect challenging questions that could not be answered using straightforward word- or context-matching. Like~\citet{richardson2013} we want to encourage reasoning in comprehension models. We are also interested in questions that, in some sense, model human curiosity and reflect actual human use-cases of information seeking. Along a similar line, we consider it an important (though as yet overlooked) capacity of a comprehension model to recognize when given information is inadequate, so we are also interested in questions that may not have sufficient evidence in the text. Our question sourcing stage was designed to solicit questions of this nature, and deliberately separated from the answer sourcing stage for the same reason. {\it Questioners} (a distinct set of crowdworkers) see \emph{only} a news article's headline and its summary points (also available from CNN); they do not see the full article itself. They are asked to formulate a question from this incomplete information. 
This encourages curiosity about the contents of the full article and prevents questions that are simple reformulations of sentences in the text. It also increases the likelihood of questions whose answers do not exist in the text. We reject questions that have significant word overlap with the summary points to ensure that crowdworkers do not treat the summaries as mini-articles, and further discouraged this in the instructions. During collection each Questioner is solicited for up to three questions about an article. They are provided with positive and negative examples to prompt and guide them (detailed instructions are shown in Figure~\ref{fig:turk-q-source-instructions}). \subsection{Answer sourcing} A second set of crowdworkers ({\it Answerers}) provide answers. Although this separation of question and answer increases the overall cognitive load, we hypothesized that unburdening Questioners in this way would encourage more complex questions. Answerers receive a full article along with a crowdsourced question and are tasked with determining the answer. They may also reject the question as nonsensical, or select the {\it null} answer if the article contains insufficient information. Answers are submitted by clicking on and highlighting words in the article, while instructions encourage the set of answer words to consist of a single continuous span (again, we give an example prompt in the Appendix). For each question we solicit answers from multiple crowdworkers (avg. 2.73) with the aim of achieving agreement between at least two Answerers. \subsection{Validation} Crowdsourcing is a powerful tool but it is not without peril (collection glitches; uninterested or malicious workers). To obtain a dataset of the highest possible quality we use a validation process that mitigates some of these issues. In validation, a third set of crowdworkers sees the full article, a question, and the set of unique answers to that question. We task these workers with choosing the best answer from the candidate set or rejecting all answers. Each article-question pair is validated by an average of 2.48 crowdworkers. Validation was used on those questions \emph{without} answer-agreement after the previous stage, amounting to 43.2\% of all questions. \subsection{Answer marking and cleanup} After validation, 86.0\% of all questions in \emph{NewsQA} have answers agreed upon by at least two separate crowdworkers---either at the initial answer sourcing stage or in the top-answer selection. This improves the dataset's quality. We choose to include the questions without agreed answers in the corpus also, but they are specially marked. Such questions could be treated as having the \emph{null} answer and used to train models that are aware of poorly posed questions. As a final cleanup step we combine answer spans that are less than 3 words apart (punctuation is discounted). We find that 5.68\% of answers consist of multiple spans, while 71.3\% of multi-spans are within the 3-word threshold. Looking more closely at the data reveals that the multi-span answers often represent lists. These may present an interesting challenge for comprehension models moving forward. \section{Data analysis} \label{sec:anal} We provide a thorough analysis of \emph{NewsQA} to demonstrate its challenge and its usefulness as a machine comprehension benchmark. 
The analysis focuses on the types of answers that appear in the dataset and the various forms of reasoning required to solve it.\footnote{Additional statistics are available at \url{https://datasets.maluuba.com/NewsQA/stats}.} \subsection{Answer types} \label{sec:answer-types} Following~\citet{squad}, we categorize answers based on their linguistic type (see Table~\ref{tab:a-type}). This categorization relies on Stanford CoreNLP to generate constituency parses, POS tags, and NER tags for answer spans (see~\citet{squad} for more details). From the table we see that the largest single category of answers (22.2\%) is common noun phrases. Thereafter, answers are fairly evenly spread among the clause phrase (18.3\%), person (14.8\%), numeric (9.8\%), and other (11.2\%) types. Clearly, answers in \emph{NewsQA} are linguistically diverse. The proportions in Table~\ref{tab:a-type} only account for cases when an answer span exists. The complement of this set comprises questions with an agreed \emph{null} answer (9.5\% of the full corpus) and answers without agreement after validation (4.5\% of the full corpus). \subsection{Reasoning types} \label{sec:reasoning-types} The forms of reasoning required to solve \emph{NewsQA} directly influence the abilities that models will learn from the dataset. We stratified reasoning types using a variation on the taxonomy presented by~\citet{chenCNN} in their analysis of the \emph{CNN/Daily Mail} dataset. Types are as follows, in ascending order of difficulty: \begin{enumerate} \item {\bf Word Matching:} Important words in the question exactly match words in the immediate context of an answer span, such that a keyword search algorithm could perform well on this subset. \item {\bf Paraphrasing:} A single sentence in the article entails or paraphrases the question. Paraphrase recognition may require synonymy and world knowledge. \item {\bf Inference:} The answer must be inferred from incomplete information in the article or by recognizing conceptual overlap. This typically draws on world knowledge. \item {\bf Synthesis:} The answer can only be inferred by synthesizing information distributed across multiple sentences. \item {\bf Ambiguous/Insufficient:} The question has no answer or no unique answer in the article. \end{enumerate} For both \emph{NewsQA} and \emph{SQuAD}, we manually labelled 1,000 examples (drawn randomly from the respective development sets) according to these types and compiled the results in Table~\ref{tab:r-type}. Some examples fall into more than one category, in which case we defaulted to the more challenging type. We can see from the table that word matching, the easiest type, makes up the largest subset in both datasets (32.7\% for \emph{NewsQA} and 39.8\% for \emph{SQuAD}). Paraphrasing constitutes a larger proportion in \emph{SQuAD} than in \emph{NewsQA} (34.3\% vs 27.0\%), possibly a result of the explicit encouragement of lexical variety in \emph{SQuAD} question sourcing. However, \emph{NewsQA} contains a significantly larger proportion of the more difficult forms of reasoning: synthesis and inference make up a combined 33.9\% of the data, in contrast to 20.5\% in \emph{SQuAD}. \section{Baseline models} \label{sec:models} We test the performance of three comprehension systems on \emph{NewsQA}: human data analysts and two neural models. The first neural model is the match-LSTM (mLSTM) system of~\citet{wangsquad}. The second is a model of our own design that is similar but computationally cheaper.
We describe these models below but omit the personal details of our analysts. Implementation details of the models are described in Appendix~\ref{apd:impl-details}. \subsection{Match-LSTM} We selected the mLSTM model because it is straightforward to implement and offers strong, though not state-of-the-art, performance on the similar \emph{SQuAD} dataset. There are three stages involved in the mLSTM. First, LSTM networks encode the document and question (represented by GloVe word embeddings~\citep{pennington2014}) as sequences of hidden states. Second, an mLSTM network~\citep{wang2015snli} compares the document encodings with the question encodings. This network processes the document sequentially and at each token uses an attention mechanism to obtain a weighted vector representation of the question; the weighted combination is concatenated with the encoding of the current token and fed into a standard LSTM. Finally, a Pointer Network uses the hidden states of the mLSTM to select the boundaries of the answer span. We refer the reader to~\citet{wang2015snli,wangsquad} for full details. \subsection{The Bilinear Annotation Re-encoding Boundary (BARB) Model} The match-LSTM is computationally intensive since it computes an attention over the entire question at each document token in the recurrence. To facilitate faster experimentation with \emph{NewsQA} we developed a lighter-weight model (BARB) that achieves similar results on \emph{SQuAD}\footnote{With the configurations for the results reported in Section~\ref{sec:model-perf}, one epoch of training on \emph{NewsQA} takes about 3.9k seconds for \emph{BARB} and 8.1k seconds for \emph{mLSTM}.}. Our model consists of four stages: \paragraph{Encoding} All words in the document and question are mapped to real-valued vectors using the GloVe embeddings ${\bf W} \in \mathbb{R}^{|V| \times d}$. This yields ${\bf d}_1, \ldots, {\bf d}_n \in \mathbb{R}^d$ and ${\bf q}_1, \ldots, {\bf q}_m \in \mathbb{R}^d$. A bidirectional GRU network~\citep{bahdanau2014} encodes ${\bf d}_i$ into contextual states ${\bf h}_i \in \mathbb{R}^{D_1}$ for the document. The same encoder is applied to ${\bf q}_j$ to derive contextual states ${\bf k}_j \in \mathbb{R}^{D_1}$ for the question.\footnote{A bidirectional GRU concatenates the hidden states of two GRU networks running in opposite directions. Each of these has hidden size $\frac{1}{2}D_1$.} \paragraph{Bilinear Annotation} Next we compare the document and question encodings using a set of $C$ bilinear transformations, \begin{equation*} {\bf g}_{ij} = {\bf h}_i^T {\bf T}^{[1:C]} {\bf k}_j, \quad {\bf T}^c \in \mathbb{R}^{D_1 \times D_1},~{\bf g}_{ij} \in \mathbb{R}^C, \end{equation*} which we use to produce an $(n \times m \times C)$-dimensional tensor of annotation scores, ${\bf G} = [{\bf g}_{ij}]$. We take the maximum over the question-token (second) dimension and call the columns of the resulting matrix ${\bf g}_i \in \mathbb{R}^C$. We use this matrix as an annotation over the document word dimension. In contrast with the more typical multiplicative application of attention vectors, this annotation matrix is concatenated to the encoder RNN input in the re-encoding stage. \paragraph{Re-encoding} For each document word, the input of the re-encoding RNN (another biGRU) consists of three components: the document encodings $\bf h_i$, the annotation vectors $\bf g_i$, and a binary feature $q_i$ indicating whether the document word appears in the question. 
The resulting vectors ${\bf f}_i = [{\bf h}_i; {\bf g}_i; q_i]$ are fed into the re-encoding RNN to produce $D_2$-dimensional encodings ${\bf e}_i$ for the boundary-pointing stage. \paragraph{Boundary pointing} Finally, we search for the boundaries of the answer span using a convolutional network (in a process similar to edge detection). Encodings ${\bf e}_i$ are arranged in matrix ${\bf E} \in \mathbb{R}^{D_2 \times n}$. ${\bf E}$ is convolved with a bank of $n_f$ filters, $\mathbf{F}_k^\ell \in \mathbb{R}^{D_2 \times w}$, where $w$ is the filter width, $k$ indexes the different filters, and $\ell$ indexes the layer of the convolutional network. Each layer has the same number of filters of the same dimensions. We add a bias term and apply a nonlinearity (ReLU) following each convolution, with the result an $(n_f \times n)$-dimensional matrix ${\bf B}_\ell$. We use two convolutional layers in the boundary-pointing stage. Given ${\bf B}_1$ and ${\bf B}_2$, the answer span's start- and end-location probabilities are computed using $p(s) \propto \exp \left( {\bf v}_s^T {\bf B}_1 + b_s\right) $ and $p(e) \propto \exp \left( {\bf v}_e^T {\bf B}_2 + b_e \right)$, respectively. We also concatenate $p(s)$ to the input of the second convolutional layer (along the $n_f$-dimension) so as to condition the end-boundary pointing on the start-boundary. Vectors ${\bf v}_s$, ${\bf v}_e \in \mathbb{R}^{n_f}$ and scalars $b_s$, $b_e \in \mathbb{R}$ are trainable parameters. We also provide an intermediate level of ``guidance'' to the annotation mechanism by first reducing the feature dimension $C$ in $\bf G$ with mean-pooling, then maximizing the softmax probabilities in the resulting ($n$-dimensional) vector corresponding to the answer word positions in each document. This auxiliary task is observed empirically to improve performance. \section{Experiments\protect\footnote{All experiments in this section use the subset of \emph{NewsQA} dataset with answer agreements (92,549 samples for training, 5,166 for validation, and 5,126 for testing). We leave the challenge of identifying the unanswerable questions for future work.}} \label{sec:exp} \subsection{Human evaluation} We tested four English speakers on a total of 1,000 questions from the \emph{NewsQA} development set. We used four performance measures: F1 and exact match (EM) scores (the same measures used by \emph{SQuAD}), as well as BLEU and CIDEr\footnote{We use \url{https://github.com/tylin/coco-caption} to calculate these two scores.}. BLEU is a precision-based metric popular in machine translation that uses a weighted average of variable length phrase matches ($n$-grams) against the reference sentence~\citep{papineni2002bleu}. CIDEr was designed to correlate better with human judgements of sentence similarity, and uses \emph{tf-idf} scores over $n$-grams~\citep{vedantam2015cider}. As given in Table~\ref{tab:datasetresults}, humans averaged \hevalnewsqaf1~ F1 on \emph{NewsQA}. The human EM scores are relatively low at \hevalnewsqaem. These lower scores are a reflection of the fact that, particularly in a dataset as complex as \emph{NewsQA}, there are multiple ways to select semantically equivalent answers, {\it e.g.}, ``1996'' versus ``in 1996''. Although these answers are equally correct they would be measured at 0.5 F1 and 0.0 EM. This suggests that simpler automatic metrics are not equal to the task of complex MC evaluation, a problem that has been noted in other domains~\citep{liu2016}. 
Therefore we also measure according to BLEU and CIDEr: humans score 0.560 and 3.596 on these metrics, respectively. The original \emph{SQuAD} evaluation of human performance compares distinct answers given by crowdworkers according to EM and F1; for a closer comparison with \emph{NewsQA}, we replicated our human test on the same number of validation data (1,000) with the same humans. We measured human answers against the second group of crowdsourced responses in \emph{SQuAD}'s development set, yielding \hevalsquad~ F1, 0.625 BLEU, and 3.998 CIDEr. Note that the F1 score is close to the top single-model performance of 0.778 achieved in~\cite{wang2016multi}. We finally compared human performance on the answers that had crowdworker agreement with and without validation, finding a difference of only 1.4 percentage points F1. This suggests our validation stage yields good-quality answers. \subsection{Model performance} \label{sec:model-perf} Performance of the baseline models and humans is measured by EM and F1 with the official evaluation script from \emph{SQuAD} and listed in Table~\ref{tab:datasetresults}. We supplement these with BLEU and CIDEr measures on the 1,000 human-annotated dev questions. Unless otherwise stated, hyperparameters are determined by \texttt{hyperopt} (Appendix~\ref{apd:impl-details}). The gap between human and machine performance on \emph{NewsQA} is a striking \hcgap~points F1 --- much larger than the gap on \emph{SQuAD} (0.098) under the same human evaluation scheme. The gaps suggest a large margin for improvement with machine comprehension methods. Figure~\ref{fig:stratification-at-rt} stratifies model (BARB) performance according to answer type (left) and reasoning type (right) as defined in Sections~\ref{sec:answer-types} and~\ref{sec:reasoning-types}, respectively. The answer-type stratification suggests that the model is better at pointing to named entities compared to other types of answers. The reasoning-type stratification, on the other hand, shows that questions requiring \emph{inference} and \emph{synthesis} are, not surprisingly, more difficult for the model. Consistent with observations in Table~\ref{tab:datasetresults}, stratified performance on \emph{NewsQA} is significantly lower than on \emph{SQuAD}. The difference is smallest on word matching and largest on synthesis. We postulate that the longer stories in \emph{NewsQA} make synthesizing information from separate sentences more difficult, since the relevant sentences may be farther apart. This requires the model to track longer-term dependencies. It is also interesting to observe that on \emph{SQuAD}, BARB outperforms human annotators in answering ambiguous questions or those with incomplete information. \subsection{Sentence-level scoring} We propose a simple sentence-level subtask as an additional quantitative demonstration of the relative difficulty of \emph{NewsQA}. Given a document and a question, the goal is to find the sentence containing the answer span. We hypothesize that simple techniques like word-matching are inadequate to this task owing to the more involved reasoning required by \emph{NewsQA}. We employ a technique that resembles inverse document frequency (\emph{idf}), which we call inverse sentence frequency (\emph{isf}). Given a sentence $\SSS_i$ from an article and its corresponding question $\QQ$, the \emph{isf} score is given by the sum of the \emph{idf} scores of the words common to $\SSS_i$ and $\QQ$ (each sentence is treated as a document for the \emph{idf} computation). 
The sentence with the highest \emph{isf} is taken as the answer sentence $\SSS_*$, that is, \[ \SSS_* = \argmax_i \sum_{w \in \SSS_i \cap \QQ} \mathit{isf}(w) .\] The \emph{isf} method achieves an impressive 79.4\% sentence-level accuracy on \emph{SQuAD}'s development set but only 35.4\% accuracy on \emph{NewsQA}'s development set, highlighting the comparative difficulty of the latter. To eliminate the difference in article length as a possible cause of the performance gap, we also artificially increased the article lengths in \emph{SQuAD} by concatenating adjacent \emph{SQuAD} articles {\it from the same Wikipedia article}. Accuracy decreases as expected with the increased \emph{SQuAD} article length, yet remains significantly higher than on \emph{NewsQA} with comparable or even greater article length (see Table~\ref{tab:lengthy-squad}). \section{Conclusion} \label{sec:conc} We have introduced a challenging new comprehension dataset: \emph{NewsQA}. We collected the 100,000+ examples of \emph{NewsQA} using teams of crowdworkers, who variously read CNN articles or highlights, posed questions about them, and determined answers. Our methodology yields diverse answer types and a significant proportion of questions that require some reasoning ability to solve. This makes the corpus challenging, as confirmed by the large performance gap between humans and deep neural models (\hcgap~F1, 0.479 BLEU, 1.165 CIDEr). By its size and complexity, \emph{NewsQA} makes a significant extension to the existing body of comprehension datasets. We hope that our corpus will spur further advances in machine comprehension and guide the development of literate artificial intelligence. \section*{Acknowledgments} The authors would like to thank \c{C}a\u{g}lar G\"{u}l\c{c}ehre, Sandeep Subramanian and Saizheng Zhang for helpful discussions. \bibliographystyle{iclr2017_conference} \newpage \section*{Appendices} \appendix \section{Implementation details} \label{apd:impl-details} Both mLSTM and BARB are implemented with the Keras framework \citep{keras} using the Theano \citep{theano10} backend. Word embeddings are initialized using GloVe vectors \citep{pennington2014} pre-trained on the 840-billion \emph{Common Crawl} corpus. The word embeddings are not updated during training. Embeddings for out-of-vocabulary words are initialized with zero. For both models, the training objective is to maximize the log likelihood of the boundary pointers. Optimization is performed using stochastic gradient descent (with a batch-size of 32) with the ADAM optimizer \citep{kingma2014}. The initial learning rate is 0.003 for mLSTM and 0.0005 for BARB. The learning rate is decayed by a factor of 0.7 if validation loss does not decrease at the end of each epoch. Gradient clipping \citep{pascanu2013difficulty} is applied with a threshold of 5. Parameter tuning is performed on both models using \texttt{hyperopt}\footnote{\url{https://github.com/hyperopt/hyperopt}}. For each model, configurations for the best observed performance are as follows: \textbf{mLSTM} Both the pre-processing layer and the answer-pointing layer use bi-directional RNN with a hidden size of 192. These settings are consistent with those used by \citet{wangsquad}. Model parameters are initialized with either the normal distribution ($\NN(0,0.05)$) or the orthogonal initialization ($\OO$, \citealt{saxe2013exact}) in Keras. All weight matrices in the LSTMs are initialized with $\OO$. 
In the Match-LSTM layer, $W^q$, $W^p$, and $W^r$ are initialized with $\OO$, $b^p$ and $w$ are initialized with $\NN$, and $b$ is initialized as 1. In the answer-pointing layer, $V$ and $W^a$ are initialized with $\OO$, $b^a$ and $v$ are initialized with $\NN$, and $c$ is initialized as 1. \textbf{BARB} For BARB, the following hyperparameters are used on both \emph{SQuAD} and \emph{NewsQA}: $d=300$, $D_1=128$, $C=64$, $D_2=256$, $w=3$, and $n_f=128$. Weight matrices in the GRU, the bilinear models, as well as the boundary decoder (${\bf v_s}$ and ${\bf v_e}$) are initialized with $\OO$. The filter weights in the boundary decoder are initialized with \emph{glorot\_uniform} (\citealt{glorot2010understanding}, default in Keras). The bilinear biases are initialized with $\NN$, and the boundary decoder biases are initialized with 0. \section{Data collection user interface} Here we present the user interfaces used in question sourcing, answer sourcing, and question/answer validation. \end{document}
NewsQA: A Machine Comprehension Dataset
1611.09830
Table 4: Human performance on SQuAD and NewsQA datasets. The first row is taken from Rajpurkar et al. (2016), and the last two rows correspond to machine performance (BARB) on the human-evaluated subsets.
[ "Dataset", "Exact Match", "F1", "BLEU", "CIDEr" ]
[ [ "[ITALIC] SQuAD", "0.803", "0.905", "-", "-" ], [ "[ITALIC] SQuAD (ours)", "0.650", "0.807", "0.625", "3.998" ], [ "[ITALIC] NewsQA", "0.465", "0.694", "0.560", "3.596" ], [ "[ITALIC] SQuADBARB", "0.553", "0.685", "0.366", "2.845" ], [ "[ITALIC] NewsQABARB", "0.340", "0.501", "0.081", "2.431" ] ]
The human EM scores are relatively low at 0.465. These lower scores are a reflection of the fact that, particularly in a dataset as complex as NewsQA, there are multiple ways to select semantically equivalent answers, e.g., “1996” versus “in 1996”. Although these answers are equally correct they would be measured at 0.5 F1 and 0.0 EM. Therefore we also measure according to BLEU and CIDEr: humans score 0.560 and 3.596 on these metrics, respectively. We supplement these with BLEU and CIDEr measures on the 1,000 human-annotated dev questions. Unless otherwise stated, hyperparameters are determined by hyperopt. The gap between human and machine performance on NewsQA is a striking 0.198 points F1 — much larger than the gap on SQuAD (0.098) under the same human evaluation scheme. The gaps suggest a large margin for improvement with machine comprehension methods. The answer-type stratification suggests that the model is better at pointing to named entities compared to other types of answers. The reasoning-type stratification, on the other hand, shows that questions requiring inference and synthesis are, not surprisingly, more difficult for the model. The difference is smallest on word matching and largest on synthesis. We postulate that the longer stories in NewsQA make synthesizing information from separate sentences more difficult, since the relevant sentences may be farther apart. This requires the model to track longer-term dependencies. It is also interesting to observe that on SQuAD, BARB outperforms human annotators in answering ambiguous questions or those with incomplete information.
\documentclass{article} \usepackage[utf8]{inputenc} % allow utf-8 input \usepackage[T1]{fontenc} % use 8-bit T1 fonts % hyperlinks % simple URL typesetting % professional-quality tables % blackboard math symbols % microtypography \usepackage[centertags]{amsmath} \def\x{\times} \def\S{\mathbf{S}} \def\H{\mathbf{H}} \renewcommand{\b}[1]{\mathbf{#1}} \newcommand{\eye}{\mathbf{I}} \newcommand{\CC}{\mathcal{C}} \newcommand{\DD}{\mathcal{D}} \newcommand{\FF}{\mathcal{F}} \newcommand{\GG}{\mathcal{G}} \newcommand{\HH}{\mathcal{H}} \newcommand{\II}{\mathcal{I}} \newcommand{\KK}{\mathcal{K}} \newcommand{\LL}{\mathcal{L}} \newcommand{\MM}{\mathcal{M}} \newcommand{\NN}{\mathcal{N}} \newcommand{\OO}{\mathcal{O}} \newcommand{\PP}{\mathcal{P}} \newcommand{\QQ}{\mathcal{Q}} \newcommand{\RR}{\mathcal{R}} \newcommand{\SSS}{\mathcal{S}} \newcommand{\TT}{\mathcal{T}} \newcommand{\VV}{\mathcal{V}} \newcommand{\WW}{\mathcal{W}} \newcommand{\XX}{\mathcal{X}} \newcommand{\YY}{\mathcal{Y}} \newcommand{\ZZ}{\mathcal{Z}} \DeclareMathOperator*{\argmin}{arg\,min} \DeclareMathOperator*{\argmax}{arg\,max} \DeclareMathOperator*{\expect}{\mathbb{E}} \DeclareMathOperator*{\minimize}{minimize} \DeclareMathOperator*{\maximize}{maximize} \DeclareMathOperator*{\xent}{xent} \DeclareMathOperator*{\ent}{ent} \DeclareMathOperator*{\softmax}{softmax} \DeclareMathOperator*{\KL}{KL} \DeclareMathOperator*{\lrelu}{lrelu} \DeclareMathOperator*{\relu}{relu} \DeclareMathOperator*{\conv}{conv} \newcommand*\equalcontr[1][\value{footnote}]{\footnotemark[#1]} \def\hevalnewsqaf1{0.694}\xspace \def\hevalnewsqaem{0.465}\xspace \def\hevalsquad{0.807}\xspace \def\hcgap{0.198}\xspace \def\aibest{0.496}\xspace \title{NewsQA: A Machine Comprehension Dataset} \author{Adam Trischler\thanks{These three authors contributed equally.}\qquad\qquad Tong Wang\equalcontr\qquad\qquad Xingdi Yuan\equalcontr\qquad\qquad Justin Harris\\\ \vspace{-2mm}\\{\bf Alessandro Sordoni\qquad\qquad Philip Bachman\qquad\qquad Kaheer Suleman} \\\ \\ {\tt \{adam.trischler, tong.wang, eric.yuan, justin.harris,}\\ {\tt \ alessandro.sordoni, phil.bachman, k.suleman\}@maluuba.com} \\ Maluuba Research \\ Montr\'{e}al, Qu\'{e}bec, Canada } \newcommand{\fix}{\marginpar{FIX}} \newcommand{\new}{\marginpar{NEW}} \begin{document} \maketitle \begin{abstract} We present \emph{NewsQA}, a challenging machine comprehension dataset of over 100,000 human-generated question-answer pairs. Crowdworkers supply questions and answers based on a set of over 10,000 news articles from CNN, with answers consisting of spans of text from the corresponding articles. We collect this dataset through a four-stage process designed to solicit exploratory questions that require reasoning. A thorough analysis confirms that \emph{NewsQA} demands abilities beyond simple word matching and recognizing textual entailment. We measure human performance on the dataset and compare it to several strong neural models. The performance gap between humans and machines (\hcgap~in F1) indicates that significant progress can be made on \emph{NewsQA} through future research. The dataset is freely available at \url{https://datasets.maluuba.com/NewsQA}. \end{abstract} \section{Introduction} Almost all human knowledge is recorded in the medium of text. As such, comprehension of written language by machines, at a near-human level, would enable a broad class of artificial intelligence applications. In human students we evaluate reading comprehension by posing questions based on a text passage and then assessing a student's answers. 
Such comprehension tests are appealing because they are objectively gradable and may measure a range of important abilities, from basic understanding to causal reasoning to inference~\citep{richardson2013}. To teach literacy to machines, the research community has taken a similar approach with machine comprehension (MC). Recent years have seen the release of a host of MC datasets. Generally, these consist of (document, question, answer) triples to be used in a supervised learning framework. Existing datasets vary in size, difficulty, and collection methodology; however, as pointed out by~\citet{squad}, most suffer from one of two shortcomings: those that are designed explicitly to test comprehension~\citep{richardson2013} are too small for training data-intensive deep learning models, while those that are sufficiently large for deep learning~\citep{hermann2015,hill2015,kadlecdata} are generated synthetically, yielding questions that are not posed in natural language and that may not test comprehension directly~\citep{chenCNN}. More recently, \citet{squad} sought to overcome these deficiencies with their crowdsourced dataset, \emph{SQuAD}. Here we present a challenging new largescale dataset for machine comprehension: \emph{NewsQA}. \emph{NewsQA} contains 119,633 natural language questions posed by crowdworkers on 12,744 news articles from CNN. Answers to these questions consist of spans of text within the corresponding article highlighted also by crowdworkers. To build \emph{NewsQA} we utilized a four-stage collection process designed to encourage exploratory, curiosity-based questions that reflect human information seeking. CNN articles were chosen as the source material because they have been used in the past~\citep{hermann2015} and, in our view, machine comprehension systems are particularly suited to high-volume, rapidly changing information sources like news. As~\citet{trischler2016},~\citet{chenCNN}, and others have argued, it is important for datasets to be sufficiently challenging to teach models the abilities we wish them to learn. Thus, in line with~\citet{richardson2013}, our goal with \emph{NewsQA} was to construct a corpus of questions that necessitates reasoning-like behaviors -- for example, synthesis of information across different parts of an article. We designed our collection methodology explicitly to capture such questions. The challenging characteristics of \emph{NewsQA} that distinguish it from most previous comprehension tasks are as follows: \begin{enumerate} \item Answers are spans of arbitrary length within an article, rather than single words or entities. \item Some questions have no answer in the corresponding article (the \emph{null} span). \item There are no candidate answers from which to choose. \item Our collection process encourages lexical and syntactic divergence between questions and answers. \item A significant proportion of questions requires reasoning beyond simple word- and context-matching (as shown in our analysis). \end{enumerate} Some of these characteristics are present also in \emph{SQuAD}, the MC dataset most similar to \emph{NewsQA}. However, we demonstrate through several metrics that \emph{NewsQA} offers a greater challenge to existing models. In this paper we describe the collection methodology for \emph{NewsQA}, provide a variety of statistics to characterize it and contrast it with previous datasets, and assess its difficulty. In particular, we measure human performance and compare it to that of two strong neural-network baselines. 
Humans significantly outperform powerful question-answering models. This suggests there is room for improvement through further advances in machine comprehension research. \section{Related Datasets} \label{sec:related} \emph{NewsQA} follows in the tradition of several recent comprehension datasets. These vary in size, difficulty, and collection methodology, and each has its own distinguishing characteristics. We agree with~\citet{kadlecdata} who have said ``models could certainly benefit from as diverse a collection of datasets as possible.'' We discuss this collection below. \subsection{MCTest} {\it MCTest}~\citep{richardson2013} is a crowdsourced collection of 660 elementary-level children's stories with associated questions and answers. The stories are fictional, to ensure that the answer must be found in the text itself, and carefully limited to what a young child can understand. Each question comes with a set of 4 candidate answers that range from single words to full explanatory sentences. The questions are designed to require rudimentary reasoning and synthesis of information across sentences, making the dataset quite challenging. This is compounded by the dataset's size, which limits the training of expressive statistical models. Nevertheless, recent comprehension models have performed well on {\it MCTest}~\citep{sachan2015,wangMC}, including a highly structured neural model~\citep{trischler2016}. These models all rely on access to the small set of candidate answers, a crutch that \emph{NewsQA} does not provide. \subsection{CNN/Daily Mail} The \emph{CNN/Daily Mail} corpus~\citep{hermann2015} consists of news articles scraped from those outlets with corresponding cloze-style questions. Cloze questions are constructed synthetically by deleting a single entity from abstractive summary points that accompany each article (written presumably by human authors). As such, determining the correct answer relies mostly on recognizing textual entailment between the article and the question. The named entities within an article are identified and anonymized in a preprocessing step and constitute the set of candidate answers; contrast this with \emph{NewsQA} in which answers often include longer phrases and no candidates are given. Because the cloze process is automatic, it is straightforward to collect a significant amount of data to support deep-learning approaches: \emph{CNN/Daily Mail} contains about 1.4 million question-answer pairs. However,~\citet{chenCNN} demonstrated that the task requires only limited reasoning and, in fact, performance of the strongest models~\citep{kadlec2016,epireader,iaa} nearly matches that of humans. \subsection{Children's Book Test} The \emph{Children's Book Test} (\emph{CBT})~\citep{hill2015} was collected using a process similar to that of \emph{CNN/Daily Mail}. Text passages are 20-sentence excerpts from children's books available through Project Gutenberg; questions are generated by deleting a single word in the next ({\it i.e.},~21st) sentence. Consequently, \emph{CBT} evaluates word prediction based on context. It is a comprehension task insofar as comprehension is likely necessary for this prediction, but comprehension may be insufficient and other mechanisms may be more important. \subsection{BookTest} \citet{kadlecdata} convincingly argue that, because existing datasets are not large enough, we have yet to reach the full capacity of existing comprehension models. As a remedy they present \emph{BookTest}. 
This is an extension to the named-entity and common-noun strata of \emph{CBT} that increases their size by over 60 times. \citet{kadlecdata} demonstrate that training on the augmented dataset yields a model~\citep{kadlec2016} that matches human performance on \emph{CBT}. This is impressive and suggests that much is to be gained from more data, but we repeat our concerns about the relevance of story prediction as a comprehension task. We also wish to encourage more efficient learning from less data. \subsection{SQuAD} The comprehension dataset most closely related to \emph{NewsQA} is \emph{SQuAD}~\citep{squad}. It consists of natural language questions posed by crowdworkers on paragraphs from high-PageRank Wikipedia articles. As in \emph{NewsQA}, each answer consists of a span of text from the related paragraph and no candidates are provided. Despite the effort of manual labelling, \emph{SQuAD}'s size is significant and amenable to deep learning approaches: 107,785 question-answer pairs based on 536 articles. Although \emph{SQuAD} is a more realistic and more challenging comprehension task than the other largescale MC datasets, machine performance has rapidly improved towards that of humans in recent months. The \emph{SQuAD} authors measured human accuracy at 0.905 in F1 (we measured human F1 at \hevalsquad~using a different methodology); at the time of writing, the strongest published model to date achieves 0.778 F1~\citep{wang2016multi}. This suggests that new, more difficult alternatives like \emph{NewsQA} could further push the development of more intelligent MC systems. \section{Collection methodology} \label{sec:method} We collected \emph{NewsQA} through a four-stage process: article curation, question sourcing, answer sourcing, and validation. We also applied a post-processing step with answer agreement consolidation and span merging to enhance the usability of the dataset. These steps are detailed below. \subsection{Article curation} We retrieve articles from CNN using the script created by~\citet{hermann2015} for \emph{CNN/Daily Mail}. From the returned set of 90,266 articles, we select 12,744 uniformly at random. These cover a wide range of topics that includes politics, economics, and current events. Articles are partitioned at random into a training set (90\%), a development set (5\%), and a test set (5\%). \subsection{Question sourcing} It was important to us to collect challenging questions that could not be answered using straightforward word- or context-matching. Like~\citet{richardson2013} we want to encourage reasoning in comprehension models. We are also interested in questions that, in some sense, model human curiosity and reflect actual human use-cases of information seeking. Along a similar line, we consider it an important (though as yet overlooked) capacity of a comprehension model to recognize when given information is inadequate, so we are also interested in questions that may not have sufficient evidence in the text. Our question sourcing stage was designed to solicit questions of this nature, and deliberately separated from the answer sourcing stage for the same reason. {\it Questioners} (a distinct set of crowdworkers) see \emph{only} a news article's headline and its summary points (also available from CNN); they do not see the full article itself. They are asked to formulate a question from this incomplete information. 
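For concreteness, the uniform article sampling and the 90\%/5\%/5\% train/development/test partition described above can be sketched as follows. This is an illustration only, not the authors' pipeline: the article identifiers and the random seed are placeholders.
\begin{verbatim}
import random

article_ids = ["article_%05d" % i for i in range(12744)]  # placeholder IDs
rng = random.Random(0)                                    # arbitrary seed
rng.shuffle(article_ids)

n = len(article_ids)
n_train, n_dev = int(0.90 * n), int(0.05 * n)
splits = {
    "train": article_ids[:n_train],
    "dev":   article_ids[n_train:n_train + n_dev],
    "test":  article_ids[n_train + n_dev:],
}
print({k: len(v) for k, v in splits.items()})
# {'train': 11469, 'dev': 637, 'test': 638}
\end{verbatim}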
This encourages curiosity about the contents of the full article and prevents questions that are simple reformulations of sentences in the text. It also increases the likelihood of questions whose answers do not exist in the text. We reject questions that have significant word overlap with the summary points to ensure that crowdworkers do not treat the summaries as mini-articles, and further discouraged this in the instructions. During collection each Questioner is solicited for up to three questions about an article. They are provided with positive and negative examples to prompt and guide them (detailed instructions are shown in Figure~\ref{fig:turk-q-source-instructions}). \subsection{Answer sourcing} A second set of crowdworkers ({\it Answerers}) provide answers. Although this separation of question and answer increases the overall cognitive load, we hypothesized that unburdening Questioners in this way would encourage more complex questions. Answerers receive a full article along with a crowdsourced question and are tasked with determining the answer. They may also reject the question as nonsensical, or select the {\it null} answer if the article contains insufficient information. Answers are submitted by clicking on and highlighting words in the article, while instructions encourage the set of answer words to consist of a single continuous span (again, we give an example prompt in the Appendix). For each question we solicit answers from multiple crowdworkers (avg. 2.73) with the aim of achieving agreement between at least two Answerers. \subsection{Validation} Crowdsourcing is a powerful tool but it is not without peril (collection glitches; uninterested or malicious workers). To obtain a dataset of the highest possible quality we use a validation process that mitigates some of these issues. In validation, a third set of crowdworkers sees the full article, a question, and the set of unique answers to that question. We task these workers with choosing the best answer from the candidate set or rejecting all answers. Each article-question pair is validated by an average of 2.48 crowdworkers. Validation was used on those questions \emph{without} answer-agreement after the previous stage, amounting to 43.2\% of all questions. \subsection{Answer marking and cleanup} After validation, 86.0\% of all questions in \emph{NewsQA} have answers agreed upon by at least two separate crowdworkers---either at the initial answer sourcing stage or in the top-answer selection. This improves the dataset's quality. We choose to include the questions without agreed answers in the corpus also, but they are specially marked. Such questions could be treated as having the \emph{null} answer and used to train models that are aware of poorly posed questions. As a final cleanup step we combine answer spans that are less than 3 words apart (punctuation is discounted). We find that 5.68\% of answers consist of multiple spans, while 71.3\% of multi-spans are within the 3-word threshold. Looking more closely at the data reveals that the multi-span answers often represent lists. These may present an interesting challenge for comprehension models moving forward. \section{Data analysis} \label{sec:anal} We provide a thorough analysis of \emph{NewsQA} to demonstrate its challenge and its usefulness as a machine comprehension benchmark. 
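To illustrate the span-merging cleanup described above, in which answer spans fewer than 3 words apart are combined, a minimal sketch follows. Spans are assumed to be inclusive (start, end) word indices; the function name is ours and the punctuation-discounting step is omitted.
\begin{verbatim}
def merge_close_spans(spans, max_gap=3):
    """Merge word-index spans separated by fewer than `max_gap` words."""
    merged = []
    for start, end in sorted(spans):
        if merged and start - merged[-1][1] - 1 < max_gap:
            merged[-1][1] = max(merged[-1][1], end)   # close enough: merge
        else:
            merged.append([start, end])               # otherwise keep separate
    return [tuple(s) for s in merged]

# Two spans separated by a 2-word gap are merged; the distant one is kept.
print(merge_close_spans([(4, 6), (9, 10), (20, 22)]))  # [(4, 10), (20, 22)]
\end{verbatim}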
The analysis focuses on the types of answers that appear in the dataset and the various forms of reasoning required to solve it.\footnote{Additional statistics are available at \url{https://datasets.maluuba.com/NewsQA/stats}.} \subsection{Answer types} \label{sec:answer-types} Following~\citet{squad}, we categorize answers based on their linguistic type (see Table~\ref{tab:a-type}). This categorization relies on Stanford CoreNLP to generate constituency parses, POS tags, and NER tags for answer spans (see~\citet{squad} for more details). From the table we see that the largest single category of answers (22.2\%) is common noun phrases. Thereafter, answers are fairly evenly spread among the clause phrase (18.3\%), person (14.8\%), numeric (9.8\%), and other (11.2\%) types. Clearly, answers in \emph{NewsQA} are linguistically diverse. The proportions in Table~\ref{tab:a-type} only account for cases when an answer span exists. The complement of this set comprises questions with an agreed \emph{null} answer (9.5\% of the full corpus) and answers without agreement after validation (4.5\% of the full corpus). \subsection{Reasoning types} \label{sec:reasoning-types} The forms of reasoning required to solve \emph{NewsQA} directly influence the abilities that models will learn from the dataset. We stratified reasoning types using a variation on the taxonomy presented by~\citet{chenCNN} in their analysis of the \emph{CNN/Daily Mail} dataset. Types are as follows, in ascending order of difficulty: \begin{enumerate} \item {\bf Word Matching:} Important words in the question exactly match words in the immediate context of an answer span, such that a keyword search algorithm could perform well on this subset. \item {\bf Paraphrasing:} A single sentence in the article entails or paraphrases the question. Paraphrase recognition may require synonymy and world knowledge. \item {\bf Inference:} The answer must be inferred from incomplete information in the article or by recognizing conceptual overlap. This typically draws on world knowledge. \item {\bf Synthesis:} The answer can only be inferred by synthesizing information distributed across multiple sentences. \item {\bf Ambiguous/Insufficient:} The question has no answer or no unique answer in the article. \end{enumerate} For both \emph{NewsQA} and \emph{SQuAD}, we manually labelled 1,000 examples (drawn randomly from the respective development sets) according to these types and compiled the results in Table~\ref{tab:r-type}. Some examples fall into more than one category, in which case we defaulted to the more challenging type. We can see from the table that word matching, the easiest type, makes up the largest subset in both datasets (32.7\% for \emph{NewsQA} and 39.8\% for \emph{SQuAD}). Paraphrasing constitutes a larger proportion in \emph{SQuAD} than in \emph{NewsQA} (34.3\% vs 27.0\%), possibly a result of the explicit encouragement of lexical variety in \emph{SQuAD} question sourcing. However, \emph{NewsQA} contains a significantly larger proportion of the more difficult forms of reasoning: synthesis and inference make up a combined 33.9\% of the data, in contrast to 20.5\% in \emph{SQuAD}. \section{Baseline models} \label{sec:models} We test the performance of three comprehension systems on \emph{NewsQA}: human data analysts and two neural models. The first neural model is the match-LSTM (mLSTM) system of~\citet{wangsquad}. The second is a model of our own design that is similar but computationally cheaper.
We describe these models below but omit the personal details of our analysts. Implementation details of the models are described in Appendix~\ref{apd:impl-details}. \subsection{Match-LSTM} We selected the mLSTM model because it is straightforward to implement and offers strong, though not state-of-the-art, performance on the similar \emph{SQuAD} dataset. There are three stages involved in the mLSTM. First, LSTM networks encode the document and question (represented by GloVe word embeddings~\citep{pennington2014}) as sequences of hidden states. Second, an mLSTM network~\citep{wang2015snli} compares the document encodings with the question encodings. This network processes the document sequentially and at each token uses an attention mechanism to obtain a weighted vector representation of the question; the weighted combination is concatenated with the encoding of the current token and fed into a standard LSTM. Finally, a Pointer Network uses the hidden states of the mLSTM to select the boundaries of the answer span. We refer the reader to~\citet{wang2015snli,wangsquad} for full details. \subsection{The Bilinear Annotation Re-encoding Boundary (BARB) Model} The match-LSTM is computationally intensive since it computes an attention over the entire question at each document token in the recurrence. To facilitate faster experimentation with \emph{NewsQA} we developed a lighter-weight model (BARB) that achieves similar results on \emph{SQuAD}\footnote{With the configurations for the results reported in Section~\ref{sec:model-perf}, one epoch of training on \emph{NewsQA} takes about 3.9k seconds for \emph{BARB} and 8.1k seconds for \emph{mLSTM}.}. Our model consists of four stages: \paragraph{Encoding} All words in the document and question are mapped to real-valued vectors using the GloVe embeddings ${\bf W} \in \mathbb{R}^{|V| \times d}$. This yields ${\bf d}_1, \ldots, {\bf d}_n \in \mathbb{R}^d$ and ${\bf q}_1, \ldots, {\bf q}_m \in \mathbb{R}^d$. A bidirectional GRU network~\citep{bahdanau2014} encodes ${\bf d}_i$ into contextual states ${\bf h}_i \in \mathbb{R}^{D_1}$ for the document. The same encoder is applied to ${\bf q}_j$ to derive contextual states ${\bf k}_j \in \mathbb{R}^{D_1}$ for the question.\footnote{A bidirectional GRU concatenates the hidden states of two GRU networks running in opposite directions. Each of these has hidden size $\frac{1}{2}D_1$.} \paragraph{Bilinear Annotation} Next we compare the document and question encodings using a set of $C$ bilinear transformations, \begin{equation*} {\bf g}_{ij} = {\bf h}_i^T {\bf T}^{[1:C]} {\bf k}_j, \quad {\bf T}^c \in \mathbb{R}^{D_1 \times D_1},~{\bf g}_{ij} \in \mathbb{R}^C, \end{equation*} which we use to produce an $(n \times m \times C)$-dimensional tensor of annotation scores, ${\bf G} = [{\bf g}_{ij}]$. We take the maximum over the question-token (second) dimension and call the columns of the resulting matrix ${\bf g}_i \in \mathbb{R}^C$. We use this matrix as an annotation over the document word dimension. In contrast with the more typical multiplicative application of attention vectors, this annotation matrix is concatenated to the encoder RNN input in the re-encoding stage. \paragraph{Re-encoding} For each document word, the input of the re-encoding RNN (another biGRU) consists of three components: the document encodings $\bf h_i$, the annotation vectors $\bf g_i$, and a binary feature $q_i$ indicating whether the document word appears in the question. 
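The bilinear annotation step described above can be written compactly as a single tensor contraction. The NumPy sketch below is ours: shapes follow the notation in the text, and random values stand in for learned encodings and parameters.
\begin{verbatim}
import numpy as np

n, m, D1, C = 50, 10, 128, 64        # document length, question length, sizes
H = np.random.randn(n, D1)           # document encodings h_i
K = np.random.randn(m, D1)           # question encodings k_j
T = np.random.randn(C, D1, D1)       # bilinear maps T^1 ... T^C

G = np.einsum('id,cde,je->ijc', H, T, K)  # G[i, j, c] = h_i^T T^c k_j
g = G.max(axis=1)                         # max over question tokens

print(G.shape, g.shape)              # (50, 10, 64) (50, 64)
\end{verbatim}
The matrix of columns ${\bf g}_i$ is then concatenated to the re-encoder input rather than applied multiplicatively, as noted above.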
The resulting vectors ${\bf f}_i = [{\bf h}_i; {\bf g}_i; q_i]$ are fed into the re-encoding RNN to produce $D_2$-dimensional encodings ${\bf e}_i$ for the boundary-pointing stage. \paragraph{Boundary pointing} Finally, we search for the boundaries of the answer span using a convolutional network (in a process similar to edge detection). Encodings ${\bf e}_i$ are arranged in matrix ${\bf E} \in \mathbb{R}^{D_2 \times n}$. ${\bf E}$ is convolved with a bank of $n_f$ filters, $\mathbf{F}_k^\ell \in \mathbb{R}^{D_2 \times w}$, where $w$ is the filter width, $k$ indexes the different filters, and $\ell$ indexes the layer of the convolutional network. Each layer has the same number of filters of the same dimensions. We add a bias term and apply a nonlinearity (ReLU) following each convolution, with the result an $(n_f \times n)$-dimensional matrix ${\bf B}_\ell$. We use two convolutional layers in the boundary-pointing stage. Given ${\bf B}_1$ and ${\bf B}_2$, the answer span's start- and end-location probabilities are computed using $p(s) \propto \exp \left( {\bf v}_s^T {\bf B}_1 + b_s\right) $ and $p(e) \propto \exp \left( {\bf v}_e^T {\bf B}_2 + b_e \right)$, respectively. We also concatenate $p(s)$ to the input of the second convolutional layer (along the $n_f$-dimension) so as to condition the end-boundary pointing on the start-boundary. Vectors ${\bf v}_s$, ${\bf v}_e \in \mathbb{R}^{n_f}$ and scalars $b_s$, $b_e \in \mathbb{R}$ are trainable parameters. We also provide an intermediate level of ``guidance'' to the annotation mechanism by first reducing the feature dimension $C$ in $\bf G$ with mean-pooling, then maximizing the softmax probabilities in the resulting ($n$-dimensional) vector corresponding to the answer word positions in each document. This auxiliary task is observed empirically to improve performance. \section{Experiments\protect\footnote{All experiments in this section use the subset of \emph{NewsQA} dataset with answer agreements (92,549 samples for training, 5,166 for validation, and 5,126 for testing). We leave the challenge of identifying the unanswerable questions for future work.}} \label{sec:exp} \subsection{Human evaluation} We tested four English speakers on a total of 1,000 questions from the \emph{NewsQA} development set. We used four performance measures: F1 and exact match (EM) scores (the same measures used by \emph{SQuAD}), as well as BLEU and CIDEr\footnote{We use \url{https://github.com/tylin/coco-caption} to calculate these two scores.}. BLEU is a precision-based metric popular in machine translation that uses a weighted average of variable length phrase matches ($n$-grams) against the reference sentence~\citep{papineni2002bleu}. CIDEr was designed to correlate better with human judgements of sentence similarity, and uses \emph{tf-idf} scores over $n$-grams~\citep{vedantam2015cider}. As given in Table~\ref{tab:datasetresults}, humans averaged \hevalnewsqaf1~ F1 on \emph{NewsQA}. The human EM scores are relatively low at \hevalnewsqaem. These lower scores are a reflection of the fact that, particularly in a dataset as complex as \emph{NewsQA}, there are multiple ways to select semantically equivalent answers, {\it e.g.}, ``1996'' versus ``in 1996''. Although these answers are equally correct they would be measured at 0.5 F1 and 0.0 EM. This suggests that simpler automatic metrics are not equal to the task of complex MC evaluation, a problem that has been noted in other domains~\citep{liu2016}. 
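The boundary-pointing distributions described above amount to projecting each convolutional feature map to one score per document position and normalising with a softmax. The sketch below is illustrative only: random values replace the learned filters, and the conditioning of the second layer on $p(s)$ is omitted.
\begin{verbatim}
import numpy as np

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

n, n_f = 50, 128                      # document length, number of filters
B1 = np.random.randn(n_f, n)          # output of the first conv layer
B2 = np.random.randn(n_f, n)          # output of the second conv layer
v_s, v_e = np.random.randn(n_f), np.random.randn(n_f)
b_s, b_e = 0.0, 0.0

p_start = softmax(v_s @ B1 + b_s)     # p(s) over the n positions
p_end   = softmax(v_e @ B2 + b_e)     # p(e) over the n positions
print(p_start.shape, round(float(p_start.sum()), 3))  # (50,) 1.0
\end{verbatim}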
Therefore we also measure according to BLEU and CIDEr: humans score 0.560 and 3.596 on these metrics, respectively. The original \emph{SQuAD} evaluation of human performance compares distinct answers given by crowdworkers according to EM and F1; for a closer comparison with \emph{NewsQA}, we replicated our human test on the same number of validation data (1,000) with the same humans. We measured human answers against the second group of crowdsourced responses in \emph{SQuAD}'s development set, yielding \hevalsquad~ F1, 0.625 BLEU, and 3.998 CIDEr. Note that the F1 score is close to the top single-model performance of 0.778 achieved in~\cite{wang2016multi}. We finally compared human performance on the answers that had crowdworker agreement with and without validation, finding a difference of only 1.4 percentage points F1. This suggests our validation stage yields good-quality answers. \subsection{Model performance} \label{sec:model-perf} Performance of the baseline models and humans is measured by EM and F1 with the official evaluation script from \emph{SQuAD} and listed in Table~\ref{tab:datasetresults}. We supplement these with BLEU and CIDEr measures on the 1,000 human-annotated dev questions. Unless otherwise stated, hyperparameters are determined by \texttt{hyperopt} (Appendix~\ref{apd:impl-details}). The gap between human and machine performance on \emph{NewsQA} is a striking \hcgap~points F1 --- much larger than the gap on \emph{SQuAD} (0.098) under the same human evaluation scheme. The gaps suggest a large margin for improvement with machine comprehension methods. Figure~\ref{fig:stratification-at-rt} stratifies model (BARB) performance according to answer type (left) and reasoning type (right) as defined in Sections~\ref{sec:answer-types} and~\ref{sec:reasoning-types}, respectively. The answer-type stratification suggests that the model is better at pointing to named entities compared to other types of answers. The reasoning-type stratification, on the other hand, shows that questions requiring \emph{inference} and \emph{synthesis} are, not surprisingly, more difficult for the model. Consistent with observations in Table~\ref{tab:datasetresults}, stratified performance on \emph{NewsQA} is significantly lower than on \emph{SQuAD}. The difference is smallest on word matching and largest on synthesis. We postulate that the longer stories in \emph{NewsQA} make synthesizing information from separate sentences more difficult, since the relevant sentences may be farther apart. This requires the model to track longer-term dependencies. It is also interesting to observe that on \emph{SQuAD}, BARB outperforms human annotators in answering ambiguous questions or those with incomplete information. \subsection{Sentence-level scoring} We propose a simple sentence-level subtask as an additional quantitative demonstration of the relative difficulty of \emph{NewsQA}. Given a document and a question, the goal is to find the sentence containing the answer span. We hypothesize that simple techniques like word-matching are inadequate to this task owing to the more involved reasoning required by \emph{NewsQA}. We employ a technique that resembles inverse document frequency (\emph{idf}), which we call inverse sentence frequency (\emph{isf}). Given a sentence $\SSS_i$ from an article and its corresponding question $\QQ$, the \emph{isf} score is given by the sum of the \emph{idf} scores of the words common to $\SSS_i$ and $\QQ$ (each sentence is treated as a document for the \emph{idf} computation). 
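To make the \emph{isf} heuristic concrete before its formal statement below, here is a minimal sketch. It assumes lowercased whitespace tokenization and computes the sentence-level \emph{idf} over the sentences of a single article; both are our simplifications and not a specification of the preprocessing actually used.
\begin{verbatim}
import math
from collections import Counter

def isf_scores(sentences):
    """Per-word idf with each sentence treated as a 'document'."""
    df = Counter()
    for sent in sentences:
        df.update(set(sent))
    n = len(sentences)
    return {w: math.log(n / df[w]) for w in df}

def best_sentence(article_sentences, question):
    sents = [s.lower().split() for s in article_sentences]
    q = set(question.lower().split())
    isf = isf_scores(sents)
    scores = [sum(isf[w] for w in set(s) & q) for s in sents]
    return max(range(len(sents)), key=lambda i: scores[i])

article = ["The storm hit the coast on Monday .",
           "Officials said thousands were evacuated before landfall .",
           "Schools will remain closed until Friday ."]
print(best_sentence(article, "How many people were evacuated ?"))  # 1
\end{verbatim}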
The sentence with the highest \emph{isf} is taken as the answer sentence $\SSS_*$, that is, \[ \SSS_* = \argmax_i \sum_{w \in \SSS_i \cap \QQ} \mathit{isf}(w) .\] The \emph{isf} method achieves an impressive 79.4\% sentence-level accuracy on \emph{SQuAD}'s development set but only 35.4\% accuracy on \emph{NewsQA}'s development set, highlighting the comparative difficulty of the latter. To eliminate the difference in article length as a possible cause of the performance gap, we also artificially increased the article lengths in \emph{SQuAD} by concatenating adjacent \emph{SQuAD} articles {\it from the same Wikipedia article}. Accuracy decreases as expected with the increased \emph{SQuAD} article length, yet remains significantly higher than on \emph{NewsQA} with comparable or even greater article length (see Table~\ref{tab:lengthy-squad}). \section{Conclusion} \label{sec:conc} We have introduced a challenging new comprehension dataset: \emph{NewsQA}. We collected the 100,000+ examples of \emph{NewsQA} using teams of crowdworkers, who variously read CNN articles or highlights, posed questions about them, and determined answers. Our methodology yields diverse answer types and a significant proportion of questions that require some reasoning ability to solve. This makes the corpus challenging, as confirmed by the large performance gap between humans and deep neural models (\hcgap~F1, 0.479 BLEU, 1.165 CIDEr). By its size and complexity, \emph{NewsQA} makes a significant extension to the existing body of comprehension datasets. We hope that our corpus will spur further advances in machine comprehension and guide the development of literate artificial intelligence. \section*{Acknowledgments} The authors would like to thank \c{C}a\u{g}lar G\"{u}l\c{c}ehre, Sandeep Subramanian and Saizheng Zhang for helpful discussions. \bibliographystyle{iclr2017_conference} \newpage \section*{Appendices} \appendix \section{Implementation details} \label{apd:impl-details} Both mLSTM and BARB are implemented with the Keras framework \citep{keras} using the Theano \citep{theano10} backend. Word embeddings are initialized using GloVe vectors \citep{pennington2014} pre-trained on the 840-billion \emph{Common Crawl} corpus. The word embeddings are not updated during training. Embeddings for out-of-vocabulary words are initialized with zero. For both models, the training objective is to maximize the log likelihood of the boundary pointers. Optimization is performed using stochastic gradient descent (with a batch-size of 32) with the ADAM optimizer \citep{kingma2014}. The initial learning rate is 0.003 for mLSTM and 0.0005 for BARB. The learning rate is decayed by a factor of 0.7 if validation loss does not decrease at the end of each epoch. Gradient clipping \citep{pascanu2013difficulty} is applied with a threshold of 5. Parameter tuning is performed on both models using \texttt{hyperopt}\footnote{\url{https://github.com/hyperopt/hyperopt}}. For each model, configurations for the best observed performance are as follows: \textbf{mLSTM} Both the pre-processing layer and the answer-pointing layer use bi-directional RNN with a hidden size of 192. These settings are consistent with those used by \citet{wangsquad}. Model parameters are initialized with either the normal distribution ($\NN(0,0.05)$) or the orthogonal initialization ($\OO$, \citealt{saxe2013exact}) in Keras. All weight matrices in the LSTMs are initialized with $\OO$. 
In the Match-LSTM layer, $W^q$, $W^p$, and $W^r$ are initialized with $\OO$, $b^p$ and $w$ are initialized with $\NN$, and $b$ is initialized as 1. In the answer-pointing layer, $V$ and $W^a$ are initialized with $\OO$, $b^a$ and $v$ are initialized with $\NN$, and $c$ is initialized as 1. \textbf{BARB} For BARB, the following hyperparameters are used on both \emph{SQuAD} and \emph{NewsQA}: $d=300$, $D_1=128$, $C=64$, $D_2=256$, $w=3$, and $n_f=128$. Weight matrices in the GRU, the bilinear models, as well as the boundary decoder (${\bf v_s}$ and ${\bf v_e}$) are initialized with $\OO$. The filter weights in the boundary decoder are initialized with \emph{glorot\_uniform} (\citealt{glorot2010understanding}, default in Keras). The bilinear biases are initialized with $\NN$, and the boundary decoder biases are initialized with 0. \section{Data collection user interface} Here we present the user interfaces used in question sourcing, answer sourcing, and question/answer validation. \end{document}
NewsQA: A Machine Comprehension Dataset
1611.09830
Table 5: Sentence-level accuracy on artificially-lengthened SQuAD documents.
[ "# documents", "[ITALIC] SQuAD 1", "[ITALIC] SQuAD 3", "[ITALIC] SQuAD 5", "[ITALIC] SQuAD 7", "[ITALIC] SQuAD 9", "[ITALIC] NewsQA 1" ]
[ [ "Avg # sentences", "4.9", "14.3", "23.2", "31.8", "40.3", "30.7" ], [ "[ITALIC] isf", "79.6", "74.9", "73.0", "72.3", "71.0", "35.4" ] ]
The isf method achieves an impressive 79.4% sentence-level accuracy on SQuAD’s development set but only 35.4% accuracy on NewsQA’s development set, highlighting the comparative difficulty of the latter. To eliminate the difference in article length as a possible cause of the performance gap, we also artificially increased the article lengths in SQuAD by concatenating adjacent SQuAD articles from the same Wikipedia article.
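As a rough illustration of the isf scoring described above, the sketch below scores each sentence by the summed idf of the words it shares with the question, treating each sentence as a document for the idf computation; the tokenisation and the exact idf formula are simplifying assumptions.
\begin{verbatim}
import math
from collections import Counter

def isf_scores(sentences, question):
    # Each sentence acts as a "document" for the idf computation.
    sent_tokens = [set(s.lower().split()) for s in sentences]
    question_tokens = set(question.lower().split())
    n = len(sent_tokens)
    doc_freq = Counter(w for tokens in sent_tokens for w in tokens)
    idf = {w: math.log(n / doc_freq[w]) for w in doc_freq}
    # isf(S_i, Q) = sum of idf over the words common to S_i and Q.
    return [sum(idf[w] for w in tokens & question_tokens)
            for tokens in sent_tokens]

def answer_sentence(sentences, question):
    scores = isf_scores(sentences, question)
    return max(range(len(sentences)), key=lambda i: scores[i])
\end{verbatim}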
\documentclass{article} \usepackage[utf8]{inputenc} % allow utf-8 input \usepackage[T1]{fontenc} % use 8-bit T1 fonts % hyperlinks % simple URL typesetting % professional-quality tables % blackboard math symbols % microtypography \usepackage[centertags]{amsmath} \def\x{\times} \def\S{\mathbf{S}} \def\H{\mathbf{H}} \renewcommand{\b}[1]{\mathbf{#1}} \newcommand{\eye}{\mathbf{I}} \newcommand{\CC}{\mathcal{C}} \newcommand{\DD}{\mathcal{D}} \newcommand{\FF}{\mathcal{F}} \newcommand{\GG}{\mathcal{G}} \newcommand{\HH}{\mathcal{H}} \newcommand{\II}{\mathcal{I}} \newcommand{\KK}{\mathcal{K}} \newcommand{\LL}{\mathcal{L}} \newcommand{\MM}{\mathcal{M}} \newcommand{\NN}{\mathcal{N}} \newcommand{\OO}{\mathcal{O}} \newcommand{\PP}{\mathcal{P}} \newcommand{\QQ}{\mathcal{Q}} \newcommand{\RR}{\mathcal{R}} \newcommand{\SSS}{\mathcal{S}} \newcommand{\TT}{\mathcal{T}} \newcommand{\VV}{\mathcal{V}} \newcommand{\WW}{\mathcal{W}} \newcommand{\XX}{\mathcal{X}} \newcommand{\YY}{\mathcal{Y}} \newcommand{\ZZ}{\mathcal{Z}} \DeclareMathOperator*{\argmin}{arg\,min} \DeclareMathOperator*{\argmax}{arg\,max} \DeclareMathOperator*{\expect}{\mathbb{E}} \DeclareMathOperator*{\minimize}{minimize} \DeclareMathOperator*{\maximize}{maximize} \DeclareMathOperator*{\xent}{xent} \DeclareMathOperator*{\ent}{ent} \DeclareMathOperator*{\softmax}{softmax} \DeclareMathOperator*{\KL}{KL} \DeclareMathOperator*{\lrelu}{lrelu} \DeclareMathOperator*{\relu}{relu} \DeclareMathOperator*{\conv}{conv} \newcommand*\equalcontr[1][\value{footnote}]{\footnotemark[#1]} \def\hevalnewsqaf1{0.694}\xspace \def\hevalnewsqaem{0.465}\xspace \def\hevalsquad{0.807}\xspace \def\hcgap{0.198}\xspace \def\aibest{0.496}\xspace \title{NewsQA: A Machine Comprehension Dataset} \author{Adam Trischler\thanks{These three authors contributed equally.}\qquad\qquad Tong Wang\equalcontr\qquad\qquad Xingdi Yuan\equalcontr\qquad\qquad Justin Harris\\\ \vspace{-2mm}\\{\bf Alessandro Sordoni\qquad\qquad Philip Bachman\qquad\qquad Kaheer Suleman} \\\ \\ {\tt \{adam.trischler, tong.wang, eric.yuan, justin.harris,}\\ {\tt \ alessandro.sordoni, phil.bachman, k.suleman\}@maluuba.com} \\ Maluuba Research \\ Montr\'{e}al, Qu\'{e}bec, Canada } \newcommand{\fix}{\marginpar{FIX}} \newcommand{\new}{\marginpar{NEW}} \begin{document} \maketitle \begin{abstract} We present \emph{NewsQA}, a challenging machine comprehension dataset of over 100,000 human-generated question-answer pairs. Crowdworkers supply questions and answers based on a set of over 10,000 news articles from CNN, with answers consisting of spans of text from the corresponding articles. We collect this dataset through a four-stage process designed to solicit exploratory questions that require reasoning. A thorough analysis confirms that \emph{NewsQA} demands abilities beyond simple word matching and recognizing textual entailment. We measure human performance on the dataset and compare it to several strong neural models. The performance gap between humans and machines (\hcgap~in F1) indicates that significant progress can be made on \emph{NewsQA} through future research. The dataset is freely available at \url{https://datasets.maluuba.com/NewsQA}. \end{abstract} \section{Introduction} Almost all human knowledge is recorded in the medium of text. As such, comprehension of written language by machines, at a near-human level, would enable a broad class of artificial intelligence applications. In human students we evaluate reading comprehension by posing questions based on a text passage and then assessing a student's answers. 
Such comprehension tests are appealing because they are objectively gradable and may measure a range of important abilities, from basic understanding to causal reasoning to inference~\citep{richardson2013}. To teach literacy to machines, the research community has taken a similar approach with machine comprehension (MC). Recent years have seen the release of a host of MC datasets. Generally, these consist of (document, question, answer) triples to be used in a supervised learning framework. Existing datasets vary in size, difficulty, and collection methodology; however, as pointed out by~\citet{squad}, most suffer from one of two shortcomings: those that are designed explicitly to test comprehension~\citep{richardson2013} are too small for training data-intensive deep learning models, while those that are sufficiently large for deep learning~\citep{hermann2015,hill2015,kadlecdata} are generated synthetically, yielding questions that are not posed in natural language and that may not test comprehension directly~\citep{chenCNN}. More recently, \citet{squad} sought to overcome these deficiencies with their crowdsourced dataset, \emph{SQuAD}. Here we present a challenging new largescale dataset for machine comprehension: \emph{NewsQA}. \emph{NewsQA} contains 119,633 natural language questions posed by crowdworkers on 12,744 news articles from CNN. Answers to these questions consist of spans of text within the corresponding article highlighted also by crowdworkers. To build \emph{NewsQA} we utilized a four-stage collection process designed to encourage exploratory, curiosity-based questions that reflect human information seeking. CNN articles were chosen as the source material because they have been used in the past~\citep{hermann2015} and, in our view, machine comprehension systems are particularly suited to high-volume, rapidly changing information sources like news. As~\citet{trischler2016},~\citet{chenCNN}, and others have argued, it is important for datasets to be sufficiently challenging to teach models the abilities we wish them to learn. Thus, in line with~\citet{richardson2013}, our goal with \emph{NewsQA} was to construct a corpus of questions that necessitates reasoning-like behaviors -- for example, synthesis of information across different parts of an article. We designed our collection methodology explicitly to capture such questions. The challenging characteristics of \emph{NewsQA} that distinguish it from most previous comprehension tasks are as follows: \begin{enumerate} \item Answers are spans of arbitrary length within an article, rather than single words or entities. \item Some questions have no answer in the corresponding article (the \emph{null} span). \item There are no candidate answers from which to choose. \item Our collection process encourages lexical and syntactic divergence between questions and answers. \item A significant proportion of questions requires reasoning beyond simple word- and context-matching (as shown in our analysis). \end{enumerate} Some of these characteristics are present also in \emph{SQuAD}, the MC dataset most similar to \emph{NewsQA}. However, we demonstrate through several metrics that \emph{NewsQA} offers a greater challenge to existing models. In this paper we describe the collection methodology for \emph{NewsQA}, provide a variety of statistics to characterize it and contrast it with previous datasets, and assess its difficulty. In particular, we measure human performance and compare it to that of two strong neural-network baselines. 
Humans significantly outperform powerful question-answering models. This suggests there is room for improvement through further advances in machine comprehension research. \section{Related Datasets} \label{sec:related} \emph{NewsQA} follows in the tradition of several recent comprehension datasets. These vary in size, difficulty, and collection methodology, and each has its own distinguishing characteristics. We agree with~\citet{kadlecdata} who have said ``models could certainly benefit from as diverse a collection of datasets as possible.'' We discuss this collection below. \subsection{MCTest} {\it MCTest}~\citep{richardson2013} is a crowdsourced collection of 660 elementary-level children's stories with associated questions and answers. The stories are fictional, to ensure that the answer must be found in the text itself, and carefully limited to what a young child can understand. Each question comes with a set of 4 candidate answers that range from single words to full explanatory sentences. The questions are designed to require rudimentary reasoning and synthesis of information across sentences, making the dataset quite challenging. This is compounded by the dataset's size, which limits the training of expressive statistical models. Nevertheless, recent comprehension models have performed well on {\it MCTest}~\citep{sachan2015,wangMC}, including a highly structured neural model~\citep{trischler2016}. These models all rely on access to the small set of candidate answers, a crutch that \emph{NewsQA} does not provide. \subsection{CNN/Daily Mail} The \emph{CNN/Daily Mail} corpus~\citep{hermann2015} consists of news articles scraped from those outlets with corresponding cloze-style questions. Cloze questions are constructed synthetically by deleting a single entity from abstractive summary points that accompany each article (written presumably by human authors). As such, determining the correct answer relies mostly on recognizing textual entailment between the article and the question. The named entities within an article are identified and anonymized in a preprocessing step and constitute the set of candidate answers; contrast this with \emph{NewsQA} in which answers often include longer phrases and no candidates are given. Because the cloze process is automatic, it is straightforward to collect a significant amount of data to support deep-learning approaches: \emph{CNN/Daily Mail} contains about 1.4 million question-answer pairs. However,~\citet{chenCNN} demonstrated that the task requires only limited reasoning and, in fact, performance of the strongest models~\citep{kadlec2016,epireader,iaa} nearly matches that of humans. \subsection{Children's Book Test} The \emph{Children's Book Test} (\emph{CBT})~\citep{hill2015} was collected using a process similar to that of \emph{CNN/Daily Mail}. Text passages are 20-sentence excerpts from children's books available through Project Gutenberg; questions are generated by deleting a single word in the next ({\it i.e.},~21st) sentence. Consequently, \emph{CBT} evaluates word prediction based on context. It is a comprehension task insofar as comprehension is likely necessary for this prediction, but comprehension may be insufficient and other mechanisms may be more important. \subsection{BookTest} \citet{kadlecdata} convincingly argue that, because existing datasets are not large enough, we have yet to reach the full capacity of existing comprehension models. As a remedy they present \emph{BookTest}. 
This is an extension to the named-entity and common-noun strata of \emph{CBT} that increases their size by over 60 times. \citet{kadlecdata} demonstrate that training on the augmented dataset yields a model~\citep{kadlec2016} that matches human performance on \emph{CBT}. This is impressive and suggests that much is to be gained from more data, but we repeat our concerns about the relevance of story prediction as a comprehension task. We also wish to encourage more efficient learning from less data. \subsection{SQuAD} The comprehension dataset most closely related to \emph{NewsQA} is \emph{SQuAD}~\citep{squad}. It consists of natural language questions posed by crowdworkers on paragraphs from high-PageRank Wikipedia articles. As in \emph{NewsQA}, each answer consists of a span of text from the related paragraph and no candidates are provided. Despite the effort of manual labelling, \emph{SQuAD}'s size is significant and amenable to deep learning approaches: 107,785 question-answer pairs based on 536 articles. Although \emph{SQuAD} is a more realistic and more challenging comprehension task than the other largescale MC datasets, machine performance has rapidly improved towards that of humans in recent months. The \emph{SQuAD} authors measured human accuracy at 0.905 in F1 (we measured human F1 at \hevalsquad~using a different methodology); at the time of writing, the strongest published model to date achieves 0.778 F1~\citep{wang2016multi}. This suggests that new, more difficult alternatives like \emph{NewsQA} could further push the development of more intelligent MC systems. \section{Collection methodology} \label{sec:method} We collected \emph{NewsQA} through a four-stage process: article curation, question sourcing, answer sourcing, and validation. We also applied a post-processing step with answer agreement consolidation and span merging to enhance the usability of the dataset. These steps are detailed below. \subsection{Article curation} We retrieve articles from CNN using the script created by~\citet{hermann2015} for \emph{CNN/Daily Mail}. From the returned set of 90,266 articles, we select 12,744 uniformly at random. These cover a wide range of topics that includes politics, economics, and current events. Articles are partitioned at random into a training set (90\%), a development set (5\%), and a test set (5\%). \subsection{Question sourcing} It was important to us to collect challenging questions that could not be answered using straightforward word- or context-matching. Like~\citet{richardson2013} we want to encourage reasoning in comprehension models. We are also interested in questions that, in some sense, model human curiosity and reflect actual human use-cases of information seeking. Along a similar line, we consider it an important (though as yet overlooked) capacity of a comprehension model to recognize when given information is inadequate, so we are also interested in questions that may not have sufficient evidence in the text. Our question sourcing stage was designed to solicit questions of this nature, and deliberately separated from the answer sourcing stage for the same reason. {\it Questioners} (a distinct set of crowdworkers) see \emph{only} a news article's headline and its summary points (also available from CNN); they do not see the full article itself. They are asked to formulate a question from this incomplete information. 
This encourages curiosity about the contents of the full article and prevents questions that are simple reformulations of sentences in the text. It also increases the likelihood of questions whose answers do not exist in the text. We reject questions that have significant word overlap with the summary points to ensure that crowdworkers do not treat the summaries as mini-articles, and further discouraged this in the instructions. During collection each Questioner is solicited for up to three questions about an article. They are provided with positive and negative examples to prompt and guide them (detailed instructions are shown in Figure~\ref{fig:turk-q-source-instructions}). \subsection{Answer sourcing} A second set of crowdworkers ({\it Answerers}) provide answers. Although this separation of question and answer increases the overall cognitive load, we hypothesized that unburdening Questioners in this way would encourage more complex questions. Answerers receive a full article along with a crowdsourced question and are tasked with determining the answer. They may also reject the question as nonsensical, or select the {\it null} answer if the article contains insufficient information. Answers are submitted by clicking on and highlighting words in the article, while instructions encourage the set of answer words to consist of a single continuous span (again, we give an example prompt in the Appendix). For each question we solicit answers from multiple crowdworkers (avg. 2.73) with the aim of achieving agreement between at least two Answerers. \subsection{Validation} Crowdsourcing is a powerful tool but it is not without peril (collection glitches; uninterested or malicious workers). To obtain a dataset of the highest possible quality we use a validation process that mitigates some of these issues. In validation, a third set of crowdworkers sees the full article, a question, and the set of unique answers to that question. We task these workers with choosing the best answer from the candidate set or rejecting all answers. Each article-question pair is validated by an average of 2.48 crowdworkers. Validation was used on those questions \emph{without} answer-agreement after the previous stage, amounting to 43.2\% of all questions. \subsection{Answer marking and cleanup} After validation, 86.0\% of all questions in \emph{NewsQA} have answers agreed upon by at least two separate crowdworkers---either at the initial answer sourcing stage or in the top-answer selection. This improves the dataset's quality. We choose to include the questions without agreed answers in the corpus also, but they are specially marked. Such questions could be treated as having the \emph{null} answer and used to train models that are aware of poorly posed questions. As a final cleanup step we combine answer spans that are less than 3 words apart (punctuation is discounted). We find that 5.68\% of answers consist of multiple spans, while 71.3\% of multi-spans are within the 3-word threshold. Looking more closely at the data reveals that the multi-span answers often represent lists. These may present an interesting challenge for comprehension models moving forward. \section{Data analysis} \label{sec:anal} We provide a thorough analysis of \emph{NewsQA} to demonstrate its challenge and its usefulness as a machine comprehension benchmark. 
The analysis focuses on the types of answers that appear in the dataset and the various forms of reasoning required to solve it.\footnote{Additional statistics are available at \url{https://datasets.maluuba.com/NewsQA/stats}.} \subsection{Answer types} \label{sec:answer-types} Following~\citet{squad}, we categorize answers based on their linguistic type (see Table~\ref{tab:a-type}). This categorization relies on Stanford CoreNLP to generate constituency parses, POS tags, and NER tags for answer spans (see~\citet{squad} for more details). From the table we see that the majority of answers (22.2\%) are common noun phrases. Thereafter, answers are fairly evenly spread among the clause phrase (18.3\%), person (14.8\%), numeric (9.8\%), and other (11.2\%) types. Clearly, answers in \emph{NewsQA} are linguistically diverse. The proportions in Table~\ref{tab:a-type} only account for cases when an answer span exists. The complement of this set comprises questions with an agreed \emph{null} answer (9.5\% of the full corpus) and answers without agreement after validation (4.5\% of the full corpus). \subsection{Reasoning types} \label{sec:reasoning-types} The forms of reasoning required to solve \emph{NewsQA} directly influence the abilities that models will learn from the dataset. We stratified reasoning types using a variation on the taxonomy presented by~\citet{chenCNN} in their analysis of the \emph{CNN/Daily Mail} dataset. Types are as follows, in ascending order of difficulty: \begin{enumerate} \item {\bf Word Matching:} Important words in the question exactly match words in the immediate context of an answer span, such that a keyword search algorithm could perform well on this subset. \item {\bf Paraphrasing:} A single sentence in the article entails or paraphrases the question. Paraphrase recognition may require synonymy and world knowledge. \item {\bf Inference:} The answer must be inferred from incomplete information in the article or by recognizing conceptual overlap. This typically draws on world knowledge. \item {\bf Synthesis:} The answer can only be inferred by synthesizing information distributed across multiple sentences. \item {\bf Ambiguous/Insufficient:} The question has no answer or no unique answer in the article. \end{enumerate} For both \emph{NewsQA} and \emph{SQuAD}, we manually labelled 1,000 examples (drawn randomly from the respective development sets) according to these types and compiled the results in Table~\ref{tab:r-type}. Some examples fall into more than one category, in which case we defaulted to the more challenging type. We can see from the table that word matching, the easiest type, makes up the largest subset in both datasets (32.7\% for \emph{NewsQA} and 39.8\% for \emph{SQuAD}). Paraphrasing constitutes a larger proportion in \emph{SQuAD} than in \emph{NewsQA} (34.3\% vs 27.0\%), possibly a result from the explicit encouragement of lexical variety in \emph{SQuAD} question sourcing. However, \emph{NewsQA} significantly outnumbers \emph{SQuAD} on the distribution of the more difficult forms of reasoning: synthesis and inference make up a combined 33.9\% of the data in contrast to 20.5\% in \emph{SQuAD}. \section{Baseline models} \label{sec:models} We test the performance of three comprehension systems on \emph{NewsQA}: human data analysts and two neural models. The first neural model is the match-LSTM (mLSTM) system of~\citet{wangsquad}. The second is a model of our own design that is similar but computationally cheaper. 
We describe these models below but omit the personal details of our analysts. Implementation details of the models are described in Appendix~\ref{apd:impl-details}. \subsection{Match-LSTM} We selected the mLSTM model because it is straightforward to implement and offers strong, though not state-of-the-art, performance on the similar \emph{SQuAD} dataset. There are three stages involved in the mLSTM. First, LSTM networks encode the document and question (represented by GloVe word embeddings~\citep{pennington2014}) as sequences of hidden states. Second, an mLSTM network~\citep{wang2015snli} compares the document encodings with the question encodings. This network processes the document sequentially and at each token uses an attention mechanism to obtain a weighted vector representation of the question; the weighted combination is concatenated with the encoding of the current token and fed into a standard LSTM. Finally, a Pointer Network uses the hidden states of the mLSTM to select the boundaries of the answer span. We refer the reader to~\citet{wang2015snli,wangsquad} for full details. \subsection{The Bilinear Annotation Re-encoding Boundary (BARB) Model} The match-LSTM is computationally intensive since it computes an attention over the entire question at each document token in the recurrence. To facilitate faster experimentation with \emph{NewsQA} we developed a lighter-weight model (BARB) that achieves similar results on \emph{SQuAD}\footnote{With the configurations for the results reported in Section~\ref{sec:model-perf}, one epoch of training on \emph{NewsQA} takes about 3.9k seconds for \emph{BARB} and 8.1k seconds for \emph{mLSTM}.}. Our model consists of four stages: \paragraph{Encoding} All words in the document and question are mapped to real-valued vectors using the GloVe embeddings ${\bf W} \in \mathbb{R}^{|V| \times d}$. This yields ${\bf d}_1, \ldots, {\bf d}_n \in \mathbb{R}^d$ and ${\bf q}_1, \ldots, {\bf q}_m \in \mathbb{R}^d$. A bidirectional GRU network~\citep{bahdanau2014} encodes ${\bf d}_i$ into contextual states ${\bf h}_i \in \mathbb{R}^{D_1}$ for the document. The same encoder is applied to ${\bf q}_j$ to derive contextual states ${\bf k}_j \in \mathbb{R}^{D_1}$ for the question.\footnote{A bidirectional GRU concatenates the hidden states of two GRU networks running in opposite directions. Each of these has hidden size $\frac{1}{2}D_1$.} \paragraph{Bilinear Annotation} Next we compare the document and question encodings using a set of $C$ bilinear transformations, \begin{equation*} {\bf g}_{ij} = {\bf h}_i^T {\bf T}^{[1:C]} {\bf k}_j, \quad {\bf T}^c \in \mathbb{R}^{D_1 \times D_1},~{\bf g}_{ij} \in \mathbb{R}^C, \end{equation*} which we use to produce an $(n \times m \times C)$-dimensional tensor of annotation scores, ${\bf G} = [{\bf g}_{ij}]$. We take the maximum over the question-token (second) dimension and call the columns of the resulting matrix ${\bf g}_i \in \mathbb{R}^C$. We use this matrix as an annotation over the document word dimension. In contrast with the more typical multiplicative application of attention vectors, this annotation matrix is concatenated to the encoder RNN input in the re-encoding stage. \paragraph{Re-encoding} For each document word, the input of the re-encoding RNN (another biGRU) consists of three components: the document encodings $\bf h_i$, the annotation vectors $\bf g_i$, and a binary feature $q_i$ indicating whether the document word appears in the question. 
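A minimal sketch of the bilinear annotation and of the re-encoder input described above is given below; shapes follow the notation in the text, while the NumPy/einsum formulation is purely illustrative and is not the authors' Keras implementation.
\begin{verbatim}
import numpy as np

def bilinear_annotation(H, K, T):
    # H: (n, D1) document states, K: (m, D1) question states,
    # T: (C, D1, D1) bank of bilinear transformations.
    # G[i, j, c] = h_i^T T^c k_j; then take the max over the question dimension.
    G = np.einsum('id,cde,je->ijc', H, T, K)
    return G.max(axis=1)                                  # (n, C) annotation matrix

def reencoder_input(H, G_max, in_question):
    # Concatenate encodings, annotations and the binary question-word feature.
    q = np.asarray(in_question, dtype=float)[:, None]     # (n, 1)
    return np.concatenate([H, G_max, q], axis=1)          # rows are [h_i; g_i; q_i]
\end{verbatim}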
The resulting vectors ${\bf f}_i = [{\bf h}_i; {\bf g}_i; q_i]$ are fed into the re-encoding RNN to produce $D_2$-dimensional encodings ${\bf e}_i$ for the boundary-pointing stage. \paragraph{Boundary pointing} Finally, we search for the boundaries of the answer span using a convolutional network (in a process similar to edge detection). Encodings ${\bf e}_i$ are arranged in matrix ${\bf E} \in \mathbb{R}^{D_2 \times n}$. ${\bf E}$ is convolved with a bank of $n_f$ filters, $\mathbf{F}_k^\ell \in \mathbb{R}^{D_2 \times w}$, where $w$ is the filter width, $k$ indexes the different filters, and $\ell$ indexes the layer of the convolutional network. Each layer has the same number of filters of the same dimensions. We add a bias term and apply a nonlinearity (ReLU) following each convolution, with the result an $(n_f \times n)$-dimensional matrix ${\bf B}_\ell$. We use two convolutional layers in the boundary-pointing stage. Given ${\bf B}_1$ and ${\bf B}_2$, the answer span's start- and end-location probabilities are computed using $p(s) \propto \exp \left( {\bf v}_s^T {\bf B}_1 + b_s\right) $ and $p(e) \propto \exp \left( {\bf v}_e^T {\bf B}_2 + b_e \right)$, respectively. We also concatenate $p(s)$ to the input of the second convolutional layer (along the $n_f$-dimension) so as to condition the end-boundary pointing on the start-boundary. Vectors ${\bf v}_s$, ${\bf v}_e \in \mathbb{R}^{n_f}$ and scalars $b_s$, $b_e \in \mathbb{R}$ are trainable parameters. We also provide an intermediate level of ``guidance'' to the annotation mechanism by first reducing the feature dimension $C$ in $\bf G$ with mean-pooling, then maximizing the softmax probabilities in the resulting ($n$-dimensional) vector corresponding to the answer word positions in each document. This auxiliary task is observed empirically to improve performance. \section{Experiments\protect\footnote{All experiments in this section use the subset of \emph{NewsQA} dataset with answer agreements (92,549 samples for training, 5,166 for validation, and 5,126 for testing). We leave the challenge of identifying the unanswerable questions for future work.}} \label{sec:exp} \subsection{Human evaluation} We tested four English speakers on a total of 1,000 questions from the \emph{NewsQA} development set. We used four performance measures: F1 and exact match (EM) scores (the same measures used by \emph{SQuAD}), as well as BLEU and CIDEr\footnote{We use \url{https://github.com/tylin/coco-caption} to calculate these two scores.}. BLEU is a precision-based metric popular in machine translation that uses a weighted average of variable length phrase matches ($n$-grams) against the reference sentence~\citep{papineni2002bleu}. CIDEr was designed to correlate better with human judgements of sentence similarity, and uses \emph{tf-idf} scores over $n$-grams~\citep{vedantam2015cider}. As given in Table~\ref{tab:datasetresults}, humans averaged \hevalnewsqaf1~ F1 on \emph{NewsQA}. The human EM scores are relatively low at \hevalnewsqaem. These lower scores are a reflection of the fact that, particularly in a dataset as complex as \emph{NewsQA}, there are multiple ways to select semantically equivalent answers, {\it e.g.}, ``1996'' versus ``in 1996''. Although these answers are equally correct they would be measured at 0.5 F1 and 0.0 EM. This suggests that simpler automatic metrics are not equal to the task of complex MC evaluation, a problem that has been noted in other domains~\citep{liu2016}. 
Therefore we also measure according to BLEU and CIDEr: humans score 0.560 and 3.596 on these metrics, respectively. The original \emph{SQuAD} evaluation of human performance compares distinct answers given by crowdworkers according to EM and F1; for a closer comparison with \emph{NewsQA}, we replicated our human test on the same number of validation data (1,000) with the same humans. We measured human answers against the second group of crowdsourced responses in \emph{SQuAD}'s development set, yielding \hevalsquad~ F1, 0.625 BLEU, and 3.998 CIDEr. Note that the F1 score is close to the top single-model performance of 0.778 achieved in~\cite{wang2016multi}. We finally compared human performance on the answers that had crowdworker agreement with and without validation, finding a difference of only 1.4 percentage points F1. This suggests our validation stage yields good-quality answers. \subsection{Model performance} \label{sec:model-perf} Performance of the baseline models and humans is measured by EM and F1 with the official evaluation script from \emph{SQuAD} and listed in Table~\ref{tab:datasetresults}. We supplement these with BLEU and CIDEr measures on the 1,000 human-annotated dev questions. Unless otherwise stated, hyperparameters are determined by \texttt{hyperopt} (Appendix~\ref{apd:impl-details}). The gap between human and machine performance on \emph{NewsQA} is a striking \hcgap~points F1 --- much larger than the gap on \emph{SQuAD} (0.098) under the same human evaluation scheme. The gaps suggest a large margin for improvement with machine comprehension methods. Figure~\ref{fig:stratification-at-rt} stratifies model (BARB) performance according to answer type (left) and reasoning type (right) as defined in Sections~\ref{sec:answer-types} and~\ref{sec:reasoning-types}, respectively. The answer-type stratification suggests that the model is better at pointing to named entities compared to other types of answers. The reasoning-type stratification, on the other hand, shows that questions requiring \emph{inference} and \emph{synthesis} are, not surprisingly, more difficult for the model. Consistent with observations in Table~\ref{tab:datasetresults}, stratified performance on \emph{NewsQA} is significantly lower than on \emph{SQuAD}. The difference is smallest on word matching and largest on synthesis. We postulate that the longer stories in \emph{NewsQA} make synthesizing information from separate sentences more difficult, since the relevant sentences may be farther apart. This requires the model to track longer-term dependencies. It is also interesting to observe that on \emph{SQuAD}, BARB outperforms human annotators in answering ambiguous questions or those with incomplete information. \subsection{Sentence-level scoring} We propose a simple sentence-level subtask as an additional quantitative demonstration of the relative difficulty of \emph{NewsQA}. Given a document and a question, the goal is to find the sentence containing the answer span. We hypothesize that simple techniques like word-matching are inadequate to this task owing to the more involved reasoning required by \emph{NewsQA}. We employ a technique that resembles inverse document frequency (\emph{idf}), which we call inverse sentence frequency (\emph{isf}). Given a sentence $\SSS_i$ from an article and its corresponding question $\QQ$, the \emph{isf} score is given by the sum of the \emph{idf} scores of the words common to $\SSS_i$ and $\QQ$ (each sentence is treated as a document for the \emph{idf} computation). 
The sentence with the highest \emph{isf} is taken as the answer sentence $\SSS_*$, that is, \[ \SSS_* = \argmax_i \sum_{w \in \SSS_i \cap \QQ} \mathit{isf}(w) .\] The \emph{isf} method achieves an impressive 79.4\% sentence-level accuracy on \emph{SQuAD}'s development set but only 35.4\% accuracy on \emph{NewsQA}'s development set, highlighting the comparative difficulty of the latter. To eliminate the difference in article length as a possible cause of the performance gap, we also artificially increased the article lengths in \emph{SQuAD} by concatenating adjacent \emph{SQuAD} articles {\it from the same Wikipedia article}. Accuracy decreases as expected with the increased \emph{SQuAD} article length, yet remains significantly higher than on \emph{NewsQA} with comparable or even greater article length (see Table~\ref{tab:lengthy-squad}). \section{Conclusion} \label{sec:conc} We have introduced a challenging new comprehension dataset: \emph{NewsQA}. We collected the 100,000+ examples of \emph{NewsQA} using teams of crowdworkers, who variously read CNN articles or highlights, posed questions about them, and determined answers. Our methodology yields diverse answer types and a significant proportion of questions that require some reasoning ability to solve. This makes the corpus challenging, as confirmed by the large performance gap between humans and deep neural models (\hcgap~F1, 0.479 BLEU, 1.165 CIDEr). By its size and complexity, \emph{NewsQA} makes a significant extension to the existing body of comprehension datasets. We hope that our corpus will spur further advances in machine comprehension and guide the development of literate artificial intelligence. \section*{Acknowledgments} The authors would like to thank \c{C}a\u{g}lar G\"{u}l\c{c}ehre, Sandeep Subramanian and Saizheng Zhang for helpful discussions. \bibliographystyle{iclr2017_conference} \newpage \section*{Appendices} \appendix \section{Implementation details} \label{apd:impl-details} Both mLSTM and BARB are implemented with the Keras framework \citep{keras} using the Theano \citep{theano10} backend. Word embeddings are initialized using GloVe vectors \citep{pennington2014} pre-trained on the 840-billion \emph{Common Crawl} corpus. The word embeddings are not updated during training. Embeddings for out-of-vocabulary words are initialized with zero. For both models, the training objective is to maximize the log likelihood of the boundary pointers. Optimization is performed using stochastic gradient descent (with a batch-size of 32) with the ADAM optimizer \citep{kingma2014}. The initial learning rate is 0.003 for mLSTM and 0.0005 for BARB. The learning rate is decayed by a factor of 0.7 if validation loss does not decrease at the end of each epoch. Gradient clipping \citep{pascanu2013difficulty} is applied with a threshold of 5. Parameter tuning is performed on both models using \texttt{hyperopt}\footnote{\url{https://github.com/hyperopt/hyperopt}}. For each model, configurations for the best observed performance are as follows: \textbf{mLSTM} Both the pre-processing layer and the answer-pointing layer use bi-directional RNN with a hidden size of 192. These settings are consistent with those used by \citet{wangsquad}. Model parameters are initialized with either the normal distribution ($\NN(0,0.05)$) or the orthogonal initialization ($\OO$, \citealt{saxe2013exact}) in Keras. All weight matrices in the LSTMs are initialized with $\OO$. 
In the Match-LSTM layer, $W^q$, $W^p$, and $W^r$ are initialized with $\OO$, $b^p$ and $w$ are initialized with $\NN$, and $b$ is initialized as 1. In the answer-pointing layer, $V$ and $W^a$ are initialized with $\OO$, $b^a$ and $v$ are initialized with $\NN$, and $c$ is initialized as 1. \textbf{BARB} For BARB, the following hyperparameters are used on both \emph{SQuAD} and \emph{NewsQA}: $d=300$, $D_1=128$, $C=64$, $D_2=256$, $w=3$, and $n_f=128$. Weight matrices in the GRU, the bilinear models, as well as the boundary decoder (${\bf v_s}$ and ${\bf v_e}$) are initialized with $\OO$. The filter weights in the boundary decoder are initialized with \emph{glorot\_uniform} (\citealt{glorot2010understanding}, default in Keras). The bilinear biases are initialized with $\NN$, and the boundary decoder biases are initialized with 0. \section{Data collection user interface} Here we present the user interfaces used in question sourcing, answer sourcing, and question/answer validation. \end{document}
Learning Discourse-level Diversity for Neural Dialog Models using Conditional Variational Autoencoders
1703.10960
Table 1: Performance of each model on automatic measures. The highest score in each row is in bold. Note that our BLEU scores are normalized to [0,1].
[ "Metrics", "Baseline", "CVAE", "kgCVAE" ]
[ [ "perplexity (KL)", "35.4 (n/a)", "20.2 (11.36)", "16.02 (13.08)" ], [ "BLEU-1 prec", "0.405", "0.372", "[BOLD] 0.412" ], [ "BLEU-1 recall", "0.336", "0.381", "[BOLD] 0.411" ], [ "BLEU-2 prec", "0.300", "0.295", "[BOLD] 0.350" ], [ "BLEU-2 recall", "0.281", "0.322", "[BOLD] 0.356" ], [ "BLEU-3 prec", "0.272", "0.265", "[BOLD] 0.310" ], [ "BLEU-3 recall", "0.254", "0.292", "[BOLD] 0.318" ], [ "BLEU-4 prec", "0.226", "0.223", "[BOLD] 0.262" ], [ "BLEU-4 recall", "0.215", "0.248", "[BOLD] 0.272" ], [ "A-bow prec", "0.951", "0.954", "[BOLD] 0.961" ], [ "A-bow recall", "0.935", "0.943", "[BOLD] 0.944" ], [ "E-bow prec", "[BOLD] 0.827", "0.815", "0.804" ], [ "E-bow recall", "0.801", "[BOLD] 0.812", "0.807" ], [ "DA prec", "[BOLD] 0.736", "0.704", "0.721" ], [ "DA recall", "0.514", "[BOLD] 0.604", "0.598" ] ]
Automatically evaluating an open-domain generative dialog model is an open research challenge Liu et al. Following our one-to-many hypothesis, we propose the following metrics. We assume that for a given dialog context c, there exist Mc reference responses rj, j∈[1,Mc]. Meanwhile a model can generate N hypothesis responses hi, i∈[1,N]. The generalized response-level precision/recall for a given dialog context is: precision(c) = (1/N) ∑_{i=1}^{N} max_{j∈[1,Mc]} d(rj,hi) and recall(c) = (1/Mc) ∑_{j=1}^{Mc} max_{i∈[1,N]} d(rj,hi), where d(rj,hi) is a distance function which lies between 0 to 1 and measures the similarities between rj and hi. Papineni et al. We use BLEU-1 to 4 as our lexical similarity metric and normalize the score to 0 to 1 scale. Cosine Distance of Bag-of-word Embedding: a simple method to obtain sentence embeddings is to take the average or extrema of all the word embeddings in the sentences Forgues et al. Adi et al. The d(rj,hi) is the cosine distance of the two embedding vectors. The score is normalized to [0, 1]. We set d(rj,hi)=1 if rj and hi have the same dialog acts, otherwise d(rj,hi)=0. One challenge of using the above metrics is that there is only one, rather than multiple reference responses/contexts. This impacts reliability of our measures. Inspired by Sordoni et al. Then the 10 candidate references are filtered by two experts, which serve as the ground truth to train the reference response classifier. The result is 6.69 extra references in average per context. The average number of distinct reference dialog acts is 4.2.
\documentclass[11pt,a4paper]{article} \usepackage[hyperref]{acl2017} \aclfinalcopy % Uncomment this line for the final submission \def\aclpaperid{256} % Enter the acl Paper ID here \newcommand{\cev}[1]{\reflectbox{\ensuremath{\vec{\reflectbox{\ensuremath{#1}}}}}} \newcommand\BibTeX{B{\sc ib}\TeX} \title{Learning Discourse-level Diversity for Neural Dialog Models using Conditional Variational Autoencoders} \author{Tiancheng Zhao, Ran Zhao and Maxine Eskenazi \\ Language Technologies Institute \\ Carnegie Mellon University \\ Pittsburgh, Pennsylvania, USA \\ {\tt \{tianchez,ranzhao1,max+\}@cs.cmu.edu}} \date{} \begin{document} \maketitle \begin{abstract} \vspace{-0.2cm} While recent neural encoder-decoder models have shown great promise in modeling open-domain conversations, they often generate dull and generic responses. Unlike past work that has focused on diversifying the output of the decoder at word-level to alleviate this problem, we present a novel framework based on conditional variational autoencoders that captures the discourse-level diversity in the encoder. Our model uses latent variables to learn a distribution over potential conversational intents and generates diverse responses using only greedy decoders. We have further developed a novel variant that is integrated with linguistic prior knowledge for better performance. Finally, the training procedure is improved by introducing a \textit{bag-of-word} loss. Our proposed models have been validated to generate significantly more diverse responses than baseline approaches and exhibit competence in discourse-level decision-making.\footnote{Data and an implementation of our model is avalaible at https://github.com/snakeztc/NeuralDialog-CVAE} \end{abstract} \vspace{-0.4cm} \section{Introduction} \label{sec:intro} \input{intro.tex} \vspace{-0.2cm} \section{Related Work} \label{sec:related} \input{related.tex} \vspace{-0.2cm} \section{Proposed Models} \vspace{-0.2cm} \label{sec:models} \input{model.tex} \vspace{-0.2cm} \section{Experiment Setup} \label{sec:corpra} \input{corpora.tex} \vspace{-0.2cm} \section{Results} \vspace{-0.2cm} \label{sec:exp} \input{experiments.tex} \section{Conclusion and Future Work} \vspace{-0.2cm} \label{sec:conclusion} \input{conclusion.tex} \section{Acknowledgements} This work was funded by NSF grant CNS-1512973. The opinions expressed in this paper do not necessarily reflect those of NSF. \bibliographystyle{acl_natbib} \appendix \section{Supplemental Material} \label{sec:supplemental} \input{appendix.tex} \end{document} \vspace{-0.2cm} The dialog manager is one of the key components of dialog systems, which is responsible for modeling the decision-making process. Specifically, it typically takes a new utterance and the dialog context as input, and generates discourse-level decisions~\cite{bohus2003ravenclaw,williams2007partially}. Advanced dialog managers usually have a list of potential actions that enable them to have diverse behavior during a conversation, e.g. different strategies to recover from non-understanding~\cite{yu2016strategy}. However, the conventional approach of designing a dialog manager~\cite{williams2007partially} does not scale well to open-domain conversation models because of the vast quantity of possible decisions. Thus, there has been a growing interest in applying encoder-decoder models~\cite{sutskever2014sequence} for modeling open-domain conversation~\cite{vinyals2015neural,serban2016building}. 
The basic approach treats a conversation as a transduction task, in which the dialog history is the source sequence and the next response is the target sequence. The model is then trained end-to-end on large conversation corpora using the maximum-likelihood estimation (MLE) objective without the need for manual crafting. However recent research has found that encoder-decoder models tend to generate generic and dull responses (e.g., \textit{I don't know}), rather than meaningful and specific answers~\cite{li2015diversity,serban2016hierarchical}. There have been many attempts to explain and solve this limitation, and they can be broadly divided into two categories (see Section~\ref{sec:related} for details): (1) the first category argues that the dialog history is only one of the factors that decide the next response. Other features should be extracted and provided to the models as conditionals in order to generate more specific responses~\cite{xing2016topic,li2016persona}; (2) the second category aims to improve the encoder-decoder model itself, including decoding with beam search and its variations~\cite{wiseman2016sequence}, encouraging responses that have long-term payoff~\cite{li2016deep}, etc. Building upon the past work in dialog managers and encoder-decoder models, the key idea of this paper is to model dialogs as a \textit{one-to-many} problem at the discourse level. Previous studies indicate that there are many factors in open-domain dialogs that decide the next response, and it is non-trivial to extract all of them. Intuitively, given a similar dialog history (and other observed inputs), there may exist many valid responses (at the discourse-level), each corresponding to a certain configuration of the latent variables that are not presented in the input. To uncover the potential responses, we strive to model a probabilistic distribution over the distributed utterance embeddings of the potential responses using a latent variable (Figure~\ref{fig:example}). This allows us to generate diverse responses by drawing samples from the learned distribution and reconstruct their words via a decoder neural network. \vspace{-0.3cm} Specifically, our contributions are three-fold: 1. We present a novel neural dialog model adapted from conditional variational autoencoders (CVAE)~\cite{yan2015attribute2image,sohn2015learning}, which introduces a latent variable that can capture discourse-level variations as described above 2. We propose Knowledge-Guided CVAE (kgCVAE), which enables easy integration of expert knowledge and results in performance improvement and model interpretability. 3. We develop a training method in addressing the difficulty of optimizing CVAE for natural language generation~\cite{bowman2015generating}. We evaluate our models on human-human conversation data and yield promising results in: (a) generating appropriate and discourse-level diverse responses, and (b) showing that the proposed training method is more effective than the previous techniques. In conclusion, we identified the \textit{one-to-many} nature of open-domain conversation and proposed two novel models that show superior performance in generating diverse and appropriate responses at the discourse level. While the current paper addresses diversifying responses in respect to dialogue acts, this work is part of a larger research direction that targets leveraging both past linguistic findings and the learning power of deep neural networks to learn better representation of the latent factors in dialog. 
In turn, the output of this novel neural dialog model will be easier to explain and control by humans. In addition to dialog acts, we plan to apply our kgCVAE model to capture other different linguistic phenomena including sentiment, named entities,etc. Last but not least, the recognition network in our model will serve as the foundation for designing a data-driven dialog manager, which automatically discovers useful high-level intents. All of the above suggest a promising research direction.\textbf{Connection to Conventional Dialog System Pipeline}: Our models are connected to the conventional dialog system pipeline~\cite{bohus2007olympus}. The context encoding network corresponds the natural language understanding and dialog state tracker, which produces a summarized representation of the dialog state. The prior network corresponds to the dialog manager that outputs a high-level action of the next response based on the dialog state. Then the decoder network plays the role of a context-sensitive natural language generator that produces the surface form of the prior network's outputs. Lastly the recognition network can be understood as a data-driven designer of the dialog systems, which discover useful high-level factors form the data and encode them in the latent variable $z$. This implies a promising research direction for data-driven dialog system (see Section~\ref{sec:implication} for details).\subsection{Experiments Setup} We compared three neural dialog models: a strong baseline model, CVAE, and kgCVAE. The \textbf{baseline model} is an encoder-decoder neural dialog model without latent variables similar to~\cite{serban2016building}. The baseline model's encoder uses the same context encoder to encode the dialog history and the meta features as shown in Figure~\ref{fig:model}. The encoded context $c$ is directly fed into the decoder networks as the initial state. The hyperparameters of the baseline are the same as the ones reported in Section~\ref{sec:train} and the baseline is trained to minimize the standard cross entropy loss of the decoder RNN model without any auxiliary loss. Also, to compare the diversity introduced by the stochasticity in the proposed latent variable versus the softmax of RNN at each decoding step, we generate $N$ responses from the baseline by sampling from the softmax. For CVAE/kgCVAE, we sample $N$ times from the latent $z$ and only use greedy decoders so that the randomness comes entirely from the latent variable $z$. \subsection{Quantitative Analysis} Automatically evaluating an open-domain generative dialog model is an open research challenge~\cite{liu2016not}. Following our \textit{one-to-many} hypothesis, we propose the following metrics. We assume that for a given dialog context $c$, there exist $M_c$ reference responses $r_j$, $j\in[1,M_c]$. Meanwhile a model can generate $N$ hypothesis responses $h_i$, $i\in[1,N]$. The generalized response-level precision/recall for a given dialog context is: \begin{align*} \text{precision(c)} &= \frac{\sum_{i=1}^N max_{j\in[1,M_c]}d(r_j, h_i)}{N} \\ \text{recall(c)} &= \frac{\sum_{j=1}^{M_c} max_{i\in[1,N]}d(r_j,h_i))}{M_c} \end{align*} where $d(r_j, h_i)$ is a distance function which lies between 0 to 1 and measures the similarities between $r_j$ and $h_i$. 
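For concreteness, the generalized response-level precision/recall defined above can be computed as in the sketch below, where \texttt{d} is any of the similarity functions listed next; the function and argument names are illustrative.
\begin{verbatim}
def response_precision_recall(references, hypotheses, d):
    # references: the M_c reference responses for one dialog context.
    # hypotheses:  the N responses generated by the model.
    # d(r, h):     a similarity in [0, 1] (BLEU, embedding cosine, DA match, ...).
    precision = sum(max(d(r, h) for r in references)
                    for h in hypotheses) / len(hypotheses)
    recall = sum(max(d(r, h) for h in hypotheses)
                 for r in references) / len(references)
    return precision, recall
\end{verbatim}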
The final score is averaged over the entire test dataset and we report the performance with 3 types of distance functions in order to evaluate the systems from various linguistic points of view: \begin{enumerate} \item Smoothed Sentence-level BLEU~\cite{chen2014systematic}: BLEU is a popular metric that measures the geometric mean of modified n-gram precision with a length penalty~\cite{papineni2002bleu,li2015diversity}. We use BLEU-1 to 4 as our lexical similarity metric and normalize the score to 0 to 1 scale. \item Cosine Distance of Bag-of-word Embedding: a simple method to obtain sentence embeddings is to take the average or extrema of all the word embeddings in the sentences~\cite{forgues2014bootstrapping,adi2016fine}. The $d(r_j, h_i)$ is the cosine distance of the two embedding vectors. We used Glove embedding described in Section~\ref{sec:corpra} and denote the average method as A-bow and extrema method as E-bow. The score is normalized to [0, 1]. \item Dialog Act Match: to measure the similarity at the discourse level, the same dialog-act tagger from~\ref{sec:data} is applied to label all the generated responses of each model. We set $d(r_j,h_i)=1$ if $r_j$ and $h_i$ have the same dialog acts, otherwise $d(r_j, h_i)=0$. \end{enumerate} One challenge of using the above metrics is that there is only one, rather than multiple reference responses/contexts. This impacts reliability of our measures. Inspired by~\cite{sordoni2015neural}, we utilized information retrieval techniques (see Appendix~\ref{sec:supplemental}) to gather 10 extra candidate reference responses/context from other conversations with the same topics. Then the 10 candidate references are filtered by two experts, which serve as the ground truth to train the reference response classifier. The result is 6.69 extra references in average per context. The average number of distinct reference dialog acts is 4.2. Table~\ref{tbl:diversity} shows the results. The proposed models outperform the baseline in terms of recall in all the metrics with statistical significance. This confirms our hypothesis that generating responses with discourse-level diversity can lead to a more comprehensive coverage of the potential responses than promoting only word-level diversity. As for precision, we observed that the baseline has higher or similar scores than CVAE in all metrics, which is expected since the baseline tends to generate the mostly likely and safe responses repeatedly in the $N$ hypotheses. However, kgCVAE is able to achieve the highest precision and recall in the 4 metrics at the same time (BLEU1-4, A-BOW). One reason for kgCVAE's good performance is that the predicted dialog act label in kgCVAE can regularize the generation process of its RNN decoder by forcing it to generate more coherent and precise words. We further analyze the precision/recall of BLEU-4 by looking at the average score versus the number of distinct reference dialog acts. A low number of distinct dialog acts represents the situation where the dialog context has a strong constraint on the range of the next response (low entropy), while a high number indicates the opposite (high-entropy). Figure~\ref{fig:prec_recall} shows that CVAE/kgCVAE achieves significantly higher recall than the baseline in higher entropy contexts. Also it shows that CVAE suffers from lower precision, especially in low entropy contexts. Finally, kgCVAE gets higher precision than both the baseline and CVAE in the full spectrum of context entropy. 
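As an illustration of the bag-of-word embedding distance used in the metrics above, the sketch below computes A-bow (average) and E-bow (per-dimension extrema) sentence embeddings and their cosine similarity. The handling of out-of-vocabulary words and the exact normalisation to $[0,1]$ are assumptions about details the text leaves unspecified.
\begin{verbatim}
import numpy as np

def bow_embedding(tokens, emb, mode="avg"):
    # emb maps a word to its Glove vector; OOV words are skipped here,
    # and at least one in-vocabulary token is assumed.
    vecs = np.stack([emb[w] for w in tokens if w in emb])
    if mode == "avg":                  # A-bow
        return vecs.mean(axis=0)
    # E-bow: keep, for each dimension, the value with the largest magnitude.
    idx = np.abs(vecs).argmax(axis=0)
    return vecs[idx, np.arange(vecs.shape[1])]

def bow_similarity(ref_tokens, hyp_tokens, emb, mode="avg"):
    a = bow_embedding(ref_tokens, emb, mode)
    b = bow_embedding(hyp_tokens, emb, mode)
    cos = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return (cos + 1.0) / 2.0           # one way to map [-1, 1] onto [0, 1]
\end{verbatim}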
\vspace{-0.4cm} \subsection{Qualitative Analysis} Table~\ref{tbl:sample} shows the outputs generated by the baseline and kgCVAE. In example 1, caller A begins with an open-ended question. The kgCVAE model generated highly diverse answers that cover multiple plausible dialog acts. Further, we notice that the generated text exhibits similar dialog acts compared to the ones predicted separately by the model, implying the consistency of natural language generation based on $y$. On the contrary, the responses from the baseline model are limited to local n-gram variations and share a similar prefix, i.e. ``I'm''. Example 2 is a situation where caller A is telling B stories. The ground truth response is a back-channel and the range of valid answers is more constrained than in example 1, since B is playing the role of a listener. The baseline successfully predicts ``uh-huh''. The kgCVAE model is also able to generate various ways of back-channeling. This implies that the latent $z$ is able to capture context-sensitive variations, i.e. modeling lexical diversity in low-entropy dialog contexts and discourse-level diversity in high-entropy ones. Moreover, kgCVAE is occasionally able to generate more sophisticated grounding (sample 4) beyond a simple back-channel, which is also an acceptable response given the dialog context. In addition, past work~\cite{kingma2013auto} has shown that the recognition network is able to learn to cluster high-dimensional data, so we conjecture that the posterior $z$ output by the recognition network should cluster the responses into meaningful groups. Figure~\ref{fig:tsne} visualizes the posterior $z$ of responses in the test dataset in 2D space using t-SNE~\cite{maaten2008visualizing}. We found that the learned latent space is highly correlated with the dialog act and length of responses, which confirms our assumption. \subsection{Results for Bag-of-Word Loss} \label{sec:bow} Finally, we evaluate the effectiveness of the bag-of-word (BOW) loss for training VAE/CVAE with an RNN decoder. To compare with past work~\cite{bowman2015generating}, we conducted the same language modelling (LM) task on Penn Treebank using VAE. The network architecture is the same except that we use a GRU instead of an LSTM. We compared four different training setups: (1) standard VAE without any heuristics; (2) VAE with KL annealing (KLA); (3) VAE with BOW loss; (4) VAE with both BOW loss and KLA. Intuitively, a well-trained model should lead to a low reconstruction loss and a small but non-trivial KL cost. For all models with KLA, the KL weight increases linearly from 0 to 1 in the first 5000 batches. Table~\ref{tbl:vae_ptb} shows the reconstruction perplexity and the KL cost on the test dataset. The standard VAE fails to learn a meaningful latent variable by having a KL cost close to $0$ and a reconstruction perplexity similar to a small LSTM LM~\cite{zaremba2014recurrent}. KLA helps to improve the reconstruction loss, but it requires early stopping since the models will fall back to the standard VAE after the KL weight becomes 1. Finally, the models with BOW loss achieved significantly lower perplexity and larger KL cost. \vspace{-0.2cm} Figure~\ref{fig:ptb_kld} visualizes the evolution of the KL cost. We can see that for the standard model, the KL cost crashes to 0 at the beginning of training and never recovers. On the contrary, the model with only KLA learns to encode substantial information in the latent $z$ when the KL cost weight is small. 
However, after the KL weight is increased to 1 (after 5000 batches), the model once again decides to ignore the latent $z$ and falls back to the naive implementation. The model with BOW loss, however, consistently converges to a non-trivial KL cost even without KLA, which confirms the importance of the BOW loss for training latent variable models with an RNN decoder. Last but not least, our experiments showed that the conclusions drawn from LM using VAE also apply to training CVAE/kgCVAE, so we used the BOW loss together with KLA for all previous experiments. However, first of all, for practical reasons, direct decoding from the mutual information objective of Li et al.~\shortcite{li2015diversity}, $(1-\lambda)\log p(T|S)+\lambda \log p(S|T)$, is intractable due to the enormous search space of target responses. Meanwhile, this approach results in a non-globally-optimal solution since it fully depends on the system's success in generating a sufficiently diverse N-best list. Secondly, such models focus on the algorithmic dimensions of the problem and are unable to explicitly model holistic properties of the sentence such as topic or sentiment. Our work addresses these problems by introducing latent factors of a conversation in the proposed kgCVAE model, which can capture a wide range of properties of dialogs. Beam search optimization is widely used to generate diversified outputs for neural conversational models~\cite{serban2016building}. Wiseman and Rush~\shortcite{wiseman2016sequence} attempted to solve two major biases of seq2seq models that lead to monotonous output: exposure bias and loss-evaluation mismatch. To remedy these, their modified seq2seq models first produce scores for ranking sentences rather than predicting the probabilities of the next word. They then introduced a search-based loss for RNN training that keeps the full gold sequence at the top of the beam by recording margin violations in the forward pass and backpropagating errors in the backward pass. This beam search algorithm is slow and has not been shown to scale up. Sountsov and Sarawagi~\shortcite{sountsov2016length} proposed an alternative model that globally conditions the $p(y|x)$ distribution. They argued that seq2seq models exhibit a bias towards short sequences that surprisingly gets worse with increasing beam size. This counter-intuitive phenomenon is due to a margin discrepancy in the training loss of encoder-decoder models. Their model includes two RNN encoders that project both inputs and outputs into fixed-length vectors, which produces better-calibrated sequence-level probabilities and alleviates the output bias issues. However, their model requires a closed space of allowed outputs and their proposed architecture is not able to generate responses directly. Li et al.~\shortcite{li2016persona} add more context to neural response generation by presenting persona-based models that address speaker consistency. They captured individual characteristics by encoding background information and speaking style into distributed embeddings, which are used to rerank the responses generated by a sequence-to-sequence model. Li et al.~\shortcite{li2016deep} pointed out that the maximum-likelihood estimation (MLE) objective of a sequence-to-sequence model is unable to approximate the real-world goal of conversation. Thus, they initialized a sequence-to-sequence model with MLE and leveraged reinforcement learning to optimize three useful conversational properties: informativity, coherence, and ease of answering. 
Moreover, in order to confirm that our training objective indeed strives to capture such discourse information, we collect the posterior $z$ for every utterance in the test dataset at each stage of the training and deploy a linear classifier as a probe~\cite{zhao2016towards} of the amount of captured discourse information. Specifically, we use the posterior $z$ of $x$ as the input to a logistic regressor that is trained to predict the dialog act and sentiment of $x$. Table~\ref{tbl:latent} shows the results. We can see that as the model becomes better trained, the posterior $z$ indeed begins to capture more information that is beneficial for predicting the dialog act and sentiment of the response $x$. \textbf{Connection to Conventional Dialog System Pipeline}: Our models are connected to the conventional dialog system pipeline~\cite{bohus2007olympus}. The context encoding network corresponds to the natural language understanding and dialog state tracker, which produces a summarized representation of the dialog state. The prior network corresponds to the dialog manager that outputs a high-level action of the next response based on the dialog state. Then the decoder network plays the role of a context-sensitive natural language generator that produces the surface form of the prior network's outputs. Lastly, the recognition network can be understood as a data-driven designer of dialog systems, which discovers useful high-level factors from the data and encodes them in the latent variable $z$. This implies a promising research direction for data-driven dialog systems (see Section~\ref{sec:implication} for details). \subsection*{Consistency of Natural Language Generation} Since kgCVAE first predicts the dialog act of the response, and then generates the response conditioned on both the context and its predicted dialog act, a natural evaluation is to measure how well kgCVAE generates sentences that are consistent with its own dialog act prediction. We treat the predicted dialog act $y^\prime$ as the ground truth and the dialog act tagged from the generated words by the dialog act tagger as the prediction. The accuracy on the test dataset is $80.5\%$, which suggests that kgCVAE is able to jointly model the relationship between the dialog act and its associated surface realization. \subsection{Latent Space Analysis} Finally, we investigate the properties of the learned latent space $z$. We hypothesise that the posterior $z$ from the recognition network $q_\phi(z|x,y,c)$ captures important discourse information in the context-response tuple ($c$, $x$). To verify this hypothesis, we first use t-SNE~\cite{maaten2008visualizing} to map the collected $z$ into a 2-dimensional space and explore the potential clusters in the latent space. Figure~\ref{fig:tsne} shows our findings, which suggest that the latent space is correlated with the length, the sentiment and the dialog act of the encoded response $x$. Our work is related to both recent advances in encoder-decoder dialog models and generative models based on CVAE. \vspace{-0.2cm} \subsection{Encoder-decoder Dialog Models} \vspace{-0.2cm} Since the emergence of the neural dialog model, the problem of output diversity has received much attention in the research community. Ideal output responses should be both coherent and diverse. However, most models end up with generic and dull responses. 
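A minimal sketch of such a linear probe is given below, assuming the posterior means of $z$ have been collected into an array (one row per utterance) and paired with integer labels; the scikit-learn classifier and the cross-validation protocol are illustrative assumptions, not the exact setup behind Table~\ref{tbl:latent}.
\begin{verbatim}
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def probe_latent_space(z_post: np.ndarray, labels: np.ndarray) -> float:
    """Linear probe: how well does a logistic regressor trained on the
    posterior z predict a discrete label (e.g. dialog act or sentiment)?
    Returns the mean cross-validated accuracy."""
    clf = LogisticRegression(max_iter=1000)
    return float(cross_val_score(clf, z_post, labels, cv=5).mean())

# Hypothetical usage: z_post has shape (num_utterances, latent_dim) and
# dialog_acts holds one integer label per utterance.
# acc = probe_latent_space(z_post, dialog_acts)
\end{verbatim}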
To tackle this problem, one line of research has focused on augmenting the input of encoder-decoder models with richer context information, in order to generate more specific responses. Li et al.~\shortcite{li2016persona} captured speakers' characteristics by encoding background information and speaking style into distributed embeddings, which are used to re-rank the responses generated by an encoder-decoder model. Xing et al.~\shortcite{xing2016topic} maintain a topic encoding based on Latent Dirichlet Allocation (LDA)~\cite{blei2003latent} of the conversation to encourage the model to output more topic-coherent responses. On the other hand, many attempts have also been made to improve the architecture of encoder-decoder models. Li et al.~\shortcite{li2015diversity} proposed to optimize the standard encoder-decoder by maximizing the mutual information between input and output, which in turn reduces generic responses. This approach penalized unconditionally high-frequency responses, and favored responses that have high conditional probability given the input. Wiseman and Rush~\shortcite{wiseman2016sequence} focused on improving the decoder network by alleviating the biases between training and testing. They introduced a search-based loss that directly optimizes the networks for beam search decoding. The resulting model achieves better performance on word ordering, parsing and machine translation. Besides improving beam search, Li et al.~\shortcite{li2016deep} pointed out that the MLE objective of an encoder-decoder model is unable to approximate the real-world goal of the conversation. Thus, they initialized an encoder-decoder model with the MLE objective and leveraged reinforcement learning to fine-tune the model by optimizing three heuristic reward functions: informativity, coherence, and ease of answering. \vspace{-0.2cm} \subsection{Conditional Variational Autoencoder} \vspace{-0.2cm} The variational autoencoder (VAE)~\cite{kingma2013auto,rezende2014stochastic} is one of the most popular frameworks for image generation. The basic idea of VAE is to encode the input $x$ into a probability distribution over a latent variable $z$ instead of the point encoding used in an autoencoder. Then VAE applies a decoder network to reconstruct the original input using samples from $z$. To generate images, VAE first obtains a sample of $z$ from the prior distribution, e.g. $\mathcal{N}(0,\mathbf{I})$, and then produces an image via the decoder network. A more advanced model, the conditional VAE (CVAE), is a recent modification of VAE to generate diverse images conditioned on certain attributes, e.g. generating different human faces given skin color~\cite{yan2015attribute2image,sohn2015learning}. Inspired by CVAE, we view the dialog contexts as the conditional attributes and adapt CVAE to generate diverse responses instead of images. Although VAE/CVAE has achieved impressive results in image generation, adapting it to natural language generators is non-trivial. Bowman et al.~\shortcite{bowman2015generating} have used VAE with Long Short-Term Memory (LSTM)-based recognition and decoder networks to generate sentences from a latent Gaussian variable. They showed that their model is able to generate diverse sentences even with a greedy LSTM decoder. They also reported the difficulty of training because the LSTM decoder tends to ignore the latent variable. We refer to this issue as the \textit{vanishing latent variable problem}. 
Serban et al.~\shortcite{serban2016hierarchical} have applied a latent variable hierarchical encoder-decoder dialog model to introduce utterance-level variations and facilitate longer responses. To improve upon past models, we first introduce a novel mechanism to leverage linguistic knowledge in training end-to-end neural dialog models, and we also propose a novel training technique that mitigates the vanishing latent variable problem. \subsection*{Variational Lower Bound for kgCVAE} We assume that even in the presence of the linguistic feature $y$ for $x$, the prediction of $x_{bow}$ still only depends on $z$ and $c$. Therefore, we have: \begin{align} \mathcal{L}(\theta,\phi;x,c,y) &= -KL(q_\phi (z|x,c,y) \| p_\theta (z|c)) \nonumber\\ &+ \mathbf{E}_{q_\phi (z|c,x,y)} [\log p(x|z, c, y)] \nonumber\\ & + \mathbf{E}_{q_\phi (z|c,x,y)} [\log p(y|z, c)] \nonumber \\ & + \mathbf{E}_{q_\phi (z|c,x,y)} [\log p(x_{bow}|z, c)] \end{align} \subsection*{Collection of Multiple Reference Responses} We collected multiple reference responses for each dialog context in the test set using information retrieval techniques combined with a traditional machine learning method. First, we encode the dialog history into a vector representation $h$ using a Term Frequency-Inverse Document Frequency (TF-IDF)~\cite{salton1988term} weighted bag-of-words. We denote the topic of the conversation as $t$ and the conversation floor as $f$, i.e. $f=1$ if the speakers of the last utterance in the dialog history and of the response utterance are the same, otherwise $f=0$. We then computed the similarity $d(c_i, c_j)$ between two dialog contexts using: \begin{equation} d(c_i, c_j) = \mathbb{1}(t_i = t_j) \mathbb{1}(f_i = f_j) \frac{h_i \cdot h_j}{||h_i||\,||h_j||} \end{equation} Unlike past work~\cite{sordoni2015neural}, this similarity function only depends on the dialog context and imposes no constraints on the response, and is therefore suitable for finding diverse responses for the same dialog context. Secondly, for each dialog context in the test set, we retrieved the 10 nearest neighbors from the training set and treated the responses from the training set as candidate reference responses. Thirdly, we further sampled 240 context-response pairs from the 5481 pairs in the full test set, and two human computational linguistics experts post-processed the selected candidate responses, giving a binary label for each candidate response indicating whether the response is appropriate for its dialog context. The filtered lists then served as the ground truth to train our reference response classifier. For the next step, we extracted bigrams, part-of-speech bigrams and word part-of-speech pairs from both dialog contexts and candidate reference responses, with the rare-feature threshold for feature extraction set to 20. Then L2-regularized logistic regression with 10-fold cross validation was applied as the machine learning algorithm. Cross validation accuracy on the human-labelled data was 71\%. Finally, we automatically annotated the rest of the test set with this trained classifier and the resulting data were used for model evaluation.\vspace{-0.2cm} \subsection{Conditional Variational Autoencoder (CVAE) for Dialog Generation} Each dyadic conversation can be represented via three random variables: the dialog context $c$ (context window size $k-1$), the response utterance $x$ (the $k^{th}$ utterance) and a latent variable $z$, which is used to capture the latent distribution over the valid responses. 
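The retrieval step above can be sketched as follows; this illustrative Python snippet assumes the dialog histories are given as plain strings with topic and floor labels, and uses scikit-learn's default TF-IDF settings (whose rows are L2-normalized, so a dot product equals cosine similarity) rather than the authors' exact configuration.
\begin{verbatim}
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def retrieve_candidate_contexts(test_context, test_topic, test_floor,
                                train_contexts, train_topics, train_floors,
                                k=10):
    """Rank training contexts by
    d(c_i, c_j) = 1(t_i = t_j) * 1(f_i = f_j) * cos(h_i, h_j)
    and return the indices of the k most similar training contexts."""
    vec = TfidfVectorizer()            # rows are L2-normalised by default,
    H = vec.fit_transform(list(train_contexts) + [test_context])
    cos = (H[:-1] @ H[-1].T).toarray().ravel()   # so dot product == cosine
    same_topic = np.array([t == test_topic for t in train_topics], dtype=float)
    same_floor = np.array([f == test_floor for f in train_floors], dtype=float)
    d = same_topic * same_floor * cos
    return np.argsort(-d)[:k]

# Hypothetical usage: contexts are whitespace-joined dialog histories.
# idx = retrieve_candidate_contexts(test_ctx, topic_id, 1,
#                                   train_ctxs, train_topic_ids, train_floors)
\end{verbatim}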
Further, $c$ is composed of the dialog history: the preceding $k-1$ utterances; the conversation floor (1 if the utterance is from the same speaker as $x$, otherwise 0) and meta features $m$ (e.g. the topic). We then define the conditional distribution $p(x,z|c) = p(x|z,c)p(z|c)$ and our goal is to use deep neural networks (parametrized by $\theta$) to approximate $p(z|c)$ and $p(x|z,c)$. We refer to $p_{\theta}(z|c)$ as the \textit{prior network} and $p_{\theta}(x|z,c)$ as the \textit{response decoder}. Then the generative process of $x$ is (Figure~\ref{fig:graphic} (a)): \begin{enumerate} \item Sample a latent variable $z$ from the prior network $p_{\theta}(z|c)$. \item Generate $x$ through the response decoder $p_{\theta}(x|z, c)$. \end{enumerate} CVAE is trained to maximize the conditional log likelihood of $x$ given $c$, which involves an intractable marginalization over the latent variable $z$. As proposed in~\cite{sohn2015learning,yan2015attribute2image}, CVAE can be efficiently trained with the \textit{Stochastic Gradient Variational Bayes} (SGVB) framework~\cite{kingma2013auto} by maximizing the variational lower bound of the conditional log likelihood. We assume that $z$ follows a multivariate Gaussian distribution with a diagonal covariance matrix and introduce a \textit{recognition network} $q_\phi(z|x,c)$ to approximate the true posterior distribution $p(z|x,c)$. Sohn et al.~\shortcite{sohn2015learning} have shown that the variational lower bound can be written as: \begin{align} \label{eq:elbo} \mathcal{L}(\theta,\phi;x,c) &= -KL(q_\phi (z|x,c) \| p_\theta (z|c)) \nonumber \\ &+ \mathbf{E}_{q_\phi (z|c,x)} [\log p_\theta(x|z, c)] \\ & \leq \log p (x|c) \nonumber \end{align} Figure~\ref{fig:model} shows an overview of our model. The utterance encoder is a bidirectional recurrent neural network (BRNN)~\cite{schuster1997bidirectional} with a gated recurrent unit (GRU)~\cite{chung2014empirical} that encodes each utterance into a fixed-size vector by concatenating the last hidden states of the forward and backward RNN, $u_i = [\vec{h_i}, \cev{h_i}]$. $x$ is simply $u_k$. The context encoder is a 1-layer GRU network that encodes the preceding $k-1$ utterances by taking $u_{1:k-1}$ and the corresponding conversation floor as inputs. The last hidden state $h^c$ of the context encoder is concatenated with the meta features and $c=[h^{c}, m]$. Since we assume that $z$ follows an isotropic Gaussian distribution, the recognition network $q_\phi(z|x,c) \sim \mathcal{N}(\mu, \sigma^2\mathbf{I})$ and the prior network $p_\theta(z|c) \sim \mathcal{N}(\mu^\prime, \sigma^{\prime 2}\mathbf{I})$, and then we have: \begin{align} \begin{bmatrix} \mu \\ \log(\sigma^2) \end{bmatrix} &= W_r \begin{bmatrix} x\\ c \end{bmatrix} + b_r \\ \begin{bmatrix} \mu^\prime \\ \log(\sigma^{\prime 2}) \end{bmatrix} &= \text{MLP}_p(c) \end{align} We then use the reparametrization trick~\cite{kingma2013auto} to obtain samples of $z$ either from $\mathcal{N}(z; \mu, \sigma^2\mathbf{I})$ predicted by the recognition network (training) or $\mathcal{N}(z; \mu^\prime, \sigma^{\prime 2}\mathbf{I})$ predicted by the prior network (testing). Finally, the response decoder is a 1-layer GRU network with initial state $s_0 = W_i[z,c] + b_i$. The response decoder then predicts the words in $x$ sequentially. \subsection{Knowledge-Guided CVAE (kgCVAE)} \vspace{-0.2cm} In practice, training CVAE is a challenging optimization problem and often requires a large amount of data. 
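To make the latent-variable machinery concrete, here is a minimal PyTorch sketch of the recognition network, the prior network, the reparametrization step and the Gaussian KL term. It assumes $x$ and $c$ are already encoded vectors (the utterance and context encoders are omitted), uses the layer sizes reported later in Section~\ref{sec:train} only as illustrative defaults, and is not the authors' released implementation.
\begin{verbatim}
import torch
import torch.nn as nn

class LatentNetworks(nn.Module):
    """Recognition network q_phi(z|x,c) and prior network p_theta(z|c) of a
    CVAE with a diagonal-Gaussian latent variable z (illustrative sizes)."""

    def __init__(self, x_dim=600, c_dim=600, z_dim=200, hidden=400):
        super().__init__()
        # recognition network: one linear layer producing [mu, log sigma^2]
        self.recog = nn.Linear(x_dim + c_dim, 2 * z_dim)
        # prior network: MLP with one tanh hidden layer for [mu', log sigma'^2]
        self.prior = nn.Sequential(nn.Linear(c_dim, hidden), nn.Tanh(),
                                   nn.Linear(hidden, 2 * z_dim))

    @staticmethod
    def reparameterize(mu, logvar):
        # z = mu + sigma * eps,  eps ~ N(0, I)
        return mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)

    @staticmethod
    def gaussian_kl(mu_q, logvar_q, mu_p, logvar_p):
        # KL( N(mu_q, sigma_q^2 I) || N(mu_p, sigma_p^2 I) ), summed over z dims
        kl = 0.5 * (logvar_p - logvar_q
                    + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp()
                    - 1.0)
        return kl.sum(dim=-1)

    def forward(self, x, c):
        mu_q, logvar_q = self.recog(torch.cat([x, c], dim=-1)).chunk(2, dim=-1)
        mu_p, logvar_p = self.prior(c).chunk(2, dim=-1)
        z = self.reparameterize(mu_q, logvar_q)   # training-time sample
        kl = self.gaussian_kl(mu_q, logvar_q, mu_p, logvar_p)
        return z, kl
\end{verbatim}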
On the other hand, past research in spoken dialog systems and discourse analysis has suggested that many linguistic cues capture crucial features in representing natural conversation. For example, dialog acts~\cite{poesio1998towards} have been widely used in dialog managers~\cite{litman1987plan,raux2005let,zhao2016towards} to represent the propositional function of the system. Therefore, we conjecture that it will be beneficial for the model to learn a meaningful latent $z$ if it is provided with explicitly extracted discourse features during training. In order to incorporate the linguistic features into the basic CVAE model, we first denote the set of linguistic features as $y$. Then we assume that the generation of $x$ depends on $c$, $z$ and $y$, and that $y$ relies on $z$ and $c$, as shown in Figure~\ref{fig:graphic}. Specifically, during training the initial state of the response decoder is $s_0 = W_i[z,c,y]+b_i$ and the input at every step is $[e_t, y]$, where $e_t$ is the word embedding of the $t^{th}$ word in $x$. In addition, there is an MLP to predict $y'=\text{MLP}_y(z,c)$ based on $z$ and $c$. In the testing stage, the predicted $y'$ is used by the response decoder instead of the oracle $y$. We denote the modified model as knowledge-guided CVAE (kgCVAE); developers can add any discourse features that they wish the latent variable $z$ to capture. The kgCVAE model is trained by maximizing: \begin{align} \label{eq:kg_elbo} \mathcal{L}(\theta,\phi;x,c,y) &= -KL(q_\phi (z|x,c,y) \| p_\theta (z|c)) \nonumber\\ &+ \mathbf{E}_{q_\phi (z|c,x,y)} [\log p(x|z, c, y)] \nonumber\\ & + \mathbf{E}_{q_\phi (z|c,x,y)} [\log p(y|z, c)] \end{align} Since the reconstruction of $y$ is now a part of the loss function, kgCVAE can more efficiently encode $y$-related information into $z$ than discovering it only based on the surface-level $x$ and $c$. Another advantage of kgCVAE is that it can output a high-level label (e.g. dialog act) along with the word-level responses, which allows easier interpretation of the model's outputs. \subsection{Optimization Challenges} \vspace{-0.2cm} A straightforward VAE with an RNN decoder fails to encode meaningful information in $z$ due to the \textit{vanishing latent variable problem}~\cite{bowman2015generating}. Bowman et al.~\shortcite{bowman2015generating} proposed two solutions: (1) \textit{KL annealing}: gradually increasing the weight of the KL term from 0 to 1 during training; (2) \textit{word drop decoding}: setting a certain percentage of the target words to 0. We found that CVAE suffers from the same issue when the decoder is an RNN. We also did not consider word drop decoding because Bowman et al.~\shortcite{bowman2015generating} have shown that it may hurt the performance when the drop rate is too high. As a result, we propose a simple yet novel technique to tackle the vanishing latent variable problem: \textit{bag-of-word loss}. The idea is to introduce an auxiliary loss that requires the decoder network to predict the bag-of-words in the response $x$, as shown in Figure~\ref{fig:model}(b). We decompose $x$ into two variables: $x_o$ with word order and $x_{bow}$ without order, and assume that $x_o$ and $x_{bow}$ are conditionally independent given $z$ and $c$: $p(x,z|c)=p(x_o|z,c)p(x_{bow}|z,c)p(z|c)$. Due to the conditional independence assumption, the latent variable is forced to capture global information about the target response. 
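The kgCVAE-specific wiring can be sketched in the same spirit (PyTorch, illustrative sizes, a one-hot dialog act vector for $y$); feeding $y$ into every decoder step and the dialog-act loss itself are omitted for brevity, so this is a partial sketch rather than the full model.
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class KgConditioning(nn.Module):
    """kgCVAE-specific wiring: predict the dialog act y' from [z, c] with an
    MLP, and build the decoder's initial state from [z, c, y] (oracle y during
    training, predicted y' at test time). Sizes are illustrative."""

    def __init__(self, z_dim=200, c_dim=600, num_acts=42, dec_dim=400):
        super().__init__()
        self.mlp_y = nn.Sequential(nn.Linear(z_dim + c_dim, 400), nn.Tanh(),
                                   nn.Linear(400, num_acts))
        self.init_state = nn.Linear(z_dim + c_dim + num_acts, dec_dim)

    def forward(self, z, c, y_oracle=None):
        y_logits = self.mlp_y(torch.cat([z, c], dim=-1))
        if y_oracle is None:      # test time: one-hot of the predicted act
            y = F.one_hot(y_logits.argmax(dim=-1),
                          num_classes=y_logits.size(-1)).float()
        else:                     # training: the oracle act
            y = y_oracle
        s0 = self.init_state(torch.cat([z, c, y], dim=-1))
        return y_logits, s0       # y_logits feeds the dialog-act loss term
\end{verbatim}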
Let $f = \text{MLP}_{b}(z,c) \in \mathbb{R}^V$, where $V$ is the vocabulary size, and we have: \begin{equation} \log p(x_{bow}|z, c) = \log \prod_{t=1}^{|x|}\frac{e^{f_{x_t}}}{\sum_{j=1}^V e^{f_j}} \end{equation} where $|x|$ is the length of $x$ and $x_t$ is the word index of the $t^{th}$ word in $x$. The modified variational lower bound for CVAE with bag-of-word loss is (see Appendix~\ref{sec:supplemental} for kgCVAE): \begin{align} \label{eq:bow_kg_elbo} \mathcal{L}^\prime(\theta,\phi;x,c) &=\mathcal{L}(\theta,\phi;x,c)\nonumber\\ & + \mathbf{E}_{q_\phi (z|c,x,y)} [\log p(x_{bow}|z, c)] \end{align} We will show that the bag-of-word loss in Equation~\ref{eq:bow_kg_elbo} is very effective against the vanishing latent variable problem and is also complementary to the KL annealing technique.\vspace{-0.2cm} \subsection{Dataset} \label{sec:data} We chose the Switchboard (SW) 1 Release 2 Corpus~\cite{godfrey1997switchboard} to evaluate the proposed models. SW has 2400 two-sided telephone conversations with manually transcribed speech and alignment. At the beginning of the call, a computer operator gave the callers recorded prompts that define the desired topic of discussion. There are 70 available topics. We randomly split the data into $2316/60/62$ dialogs for train/validate/test. The pre-processing includes (1) tokenizing using the NLTK tokenizer~\cite{bird2009natural}; (2) removing non-verbal symbols and repeated words due to false starts; (3) keeping the top 10K most frequent word types as the vocabulary. The final data have $207,833/5,225/5,481$ $(c,x)$ pairs for train/validate/test. Furthermore, a subset of SW was manually labeled with dialog acts~\cite{stolcke2000dialogue}. We extracted dialog act labels based on the dialog act recognizer proposed in~\cite{ribeiro2015influence}. The features include the uni-gram and bi-gram of the utterance, and the contextual features of the last 3 utterances. We trained a Support Vector Machine (SVM)~\cite{suykens1999least} with a linear kernel on the subset of SW with human annotations. There are 42 types of dialog acts and the SVM achieved 77.3\% accuracy on held-out data. The rest of the SW data were then labelled with dialog acts using the trained SVM dialog act recognizer. \subsection{Training} \label{sec:train} We trained with the following hyperparameters (selected according to the loss on the validation set): the word embedding has size 200 and is shared everywhere. We initialize the word embeddings from Glove embeddings pre-trained on Twitter~\cite{pennington2014glove}. The utterance encoder has a hidden size of 300 for each direction. The context encoder has a hidden size of 600 and the response decoder has a hidden size of 400. The prior network and the MLP for predicting $y$ both have 1 hidden layer of size 400 and a $\tanh$ non-linearity. The latent variable $z$ has a size of 200. The context window $k$ is 10. All the initial weights are sampled from a uniform distribution on [-0.08, 0.08]. The mini-batch size is 30. The models are trained end-to-end using the Adam optimizer~\cite{kingma2014adam} with a learning rate of 0.001 and gradient clipping at 5. We selected the best models based on the variational lower bound on the validation data. Finally, we use the BOW loss along with \textit{KL annealing} over 10,000 batches to achieve the best performance. Section~\ref{sec:bow} gives a detailed argument for the importance of the BOW loss.
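A compact sketch of how the BOW auxiliary loss and the linear KL-annealing weight described above could be computed is shown below (PyTorch; the tensor shapes and padding convention are illustrative assumptions, not the authors' implementation).
\begin{verbatim}
import torch
import torch.nn.functional as F

def bow_loss(f_logits, target_ids, pad_id=0):
    """Bag-of-word auxiliary loss: -log p(x_bow | z, c).

    f_logits:   (batch, vocab) output of MLP_b(z, c)
    target_ids: (batch, max_len) int64 word indices of the response x (padded)
    """
    log_probs = F.log_softmax(f_logits, dim=-1)        # (batch, vocab)
    token_logp = log_probs.gather(1, target_ids)       # (batch, max_len)
    mask = (target_ids != pad_id).float()
    return -(token_logp * mask).sum(dim=1).mean()      # average over the batch

def kl_weight(global_step, full_step=10000):
    """Linear KL annealing: weight grows from 0 to 1 over `full_step` batches."""
    return min(1.0, global_step / full_step)

# Hypothetical per-batch objective for a CVAE trained with both techniques:
# loss = reconstruction_nll + kl_weight(step) * kl_divergence + bow_nll
\end{verbatim}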
Learning Discourse-level Diversity for Neural Dialog Models using Conditional Variational Autoencoders
1703.10960
Table 3: The reconstruction perplexity and KL terms on Penn Treebank test set.
[ "Model", "Perplexity", "KL cost" ]
[ [ "Standard", "122.0", "0.05" ], [ "KLA", "111.5", "2.02" ], [ "BOW", "97.72", "7.41" ], [ "BOW+KLA", "73.04", "15.94" ] ]
The standard VAE fails to learn a meaningful latent variable by having a KL cost close to 0 and a reconstruction perplexity similar to a small LSTM LM Zaremba et al. KLA helps to improve the reconstruction loss, but it requires early stopping since the models will fall back to the standard VAE after the KL weight becomes 1. At last, the models with BOW loss achieved significantly lower perplexity and larger KL cost.
\documentclass[11pt,a4paper]{article} \usepackage[hyperref]{acl2017} \aclfinalcopy % Uncomment this line for the final submission \def\aclpaperid{256} % Enter the acl Paper ID here \newcommand{\cev}[1]{\reflectbox{\ensuremath{\vec{\reflectbox{\ensuremath{#1}}}}}} \newcommand\BibTeX{B{\sc ib}\TeX} \title{Learning Discourse-level Diversity for Neural Dialog Models using Conditional Variational Autoencoders} \author{Tiancheng Zhao, Ran Zhao and Maxine Eskenazi \\ Language Technologies Institute \\ Carnegie Mellon University \\ Pittsburgh, Pennsylvania, USA \\ {\tt \{tianchez,ranzhao1,max+\}@cs.cmu.edu}} \date{} \begin{document} \maketitle \begin{abstract} \vspace{-0.2cm} While recent neural encoder-decoder models have shown great promise in modeling open-domain conversations, they often generate dull and generic responses. Unlike past work that has focused on diversifying the output of the decoder at the word level to alleviate this problem, we present a novel framework based on conditional variational autoencoders that captures the discourse-level diversity in the encoder. Our model uses latent variables to learn a distribution over potential conversational intents and generates diverse responses using only greedy decoders. We have further developed a novel variant that is integrated with linguistic prior knowledge for better performance. Finally, the training procedure is improved by introducing a \textit{bag-of-word} loss. Our proposed models have been validated to generate significantly more diverse responses than baseline approaches and exhibit competence in discourse-level decision-making.\footnote{Data and an implementation of our model are available at https://github.com/snakeztc/NeuralDialog-CVAE} \end{abstract} \vspace{-0.4cm} \section{Introduction} \label{sec:intro} \input{intro.tex} \vspace{-0.2cm} \section{Related Work} \label{sec:related} \input{related.tex} \vspace{-0.2cm} \section{Proposed Models} \vspace{-0.2cm} \label{sec:models} \input{model.tex} \vspace{-0.2cm} \section{Experiment Setup} \label{sec:corpra} \input{corpora.tex} \vspace{-0.2cm} \section{Results} \vspace{-0.2cm} \label{sec:exp} \input{experiments.tex} \section{Conclusion and Future Work} \vspace{-0.2cm} \label{sec:conclusion} \input{conclusion.tex} \section{Acknowledgements} This work was funded by NSF grant CNS-1512973. The opinions expressed in this paper do not necessarily reflect those of NSF. \bibliographystyle{acl_natbib} \appendix \section{Supplemental Material} \label{sec:supplemental} \input{appendix.tex} \end{document} \vspace{-0.2cm} The dialog manager is one of the key components of a dialog system and is responsible for modeling the decision-making process. Specifically, it typically takes a new utterance and the dialog context as input, and generates discourse-level decisions~\cite{bohus2003ravenclaw,williams2007partially}. Advanced dialog managers usually have a list of potential actions that enable them to have diverse behavior during a conversation, e.g. different strategies to recover from non-understanding~\cite{yu2016strategy}. However, the conventional approach of designing a dialog manager~\cite{williams2007partially} does not scale well to open-domain conversation models because of the vast quantity of possible decisions. Thus, there has been a growing interest in applying encoder-decoder models~\cite{sutskever2014sequence} for modeling open-domain conversation~\cite{vinyals2015neural,serban2016building}. 
The basic approach treats a conversation as a transduction task, in which the dialog history is the source sequence and the next response is the target sequence. The model is then trained end-to-end on large conversation corpora using the maximum-likelihood estimation (MLE) objective without the need for manual crafting. However, recent research has found that encoder-decoder models tend to generate generic and dull responses (e.g., \textit{I don't know}), rather than meaningful and specific answers~\cite{li2015diversity,serban2016hierarchical}. There have been many attempts to explain and solve this limitation, and they can be broadly divided into two categories (see Section~\ref{sec:related} for details): (1) the first category argues that the dialog history is only one of the factors that decide the next response. Other features should be extracted and provided to the models as conditionals in order to generate more specific responses~\cite{xing2016topic,li2016persona}; (2) the second category aims to improve the encoder-decoder model itself, including decoding with beam search and its variations~\cite{wiseman2016sequence}, encouraging responses that have long-term payoff~\cite{li2016deep}, etc. Building upon the past work in dialog managers and encoder-decoder models, the key idea of this paper is to model dialogs as a \textit{one-to-many} problem at the discourse level. Previous studies indicate that there are many factors in open-domain dialogs that decide the next response, and it is non-trivial to extract all of them. Intuitively, given a similar dialog history (and other observed inputs), there may exist many valid responses (at the discourse level), each corresponding to a certain configuration of the latent variables that are not present in the input. To uncover the potential responses, we strive to model a probabilistic distribution over the distributed utterance embeddings of the potential responses using a latent variable (Figure~\ref{fig:example}). This allows us to generate diverse responses by drawing samples from the learned distribution and reconstruct their words via a decoder neural network. \vspace{-0.3cm} Specifically, our contributions are three-fold: 1. We present a novel neural dialog model adapted from conditional variational autoencoders (CVAE)~\cite{yan2015attribute2image,sohn2015learning}, which introduces a latent variable that can capture discourse-level variations as described above. 2. We propose Knowledge-Guided CVAE (kgCVAE), which enables easy integration of expert knowledge and results in performance improvement and model interpretability. 3. We develop a training method that addresses the difficulty of optimizing CVAE for natural language generation~\cite{bowman2015generating}. We evaluate our models on human-human conversation data and yield promising results in: (a) generating appropriate and discourse-level diverse responses, and (b) showing that the proposed training method is more effective than the previous techniques. In conclusion, we identified the \textit{one-to-many} nature of open-domain conversation and proposed two novel models that show superior performance in generating diverse and appropriate responses at the discourse level. While the current paper addresses diversifying responses with respect to dialog acts, this work is part of a larger research direction that targets leveraging both past linguistic findings and the learning power of deep neural networks to learn better representations of the latent factors in dialog. 
In turn, the output of this novel neural dialog model will be easier to explain and control by humans. In addition to dialog acts, we plan to apply our kgCVAE model to capture other different linguistic phenomena including sentiment, named entities,etc. Last but not least, the recognition network in our model will serve as the foundation for designing a data-driven dialog manager, which automatically discovers useful high-level intents. All of the above suggest a promising research direction.\textbf{Connection to Conventional Dialog System Pipeline}: Our models are connected to the conventional dialog system pipeline~\cite{bohus2007olympus}. The context encoding network corresponds the natural language understanding and dialog state tracker, which produces a summarized representation of the dialog state. The prior network corresponds to the dialog manager that outputs a high-level action of the next response based on the dialog state. Then the decoder network plays the role of a context-sensitive natural language generator that produces the surface form of the prior network's outputs. Lastly the recognition network can be understood as a data-driven designer of the dialog systems, which discover useful high-level factors form the data and encode them in the latent variable $z$. This implies a promising research direction for data-driven dialog system (see Section~\ref{sec:implication} for details).\subsection{Experiments Setup} We compared three neural dialog models: a strong baseline model, CVAE, and kgCVAE. The \textbf{baseline model} is an encoder-decoder neural dialog model without latent variables similar to~\cite{serban2016building}. The baseline model's encoder uses the same context encoder to encode the dialog history and the meta features as shown in Figure~\ref{fig:model}. The encoded context $c$ is directly fed into the decoder networks as the initial state. The hyperparameters of the baseline are the same as the ones reported in Section~\ref{sec:train} and the baseline is trained to minimize the standard cross entropy loss of the decoder RNN model without any auxiliary loss. Also, to compare the diversity introduced by the stochasticity in the proposed latent variable versus the softmax of RNN at each decoding step, we generate $N$ responses from the baseline by sampling from the softmax. For CVAE/kgCVAE, we sample $N$ times from the latent $z$ and only use greedy decoders so that the randomness comes entirely from the latent variable $z$. \subsection{Quantitative Analysis} Automatically evaluating an open-domain generative dialog model is an open research challenge~\cite{liu2016not}. Following our \textit{one-to-many} hypothesis, we propose the following metrics. We assume that for a given dialog context $c$, there exist $M_c$ reference responses $r_j$, $j\in[1,M_c]$. Meanwhile a model can generate $N$ hypothesis responses $h_i$, $i\in[1,N]$. The generalized response-level precision/recall for a given dialog context is: \begin{align*} \text{precision(c)} &= \frac{\sum_{i=1}^N max_{j\in[1,M_c]}d(r_j, h_i)}{N} \\ \text{recall(c)} &= \frac{\sum_{j=1}^{M_c} max_{i\in[1,N]}d(r_j,h_i))}{M_c} \end{align*} where $d(r_j, h_i)$ is a distance function which lies between 0 to 1 and measures the similarities between $r_j$ and $h_i$. 
The final score is averaged over the entire test dataset and we report the performance with 3 types of distance functions in order to evaluate the systems from various linguistic points of view: \begin{enumerate} \item Smoothed Sentence-level BLEU~\cite{chen2014systematic}: BLEU is a popular metric that measures the geometric mean of modified n-gram precision with a length penalty~\cite{papineni2002bleu,li2015diversity}. We use BLEU-1 to 4 as our lexical similarity metric and normalize the score to 0 to 1 scale. \item Cosine Distance of Bag-of-word Embedding: a simple method to obtain sentence embeddings is to take the average or extrema of all the word embeddings in the sentences~\cite{forgues2014bootstrapping,adi2016fine}. The $d(r_j, h_i)$ is the cosine distance of the two embedding vectors. We used Glove embedding described in Section~\ref{sec:corpra} and denote the average method as A-bow and extrema method as E-bow. The score is normalized to [0, 1]. \item Dialog Act Match: to measure the similarity at the discourse level, the same dialog-act tagger from~\ref{sec:data} is applied to label all the generated responses of each model. We set $d(r_j,h_i)=1$ if $r_j$ and $h_i$ have the same dialog acts, otherwise $d(r_j, h_i)=0$. \end{enumerate} One challenge of using the above metrics is that there is only one, rather than multiple reference responses/contexts. This impacts reliability of our measures. Inspired by~\cite{sordoni2015neural}, we utilized information retrieval techniques (see Appendix~\ref{sec:supplemental}) to gather 10 extra candidate reference responses/context from other conversations with the same topics. Then the 10 candidate references are filtered by two experts, which serve as the ground truth to train the reference response classifier. The result is 6.69 extra references in average per context. The average number of distinct reference dialog acts is 4.2. Table~\ref{tbl:diversity} shows the results. The proposed models outperform the baseline in terms of recall in all the metrics with statistical significance. This confirms our hypothesis that generating responses with discourse-level diversity can lead to a more comprehensive coverage of the potential responses than promoting only word-level diversity. As for precision, we observed that the baseline has higher or similar scores than CVAE in all metrics, which is expected since the baseline tends to generate the mostly likely and safe responses repeatedly in the $N$ hypotheses. However, kgCVAE is able to achieve the highest precision and recall in the 4 metrics at the same time (BLEU1-4, A-BOW). One reason for kgCVAE's good performance is that the predicted dialog act label in kgCVAE can regularize the generation process of its RNN decoder by forcing it to generate more coherent and precise words. We further analyze the precision/recall of BLEU-4 by looking at the average score versus the number of distinct reference dialog acts. A low number of distinct dialog acts represents the situation where the dialog context has a strong constraint on the range of the next response (low entropy), while a high number indicates the opposite (high-entropy). Figure~\ref{fig:prec_recall} shows that CVAE/kgCVAE achieves significantly higher recall than the baseline in higher entropy contexts. Also it shows that CVAE suffers from lower precision, especially in low entropy contexts. Finally, kgCVAE gets higher precision than both the baseline and CVAE in the full spectrum of context entropy. 
\vspace{-0.4cm} \subsection{Qualitative Analysis} Table~\ref{tbl:sample} shows the outputs generated from the baseline and kgCVAE. In example 1, caller A begins with an open-ended question. The kgCVAE model generated highly diverse answers that cover multiple plausible dialog acts. Further, we notice that the generated text exhibits similar dialog acts compared to the ones predicted separately by the model, implying the consistency of natural language generation based on $y$. On the contrary, the responses from the baseline model are limited to local n-gram variations and share a similar prefix, i.e. "I'm". Example 2 is a situation where caller A is telling B stories. The ground truth response is a back-channel and the range of valid answers is more constrained than example 1 since B is playing the role of a listener. The baseline successfully predicts "uh-huh". The kgCVAE model is also able to generate various ways of back-channeling. This implies that the latent $z$ is able to capture context-sensitive variations, i.e. in low-entropy dialog contexts modeling lexical diversity while in high-entropy ones modeling discourse-level diversity. Moreover, kgCVAE is occasionally able to generate more sophisticated grounding (sample 4) beyond a simple back-channel, which is also an acceptable response given the dialog context. In addition, past work~\cite{kingma2013auto} has shown that the recognition network is able to learn to cluster high-dimension data, so we conjecture that posterior $z$ outputted from the recognition network should cluster the responses into meaningful groups. Figure~\ref{fig:tsne} visualizes the posterior $z$ of responses in the test dataset in 2D space using t-SNE~\cite{maaten2008visualizing}. We found that the learned latent space is highly correlated with the dialog act and length of responses, which confirms our assumption. \subsection{Results for Bag-of-Word Loss} \label{sec:bow} Finally, we evaluate the effectiveness of bag-of-word (BOW) loss for training VAE/CVAE with the RNN decoder. To compare with past work~\cite{bowman2015generating}, we conducted the same language modelling (LM) task on Penn Treebank using VAE. The network architecture is same except we use GRU instead of LSTM. We compared four different training setups: (1) standard VAE without any heuristics; (2) VAE with KL annealing (KLA); (3) VAE with BOW loss; (4) VAE with both BOW loss and KLA. Intuitively, a well trained model should lead to a low reconstruction loss and small but non-trivial KL cost. For all models with KLA, the KL weight increases linearly from 0 to 1 in the first 5000 batches. Table~\ref{tbl:vae_ptb} shows the reconstruction perplexity and the KL cost on the test dataset. The standard VAE fails to learn a meaningful latent variable by having a KL cost close to $0$ and a reconstruction perplexity similar to a small LSTM LM~\cite{zaremba2014recurrent}. KLA helps to improve the reconstruction loss, but it requires early stopping since the models will fall back to the standard VAE after the KL weight becomes 1. At last, the models with BOW loss achieved significantly lower perplexity and larger KL cost. \vspace{-0.2cm} Figure~\ref{fig:ptb_kld} visualizes the evolution of the KL cost. We can see that for the standard model, the KL cost crashes to 0 at the beginning of training and never recovers. On the contrary, the model with only KLA learns to encode substantial information in latent $z$ when the KL cost weight is small. 
However, after the KL weight is increased to 1 (after 5000 batches), the model once again decides to ignore the latent $z$ and falls back to the naive implementation. The model with BOW loss, however, consistently converges to a non-trivial KL cost even without KLA, which confirms the importance of the BOW loss for training latent variable models with an RNN decoder. Last but not least, our experiments showed that the conclusions drawn from LM using VAE also apply to training CVAE/kgCVAE, so we used the BOW loss together with KLA for all previous experiments. However, first of all, for practical reasons, direct decoding from the mutual information objective of Li et al.~\shortcite{li2015diversity}, $(1-\lambda)\log p(T|S)+\lambda \log p(S|T)$, is intractable due to the enormous search space of target responses. Meanwhile, this approach results in a non-globally-optimal solution since it fully depends on the system's success in generating a sufficiently diverse N-best list. Secondly, such models focus on the algorithmic dimensions of the problem and are unable to explicitly model holistic properties of the sentence such as topic or sentiment. Our work addresses these problems by introducing latent factors of a conversation in the proposed kgCVAE model, which can capture a wide range of properties of dialogs. Beam search optimization is widely used to generate diversified outputs for neural conversational models~\cite{serban2016building}. Wiseman and Rush~\shortcite{wiseman2016sequence} attempted to solve two major biases of seq2seq models that lead to monotonous output: exposure bias and loss-evaluation mismatch. To remedy these, their modified seq2seq models first produce scores for ranking sentences rather than predicting the probabilities of the next word. They then introduced a search-based loss for RNN training that keeps the full gold sequence at the top of the beam by recording margin violations in the forward pass and backpropagating errors in the backward pass. This beam search algorithm is slow and has not been shown to scale up. Sountsov and Sarawagi~\shortcite{sountsov2016length} proposed an alternative model that globally conditions the $p(y|x)$ distribution. They argued that seq2seq models exhibit a bias towards short sequences that surprisingly gets worse with increasing beam size. This counter-intuitive phenomenon is due to a margin discrepancy in the training loss of encoder-decoder models. Their model includes two RNN encoders that project both inputs and outputs into fixed-length vectors, which produces better-calibrated sequence-level probabilities and alleviates the output bias issues. However, their model requires a closed space of allowed outputs and their proposed architecture is not able to generate responses directly. Li et al.~\shortcite{li2016persona} add more context to neural response generation by presenting persona-based models that address speaker consistency. They captured individual characteristics by encoding background information and speaking style into distributed embeddings, which are used to rerank the responses generated by a sequence-to-sequence model. Li et al.~\shortcite{li2016deep} pointed out that the maximum-likelihood estimation (MLE) objective of a sequence-to-sequence model is unable to approximate the real-world goal of conversation. Thus, they initialized a sequence-to-sequence model with MLE and leveraged reinforcement learning to optimize three useful conversational properties: informativity, coherence, and ease of answering. 
Moreover, in order to confirm that our training objective indeed strives to capture such discourse information, we collect the posterior $z$ for every utterance in the test dataset at each stage of the training and deploy a linear classifier as a probe~\cite{zhao2016towards} of the amount of captured discourse information. Specifically, we use the posterior $z$ of $x$ as the input to a logistic regressor that is trained to predict the dialog act and sentiment of $x$. Table~\ref{tbl:latent} shows the results. We can see that as the model becomes better trained, the posterior $z$ indeed begins to capture more information that is beneficial for predicting the dialog act and sentiment of the response $x$. \textbf{Connection to Conventional Dialog System Pipeline}: Our models are connected to the conventional dialog system pipeline~\cite{bohus2007olympus}. The context encoding network corresponds to the natural language understanding and dialog state tracker, which produces a summarized representation of the dialog state. The prior network corresponds to the dialog manager that outputs a high-level action of the next response based on the dialog state. Then the decoder network plays the role of a context-sensitive natural language generator that produces the surface form of the prior network's outputs. Lastly, the recognition network can be understood as a data-driven designer of dialog systems, which discovers useful high-level factors from the data and encodes them in the latent variable $z$. This implies a promising research direction for data-driven dialog systems (see Section~\ref{sec:implication} for details). \subsection*{Consistency of Natural Language Generation} Since kgCVAE first predicts the dialog act of the response, and then generates the response conditioned on both the context and its predicted dialog act, a natural evaluation is to measure how well kgCVAE generates sentences that are consistent with its own dialog act prediction. We treat the predicted dialog act $y^\prime$ as the ground truth and the dialog act tagged from the generated words by the dialog act tagger as the prediction. The accuracy on the test dataset is $80.5\%$, which suggests that kgCVAE is able to jointly model the relationship between the dialog act and its associated surface realization. \subsection{Latent Space Analysis} Finally, we investigate the properties of the learned latent space $z$. We hypothesise that the posterior $z$ from the recognition network $q_\phi(z|x,y,c)$ captures important discourse information in the context-response tuple ($c$, $x$). To verify this hypothesis, we first use t-SNE~\cite{maaten2008visualizing} to map the collected $z$ into a 2-dimensional space and explore the potential clusters in the latent space. Figure~\ref{fig:tsne} shows our findings, which suggest that the latent space is correlated with the length, the sentiment and the dialog act of the encoded response $x$. Our work is related to both recent advances in encoder-decoder dialog models and generative models based on CVAE. \vspace{-0.2cm} \subsection{Encoder-decoder Dialog Models} \vspace{-0.2cm} Since the emergence of the neural dialog model, the problem of output diversity has received much attention in the research community. Ideal output responses should be both coherent and diverse. However, most models end up with generic and dull responses. 
To tackle this problem, one line of research has focused on augmenting the input of encoder-decoder models with richer context information, in order to generate more specific responses. Li et al.,~\shortcite{li2016persona} captured speakers' characteristics by encoding background information and speaking style into the distributed embeddings, which are used to re-rank the generated response from an encoder-decoder model. Xing et al.,~\shortcite{xing2016topic} maintain topic encoding based on Latent Dirichlet Allocation (LDA)~\cite{blei2003latent} of the conversation to encourage the model to output more topic coherent responses. On the other hand, many attempts have also been made to improve the architecture of encoder-decoder models. Li et al,.~\shortcite{li2015diversity} proposed to optimize the standard encoder-decoder by maximizing the mutual information between input and output, which in turn reduces generic responses. This approach penalized unconditionally high frequency responses, and favored responses that have high conditional probability given the input. Wiseman and Rush~\shortcite{wiseman2016sequence} focused on improving the decoder network by alleviating the biases between training and testing. They introduced a search-based loss that directly optimizes the networks for beam search decoding. The resulting model achieves better performance on word ordering, parsing and machine translation. Besides improving beam search, Li et al.,~\shortcite{li2016deep} pointed out that the MLE objective of an encoder-decoder model is unable to approximate the real-world goal of the conversation. Thus, they initialized a encoder-decoder model with MLE objective and leveraged reinforcement learning to fine tune the model by optimizing three heuristic rewards functions: informativity, coherence, and ease of answering. \vspace{-0.2cm} \subsection{Conditional Variational Autoencoder} \vspace{-0.2cm} The variational autoencoder (VAE)~\cite{kingma2013auto,rezende2014stochastic} is one of the most popular frameworks for image generation. The basic idea of VAE is to encode the input $x$ into a probability distribution $z$ instead of a point encoding in the autoencoder. Then VAE applies a decoder network to reconstruct the original input using samples from $z$. To generate images, VAE first obtains a sample of $z$ from the prior distribution, e.g. $\mathcal{N}(0,\mathbf{I})$, and then produces an image via the decoder network. A more advanced model, the conditional VAE (CVAE), is a recent modification of VAE to generate diverse images conditioned on certain attributes, e.g. generating different human faces given skin color~\cite{yan2015attribute2image,sohn2015learning}. Inspired by CVAE, we view the dialog contexts as the conditional attributes and adapt CVAE to generate diverse responses instead of images. Although VAE/CVAE has achieved impressive results in image generation, adapting this to natural language generators is non-trivial. Bowman et al.,~\shortcite{bowman2015generating} have used VAE with Long-Short Term Memory (LSTM)-based recognition and decoder networks to generate sentences from a latent Gaussian variable. They showed that their model is able to generate diverse sentences with even a greedy LSTM decoder. They also reported the difficulty of training because the LSTM decoder tends to ignore the latent variable. We refer to this issue as the \textit{vanishing latent variable problem}. 
Serban et al.~\shortcite{serban2016hierarchical} have applied a latent variable hierarchical encoder-decoder dialog model to introduce utterance-level variations and facilitate longer responses. To improve upon past models, we first introduce a novel mechanism to leverage linguistic knowledge in training end-to-end neural dialog models, and we also propose a novel training technique that mitigates the vanishing latent variable problem. \subsection*{Variational Lower Bound for kgCVAE} We assume that even in the presence of the linguistic feature $y$ for $x$, the prediction of $x_{bow}$ still only depends on $z$ and $c$. Therefore, we have: \begin{align} \mathcal{L}(\theta,\phi;x,c,y) &= -KL(q_\phi (z|x,c,y) \| p_\theta (z|c)) \nonumber\\ &+ \mathbf{E}_{q_\phi (z|c,x,y)} [\log p(x|z, c, y)] \nonumber\\ & + \mathbf{E}_{q_\phi (z|c,x,y)} [\log p(y|z, c)] \nonumber \\ & + \mathbf{E}_{q_\phi (z|c,x,y)} [\log p(x_{bow}|z, c)] \end{align} \subsection*{Collection of Multiple Reference Responses} We collected multiple reference responses for each dialog context in the test set using information retrieval techniques combined with a traditional machine learning method. First, we encode the dialog history into a vector representation $h$ using a Term Frequency-Inverse Document Frequency (TF-IDF)~\cite{salton1988term} weighted bag-of-words. We denote the topic of the conversation as $t$ and the conversation floor as $f$, i.e. $f=1$ if the speakers of the last utterance in the dialog history and of the response utterance are the same, otherwise $f=0$. We then computed the similarity $d(c_i, c_j)$ between two dialog contexts using: \begin{equation} d(c_i, c_j) = \mathbb{1}(t_i = t_j) \mathbb{1}(f_i = f_j) \frac{h_i \cdot h_j}{||h_i||\,||h_j||} \end{equation} Unlike past work~\cite{sordoni2015neural}, this similarity function only depends on the dialog context and imposes no constraints on the response, and is therefore suitable for finding diverse responses for the same dialog context. Secondly, for each dialog context in the test set, we retrieved the 10 nearest neighbors from the training set and treated the responses from the training set as candidate reference responses. Thirdly, we further sampled 240 context-response pairs from the 5481 pairs in the full test set, and two human computational linguistics experts post-processed the selected candidate responses, giving a binary label for each candidate response indicating whether the response is appropriate for its dialog context. The filtered lists then served as the ground truth to train our reference response classifier. For the next step, we extracted bigrams, part-of-speech bigrams and word part-of-speech pairs from both dialog contexts and candidate reference responses, with the rare-feature threshold for feature extraction set to 20. Then L2-regularized logistic regression with 10-fold cross validation was applied as the machine learning algorithm. Cross validation accuracy on the human-labelled data was 71\%. Finally, we automatically annotated the rest of the test set with this trained classifier and the resulting data were used for model evaluation.\vspace{-0.2cm} \subsection{Conditional Variational Autoencoder (CVAE) for Dialog Generation} Each dyadic conversation can be represented via three random variables: the dialog context $c$ (context window size $k-1$), the response utterance $x$ (the $k^{th}$ utterance) and a latent variable $z$, which is used to capture the latent distribution over the valid responses. 
Further, $c$ is composed of the dialog history: the preceding $k-1$ utterances; the conversation floor (1 if the utterance is from the same speaker as $x$, otherwise 0); and meta features $m$ (e.g. the topic). We then define the conditional distribution $p(x,z|c) = p(x|z,c)p(z|c)$ and our goal is to use deep neural networks (parametrized by $\theta$) to approximate $p(z|c)$ and $p(x|z,c)$. We refer to $p_{\theta}(z|c)$ as the \textit{prior network} and $p_{\theta}(x|z,c)$ as the \textit{response decoder}. The generative process of $x$ is then (Figure~\ref{fig:graphic} (a)): \begin{enumerate} \item Sample a latent variable $z$ from the prior network $p_{\theta}(z|c)$. \item Generate $x$ through the response decoder $p_{\theta}(x|z, c)$. \end{enumerate} CVAE is trained to maximize the conditional log likelihood of $x$ given $c$, which involves an intractable marginalization over the latent variable $z$. As proposed in~\cite{sohn2015learning,yan2015attribute2image}, CVAE can be efficiently trained with the \textit{Stochastic Gradient Variational Bayes} (SGVB) framework~\cite{kingma2013auto} by maximizing the variational lower bound of the conditional log likelihood. We assume that $z$ follows a multivariate Gaussian distribution with a diagonal covariance matrix and introduce a \textit{recognition network} $q_\phi(z|x,c)$ to approximate the true posterior distribution $p(z|x,c)$. Sohn et al.~\shortcite{sohn2015learning} have shown that the variational lower bound can be written as: \begin{align} \label{eq:elbo} \mathcal{L}(\theta,\phi;x,c) &= -KL(q_\phi (z|x,c) \| p_\theta (z|c)) \nonumber \\ &+ \mathbf{E}_{q_\phi (z|c,x)} [\log p_\theta(x|z, c)] \\ & \leq \log p (x|c) \nonumber \end{align} Figure~\ref{fig:model} shows an overview of our model. The utterance encoder is a bidirectional recurrent neural network (BRNN)~\cite{schuster1997bidirectional} with a gated recurrent unit (GRU)~\cite{chung2014empirical} that encodes each utterance into a fixed-size vector by concatenating the last hidden states of the forward and backward RNN, $u_i = [\vec{h_i}, \cev{h_i}]$. $x$ is simply $u_k$. The context encoder is a 1-layer GRU network that encodes the preceding $k-1$ utterances by taking $u_{1:k-1}$ and the corresponding conversation floor as inputs. The last hidden state $h^c$ of the context encoder is concatenated with the meta features, giving $c=[h^{c}, m]$. Since we assume $z$ follows an isotropic Gaussian distribution, the recognition network $q_\phi(z|x,c) \sim \mathcal{N}(\mu, \sigma^2\mathbf{I})$ and the prior network $p_\theta(z|c) \sim \mathcal{N}(\mu^\prime, \sigma^{\prime 2}\mathbf{I})$, and then we have: \begin{align} \begin{bmatrix} \mu \\ \log(\sigma^2) \end{bmatrix} &= W_r \begin{bmatrix} x\\ c \end{bmatrix} + b_r \\ \begin{bmatrix} \mu^\prime \\ \log(\sigma^{\prime 2}) \end{bmatrix} &= \text{MLP}_p(c) \end{align} We then use the reparametrization trick~\cite{kingma2013auto} to obtain samples of $z$ either from $\mathcal{N}(z; \mu, \sigma^2\mathbf{I})$ predicted by the recognition network (training) or $\mathcal{N}(z; \mu^\prime, \sigma^{\prime 2}\mathbf{I})$ predicted by the prior network (testing). Finally, the response decoder is a 1-layer GRU network with initial state $s_0 = W_i[z,c] +b_i$. The response decoder then predicts the words in $x$ sequentially. \subsection{Knowledge-Guided CVAE (kgCVAE)} \vspace{-0.2cm} In practice, training CVAE is a challenging optimization problem and often requires a large amount of data. 
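The following sketch illustrates the recognition network, prior network, reparametrization trick and the Gaussian KL term from the lower bound above. It is written in PyTorch purely for illustration; the module names, and the choice of a single linear layer for the recognition network, follow the equations in the text rather than any released implementation.

```python
# Sketch of the latent-variable machinery of the CVAE described above.
import torch
import torch.nn as nn

class LatentNetworks(nn.Module):
    def __init__(self, x_dim, c_dim, z_dim=200, hidden=400):
        super().__init__()
        self.recog = nn.Linear(x_dim + c_dim, 2 * z_dim)          # [mu, log sigma^2]
        self.prior = nn.Sequential(nn.Linear(c_dim, hidden), nn.Tanh(),
                                   nn.Linear(hidden, 2 * z_dim))  # MLP_p(c)

    def forward(self, x, c, training=True):
        mu, logvar = self.recog(torch.cat([x, c], dim=-1)).chunk(2, dim=-1)
        mu_p, logvar_p = self.prior(c).chunk(2, dim=-1)
        # Reparametrization: sample from the recognition network at training
        # time and from the prior network at test time.
        if training:
            z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        else:
            z = mu_p + torch.exp(0.5 * logvar_p) * torch.randn_like(mu_p)
        # KL( N(mu, sigma^2 I) || N(mu_p, sigma_p^2 I) ) for diagonal Gaussians.
        kl = 0.5 * torch.sum(logvar_p - logvar
                             + (logvar.exp() + (mu - mu_p) ** 2) / logvar_p.exp()
                             - 1, dim=-1)
        return z, kl
```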
On the other hand, past research in spoken dialog systems and discourse analysis has suggested that many linguistic cues capture crucial features in representing natural conversation. For example, dialog acts~\cite{poesio1998towards} have been widely used in dialog managers~\cite{litman1987plan,raux2005let,zhao2016towards} to represent the propositional function of the system. Therefore, we conjecture that it will be beneficial for the model to learn a meaningful latent $z$ if it is provided with explicitly extracted discourse features during training. In order to incorporate the linguistic features into the basic CVAE model, we first denote the set of linguistic features as $y$. We then assume that the generation of $x$ depends on $c$, $z$ and $y$, and that $y$ relies on $z$ and $c$, as shown in Figure~\ref{fig:graphic}. Specifically, during training the initial state of the response decoder is $s_0 = W_i[z,c,y]+b_i$ and the input at every step is $[e_t, y]$, where $e_t$ is the word embedding of the $t^{th}$ word in $x$. In addition, there is an MLP that predicts $y'=\text{MLP}_y(z,c)$ based on $z$ and $c$. At test time, the predicted $y'$ is used by the response decoder instead of the oracle $y$. We denote the modified model as knowledge-guided CVAE (kgCVAE); developers can add any discourse features that they wish the latent variable $z$ to capture. The kgCVAE model is trained by maximizing: \begin{align} \label{eq:kg_elbo} \mathcal{L}(\theta,\phi;x,c,y) &= -KL(q_\phi (z|x,c,y) \| p_\theta (z|c)) \nonumber\\ &+ \mathbf{E}_{q_\phi (z|c,x,y)} [\log p(x|z, c, y)] \nonumber\\ & + \mathbf{E}_{q_\phi (z|c,x,y)} [\log p(y|z, c)] \end{align} Since the reconstruction of $y$ is now part of the loss function, kgCVAE can more efficiently encode $y$-related information into $z$ than by discovering it only from the surface-level $x$ and $c$. Another advantage of kgCVAE is that it can output a high-level label (e.g. dialog act) along with the word-level responses, which allows easier interpretation of the model's outputs. \subsection{Optimization Challenges} \vspace{-0.2cm} A straightforward VAE with an RNN decoder fails to encode meaningful information in $z$ due to the \textit{vanishing latent variable problem}~\cite{bowman2015generating}. Bowman et al.~\shortcite{bowman2015generating} proposed two solutions: (1) \textit{KL annealing}: gradually increasing the weight of the KL term from 0 to 1 during training; (2) \textit{word drop decoding}: setting a certain percentage of the target words to 0. We found that CVAE suffers from the same issue when the decoder is an RNN. We did not consider word drop decoding because Bowman et al.~\shortcite{bowman2015generating} have shown that it may hurt performance when the drop rate is too high. As a result, we propose a simple yet novel technique to tackle the vanishing latent variable problem: \textit{bag-of-word loss}. The idea is to introduce an auxiliary loss that requires the decoder network to predict the bag-of-words in the response $x$, as shown in Figure~\ref{fig:model}(b). We decompose $x$ into two variables: $x_o$ with word order and $x_{bow}$ without order, and assume that $x_o$ and $x_{bow}$ are conditionally independent given $z$ and $c$: $p(x,z|c)=p(x_o|z,c)p(x_{bow}|z,c)p(z|c)$. Due to the conditional independence assumption, the latent variable is forced to capture global information about the target response. 
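A minimal sketch of the KL annealing heuristic mentioned above: the weight on the KL term grows linearly from 0 to 1 over a fixed number of training batches (the training setup below uses a 10,000-batch horizon). The function name and the linear schedule are illustrative assumptions.

```python
# Minimal KL-annealing schedule: the KL term's weight grows linearly from 0 to 1
# over a fixed number of training batches.
def kl_weight(step, anneal_steps=10000):
    return min(1.0, step / float(anneal_steps))

# Illustrative use inside a training loop:
#   loss = reconstruction_loss + kl_weight(global_step) * kl_term + bow_loss
```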
Let $f = \text{MLP}_{b}(z,c) \in \mathbb{R}^V$, where $V$ is the vocabulary size; then we have: \begin{equation} \log p(x_{bow}|z, c) = \log \prod_{t=1}^{|x|}\frac{e^{f_{x_t}}}{\sum_{j=1}^{V} e^{f_j}} \end{equation} where $|x|$ is the length of $x$ and $x_t$ is the word index of the $t^{th}$ word in $x$. The modified variational lower bound for CVAE with the bag-of-word loss is (see Appendix~\ref{sec:supplemental} for kgCVAE): \begin{align} \label{eq:bow_kg_elbo} \mathcal{L}^\prime(\theta,\phi;x,c) &=\mathcal{L}(\theta,\phi;x,c)\nonumber\\ & + \mathbf{E}_{q_\phi (z|c,x,y)} [\log p(x_{bow}|z, c)] \end{align} We will show that the bag-of-word loss in Equation~\ref{eq:bow_kg_elbo} is very effective against the vanishing latent variable problem and is also complementary to the KL annealing technique.\vspace{-0.2cm} \subsection{Dataset} \label{sec:data} We chose the Switchboard (SW) 1 Release 2 Corpus~\cite{godfrey1997switchboard} to evaluate the proposed models. SW has 2,400 two-sided telephone conversations with manually transcribed speech and alignment. At the beginning of each call, a computer operator gave the callers recorded prompts that define the desired topic of discussion. There are 70 available topics. We randomly split the data into $2316/60/62$ dialogs for train/validate/test. The pre-processing includes: (1) tokenization using the NLTK tokenizer~\cite{bird2009natural}; (2) removal of non-verbal symbols and of words repeated due to false starts; (3) keeping the top 10K most frequent word types as the vocabulary. The final data contain $207,833/5,225/5,481$ $(c,x)$ pairs for train/validate/test. Furthermore, a subset of SW was manually labeled with dialog acts~\cite{stolcke2000dialogue}. We extracted dialog act labels using the dialog act recognizer proposed in~\cite{ribeiro2015influence}. The features include the uni-grams and bi-grams of the utterance, and the contextual features of the last 3 utterances. We trained a Support Vector Machine (SVM)~\cite{suykens1999least} with a linear kernel on the subset of SW with human annotations. There are 42 types of dialog acts and the SVM achieved 77.3\% accuracy on held-out data. The rest of the SW data was then labelled with dialog acts using the trained SVM dialog act recognizer. \subsection{Training} \label{sec:train} We trained with the following hyperparameters (selected according to the loss on the validation set): the word embeddings have size 200 and are shared everywhere. We initialize the word embeddings from GloVe embeddings pre-trained on Twitter~\cite{pennington2014glove}. The utterance encoder has a hidden size of 300 for each direction. The context encoder has a hidden size of 600 and the response decoder has a hidden size of 400. The prior network and the MLP for predicting $y$ both have 1 hidden layer of size 400 and a $\tanh$ non-linearity. The latent variable $z$ has a size of 200. The context window $k$ is 10. All initial weights are sampled from a uniform distribution on $[-0.08, 0.08]$. The mini-batch size is 30. The models are trained end-to-end using the Adam optimizer~\cite{kingma2014adam} with a learning rate of 0.001 and gradient clipping at 5. We selected the best models based on the variational lower bound on the validation data. Finally, we use the BOW loss along with \textit{KL annealing} over 10,000 batches to achieve the best performance. Section~\ref{sec:bow} gives a detailed argument for the importance of the BOW loss.
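The bag-of-word loss above amounts to scoring every word of the target response against a single softmax over the vocabulary computed from $(z, c)$. The PyTorch sketch below shows one way this could be implemented; the padding convention and function names are assumptions, not the authors' code.

```python
# Sketch of the bag-of-word auxiliary loss: one softmax over the vocabulary is
# produced from (z, c) and every word of the target response is scored against it.
import torch
import torch.nn.functional as F

def bow_loss(f_logits, target_ids, pad_id=0):
    """f_logits: (batch, V) logits over the vocabulary, i.e. f = MLP_b(z, c).
       target_ids: (batch, T) word indices of the response x (padded)."""
    log_probs = F.log_softmax(f_logits, dim=-1)        # log softmax over V
    gathered = log_probs.gather(1, target_ids)         # log-prob of each x_t
    mask = (target_ids != pad_id).float()              # ignore padding positions
    return -(gathered * mask).sum(dim=1).mean()        # -log p(x_bow | z, c)
```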
Semi-supervised Multitask Learning for Sequence Labeling
1704.07156
Table 3: Accuracy of different sequence labeling architectures on POS-tagging datasets.
[ "[EMPTY]", "GENIA-POS DEV", "GENIA-POS TEST", "PTB-POS DEV", "PTB-POS TEST", "UD-ES DEV", "UD-ES TEST", "UD-FI DEV", "UD-FI TEST" ]
[ [ "Baseline", "98.69", "98.61", "97.23", "97.24", "96.38", "95.99", "95.02", "94.80" ], [ "+ dropout", "98.79", "98.71", "97.36", "97.30", "96.51", "96.16", "95.88", "95.60" ], [ "+ LMcost", "[BOLD] 98.89", "[BOLD] 98.81", "[BOLD] 97.48", "[BOLD] 97.43", "[BOLD] 96.62", "[BOLD] 96.21", "[BOLD] 96.14", "[BOLD] 95.88" ] ]
While the performance improvements are small, they are consistent across all domains, languages and datasets. Application of dropout again provides a more robust model, and the language modeling cost improves the performance further. Even though the labels already offer a varied training objective, learning to predict the surrounding words at the same time has provided the model with additional general-purpose features.
\documentclass[11pt,a4paper]{article} \usepackage[hyperref]{acl2017} \aclfinalcopy % Uncomment this line for the final submission \def\aclpaperid{***} % Enter the acl Paper ID here \newcommand\BibTeX{B{\sc ib}\TeX} \title{Semi-supervised Multitask Learning for Sequence Labeling} \author{\hspace{-0.5cm}Marek Rei\\ \hspace{-0.5cm}The ALTA Institute\\ \hspace{-0.5cm}Computer Laboratory\\ \hspace{-0.5cm}University of Cambridge\\ \hspace{-0.5cm}United Kingdom\\ \hspace{-0.5cm}{\tt marek.rei@cl.cam.ac.uk} \\} \date{} \begin{document} \maketitle \begin{abstract} We propose a sequence labeling framework with a secondary training objective, learning to predict surrounding words for every word in the dataset. This language modeling objective incentivises the system to learn general-purpose patterns of semantic and syntactic composition, which are also useful for improving accuracy on different sequence labeling tasks. The architecture was evaluated on a range of datasets, covering the tasks of error detection in learner texts, named entity recognition, chunking and POS-tagging. The novel language modeling objective provided consistent performance improvements on every benchmark, without requiring any additional annotated or unannotated data. \end{abstract} \section{Introduction} Accurate and efficient sequence labeling models have a wide range of applications, including named entity recognition (NER), part-of-speech (POS) tagging, error detection and shallow parsing. Specialised approaches to sequence labeling often include extensive feature engineering, such as integrated gazetteers, capitalisation features, morphological information and POS tags. However, recent work has shown that neural network architectures are able to achieve comparable or improved performance, while automatically discovering useful features for a specific task and only requiring a sequence of tokens as input \cite{Collobert2011,Irsoy2014a,Lample2016}. This feature discovery is usually driven by an objective function based on predicting the annotated labels for each word, without much incentive to learn more general language features from the available text. In many sequence labeling tasks, the relevant labels in the dataset are very sparse and most of the words contribute very little to the training process. For example, in the CoNLL 2003 NER dataset \cite{TjongKimSang2003} only 17\% of the tokens represent an entity. This ratio is even lower for error detection, with only 14\% of all tokens being annotated as an error in the FCE dataset \cite{Yannakoudakis2011}. The sequence labeling models are able to learn this bias in the label distribution without obtaining much additional information from words that have the majority label (O for outside of an entity; C for correct word). Therefore, we propose an additional training objective which allows the models to make more extensive use of the available data. The task of language modeling offers an easily accessible objective -- learning to predict the next word in the sequence requires only plain text as input, without relying on any particular annotation. Neural language modeling architectures also have many similarities to common sequence labeling frameworks: words are first mapped to distributed embeddings, followed by a recurrent neural network (RNN) module for composing word sequences into an informative context representation \cite{Mikolov2010,Graves2013b,Chelba2014}. 
Compared to any sequence labeling dataset, the task of language modeling has a considerably larger and more varied set of possible options to predict, making better use of each available word and encouraging the model to learn more general language features for accurate composition. In this paper, we propose a neural sequence labeling architecture that is also optimised as a language model, predicting surrounding words in the dataset in addition to assigning labels to each token. Specific sections of the network are optimised as a forward- or backward-moving language model, while the label predictions are performed using context from both directions. This secondary unsupervised objective encourages the framework to learn richer features for semantic composition without requiring additional training data. We evaluate the sequence labeling model on 10 datasets from the fields of NER, POS-tagging, chunking and error detection in learner texts. Our experiments show that by including the unsupervised objective into the training process, the sequence labeling model achieves consistent performance improvements on all the benchmarks. This multitask training framework gives the largest improvements on error detection datasets, outperforming the previous state-of-the-art architecture. \section{Neural Sequence Labeling} \label{sec:seqlab} We use the neural network model of \newcite{Rei2016a} as the baseline architecture for our sequence labeling experiments. The model takes as input one sentence, separated into tokens, and assigns a label to every token using a bidirectional LSTM. The input tokens are first mapped to a sequence of distributed word embeddings $[x_1, x_2, x_3, ... , x_T]$. Two LSTM \cite{Hochreiter1997} components, moving in opposite directions through the sentence, are then used for constructing context-dependent representations for every word. Each LSTM takes as input the hidden state from the previous time step, along with the word embedding from the current step, and outputs a new hidden state. The hidden representations from both directions are concatenated, in order to obtain a context-specific representation for each word that is conditioned on the whole sentence in both directions: \begin{equation} \overrightarrow{h_t} = LSTM(x_t, \overrightarrow{h_{t-1}}) \end{equation} \begin{equation} \overleftarrow{h_t} = LSTM(x_t, \overleftarrow{h_{t+1}}) \end{equation} \begin{equation} h_t = [\overrightarrow{h_t};\overleftarrow{h_t}] \end{equation} Next, the concatenated representation is passed through a feedforward layer, mapping the components into a joint space and allowing the model to learn features based on both context directions: \begin{equation} d_t = tanh(W_d h_t) \end{equation} \noindent where $W_d$ is a weight matrix and $tanh$ is used as the non-linear activation function. In order to predict a label for each token, we use either a softmax or CRF output architecture. For softmax, the model directly predicts a normalised distribution over all possible labels for every word, conditioned on the vector $d_t$: \begin{equation} \begin{split} P(y_t | d_t) &= softmax(W_o d_t) \\ &= \frac{e^{W_{o,k} d_t}}{\sum_{\tilde{k} \in K} e^{W_{o,\tilde{k}} d_t}} \end{split} \end{equation} \noindent where $K$ is the set of all possible labels, and $W_{o,k}$ is the $k$-th row of output weight matrix $W_o$. 
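As a rough illustration of the baseline architecture described above, the following PyTorch sketch implements the bidirectional LSTM, the tanh feedforward layer and the softmax output; the character-level component and the CRF variant are omitted, and all names and sizes are illustrative rather than the released implementation.

```python
# Minimal sketch of the baseline bidirectional-LSTM labeler with a softmax output.
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    def __init__(self, vocab_size, num_labels, emb_dim=300, hidden=200, joint=50):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.ff = nn.Linear(2 * hidden, joint)      # d_t = tanh(W_d [h_fwd; h_bwd])
        self.out = nn.Linear(joint, num_labels)     # softmax output layer

    def forward(self, token_ids):
        h, _ = self.lstm(self.emb(token_ids))       # (batch, T, 2*hidden)
        d = torch.tanh(self.ff(h))
        return self.out(d)                          # label logits per token
```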
The model is optimised by minimising categorical crossentropy, which is equivalent to minimising the negative log-probability of the correct labels: \begin{equation} E = - \sum_{t=1}^{T} log(P(y_t| d_t)) \label{eq:costsoftmax} \end{equation} While this architecture returns predictions based on all words in the input, the labels are still predicted independently. For some tasks, such as named entity recognition with a BIO\footnote{Each NER entity has sub-tags for Beginning, Inside and Outside} scheme, there are strong dependencies between subsequent labels and it can be beneficial to explicitly model these connections. The output of the architecture can be modified to include a Conditional Random Field (CRF, \newcite{Lafferty2001}), which allows the network to look for the optimal path through all possible label sequences \cite{Huang2015,Lample2016}. The model is then optimised by maximising the score for the correct label sequence, while minimising the scores for all other sequences: \begin{equation} E = - s(y) + log \sum_{\tilde{y} \in \widetilde{Y}} e^{s(\tilde{y})} \label{eq:costcrf} \end{equation} \noindent where $s(y)$ is the score for a given sequence $y$ and $\widetilde{Y}$ is the set of all possible label sequences. We also make use of the character-level component described by \newcite{Rei2016a}, which builds an alternative representation for each word. The individual characters of a word are mapped to character embeddings and passed through a bidirectional LSTM. The last hidden states from both directions are concatenated and passed through a nonlinear layer. The resulting vector representation is combined with a regular word embedding using a dynamic weighting mechanism that adaptively controls the balance between word-level and character-level features. This framework allows the model to learn character-based patterns and handle previously unseen words, while still taking full advantage of the word embeddings. 
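The CRF cost above can be computed exactly with the forward algorithm. The numpy sketch below scores the gold label sequence with emission and transition scores and subtracts it from the log-partition term; it is a didactic reconstruction under assumed score matrices, not the paper's implementation.

```python
# Sketch of the CRF training cost E = -s(y) + log sum_{y~} exp(s(y~)) for one
# sentence, where s(y) sums per-token emission scores and label-transition scores.
import numpy as np
from scipy.special import logsumexp

def crf_neg_log_likelihood(emissions, transitions, labels):
    """emissions: (T, K) per-token label scores, transitions: (K, K) score of
       moving from label i to label j, labels: length-T gold label sequence."""
    T, K = emissions.shape
    # Score of the gold path s(y).
    gold = emissions[np.arange(T), labels].sum()
    gold += sum(transitions[labels[t - 1], labels[t]] for t in range(1, T))
    # log-partition over all label sequences via the forward algorithm.
    alpha = emissions[0]                                             # (K,)
    for t in range(1, T):
        alpha = logsumexp(alpha[:, None] + transitions, axis=0) + emissions[t]
    log_Z = logsumexp(alpha)
    return log_Z - gold
```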
However, the model has access to the full context on each side of the target token, and predicting information that is already explicit in the input would not incentivise the model to learn about composition and semantics. Therefore, we must design the loss objective so that only sections of the model that have not yet observed the next word are optimised to perform the prediction. We solve this by predicting the next word in the sequence only based on the hidden representation $\overrightarrow{h_t}$ from the forward-moving LSTM. Similarly, the previous word in the sequence is predicted based on $\overleftarrow{h_t}$ from the backward-moving LSTM. This architecture avoids the problem of giving the correct answer as an input to the language modeling component, while the full framework is still optimised to predict labels based on the whole sentence. First, the hidden representations from forward- and backward-LSTMs are mapped to a new space using a non-linear layer: \begin{equation} \overrightarrow{m_t} = tanh(\overrightarrow{W}_m \overrightarrow{h_t}) \end{equation} \begin{equation} \overleftarrow{m_t} = tanh(\overleftarrow{W}_m \overleftarrow{h_t}) \end{equation} \noindent where $\overrightarrow{W}_m$ and $\overleftarrow{W}_m$ are weight matrices. This separate transformation learns to extract features that are specific to language modeling, while the LSTM is optimised for both objectives. We also use the opportunity to map the representation to a smaller size -- since language modeling is not the main goal, we restrict the number of parameters available for this component, forcing the model to generalise more using fewer resources. These representations are then passed through softmax layers in order to predict the preceding and following word: \begin{equation} P(w_{t+1}|\overrightarrow{m_t}) = softmax(\overrightarrow{W}_q \overrightarrow{m_t}) \end{equation} \begin{equation} P(w_{t-1}|\overleftarrow{m_t}) = softmax(\overleftarrow{W}_q \overleftarrow{m_t}) \end{equation} The objective function for both components is then constructed as a regular language modeling objective, by calculating the negative log-likelihood of the next word in the sequence: \begin{equation} \overrightarrow{E} = - \sum_{t=1}^{T-1} log(P(w_{t+1}|\overrightarrow{m_t})) \end{equation} \begin{equation} \overleftarrow{E} = - \sum_{t=2}^{T} log(P(w_{t-1}|\overleftarrow{m_t})) \end{equation} Finally, these additional objectives are combined with the training objective $E$ from either Equation \ref{eq:costsoftmax} or \ref{eq:costcrf}, resulting in a new cost function $\widetilde{E}$ for the sequence labeling model: \begin{equation} \widetilde{E} = E + \gamma (\overrightarrow{E} + \overleftarrow{E}) \end{equation} \noindent where $\gamma$ is a parameter that is used to control the importance of the language modeling objective in comparison to the sequence labeling objective. Figure \ref{fig:network} shows a diagram of the unfolded neural architecture, when performing NER on a short sentence with 3 words. At each token position, the network is optimised to predict the previous word, the current label, and the next word in the sequence. The added language modeling objective encourages the system to learn richer feature representations that are then reused for sequence labeling. For example, $\overrightarrow{h_1}$ is optimised to predict \textit{proposes} as the next word, indicating that the current word is a subject, possibly a named entity. 
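The following PyTorch sketch shows one way the two language-modeling heads and the combined cost could look: the forward states predict the next word, the backward states predict the previous word, and the summed LM losses are added to the labeling loss with weight gamma. The names, the reduced LM dimension and the restricted LM vocabulary size are illustrative assumptions rather than the released code.

```python
# Sketch of the forward and backward language-modeling heads described above.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LMHeads(nn.Module):
    def __init__(self, hidden=200, lm_dim=50, lm_vocab=7501):
        super().__init__()
        self.fwd_m = nn.Linear(hidden, lm_dim)    # m_t(fwd) = tanh(W_m h_t(fwd))
        self.bwd_m = nn.Linear(hidden, lm_dim)
        self.fwd_q = nn.Linear(lm_dim, lm_vocab)  # softmax over the LM vocabulary
        self.bwd_q = nn.Linear(lm_dim, lm_vocab)

    def forward(self, h_fwd, h_bwd, word_ids):
        # h_fwd, h_bwd: (T, hidden) per-token LSTM states; word_ids: (T,) LM targets.
        next_logits = self.fwd_q(torch.tanh(self.fwd_m(h_fwd[:-1])))  # predict w_{t+1}
        prev_logits = self.bwd_q(torch.tanh(self.bwd_m(h_bwd[1:])))   # predict w_{t-1}
        e_fwd = F.cross_entropy(next_logits, word_ids[1:], reduction='sum')
        e_bwd = F.cross_entropy(prev_logits, word_ids[:-1], reduction='sum')
        return e_fwd + e_bwd

# Combined cost: total_loss = label_loss + gamma * lm_heads(h_fwd, h_bwd, word_ids)
```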
In addition, $\overleftarrow{h_2}$ is optimised to predict \textit{Fischler} as the previous word and these features are used as input to predict the \textit{PER} tag at $o_1$. The proposed architecture introduces 4 additional parameter matrices that are optimised during training: $\overrightarrow{W}_m$, $\overleftarrow{W}_m$, $\overrightarrow{W}_q$, and $\overleftarrow{W}_q$. However, the computational complexity and resource requirements of this model during sequence labeling are equal to the baseline from Section \ref{sec:seqlab}, since the language modeling components are ignored during testing and these additional weight matrices are not used. While our implementation uses a basic softmax as the output layer for the language modeling components, the efficiency during training could be further improved by integrating noise-contrastive estimation (NCE, \newcite{Mnih2012a}) or hierarchical softmax \cite{Morin}. \section{Evaluation Setup} The proposed architecture was evaluated on 10 different sequence labeling datasets, covering the tasks of error detection, NER, chunking, and POS-tagging. The word embeddings in the model were initialised with publicly available pretrained vectors, created using word2vec \cite{Mikolov2013a}. For general-domain datasets we used 300-dimensional embeddings trained on Google News.\footnote{https://code.google.com/archive/p/word2vec/} For biomedical datasets, the word embeddings were initialised with 200-dimensional vectors trained on PubMed and PMC.\footnote{http://bio.nlplab.org/} The neural network framework was implemented using Theano \cite{Al-Rfou2016} and we make the code publicly available online.\footnote{https://github.com/marekrei/sequence-labeler} For most of the hyperparameters, we follow the settings by \newcite{Rei2016a} in order to facilitate direct comparison with previous work. The LSTM hidden layers are set to size 200 in each direction for both word- and character-level components. All digits in the text were replaced with the character ’0’; any words that occurred only once in the training data were replaced by an OOV token. In order to reduce computational complexity in these experiments, the language modeling objective predicted only the 7,500 most frequent words, with an extra token covering all the other words. Sentences were grouped into batches of size 64 and parameters were optimised using AdaDelta \cite{Zeiler2012} with default learning rate $1.0$. Training was stopped when performance on the development set had not improved for 7 epochs. Performance on the development set was also used to select the best model, which was then evaluated on the test set. In order to avoid any outlier results due to randomness in the model initialisation, each configuration was trained with 10 different random seeds and the averaged results are presented in this paper. We use previously established splits for training, development and testing on each of these datasets. Based on development experiments, we found that error detection was the only task that did not benefit from having a CRF module at the output layer -- since the labels are very sparse and the dataset contains only 2 possible labels, explicitly modeling state transitions does not improve performance on this task. Therefore, we use a softmax output for error detection experiments and CRF on all other datasets. The publicly available sequence labeling system by \newcite{Rei2016a} is used as the baseline. 
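A small sketch of the preprocessing and language-model vocabulary restriction described above, assuming tokenised training sentences as input; the placeholder tokens ('<OOV>', '<OTHER>') and function names are illustrative.

```python
# Sketch: digits map to '0', singleton words map to an OOV token, and the LM
# objective only predicts the 7,500 most frequent words plus a catch-all token.
import re
from collections import Counter

def normalise(token):
    return re.sub(r'\d', '0', token)          # replace all digits with '0'

def build_vocabs(train_sentences, lm_size=7500):
    counts = Counter(normalise(w) for sent in train_sentences for w in sent)
    # words that occur only once in the training data are replaced by an OOV token
    tagger_vocab = {w for w, c in counts.items() if c > 1} | {'<OOV>'}
    # the LM objective predicts only the most frequent words, plus a catch-all token
    lm_targets = [w for w, _ in counts.most_common(lm_size)] + ['<OTHER>']
    return tagger_vocab, lm_targets
```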
During development we found that applying dropout \cite{Srivastava2014a} on word embeddings improves performance on nearly all datasets, compared to this baseline. Therefore, element-wise dropout was applied to each of the input embeddings with probability $0.5$ during training, and the weights were multiplied by $0.5$ during testing. In order to separate the effects of this modification from the newly proposed optimisation method, we report results for three different systems: 1) the publicly available baseline, 2) applying dropout on top of the baseline system, and 3) applying both dropout and the novel multitask objective from Section \ref{sec:lmcost}. Based on development experiments we set the value of $\gamma$, which controls the importance of the language modeling objective, to $0.1$ for all experiments throughout training. Since context prediction is not part of the main evaluation of sequence labeling systems, we expected the additional objective to mostly benefit early stages of training, whereas the model would later need to specialise only towards assigning labels. Therefore, we also performed experiments on the development data where the value of $\gamma$ was gradually decreased, but found that a small static value performed comparably well or even better. These experiments indicate that the language modeling objective helps the network learn general-purpose features that are useful for sequence labeling even in the later stages of training. \section{Error Detection} We first evaluate the sequence labeling architectures on the task of error detection -- given a sentence written by a language learner, the system needs to detect which tokens have been manually tagged by annotators as being an error. As the first benchmark, we use the publicly released First Certificate in English (FCE, \newcite{Yannakoudakis2011}) dataset, containing 33,673 manually annotated sentences. The texts were written by learners during language examinations in response to prompts eliciting free-text answers and assessing mastery of the upper-intermediate proficiency level. In addition, we evaluate on the CoNLL 2014 shared task dataset \cite{Ng2013a}, which has been converted to an error detection task. This contains 1,312 sentences, written by higher-proficiency learners on more technical topics. They have been manually corrected by two separate annotators, and we report results on each of these annotations. For both datasets we use the FCE training set for model optimisation and results on the CoNLL-14 dataset indicate out-of-domain performance. \newcite{Rei2016} present results on these datasets using the same setup, along with evaluating the top shared task submissions on the task of error detection. As the main evaluation metric, we use the $F_{0.5}$ measure, which is consistent with previous work and was also adopted by the CoNLL-14 shared task. Table \ref{tab:results1} contains results for the three different sequence labeling architectures on the error detection datasets. We found that including the dropout actually decreases performance in the setting of error detection, which is likely due to the relatively small amount of error examples available in the dataset -- it is better for the model to memorise them without introducing additional noise in the form of dropout. However, we did verify that dropout indeed gives an improvement in combination with the novel language modeling objective. 
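The word-embedding dropout described above follows the classic (non-inverted) formulation: elements are zeroed with probability 0.5 during training and the embeddings are scaled by 0.5 at test time. A minimal sketch, assuming PyTorch tensors:

```python
# Element-wise dropout on input embeddings, with test-time scaling as described.
import torch

def embedding_dropout(emb, training, p=0.5):
    if training:
        mask = (torch.rand_like(emb) >= p).float()   # drop each element with prob p
        return emb * mask
    return emb * (1.0 - p)                           # multiply by 0.5 during testing
```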
Because the model is receiving additional information at every token, dropout is no longer obscuring the limited training data but instead helps with generalisation. The bottom row shows the performance of the language modeling objective when added on top of the baseline model, along with dropout on word embeddings. This architecture outperforms the baseline on all benchmarks, increasing both precision and recall, and giving a $3.9\%$ absolute improvement on the FCE test set. These results also improve over the previous best results by \newcite{Rei2016} and \newcite{Rei2016a}, when all systems are trained on the same FCE dataset. While the added components also require more computation time, the difference is not excessive -- one training batch over the FCE dataset was processed in 112 seconds on the baseline system and 133 seconds using the language modeling objective. Error detection is the task where introducing the additional cost objective gave the largest improvement in performance, for a few reasons: \begin{enumerate} \item This task has very sparse labels in the datasets, with error tokens very infrequent and far apart. Without the language modeling objective, the network has very little use for all the available words that contain no errors. \item There are only two possible labels, correct and incorrect, which likely makes it more difficult for the model to learn feature detectors for many different error types. Language modeling uses a much larger number of possible labels, giving a more varied training signal. \item Finally, the task of error detection is directly related to language modeling. By learning a better model of the overall text in the training corpus, the system can more easily detect any irregularities. \end{enumerate} We also analysed the performance of the different architectures during training. Figure \ref{fig:graph_fcepublic} shows the $F_{0.5}$ score on the development set for each model after every epoch over the training data. The baseline model peaks quickly, followed by a gradual drop in performance, which is likely due to overfitting on the available data. Dropout provides an effective regularisation method, slowing down the initial performance but preventing the model from overfitting. The added language modeling objective provides a substantial improvement -- the system outperforms other configurations already in the early stages of training and the results are also sustained in the later epochs. \section{NER and Chunking} In the next experiments we evaluate the language modeling objective on named entity recognition and chunking. For general-domain NER, we use the English section of the CoNLL 2003 corpus \cite{TjongKimSang2003}, containing news stories from the Reuters Corpus. We also report results on two biomedical NER datasets: The BioCreative~IV Chemical and Drug corpus (CHEMDNER, \newcite{Krallinger2015}) of 10,000 abstracts, annotated for mentions of chemical and drug names, and the JNLPBA corpus \cite{Kim2004} of 2,404 abstracts annotated for mentions of different cells and proteins. Finally, we use the CoNLL 2000 dataset \cite{TjongKimSang2000}, created from the Wall Street Journal Sections 15-18 and 20 from the Penn Treebank, for evaluating sequence labeling on the task of chunking. The standard CoNLL entity-level $F_1$ score is used as the main evaluation metric. Compared to error detection corpora, the labels are more balanced in these datasets. 
However, majority labels still exist: roughly 83\% of the tokens in the NER datasets are tagged as "O", indicating that the word is not an entity, and the NP label covers 53\% of tokens in the chunking data. Table \ref{tab:results2} contains results for evaluating the different architectures on NER and chunking. On these tasks, the application of dropout provides a consistent improvement -- applying some variance onto the input embeddings results in more robust models for NER and chunking. The addition of the language modeling objective consistently further improves performance on all benchmarks. While these results are comparable to the respective state-of-the-art results on most datasets, we did not fine-tune hyperparameters for any specific task, instead providing a controlled analysis of the language modeling objective in different settings. For JNLPBA, the system achieves 73.83\% compared to 72.55\% by \newcite{Zhou2004} and 72.70\% by \newcite{Rei2016a}. On CoNLL-03, \newcite{Lample2016} achieve a considerably higher result of 90.94\%, possibly due to their use of specialised word embeddings and a custom version of LSTM. However, our system does outperform a similar architecture by \newcite{Huang2015}, achieving 86.26\% compared to 84.26\% $F_1$ score on the CoNLL-03 dataset. Figure \ref{fig:graph_chemdner} shows $F_1$ on the CHEMDNER development set after every training epoch. Without dropout, performance peaks quickly and then trails off as the system overfits on the training set. Using dropout, the best performance is sustained throughout training and even slightly improved. Finally, adding the language modeling objective on top of dropout allows the system to consistently outperform the other architectures. \section{POS tagging} We also evaluated the language modeling training objective on four POS-tagging datasets. The Penn Treebank POS-tag corpus \cite{Marcus1993b} contains texts from the Wall Street Journal and has been annotated with 48 different part-of-speech tags. In addition, we use the POS-annotated subset of the GENIA corpus \cite{Ohta2002} containing 2,000 biomedical PubMed abstracts. Following \newcite{Tsuruoka2005}, we use the same 210-document test set. Finally, we also evaluate on the Finnish and Spanish sections of the Universal Dependencies v1.2 dataset (UD, \newcite{UniversalDependencies}), in order to investigate performance on morphologically complex and Romance languages. These datasets are somewhat more balanced in terms of label distributions, compared to error detection and NER, as no single label covers over 50\% of the tokens. POS-tagging also offers a large variance of unique labels, with 48 labels in PTB and 42 in GENIA, and this can provide useful information to the models during training. The baseline performance on these datasets is also close to the upper bound, therefore we expect the language modeling objective to not provide much additional benefit. The results of different sequence labeling architectures on POS-tagging can be seen in Table \ref{tab:results3} and accuracy on the development set is shown in Figure \ref{fig:graph_ptbpos}. While the performance improvements are small, they are consistent across all domains, languages and datasets. Application of dropout again provides a more robust model, and the language modeling cost improves the performance further. 
Even though the labels already offer a varied training objective, learning to predict the surrounding words at the same time has provided the model with additional general-purpose features. \section{Related Work} Our work builds on previous research exploring multi-task learning in the context of different sequence labeling tasks. The idea of multi-task learning was described by \newcite{Caruana1998} and has since been extended to many language processing tasks using neural networks. For example, \newcite{Weston2008} proposed a multi-task framework using weight-sharing between networks that are optimised for different supervised tasks. \newcite{Cheng2015a} described a system for detecting out-of-vocabulary names by also predicting the next word in the sequence. While they use a regular recurrent architecture, we propose a language modeling objective that can be integrated with a bidirectional network, making it applicable to existing state-of-the-art sequence labeling frameworks. \newcite{Plank2016} described a related architecture for POS-tagging, predicting the frequency of each word together with the part-of-speech, and showed that this can improve tagging accuracy on low-frequency words. While predicting word frequency can be useful for POS-tagging, language modeling provides a more general training signal, allowing us to apply the model to many different sequence labeling tasks. Recently, \newcite{Augenstein2017} explored multi-task learning for classifying keyphrase boundaries, by incorporating tasks such as semantic super-sense tagging and identification of multi-word expressions. \newcite{Bingel2017} also performed a systematic comparison of task relationships by combining different datasets through multi-task learning. Both of these approaches involve switching to auxiliary datasets, whereas our proposed language modeling objective requires no additional data. \section{Conclusion} We proposed a novel sequence labeling framework with a secondary objective -- learning to predict surrounding words for each word in the dataset. One half of a bidirectional LSTM is trained as a forward-moving language model, whereas the other half is trained as a backward-moving language model. At the same time, both of these are also combined, in order to predict the most probable label for each word. This modification can be applied to several common sequence labeling architectures and requires no additional annotated or unannotated data. The objective of learning to predict surrounding words provides an additional source of information during training. The model is incentivised to discover useful features in order to learn the language distribution and composition patterns in the training data. While language modeling is not the main goal of the system, this additional training objective leads to more accurate sequence labeling models on several different tasks. The architecture was evaluated on a range of datasets, covering the tasks of error detection in learner texts, named entity recognition, chunking and POS-tagging. We found that the additional language modeling objective provided consistent performance improvements on every benchmark. The largest benefit from the new architecture was observed on the task of error detection in learner writing. The label distribution in the original dataset is very sparse and unbalanced, making it a difficult task for the model to learn. 
The added language modeling objective allowed the system to take better advantage of the available training data, leading to 3.9\% absolute improvement over the previous best architecture. The language modeling objective also provided consistent improvements on other sequence labeling tasks, such as named entity recognition, chunking and POS-tagging. Future work could investigate the extension of this architecture to additional unannotated resources. Learning generalisable language features from large amounts of unlabeled in-domain text could provide sequence labeling models with additional benefit. While it is common to pre-train word embeddings on large-scale unannotated corpora, only limited research has been done towards useful methods for pre-training or co-training more advanced compositional modules. \bibliographystyle{acl_natbib} \end{document}
Semi-supervised Multitask Learning for Sequence Labeling
1704.07156
Table 1: Precision, Recall and F0.5 score of alternative sequence labeling architectures on error detection datasets. Dropout and LMcost modifications are added incrementally to the baseline.
[ "[EMPTY]", "FCE DEV [ITALIC] F0.5", "FCE TEST P", "FCE TEST R", "FCE TEST [ITALIC] F0.5", "CoNLL-14 TEST1 P", "CoNLL-14 TEST1 R", "CoNLL-14 TEST1 [ITALIC] F0.5", "CoNLL-14 TEST2 P", "CoNLL-14 TEST2 R", "CoNLL-14 TEST2 [ITALIC] F0.5" ]
[ [ "Baseline", "48.78", "55.38", "25.34", "44.56", "15.65", "16.80", "15.80", "25.22", "19.25", "23.62" ], [ "+ dropout", "48.68", "54.11", "23.33", "42.65", "14.29", "17.13", "14.71", "22.79", "19.42", "21.91" ], [ "+ LMcost", "[BOLD] 53.17", "[BOLD] 58.88", "[BOLD] 28.92", "[BOLD] 48.48", "[BOLD] 17.68", "[BOLD] 19.07", "[BOLD] 17.86", "[BOLD] 27.62", "[BOLD] 21.18", "[BOLD] 25.88" ] ]
We found that including the dropout actually decreases performance in the setting of error detection, which is likely due to the relatively small amount of error examples available in the dataset – it is better for the model to memorise them without introducing additional noise in the form of dropout. However, we did verify that dropout indeed gives an improvement in combination with the novel language modeling objective. Because the model is receiving additional information at every token, dropout is no longer obscuring the limited training data but instead helps with generalisation.
\documentclass[11pt,a4paper]{article} \usepackage[hyperref]{acl2017} \aclfinalcopy % Uncomment this line for the final submission \def\aclpaperid{***} % Enter the acl Paper ID here \newcommand\BibTeX{B{\sc ib}\TeX} \title{Semi-supervised Multitask Learning for Sequence Labeling} \author{\hspace{-0.5cm}Marek Rei\\ \hspace{-0.5cm}The ALTA Institute\\ \hspace{-0.5cm}Computer Laboratory\\ \hspace{-0.5cm}University of Cambridge\\ \hspace{-0.5cm}United Kingdom\\ \hspace{-0.5cm}{\tt marek.rei@cl.cam.ac.uk} \\} \date{} \begin{document} \maketitle \begin{abstract} We propose a sequence labeling framework with a secondary training objective, learning to predict surrounding words for every word in the dataset. This language modeling objective incentivises the system to learn general-purpose patterns of semantic and syntactic composition, which are also useful for improving accuracy on different sequence labeling tasks. The architecture was evaluated on a range of datasets, covering the tasks of error detection in learner texts, named entity recognition, chunking and POS-tagging. The novel language modeling objective provided consistent performance improvements on every benchmark, without requiring any additional annotated or unannotated data. \end{abstract} \section{Introduction} Accurate and efficient sequence labeling models have a wide range of applications, including named entity recognition (NER), part-of-speech (POS) tagging, error detection and shallow parsing. Specialised approaches to sequence labeling often include extensive feature engineering, such as integrated gazetteers, capitalisation features, morphological information and POS tags. However, recent work has shown that neural network architectures are able to achieve comparable or improved performance, while automatically discovering useful features for a specific task and only requiring a sequence of tokens as input \cite{Collobert2011,Irsoy2014a,Lample2016}. This feature discovery is usually driven by an objective function based on predicting the annotated labels for each word, without much incentive to learn more general language features from the available text. In many sequence labeling tasks, the relevant labels in the dataset are very sparse and most of the words contribute very little to the training process. For example, in the CoNLL 2003 NER dataset \cite{TjongKimSang2003} only 17\% of the tokens represent an entity. This ratio is even lower for error detection, with only 14\% of all tokens being annotated as an error in the FCE dataset \cite{Yannakoudakis2011}. The sequence labeling models are able to learn this bias in the label distribution without obtaining much additional information from words that have the majority label (O for outside of an entity; C for correct word). Therefore, we propose an additional training objective which allows the models to make more extensive use of the available data. The task of language modeling offers an easily accessible objective -- learning to predict the next word in the sequence requires only plain text as input, without relying on any particular annotation. Neural language modeling architectures also have many similarities to common sequence labeling frameworks: words are first mapped to distributed embeddings, followed by a recurrent neural network (RNN) module for composing word sequences into an informative context representation \cite{Mikolov2010,Graves2013b,Chelba2014}. 
Compared to any sequence labeling dataset, the task of language modeling has a considerably larger and more varied set of possible options to predict, making better use of each available word and encouraging the model to learn more general language features for accurate composition. In this paper, we propose a neural sequence labeling architecture that is also optimised as a language model, predicting surrounding words in the dataset in addition to assigning labels to each token. Specific sections of the network are optimised as a forward- or backward-moving language model, while the label predictions are performed using context from both directions. This secondary unsupervised objective encourages the framework to learn richer features for semantic composition without requiring additional training data. We evaluate the sequence labeling model on 10 datasets from the fields of NER, POS-tagging, chunking and error detection in learner texts. Our experiments show that by including the unsupervised objective into the training process, the sequence labeling model achieves consistent performance improvements on all the benchmarks. This multitask training framework gives the largest improvements on error detection datasets, outperforming the previous state-of-the-art architecture. \section{Neural Sequence Labeling} \label{sec:seqlab} We use the neural network model of \newcite{Rei2016a} as the baseline architecture for our sequence labeling experiments. The model takes as input one sentence, separated into tokens, and assigns a label to every token using a bidirectional LSTM. The input tokens are first mapped to a sequence of distributed word embeddings $[x_1, x_2, x_3, ... , x_T]$. Two LSTM \cite{Hochreiter1997} components, moving in opposite directions through the sentence, are then used for constructing context-dependent representations for every word. Each LSTM takes as input the hidden state from the previous time step, along with the word embedding from the current step, and outputs a new hidden state. The hidden representations from both directions are concatenated, in order to obtain a context-specific representation for each word that is conditioned on the whole sentence in both directions: \begin{equation} \overrightarrow{h_t} = LSTM(x_t, \overrightarrow{h_{t-1}}) \end{equation} \begin{equation} \overleftarrow{h_t} = LSTM(x_t, \overleftarrow{h_{t+1}}) \end{equation} \begin{equation} h_t = [\overrightarrow{h_t};\overleftarrow{h_t}] \end{equation} Next, the concatenated representation is passed through a feedforward layer, mapping the components into a joint space and allowing the model to learn features based on both context directions: \begin{equation} d_t = tanh(W_d h_t) \end{equation} \noindent where $W_d$ is a weight matrix and $tanh$ is used as the non-linear activation function. In order to predict a label for each token, we use either a softmax or CRF output architecture. For softmax, the model directly predicts a normalised distribution over all possible labels for every word, conditioned on the vector $d_t$: \begin{equation} \begin{split} P(y_t | d_t) &= softmax(W_o d_t) \\ &= \frac{e^{W_{o,k} d_t}}{\sum_{\tilde{k} \in K} e^{W_{o,\tilde{k}} d_t}} \end{split} \end{equation} \noindent where $K$ is the set of all possible labels, and $W_{o,k}$ is the $k$-th row of output weight matrix $W_o$. 
The model is optimised by minimising categorical crossentropy, which is equivalent to minimising the negative log-probability of the correct labels: \begin{equation} E = - \sum_{t=1}^{T} log(P(y_t| d_t)) \label{eq:costsoftmax} \end{equation} While this architecture returns predictions based on all words in the input, the labels are still predicted independently. For some tasks, such as named entity recognition with a BIO\footnote{Each NER entity has sub-tags for Beginning, Inside and Outside} scheme, there are strong dependencies between subsequent labels and it can be beneficial to explicitly model these connections. The output of the architecture can be modified to include a Conditional Random Field (CRF, \newcite{Lafferty2001}), which allows the network to look for the most optimal path through all possible label sequences \cite{Huang2015,Lample2016}. The model is then optimised by maximising the score for the correct label sequence, while minimising the scores for all other sequences: \begin{equation} E = - s(y) + log \sum_{\tilde{y} \in \widetilde{Y}} e^{s(\tilde{y})} \label{eq:costcrf} \end{equation} \noindent where $s(y)$ is the score for a given sequence $y$ and $Y$ is the set of all possible label sequences. We also make use of the character-level component described by \newcite{Rei2016a}, which builds an alternative representation for each word. The individual characters of a word are mapped to character embeddings and passed through a bidirectional LSTM. The last hidden states from both direction are concatenated and passed through a nonlinear layer. The resulting vector representation is combined with a regular word embedding using a dynamic weighting mechanism that adaptively controls the balance between word-level and character-level features. This framework allows the model to learn character-based patterns and handle previously unseen words, while still taking full advantage of the word embeddings. \section{Language Modeling Objective} \label{sec:lmcost} The sequence labeling model in Section \ref{sec:seqlab} is only optimised based on the correct labels. While each token in the input does have a desired label, many of these contribute very little to the training process. For example, in the CoNLL 2003 NER dataset \cite{TjongKimSang2003} there are only 8 possible labels and 83\% of the tokens have the label O, indicating that no named entity is detected. This ratio is even higher for error detection, with 86\% of all tokens containing no errors in the FCE dataset \cite{Yannakoudakis2011}. The sequence labeling models are able to learn this bias in the label distribution without obtaining much additional information from the majority labels. Therefore, we propose a supplementary objective which would allow the models to make full use of the training data. In addition to learning to predict labels for each word, we propose optimising specific sections of the architecture as language models. The task of predicting the next word will require the model to learn more general patterns of semantic and syntactic composition, which can then be reused in order to predict individual labels more accurately. This objective is also generalisable to any sequence labeling task and dataset, as it requires no additional annotated training data. A straightforward modification of the sequence labeling model would add a second parallel output layer for each token, optimising it to predict the next word. 
However, the model has access to the full context on each side of the target token, and predicting information that is already explicit in the input would not incentivise the model to learn about composition and semantics. Therefore, we must design the loss objective so that only sections of the model that have not yet observed the next word are optimised to perform the prediction. We solve this by predicting the next word in the sequence only based on the hidden representation $\overrightarrow{h_t}$ from the forward-moving LSTM. Similarly, the previous word in the sequence is predicted based on $\overleftarrow{h_t}$ from the backward-moving LSTM. This architecture avoids the problem of giving the correct answer as an input to the language modeling component, while the full framework is still optimised to predict labels based on the whole sentence. First, the hidden representations from forward- and backward-LSTMs are mapped to a new space using a non-linear layer: \begin{equation} \overrightarrow{m_t} = tanh(\overrightarrow{W}_m \overrightarrow{h_t}) \end{equation} \begin{equation} \overleftarrow{m_t} = tanh(\overleftarrow{W}_m \overleftarrow{h_t}) \end{equation} \noindent where $\overrightarrow{W}_m$ and $\overleftarrow{W}_m$ are weight matrices. This separate transformation learns to extract features that are specific to language modeling, while the LSTM is optimised for both objectives. We also use the opportunity to map the representation to a smaller size -- since language modeling is not the main goal, we restrict the number of parameters available for this component, forcing the model to generalise more using fewer resources. These representations are then passed through softmax layers in order to predict the preceding and following word: \begin{equation} P(w_{t+1}|\overrightarrow{m_t}) = softmax(\overrightarrow{W}_q \overrightarrow{m_t}) \end{equation} \begin{equation} P(w_{t-1}|\overleftarrow{m_t}) = softmax(\overleftarrow{W}_q \overleftarrow{m_t}) \end{equation} The objective function for both components is then constructed as a regular language modeling objective, by calculating the negative log-likelihood of the next word in the sequence: \begin{equation} \overrightarrow{E} = - \sum_{t=1}^{T-1} log(P(w_{t+1}|\overrightarrow{m_t})) \end{equation} \begin{equation} \overleftarrow{E} = - \sum_{t=2}^{T} log(P(w_{t-1}|\overleftarrow{m_t})) \end{equation} Finally, these additional objectives are combined with the training objective $E$ from either Equation \ref{eq:costsoftmax} or \ref{eq:costcrf}, resulting in a new cost function $\widetilde{E}$ for the sequence labeling model: \begin{equation} \widetilde{E} = E + \gamma (\overrightarrow{E} + \overleftarrow{E}) \end{equation} \noindent where $\gamma$ is a parameter that is used to control the importance of the language modeling objective in comparison to the sequence labeling objective. Figure \ref{fig:network} shows a diagram of the unfolded neural architecture, when performing NER on a short sentence with 3 words. At each token position, the network is optimised to predict the previous word, the current label, and the next word in the sequence. The added language modeling objective encourages the system to learn richer feature representations that are then reused for sequence labeling. For example, $\overrightarrow{h_1}$ is optimised to predict \textit{proposes} as the next word, indicating that the current word is a subject, possibly a named entity. 
In addition, $\overleftarrow{h_2}$ is optimised to predict \textit{Fischler} as the previous word and these features are used as input to predict the \textit{PER} tag at $o_1$. The proposed architecture introduces 4 additional parameter matrices that are optimised during training: $\overrightarrow{W}_m$, $\overleftarrow{W}_m$, $\overrightarrow{W}_q$, and $\overleftarrow{W}_q$. However, the computational complexity and resource requirements of this model during sequence labeling are equal to the baseline from Section \ref{sec:seqlab}, since the language modeling components are ignored during testing and these additional weight matrices are not used. While our implementation uses a basic softmax as the output layer for the language modeling components, the efficiency during training could be further improved by integrating noise-contrastive estimation (NCE, \newcite{Mnih2012a}) or hierarchical softmax \cite{Morin}. \section{Evaluation Setup} The proposed architecture was evaluated on 10 different sequence labeling datasets, covering the tasks of error detection, NER, chunking, and POS-tagging. The word embeddings in the model were initialised with publicly available pretrained vectors, created using word2vec \cite{Mikolov2013a}. For general-domain datasets we used 300-dimensional embeddings trained on Google News.\footnote{https://code.google.com/archive/p/word2vec/} For biomedical datasets, the word embeddings were initialised with 200-dimensional vectors trained on PubMed and PMC.\footnote{http://bio.nlplab.org/} The neural network framework was implemented using Theano \cite{Al-Rfou2016} and we make the code publicly available online.\footnote{https://github.com/marekrei/sequence-labeler} For most of the hyperparameters, we follow the settings by \newcite{Rei2016a} in order to facilitate direct comparison with previous work. The LSTM hidden layers are set to size 200 in each direction for both word- and character-level components. All digits in the text were replaced with the character ’0’; any words that occurred only once in the training data were replaced by an OOV token. In order to reduce computational complexity in these experiments, the language modeling objective predicted only the 7,500 most frequent words, with an extra token covering all the other words. Sentences were grouped into batches of size 64 and parameters were optimised using AdaDelta \cite{Zeiler2012} with default learning rate $1.0$. Training was stopped when performance on the development set had not improved for 7 epochs. Performance on the development set was also used to select the best model, which was then evaluated on the test set. In order to avoid any outlier results due to randomness in the model initialisation, each configuration was trained with 10 different random seeds and the averaged results are presented in this paper. We use previously established splits for training, development and testing on each of these datasets. Based on development experiments, we found that error detection was the only task that did not benefit from having a CRF module at the output layer -- since the labels are very sparse and the dataset contains only 2 possible labels, explicitly modeling state transitions does not improve performance on this task. Therefore, we use a softmax output for error detection experiments and CRF on all other datasets. The publicly available sequence labeling system by \newcite{Rei2016a} is used as the baseline. 
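As a concrete illustration of the preprocessing just described, the sketch below replaces digits with '0', maps words seen only once in the training data to an OOV token, and restricts the language modeling output vocabulary to the 7,500 most frequent words plus a catch-all token. Only the thresholds follow the text; the helper names and the special-token strings are illustrative assumptions.
\begin{verbatim}
# Sketch of the preprocessing described above. Helper names and the
# special-token strings ("<OOV>", "<OTHER>") are assumptions.
import re
from collections import Counter

def normalise(token):
    return re.sub(r"\d", "0", token)          # all digits become '0'

def build_vocabs(train_sentences, lm_vocab_size=7500):
    counts = Counter(normalise(t) for sent in train_sentences for t in sent)
    # words occurring only once are mapped to a shared OOV token for the tagger
    tagger_vocab = {w for w, c in counts.items() if c > 1} | {"<OOV>"}
    # the LM objective predicts only the most frequent words, plus one catch-all token
    lm_vocab = {w for w, _ in counts.most_common(lm_vocab_size)} | {"<OTHER>"}
    return tagger_vocab, lm_vocab

sents = [["Fischler", "proposes", "measures", "in", "1996", "."],
         ["He", "proposes", "new", "measures", "."]]
tagger_vocab, lm_vocab = build_vocabs(sents)
\end{verbatim}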
During development we found that applying dropout \cite{Srivastava2014a} on word embeddings improves performance on nearly all datasets, compared to this baseline. Therefore, element-wise dropout was applied to each of the input embeddings with probability $0.5$ during training, and the weights were multiplied by $0.5$ during testing. In order to separate the effects of this modification from the newly proposed optimisation method, we report results for three different systems: 1) the publicly available baseline, 2) applying dropout on top of the baseline system, and 3) applying both dropout and the novel multitask objective from Section \ref{sec:lmcost}. Based on development experiments we set the value of $\gamma$, which controls the importance of the language modeling objective, to $0.1$ for all experiments throughout training. Since context prediction is not part of the main evaluation of sequence labeling systems, we expected the additional objective to mostly benefit early stages of training, whereas the model would later need to specialise only towards assigning labels. Therefore, we also performed experiments on the development data where the value of $\gamma$ was gradually decreased, but found that a small static value performed comparably well or even better. These experiments indicate that the language modeling objective helps the network learn general-purpose features that are useful for sequence labeling even in the later stages of training. \section{Error Detection} We first evaluate the sequence labeling architectures on the task of error detection -- given a sentence written by a language learner, the system needs to detect which tokens have been manually tagged by annotators as being an error. As the first benchmark, we use the publicly released First Certificate in English (FCE, \newcite{Yannakoudakis2011}) dataset, containing 33,673 manually annotated sentences. The texts were written by learners during language examinations in response to prompts eliciting free-text answers and assessing mastery of the upper-intermediate proficiency level. In addition, we evaluate on the CoNLL 2014 shared task dataset \cite{Ng2013a}, which has been converted to an error detection task. This contains 1,312 sentences, written by higher-proficiency learners on more technical topics. They have been manually corrected by two separate annotators, and we report results on each of these annotations. For both datasets we use the FCE training set for model optimisation and results on the CoNLL-14 dataset indicate out-of-domain performance. \newcite{Rei2016} present results on these datasets using the same setup, along with evaluating the top shared task submissions on the task of error detection. As the main evaluation metric, we use the $F_{0.5}$ measure, which is consistent with previous work and was also adopted by the CoNLL-14 shared task. Table \ref{tab:results1} contains results for the three different sequence labeling architectures on the error detection datasets. We found that including the dropout actually decreases performance in the setting of error detection, which is likely due to the relatively small amount of error examples available in the dataset -- it is better for the model to memorise them without introducing additional noise in the form of dropout. However, we did verify that dropout indeed gives an improvement in combination with the novel language modeling objective. 
Because the model is receiving additional information at every token, dropout is no longer obscuring the limited training data but instead helps with generalisation. The bottom row shows the performance of the language modeling objective when added on top of the baseline model, along with dropout on word embeddings. This architecture outperforms the baseline on all benchmarks, increasing both precision and recall, and giving a $3.9\%$ absolute improvement on the FCE test set. These results also improve over the previous best results by \newcite{Rei2016} and \newcite{Rei2016a}, when all systems are trained on the same FCE dataset. While the added components also require more computation time, the difference is not excessive -- one training batch over the FCE dataset was processed in 112 seconds on the baseline system and 133 seconds using the language modeling objective. Error detection is the task where introducing the additional cost objective gave the largest improvement in performance, for a few reasons: \begin{enumerate} \item This task has very sparse labels in the datasets, with error tokens very infrequent and far apart. Without the language modeling objective, the network has very little use for all the available words that contain no errors. \item There are only two possible labels, correct and incorrect, which likely makes it more difficult for the model to learn feature detectors for many different error types. Language modeling uses a much larger number of possible labels, giving a more varied training signal. \item Finally, the task of error detection is directly related to language modeling. By learning a better model of the overall text in the training corpus, the system can more easily detect any irregularities. \end{enumerate} We also analysed the performance of the different architectures during training. Figure \ref{fig:graph_fcepublic} shows the $F_{0.5}$ score on the development set for each model after every epoch over the training data. The baseline model peaks quickly, followed by a gradual drop in performance, which is likely due to overfitting on the available data. Dropout provides an effective regularisation method, slowing down the initial performance but preventing the model from overfitting. The added language modeling objective provides a substantial improvement -- the system outperforms other configurations already in the early stages of training and the results are also sustained in the later epochs. \section{NER and Chunking} In the next experiments we evaluate the language modeling objective on named entity recognition and chunking. For general-domain NER, we use the English section of the CoNLL 2003 corpus \cite{TjongKimSang2003}, containing news stories from the Reuters Corpus. We also report results on two biomedical NER datasets: The BioCreative~IV Chemical and Drug corpus (CHEMDNER, \newcite{Krallinger2015}) of 10,000 abstracts, annotated for mentions of chemical and drug names, and the JNLPBA corpus \cite{Kim2004} of 2,404 abstracts annotated for mentions of different cells and proteins. Finally, we use the CoNLL 2000 dataset \cite{TjongKimSang2000}, created from the Wall Street Journal Sections 15-18 and 20 from the Penn Treebank, for evaluating sequence labeling on the task of chunking. The standard CoNLL entity-level $F_1$ score is used as the main evaluation metric. Compared to error detection corpora, the labels are more balanced in these datasets. 
However, majority labels still exist: roughly 83\% of the tokens in the NER datasets are tagged as "O", indicating that the word is not an entity, and the NP label covers 53\% of tokens in the chunking data. Table \ref{tab:results2} contains results for evaluating the different architectures on NER and chunking. On these tasks, the application of dropout provides a consistent improvement -- applying some variance onto the input embeddings results in more robust models for NER and chunking. The addition of the language modeling objective consistently further improves performance on all benchmarks. While these results are comparable to the respective state-of-the-art results on most datasets, we did not fine-tune hyperparameters for any specific task, instead providing a controlled analysis of the language modeling objective in different settings. For JNLPBA, the system achieves 73.83\% compared to 72.55\% by \newcite{Zhou2004} and 72.70\% by \newcite{Rei2016a}. On CoNLL-03, \newcite{Lample2016} achieve a considerably higher result of 90.94\%, possibly due to their use of specialised word embeddings and a custom version of LSTM. However, our system does outperform a similar architecture by \newcite{Huang2015}, achieving 86.26\% compared to 84.26\% $F_1$ score on the CoNLL-03 dataset. Figure \ref{fig:graph_chemdner} shows $F_1$ on the CHEMDNER development set after every training epoch. Without dropout, performance peaks quickly and then trails off as the system overfits on the training set. Using dropout, the best performance is sustained throughout training and even slightly improved. Finally, adding the language modeling objective on top of dropout allows the system to consistently outperform the other architectures. \section{POS tagging} We also evaluated the language modeling training objective on four POS-tagging datasets. The Penn Treebank POS-tag corpus \cite{Marcus1993b} contains texts from the Wall Street Journal and has been annotated with 48 different part-of-speech tags. In addition, we use the POS-annotated subset of the GENIA corpus \cite{Ohta2002} containing 2,000 biomedical PubMed abstracts. Following \newcite{Tsuruoka2005}, we use the same 210-document test set. Finally, we also evaluate on the Finnish and Spanish sections of the Universal Dependencies v1.2 dataset (UD, \newcite{UniversalDependencies}), in order to investigate performance on morphologically complex and Romance languages. These datasets are somewhat more balanced in terms of label distributions, compared to error detection and NER, as no single label covers over 50\% of the tokens. POS-tagging also offers a large variance of unique labels, with 48 labels in PTB and 42 in GENIA, and this can provide useful information to the models during training. The baseline performance on these datasets is also close to the upper bound, therefore we expect the language modeling objective to not provide much additional benefit. The results of different sequence labeling architectures on POS-tagging can be seen in Table \ref{tab:results3} and accuracy on the development set is shown in Figure \ref{fig:graph_ptbpos}. While the performance improvements are small, they are consistent across all domains, languages and datasets. Application of dropout again provides a more robust model, and the language modeling cost improves the performance further. 
Even though the labels already offer a varied training objective, learning to predict the surrounding words at the same time has provided the model with additional general-purpose features. \section{Related Work} Our work builds on previous research exploring multi-task learning in the context of different sequence labeling tasks. The idea of multi-task learning was described by \newcite{Caruana1998} and has since been extended to many language processing tasks using neural networks. For example, \newcite{Weston2008} proposed a multi-task framework using weight-sharing between networks that are optimised for different supervised tasks. \newcite{Cheng2015a} described a system for detecting out-of-vocabulary names by also predicting the next word in the sequence. While they use a regular recurrent architecture, we propose a language modeling objective that can be integrated with a bidirectional network, making it applicable to existing state-of-the-art sequence labeling frameworks. \newcite{Plank2016} described a related architecture for POS-tagging, predicting the frequency of each word together with the part-of-speech, and showed that this can improve tagging accuracy on low-frequency words. While predicting word frequency can be useful for POS-tagging, language modeling provides a more general training signal, allowing us to apply the model to many different sequence labeling tasks. Recently, \newcite{Augenstein2017} explored multi-task learning for classifying keyphrase boundaries, by incorporating tasks such as semantic super-sense tagging and identification of multi-word expressions. \newcite{Bingel2017} also performed a systematic comparison of task relationships by combining different datasets through multi-task learning. Both of these approaches involve switching to auxiliary datasets, whereas our proposed language modeling objective requires no additional data. \section{Conclusion} We proposed a novel sequence labeling framework with a secondary objective -- learning to predict surrounding words for each word in the dataset. One half of a bidirectional LSTM is trained as a forward-moving language model, whereas the other half is trained as a backward-moving language model. At the same time, both of these are also combined, in order to predict the most probable label for each word. This modification can be applied to several common sequence labeling architectures and requires no additional annotated or unannotated data. The objective of learning to predict surrounding words provides an additional source of information during training. The model is incentivised to discover useful features in order to learn the language distribution and composition patterns in the training data. While language modeling is not the main goal of the system, this additional training objective leads to more accurate sequence labeling models on several different tasks. The architecture was evaluated on a range of datasets, covering the tasks of error detection in learner texts, named entity recognition, chunking and POS-tagging. We found that the additional language modeling objective provided consistent performance improvements on every benchmark. The largest benefit from the new architecture was observed on the task of error detection in learner writing. The label distribution in the original dataset is very sparse and unbalanced, making it a difficult task for the model to learn. 
The added language modeling objective allowed the system to take better advantage of the available training data, leading to 3.9\% absolute improvement over the previous best architecture. The language modeling objective also provided consistent improvements on other sequence labeling tasks, such as named entity recognition, chunking and POS-tagging. Future work could investigate the extension of this architecture to additional unannotated resources. Learning generalisable language features from large amounts of unlabeled in-domain text could provide sequence labeling models with additional benefit. While it is common to pre-train word embeddings on large-scale unannotated corpora, only limited research has been done towards useful methods for pre-training or co-training more advanced compositional modules. \bibliographystyle{acl_natbib} \end{document}
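For reference, the error detection results above are reported in terms of the $F_{0.5}$ measure, which weights precision twice as heavily as recall. A minimal token-level sketch of the measure follows; the counts in the example are made up.
\begin{verbatim}
# F_beta with beta = 0.5, computed from token-level counts.
def f_beta(tp, fp, fn, beta=0.5):
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision == 0.0 and recall == 0.0:
        return 0.0
    return (1 + beta ** 2) * precision * recall / (beta ** 2 * precision + recall)

# e.g. 30 correctly flagged error tokens, 10 false alarms, 20 missed errors (made up):
print(round(f_beta(30, 10, 20), 3))   # -> 0.714
\end{verbatim}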
Semi-supervised Multitask Learning for Sequence Labeling
1704.07156
Table 2: Performance of alternative sequence labeling architectures on NER and chunking datasets, measured using CoNLL standard entity-level F1 score.
[ "[EMPTY]", "CoNLL-00 DEV", "CoNLL-00 TEST", "CoNLL-03 DEV", "CoNLL-03 TEST", "CHEMDNER DEV", "CHEMDNER TEST", "JNLPBA DEV", "JNLPBA TEST" ]
[ [ "Baseline", "92.92", "92.67", "90.85", "85.63", "83.63", "84.51", "77.13", "72.79" ], [ "+ dropout", "93.40", "93.15", "91.14", "86.00", "84.78", "85.67", "77.61", "73.16" ], [ "+ LMcost", "[BOLD] 94.22", "[BOLD] 93.88", "[BOLD] 91.48", "[BOLD] 86.26", "[BOLD] 85.45", "[BOLD] 86.27", "[BOLD] 78.51", "[BOLD] 73.83" ] ]
On these tasks, the application of dropout provides a consistent improvement – applying some variance onto the input embeddings results in more robust models for NER and chunking. The addition of the language modeling objective consistently further improves performance on all benchmarks.
\documentclass[11pt,a4paper]{article} \usepackage[hyperref]{acl2017} \aclfinalcopy % Uncomment this line for the final submission \def\aclpaperid{***} % Enter the acl Paper ID here \newcommand\BibTeX{B{\sc ib}\TeX} \title{Semi-supervised Multitask Learning for Sequence Labeling} \author{\hspace{-0.5cm}Marek Rei\\ \hspace{-0.5cm}The ALTA Institute\\ \hspace{-0.5cm}Computer Laboratory\\ \hspace{-0.5cm}University of Cambridge\\ \hspace{-0.5cm}United Kingdom\\ \hspace{-0.5cm}{\tt marek.rei@cl.cam.ac.uk} \\} \date{} \begin{document} \maketitle \begin{abstract} We propose a sequence labeling framework with a secondary training objective, learning to predict surrounding words for every word in the dataset. This language modeling objective incentivises the system to learn general-purpose patterns of semantic and syntactic composition, which are also useful for improving accuracy on different sequence labeling tasks. The architecture was evaluated on a range of datasets, covering the tasks of error detection in learner texts, named entity recognition, chunking and POS-tagging. The novel language modeling objective provided consistent performance improvements on every benchmark, without requiring any additional annotated or unannotated data. \end{abstract} \section{Introduction} Accurate and efficient sequence labeling models have a wide range of applications, including named entity recognition (NER), part-of-speech (POS) tagging, error detection and shallow parsing. Specialised approaches to sequence labeling often include extensive feature engineering, such as integrated gazetteers, capitalisation features, morphological information and POS tags. However, recent work has shown that neural network architectures are able to achieve comparable or improved performance, while automatically discovering useful features for a specific task and only requiring a sequence of tokens as input \cite{Collobert2011,Irsoy2014a,Lample2016}. This feature discovery is usually driven by an objective function based on predicting the annotated labels for each word, without much incentive to learn more general language features from the available text. In many sequence labeling tasks, the relevant labels in the dataset are very sparse and most of the words contribute very little to the training process. For example, in the CoNLL 2003 NER dataset \cite{TjongKimSang2003} only 17\% of the tokens represent an entity. This ratio is even lower for error detection, with only 14\% of all tokens being annotated as an error in the FCE dataset \cite{Yannakoudakis2011}. The sequence labeling models are able to learn this bias in the label distribution without obtaining much additional information from words that have the majority label (O for outside of an entity; C for correct word). Therefore, we propose an additional training objective which allows the models to make more extensive use of the available data. The task of language modeling offers an easily accessible objective -- learning to predict the next word in the sequence requires only plain text as input, without relying on any particular annotation. Neural language modeling architectures also have many similarities to common sequence labeling frameworks: words are first mapped to distributed embeddings, followed by a recurrent neural network (RNN) module for composing word sequences into an informative context representation \cite{Mikolov2010,Graves2013b,Chelba2014}. 
Compared to any sequence labeling dataset, the task of language modeling has a considerably larger and more varied set of possible options to predict, making better use of each available word and encouraging the model to learn more general language features for accurate composition. In this paper, we propose a neural sequence labeling architecture that is also optimised as a language model, predicting surrounding words in the dataset in addition to assigning labels to each token. Specific sections of the network are optimised as a forward- or backward-moving language model, while the label predictions are performed using context from both directions. This secondary unsupervised objective encourages the framework to learn richer features for semantic composition without requiring additional training data. We evaluate the sequence labeling model on 10 datasets from the fields of NER, POS-tagging, chunking and error detection in learner texts. Our experiments show that by including the unsupervised objective into the training process, the sequence labeling model achieves consistent performance improvements on all the benchmarks. This multitask training framework gives the largest improvements on error detection datasets, outperforming the previous state-of-the-art architecture. \section{Neural Sequence Labeling} \label{sec:seqlab} We use the neural network model of \newcite{Rei2016a} as the baseline architecture for our sequence labeling experiments. The model takes as input one sentence, separated into tokens, and assigns a label to every token using a bidirectional LSTM. The input tokens are first mapped to a sequence of distributed word embeddings $[x_1, x_2, x_3, ... , x_T]$. Two LSTM \cite{Hochreiter1997} components, moving in opposite directions through the sentence, are then used for constructing context-dependent representations for every word. Each LSTM takes as input the hidden state from the previous time step, along with the word embedding from the current step, and outputs a new hidden state. The hidden representations from both directions are concatenated, in order to obtain a context-specific representation for each word that is conditioned on the whole sentence in both directions: \begin{equation} \overrightarrow{h_t} = LSTM(x_t, \overrightarrow{h_{t-1}}) \end{equation} \begin{equation} \overleftarrow{h_t} = LSTM(x_t, \overleftarrow{h_{t+1}}) \end{equation} \begin{equation} h_t = [\overrightarrow{h_t};\overleftarrow{h_t}] \end{equation} Next, the concatenated representation is passed through a feedforward layer, mapping the components into a joint space and allowing the model to learn features based on both context directions: \begin{equation} d_t = tanh(W_d h_t) \end{equation} \noindent where $W_d$ is a weight matrix and $tanh$ is used as the non-linear activation function. In order to predict a label for each token, we use either a softmax or CRF output architecture. For softmax, the model directly predicts a normalised distribution over all possible labels for every word, conditioned on the vector $d_t$: \begin{equation} \begin{split} P(y_t | d_t) &= softmax(W_o d_t) \\ &= \frac{e^{W_{o,k} d_t}}{\sum_{\tilde{k} \in K} e^{W_{o,\tilde{k}} d_t}} \end{split} \end{equation} \noindent where $K$ is the set of all possible labels, and $W_{o,k}$ is the $k$-th row of output weight matrix $W_o$. 
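To make the word-level architecture above concrete, the following is a minimal PyTorch-style sketch: embeddings, one LSTM per direction, the $tanh$ feedforward layer producing $d_t$, and a softmax output trained with cross-entropy. The class name and the toy batch are illustrative assumptions, the character-level component and the CRF output are omitted, and the released system was implemented in Theano rather than PyTorch.
\begin{verbatim}
# Minimal sketch of the word-level tagger: 300-dim embeddings and 200 LSTM units
# per direction as in the paper; everything else here is illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BiLSTMTagger(nn.Module):
    def __init__(self, vocab_size, n_labels, emb_dim=300, hidden_dim=200):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim,
                            bidirectional=True, batch_first=True)
        self.ff = nn.Linear(2 * hidden_dim, hidden_dim, bias=False)   # W_d
        self.out = nn.Linear(hidden_dim, n_labels, bias=False)        # W_o

    def forward(self, token_ids):
        x = self.embed(token_ids)            # [batch, T, emb_dim]
        h, _ = self.lstm(x)                  # h_t = [h_fwd; h_bwd], [batch, T, 400]
        d = torch.tanh(self.ff(h))           # d_t = tanh(W_d h_t)
        return self.out(d)                   # unnormalised label scores per token

# Softmax output trained with categorical cross-entropy (the cost E below):
model = BiLSTMTagger(vocab_size=10000, n_labels=8)
tokens = torch.randint(0, 10000, (2, 7))     # toy batch: 2 sentences of 7 tokens
labels = torch.randint(0, 8, (2, 7))
loss = F.cross_entropy(model(tokens).reshape(-1, 8), labels.reshape(-1))
\end{verbatim}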
The model is optimised by minimising categorical cross-entropy, which is equivalent to minimising the negative log-probability of the correct labels: \begin{equation} E = - \sum_{t=1}^{T} log(P(y_t | d_t)) \label{eq:costsoftmax} \end{equation} While this architecture returns predictions based on all words in the input, the labels are still predicted independently. For some tasks, such as named entity recognition with a BIO\footnote{Each NER entity has sub-tags for Beginning, Inside and Outside} scheme, there are strong dependencies between subsequent labels and it can be beneficial to explicitly model these connections. The output of the architecture can be modified to include a Conditional Random Field (CRF, \newcite{Lafferty2001}), which allows the network to search for the optimal path through all possible label sequences \cite{Huang2015,Lample2016}. The model is then optimised by maximising the score for the correct label sequence, while minimising the scores for all other sequences: \begin{equation} E = - s(y) + log \sum_{\tilde{y} \in \widetilde{Y}} e^{s(\tilde{y})} \label{eq:costcrf} \end{equation} \noindent where $s(y)$ is the score for a given sequence $y$ and $\widetilde{Y}$ is the set of all possible label sequences. We also make use of the character-level component described by \newcite{Rei2016a}, which builds an alternative representation for each word. The individual characters of a word are mapped to character embeddings and passed through a bidirectional LSTM. The last hidden states from both directions are concatenated and passed through a nonlinear layer. The resulting vector representation is combined with a regular word embedding using a dynamic weighting mechanism that adaptively controls the balance between word-level and character-level features. This framework allows the model to learn character-based patterns and handle previously unseen words, while still taking full advantage of the word embeddings. \section{Language Modeling Objective} \label{sec:lmcost} The sequence labeling model in Section \ref{sec:seqlab} is only optimised based on the correct labels. While each token in the input does have a desired label, many of these contribute very little to the training process. For example, in the CoNLL 2003 NER dataset \cite{TjongKimSang2003} there are only 8 possible labels and 83\% of the tokens have the label O, indicating that no named entity is detected. This ratio is even higher for error detection, with 86\% of all tokens containing no errors in the FCE dataset \cite{Yannakoudakis2011}. The sequence labeling models are able to learn this bias in the label distribution without obtaining much additional information from the majority labels. Therefore, we propose a supplementary objective which allows the models to make full use of the training data. In addition to learning to predict labels for each word, we propose optimising specific sections of the architecture as language models. The task of predicting the next word requires the model to learn more general patterns of semantic and syntactic composition, which can then be reused in order to predict individual labels more accurately. This objective is also generalisable to any sequence labeling task and dataset, as it requires no additional annotated training data. A straightforward modification of the sequence labeling model would add a second parallel output layer for each token, optimising it to predict the next word.
However, the model has access to the full context on each side of the target token, and predicting information that is already explicit in the input would not incentivise the model to learn about composition and semantics. Therefore, we must design the loss objective so that only sections of the model that have not yet observed the next word are optimised to perform the prediction. We solve this by predicting the next word in the sequence only based on the hidden representation $\overrightarrow{h_t}$ from the forward-moving LSTM. Similarly, the previous word in the sequence is predicted based on $\overleftarrow{h_t}$ from the backward-moving LSTM. This architecture avoids the problem of giving the correct answer as an input to the language modeling component, while the full framework is still optimised to predict labels based on the whole sentence. First, the hidden representations from forward- and backward-LSTMs are mapped to a new space using a non-linear layer: \begin{equation} \overrightarrow{m_t} = tanh(\overrightarrow{W}_m \overrightarrow{h_t}) \end{equation} \begin{equation} \overleftarrow{m_t} = tanh(\overleftarrow{W}_m \overleftarrow{h_t}) \end{equation} \noindent where $\overrightarrow{W}_m$ and $\overleftarrow{W}_m$ are weight matrices. This separate transformation learns to extract features that are specific to language modeling, while the LSTM is optimised for both objectives. We also use the opportunity to map the representation to a smaller size -- since language modeling is not the main goal, we restrict the number of parameters available for this component, forcing the model to generalise more using fewer resources. These representations are then passed through softmax layers in order to predict the preceding and following word: \begin{equation} P(w_{t+1}|\overrightarrow{m_t}) = softmax(\overrightarrow{W}_q \overrightarrow{m_t}) \end{equation} \begin{equation} P(w_{t-1}|\overleftarrow{m_t}) = softmax(\overleftarrow{W}_q \overleftarrow{m_t}) \end{equation} The objective function for both components is then constructed as a regular language modeling objective, by calculating the negative log-likelihood of the next word in the sequence: \begin{equation} \overrightarrow{E} = - \sum_{t=1}^{T-1} log(P(w_{t+1}|\overrightarrow{m_t})) \end{equation} \begin{equation} \overleftarrow{E} = - \sum_{t=2}^{T} log(P(w_{t-1}|\overleftarrow{m_t})) \end{equation} Finally, these additional objectives are combined with the training objective $E$ from either Equation \ref{eq:costsoftmax} or \ref{eq:costcrf}, resulting in a new cost function $\widetilde{E}$ for the sequence labeling model: \begin{equation} \widetilde{E} = E + \gamma (\overrightarrow{E} + \overleftarrow{E}) \end{equation} \noindent where $\gamma$ is a parameter that is used to control the importance of the language modeling objective in comparison to the sequence labeling objective. Figure \ref{fig:network} shows a diagram of the unfolded neural architecture, when performing NER on a short sentence with 3 words. At each token position, the network is optimised to predict the previous word, the current label, and the next word in the sequence. The added language modeling objective encourages the system to learn richer feature representations that are then reused for sequence labeling. For example, $\overrightarrow{h_1}$ is optimised to predict \textit{proposes} as the next word, indicating that the current word is a subject, possibly a named entity. 
In addition, $\overleftarrow{h_2}$ is optimised to predict \textit{Fischler} as the previous word and these features are used as input to predict the \textit{PER} tag at $o_1$. The proposed architecture introduces 4 additional parameter matrices that are optimised during training: $\overrightarrow{W}_m$, $\overleftarrow{W}_m$, $\overrightarrow{W}_q$, and $\overleftarrow{W}_q$. However, the computational complexity and resource requirements of this model during sequence labeling are equal to the baseline from Section \ref{sec:seqlab}, since the language modeling components are ignored during testing and these additional weight matrices are not used. While our implementation uses a basic softmax as the output layer for the language modeling components, the efficiency during training could be further improved by integrating noise-contrastive estimation (NCE, \newcite{Mnih2012a}) or hierarchical softmax \cite{Morin}. \section{Evaluation Setup} The proposed architecture was evaluated on 10 different sequence labeling datasets, covering the tasks of error detection, NER, chunking, and POS-tagging. The word embeddings in the model were initialised with publicly available pretrained vectors, created using word2vec \cite{Mikolov2013a}. For general-domain datasets we used 300-dimensional embeddings trained on Google News.\footnote{https://code.google.com/archive/p/word2vec/} For biomedical datasets, the word embeddings were initialised with 200-dimensional vectors trained on PubMed and PMC.\footnote{http://bio.nlplab.org/} The neural network framework was implemented using Theano \cite{Al-Rfou2016} and we make the code publicly available online.\footnote{https://github.com/marekrei/sequence-labeler} For most of the hyperparameters, we follow the settings by \newcite{Rei2016a} in order to facilitate direct comparison with previous work. The LSTM hidden layers are set to size 200 in each direction for both word- and character-level components. All digits in the text were replaced with the character ’0’; any words that occurred only once in the training data were replaced by an OOV token. In order to reduce computational complexity in these experiments, the language modeling objective predicted only the 7,500 most frequent words, with an extra token covering all the other words. Sentences were grouped into batches of size 64 and parameters were optimised using AdaDelta \cite{Zeiler2012} with default learning rate $1.0$. Training was stopped when performance on the development set had not improved for 7 epochs. Performance on the development set was also used to select the best model, which was then evaluated on the test set. In order to avoid any outlier results due to randomness in the model initialisation, each configuration was trained with 10 different random seeds and the averaged results are presented in this paper. We use previously established splits for training, development and testing on each of these datasets. Based on development experiments, we found that error detection was the only task that did not benefit from having a CRF module at the output layer -- since the labels are very sparse and the dataset contains only 2 possible labels, explicitly modeling state transitions does not improve performance on this task. Therefore, we use a softmax output for error detection experiments and CRF on all other datasets. The publicly available sequence labeling system by \newcite{Rei2016a} is used as the baseline. 
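To make the training objective from Section \ref{sec:lmcost} concrete, the sketch below adds the two language modeling heads on top of the bidirectional LSTM states: the forward states predict the next word, the backward states predict the previous word, and the resulting costs are scaled by $\gamma$ and added to the labeling cost $E$. The value $\gamma = 0.1$ and the restricted vocabulary of 7,500 words plus a catch-all token follow the text; the size of $m_t$ and the helper name \texttt{lm\_cost} are assumptions.
\begin{verbatim}
# Sketch of the language modeling heads. The 50-dimensional m_t and the
# function name lm_cost are assumptions; gamma = 0.1 and the 7,500-word
# restricted vocabulary (plus one catch-all token) follow the text.
import torch
import torch.nn as nn
import torch.nn.functional as F

hidden_dim, lm_dim, lm_vocab = 200, 50, 7501

Wm_fwd, Wm_bwd = nn.Linear(hidden_dim, lm_dim), nn.Linear(hidden_dim, lm_dim)
Wq_fwd, Wq_bwd = nn.Linear(lm_dim, lm_vocab), nn.Linear(lm_dim, lm_vocab)

def lm_cost(h, lm_targets, gamma=0.1):
    # h: [batch, T, 2*hidden_dim] biLSTM states; lm_targets: [batch, T] word ids
    # already mapped into the restricted language modeling vocabulary.
    h_fwd, h_bwd = h[..., :hidden_dim], h[..., hidden_dim:]
    m_fwd = torch.tanh(Wm_fwd(h_fwd))   # forward states have not seen the next word
    m_bwd = torch.tanh(Wm_bwd(h_bwd))   # backward states have not seen the previous word
    e_fwd = F.cross_entropy(Wq_fwd(m_fwd[:, :-1]).flatten(0, 1),
                            lm_targets[:, 1:].flatten())    # predict w_{t+1}
    e_bwd = F.cross_entropy(Wq_bwd(m_bwd[:, 1:]).flatten(0, 1),
                            lm_targets[:, :-1].flatten())   # predict w_{t-1}
    return gamma * (e_fwd + e_bwd)      # added to the sequence labeling cost E

extra = lm_cost(torch.randn(2, 7, 2 * hidden_dim),
                torch.randint(0, lm_vocab, (2, 7)))
\end{verbatim}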
During development we found that applying dropout \cite{Srivastava2014a} on word embeddings improves performance on nearly all datasets, compared to this baseline. Therefore, element-wise dropout was applied to each of the input embeddings with probability $0.5$ during training, and the weights were multiplied by $0.5$ during testing. In order to separate the effects of this modification from the newly proposed optimisation method, we report results for three different systems: 1) the publicly available baseline, 2) applying dropout on top of the baseline system, and 3) applying both dropout and the novel multitask objective from Section \ref{sec:lmcost}. Based on development experiments we set the value of $\gamma$, which controls the importance of the language modeling objective, to $0.1$ for all experiments throughout training. Since context prediction is not part of the main evaluation of sequence labeling systems, we expected the additional objective to mostly benefit early stages of training, whereas the model would later need to specialise only towards assigning labels. Therefore, we also performed experiments on the development data where the value of $\gamma$ was gradually decreased, but found that a small static value performed comparably well or even better. These experiments indicate that the language modeling objective helps the network learn general-purpose features that are useful for sequence labeling even in the later stages of training. \section{Error Detection} We first evaluate the sequence labeling architectures on the task of error detection -- given a sentence written by a language learner, the system needs to detect which tokens have been manually tagged by annotators as being an error. As the first benchmark, we use the publicly released First Certificate in English (FCE, \newcite{Yannakoudakis2011}) dataset, containing 33,673 manually annotated sentences. The texts were written by learners during language examinations in response to prompts eliciting free-text answers and assessing mastery of the upper-intermediate proficiency level. In addition, we evaluate on the CoNLL 2014 shared task dataset \cite{Ng2013a}, which has been converted to an error detection task. This contains 1,312 sentences, written by higher-proficiency learners on more technical topics. They have been manually corrected by two separate annotators, and we report results on each of these annotations. For both datasets we use the FCE training set for model optimisation and results on the CoNLL-14 dataset indicate out-of-domain performance. \newcite{Rei2016} present results on these datasets using the same setup, along with evaluating the top shared task submissions on the task of error detection. As the main evaluation metric, we use the $F_{0.5}$ measure, which is consistent with previous work and was also adopted by the CoNLL-14 shared task. Table \ref{tab:results1} contains results for the three different sequence labeling architectures on the error detection datasets. We found that including the dropout actually decreases performance in the setting of error detection, which is likely due to the relatively small amount of error examples available in the dataset -- it is better for the model to memorise them without introducing additional noise in the form of dropout. However, we did verify that dropout indeed gives an improvement in combination with the novel language modeling objective. 
Because the model is receiving additional information at every token, dropout is no longer obscuring the limited training data but instead helps with generalisation. The bottom row shows the performance of the language modeling objective when added on top of the baseline model, along with dropout on word embeddings. This architecture outperforms the baseline on all benchmarks, increasing both precision and recall, and giving a $3.9\%$ absolute improvement on the FCE test set. These results also improve over the previous best results by \newcite{Rei2016} and \newcite{Rei2016a}, when all systems are trained on the same FCE dataset. While the added components also require more computation time, the difference is not excessive -- one training batch over the FCE dataset was processed in 112 seconds on the baseline system and 133 seconds using the language modeling objective. Error detection is the task where introducing the additional cost objective gave the largest improvement in performance, for a few reasons: \begin{enumerate} \item This task has very sparse labels in the datasets, with error tokens very infrequent and far apart. Without the language modeling objective, the network has very little use for all the available words that contain no errors. \item There are only two possible labels, correct and incorrect, which likely makes it more difficult for the model to learn feature detectors for many different error types. Language modeling uses a much larger number of possible labels, giving a more varied training signal. \item Finally, the task of error detection is directly related to language modeling. By learning a better model of the overall text in the training corpus, the system can more easily detect any irregularities. \end{enumerate} We also analysed the performance of the different architectures during training. Figure \ref{fig:graph_fcepublic} shows the $F_{0.5}$ score on the development set for each model after every epoch over the training data. The baseline model peaks quickly, followed by a gradual drop in performance, which is likely due to overfitting on the available data. Dropout provides an effective regularisation method, slowing down the initial performance but preventing the model from overfitting. The added language modeling objective provides a substantial improvement -- the system outperforms other configurations already in the early stages of training and the results are also sustained in the later epochs. \section{NER and Chunking} In the next experiments we evaluate the language modeling objective on named entity recognition and chunking. For general-domain NER, we use the English section of the CoNLL 2003 corpus \cite{TjongKimSang2003}, containing news stories from the Reuters Corpus. We also report results on two biomedical NER datasets: The BioCreative~IV Chemical and Drug corpus (CHEMDNER, \newcite{Krallinger2015}) of 10,000 abstracts, annotated for mentions of chemical and drug names, and the JNLPBA corpus \cite{Kim2004} of 2,404 abstracts annotated for mentions of different cells and proteins. Finally, we use the CoNLL 2000 dataset \cite{TjongKimSang2000}, created from the Wall Street Journal Sections 15-18 and 20 from the Penn Treebank, for evaluating sequence labeling on the task of chunking. The standard CoNLL entity-level $F_1$ score is used as the main evaluation metric. Compared to error detection corpora, the labels are more balanced in these datasets. 
However, majority labels still exist: roughly 83\% of the tokens in the NER datasets are tagged as "O", indicating that the word is not an entity, and the NP label covers 53\% of tokens in the chunking data. Table \ref{tab:results2} contains results for evaluating the different architectures on NER and chunking. On these tasks, the application of dropout provides a consistent improvement -- applying some variance onto the input embeddings results in more robust models for NER and chunking. The addition of the language modeling objective consistently further improves performance on all benchmarks. While these results are comparable to the respective state-of-the-art results on most datasets, we did not fine-tune hyperparameters for any specific task, instead providing a controlled analysis of the language modeling objective in different settings. For JNLPBA, the system achieves 73.83\% compared to 72.55\% by \newcite{Zhou2004} and 72.70\% by \newcite{Rei2016a}. On CoNLL-03, \newcite{Lample2016} achieve a considerably higher result of 90.94\%, possibly due to their use of specialised word embeddings and a custom version of LSTM. However, our system does outperform a similar architecture by \newcite{Huang2015}, achieving 86.26\% compared to 84.26\% $F_1$ score on the CoNLL-03 dataset. Figure \ref{fig:graph_chemdner} shows $F_1$ on the CHEMDNER development set after every training epoch. Without dropout, performance peaks quickly and then trails off as the system overfits on the training set. Using dropout, the best performance is sustained throughout training and even slightly improved. Finally, adding the language modeling objective on top of dropout allows the system to consistently outperform the other architectures. \section{POS tagging} We also evaluated the language modeling training objective on four POS-tagging datasets. The Penn Treebank POS-tag corpus \cite{Marcus1993b} contains texts from the Wall Street Journal and has been annotated with 48 different part-of-speech tags. In addition, we use the POS-annotated subset of the GENIA corpus \cite{Ohta2002} containing 2,000 biomedical PubMed abstracts. Following \newcite{Tsuruoka2005}, we use the same 210-document test set. Finally, we also evaluate on the Finnish and Spanish sections of the Universal Dependencies v1.2 dataset (UD, \newcite{UniversalDependencies}), in order to investigate performance on morphologically complex and Romance languages. These datasets are somewhat more balanced in terms of label distributions, compared to error detection and NER, as no single label covers over 50\% of the tokens. POS-tagging also offers a large variance of unique labels, with 48 labels in PTB and 42 in GENIA, and this can provide useful information to the models during training. The baseline performance on these datasets is also close to the upper bound, therefore we expect the language modeling objective to not provide much additional benefit. The results of different sequence labeling architectures on POS-tagging can be seen in Table \ref{tab:results3} and accuracy on the development set is shown in Figure \ref{fig:graph_ptbpos}. While the performance improvements are small, they are consistent across all domains, languages and datasets. Application of dropout again provides a more robust model, and the language modeling cost improves the performance further. 
Even though the labels already offer a varied training objective, learning to predict the surrounding words at the same time has provided the model with additional general-purpose features. \section{Related Work} Our work builds on previous research exploring multi-task learning in the context of different sequence labeling tasks. The idea of multi-task learning was described by \newcite{Caruana1998} and has since been extended to many language processing tasks using neural networks. For example, \newcite{Weston2008} proposed a multi-task framework using weight-sharing between networks that are optimised for different supervised tasks. \newcite{Cheng2015a} described a system for detecting out-of-vocabulary names by also predicting the next word in the sequence. While they use a regular recurrent architecture, we propose a language modeling objective that can be integrated with a bidirectional network, making it applicable to existing state-of-the-art sequence labeling frameworks. \newcite{Plank2016} described a related architecture for POS-tagging, predicting the frequency of each word together with the part-of-speech, and showed that this can improve tagging accuracy on low-frequency words. While predicting word frequency can be useful for POS-tagging, language modeling provides a more general training signal, allowing us to apply the model to many different sequence labeling tasks. Recently, \newcite{Augenstein2017} explored multi-task learning for classifying keyphrase boundaries, by incorporating tasks such as semantic super-sense tagging and identification of multi-word expressions. \newcite{Bingel2017} also performed a systematic comparison of task relationships by combining different datasets through multi-task learning. Both of these approaches involve switching to auxiliary datasets, whereas our proposed language modeling objective requires no additional data. \section{Conclusion} We proposed a novel sequence labeling framework with a secondary objective -- learning to predict surrounding words for each word in the dataset. One half of a bidirectional LSTM is trained as a forward-moving language model, whereas the other half is trained as a backward-moving language model. At the same time, both of these are also combined, in order to predict the most probable label for each word. This modification can be applied to several common sequence labeling architectures and requires no additional annotated or unannotated data. The objective of learning to predict surrounding words provides an additional source of information during training. The model is incentivised to discover useful features in order to learn the language distribution and composition patterns in the training data. While language modeling is not the main goal of the system, this additional training objective leads to more accurate sequence labeling models on several different tasks. The architecture was evaluated on a range of datasets, covering the tasks of error detection in learner texts, named entity recognition, chunking and POS-tagging. We found that the additional language modeling objective provided consistent performance improvements on every benchmark. The largest benefit from the new architecture was observed on the task of error detection in learner writing. The label distribution in the original dataset is very sparse and unbalanced, making it a difficult task for the model to learn. 
The added language modeling objective allowed the system to take better advantage of the available training data, leading to 3.9\% absolute improvement over the previous best architecture. The language modeling objective also provided consistent improvements on other sequence labeling tasks, such as named entity recognition, chunking and POS-tagging. Future work could investigate the extension of this architecture to additional unannotated resources. Learning generalisable language features from large amounts of unlabeled in-domain text could provide sequence labeling models with additional benefit. While it is common to pre-train word embeddings on large-scale unannotated corpora, only limited research has been done towards useful methods for pre-training or co-training more advanced compositional modules. \bibliographystyle{acl_natbib} \end{document}
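The character-level component is only outlined in prose in Section \ref{sec:seqlab} above. The sketch below shows one plausible reading: a character-level bidirectional LSTM produces an alternative word vector, which is merged with the word embedding through a learned gate. The gating parametrisation and all dimensions are assumptions rather than the exact mechanism of the cited architecture.
\begin{verbatim}
# One plausible reading of the character-level component. The gate below is an
# assumption, not the exact dynamic weighting mechanism of the cited work.
import torch
import torch.nn as nn

class CharWordEncoder(nn.Module):
    def __init__(self, n_chars, char_dim=50, char_hidden=50, word_dim=300):
        super().__init__()
        self.char_embed = nn.Embedding(n_chars, char_dim)
        self.char_lstm = nn.LSTM(char_dim, char_hidden,
                                 bidirectional=True, batch_first=True)
        self.to_word = nn.Linear(2 * char_hidden, word_dim)   # nonlinear layer
        self.gate = nn.Linear(2 * word_dim, word_dim)         # assumed gate parameters

    def forward(self, char_ids, word_embedding):
        # char_ids: [1, chars_in_word], word_embedding: [1, word_dim]
        c = self.char_embed(char_ids)
        _, (h_n, _) = self.char_lstm(c)                 # last states, [2, 1, char_hidden]
        both = torch.cat([h_n[0], h_n[1]], dim=-1)      # concatenate both directions
        m = torch.tanh(self.to_word(both))              # character-based word vector
        z = torch.sigmoid(self.gate(torch.cat([word_embedding, m], dim=-1)))
        return z * word_embedding + (1 - z) * m         # dynamically weighted combination

enc = CharWordEncoder(n_chars=100)
vec = enc(torch.randint(0, 100, (1, 6)), torch.randn(1, 300))   # one combined vector
\end{verbatim}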
Improving Neural Language Models with a Continuous Cache
1612.04426
(b) lambada
[ "Model", "Dev", "Ctrl" ]
[ [ "WB5 (Paperno et al., 2016 )", "3125", "285" ], [ "WB5+cache (Paperno et al., 2016 )", "768", "270" ], [ "LSTM-512 (Paperno et al., 2016 )", "5357", "149" ], [ "LSTM-1024 (our implem.)", "4088", "94" ], [ "Neural cache model", "138", "129" ] ]
Finally, we report experiments carried on the lambada dataset, introduced by Paperno et al. This is a dataset of short passages extracted from novels. The goal is to predict the last word of the excerpt. This dataset was built so that human subjects solve the task perfectly when given the full context (approx. 4.6 sentences), but fail to do so when only given the sentence with the target word. Thus, most state-of-the-art language models fail on this dataset. The lambada training set contains approximately 200M tokens and has a vocabulary size of 93,215. Adding a neural cache model to the LSTM baseline strongly improves the performance on the lambada dataset. This is due to the fact that more than 83% of passages of the development set include the target word, while this is true for only 14% of the control set. Ideally, a model should have strong results on both sets. One possible generalization of our model would be to adapt the interpolation parameter based on the current vector representation of the history ht.
\documentclass{article} % For LaTeX2e \usepackage[utf8]{inputenc} % allow utf-8 input \usepackage[T1]{fontenc} % use 8-bit T1 fonts \title{Improving Neural Language Models with a Continuous Cache} \author{Edouard Grave, Armand Joulin, Nicolas Usunier \\ Facebook AI Research \\ \texttt{\{egrave,ajoulin,usunier\}@fb.com}} \newcommand{\fix}{\marginpar{FIX}} \newcommand{\new}{\marginpar{NEW}} \usetikzlibrary{positioning} \usetikzlibrary{shapes.misc} \begin{document} \maketitle \begin{abstract} We propose an extension to neural network language models to adapt their predictions to the recent history. Our model is a simplified version of memory augmented networks, which stores past hidden activations as memory and accesses them through a dot product with the current hidden activation. This mechanism is very efficient and scales to very large memory sizes. We also draw a link between the use of external memory in neural networks and cache models used with count-based language models. We demonstrate on several language model datasets that our approach performs significantly better than recent memory augmented networks. \end{abstract} \input{introduction} \input{lm} \input{method} \input{relatedwork} \input{experiments} \input{conclusion} \small \bibliographystyle{iclr2017_conference} \end{document} \section{Conclusion} We presented the neural cache model to augment neural language models with a longer-term memory that dynamically updates the word probabilities based on the long-term context. A neural cache can be added on top of a pre-trained language model at negligible cost. Our experiments on both language modeling tasks and the challenging LAMBADA dataset show that significant performance gains can be expected by adding this external memory component. Technically, the neural cache model is similar to some recent memory-augmented neural networks such as pointer networks. However, its specific design makes it possible to avoid learning the memory lookup component. This makes the neural cache appealing since it can use larger cache sizes than memory-augmented networks and can be applied as easily as traditional count-based caches. \section{Related work} \paragraph{Cache model.} Adding a cache to a language model was introduced in the context of speech recognition~\citep{kuhn1988speech,kupiec1989probabilistic,kuhn1990cache}. These models were further extended by \citet{jelinek1991dynamic} into a smoothed trigram language model, reporting reductions in both perplexity and word error rates. \citet{della1992adaptive} adapt the cache to a general $n$-gram model such that it satisfies marginal constraints obtained from the current document. \paragraph{Adaptive language models.} Other adaptive language models have been proposed in the past: \citet{kneser1993dynamic} and \citet{iyer1999modeling} dynamically adapt the parameters of their model to the recent history using different weight interpolation schemes. \citet{bellegarda2000exploiting} and \citet{coccaro1998towards} use latent semantic analysis to adapt their models to the current context. Similarly, topic features have been used with either maximum entropy models~\citep{khudanpur2000maximum} or recurrent networks~\citep{mikolov2012context,wang2015larger}. Finally, \citet{lau1993trigger} proposes to use pairs of distant words to capture long-range dependencies.
\paragraph{Memory augmented neural networks.} In the context of sequence prediction, several memory augmented neural networks have obtained promising results~\citep{sukhbaatar2015end,graves2014neural,grefenstette2015learning,joulin2015inferring}. In particular, \citet{sukhbaatar2015end} stores a representation of the recent past and accesses it using an attention mechanism~\cite{bahdanau2014neural}. \citet{sukhbaatar2015end} shows that this reduces the perplexity for language modeling. This approach has been successfully applied to question answering, where the answer is contained in a given paragraph~\citep{chen2016thorough,hermann2015teaching,kadlec2016text,sukhbaatar2015end}. Similarly, \citet{vinyals2015pointer} explores the use of this mechanism to reorder sequences of tokens. Their network uses an attention (or ``pointer'') over the input sequence to predict which element should be selected as the next output. \citet{gulcehre2016pointing} have shown that a similar mechanism called \emph{pointer softmax} could be used in the context of machine translation, to decide which word to copy from the source to the target. Independently of our work, \citet{merity2016pointer} apply the same mechanism to recurrent networks. Unlike our work, they use the current hidden activation as a representation of the current input (while we use it to represent the output). This requires additional learning of a transformation between the current representation and those in the past. The advantage of our approach is that we can scale to very large caches effortlessly. \section{Introduction} Language modeling is a core problem in natural language processing, with many applications such as machine translation~\citep{brown1993mathematics}, speech recognition~\citep{bahl1983maximum} or dialogue agents~\citep{stolcke2000dialogue}. While traditional neural network language models have obtained state-of-the-art performance in this domain~\citep{jozefowicz2016exploring,mikolov2010recurrent}, they lack the capacity to adapt to their recent history, limiting their application to dynamic environments~\citep{dodge2015evaluating}. A recent approach to solve this problem is to augment these networks with an external memory~\citep{graves2014neural,grefenstette2015learning,joulin2015inferring,sukhbaatar2015end}. These models can potentially use their external memory to store new information and adapt to a changing environment. While these networks have obtained promising results on language modeling datasets~\citep{sukhbaatar2015end}, they are quite computationally expensive. Typically, they have to learn a parametrizable mechanism to read or write to memory cells~\citep{graves2014neural,joulin2015inferring}. This may limit both the size of their usable memory as well as the quantity of data they can be trained on. In this work, we propose a very light-weight alternative that shares some of the properties of memory augmented networks, notably the capability to dynamically adapt over time. By minimizing the computational burden of the memory, we are able to use a larger memory and scale to bigger datasets. We observe in practice that this allows us to surpass the performance of memory augmented networks on different language modeling tasks. Our model shares some similarities with a model proposed by~\citet{kuhn1988speech}, called the \emph{cache model}. A cache model stores a simple representation of the recent past, often in the form of unigrams, and uses them for prediction~\citep{kuhn1990cache}.
This contextual information is quite cheap to store and can be accessed efficiently. It also does not need any training and can be applied on top of any model. This makes this model particularly interesting for domain adaptation~\citep{kneser1993dynamic}. Our main contribution is to propose a continuous version of the cache model, called the \emph{Neural Cache Model}, that can be adapted to any neural network language model. We store recent hidden activations and use them as a representation of the context. Using simply a dot product with the current hidden activations, they turn out to be extremely informative for prediction. Our model requires \emph{no training} and can be used on any pre-trained neural network. It also scales effortlessly to thousands of memory cells. We demonstrate the quality of the Neural Cache Model on several language modeling tasks and the LAMBADA dataset~\citep{paperno2016lambada}. \newpage \section{Neural Cache Model} The Neural Cache Model adds a cache-like memory to neural network language models. It exploits the hidden representations $h_t$ to define a probability distribution over the words in the cache. As illustrated in Figure~\ref{fig:model}, the cache stores pairs $(h_i, x_{i+1})$ of a hidden representation and the word which was generated based on this representation (we remind the reader that the vector $h_i$ encodes the history $x_i, ..., x_1$). At time $t$, we then define a probability distribution over words stored in the cache based on the stored hidden representations and the current one $h_t$ as \begin{equation*} p_{cache}( w \ | \ h_{1..t}, \ x_{1..t}) \propto \sum_{i=1}^{t-1} \mathbbm{1}_{\left\{ w = x_{i+1} \right\}} \exp (\theta h_t^{\top} h_i) \end{equation*} where the scalar $\theta$ is a parameter which controls the flatness of the distribution. When $\theta$ is equal to zero, the probability distribution over the history is uniform, and our model is equivalent to a unigram cache model~\citep{kuhn1990cache}. From the point of view of memory-augmented neural networks, the probability $p_{cache}( w \ | \ h_{1..t}, \ x_{1..t})$ given by the neural cache model can be interpreted as the probability of retrieving the word $w$ from the memory given the query $h_t$, where the desired answer is the next word $x_{t+1}$. Using previous hidden states as keys for the words in the memory, the memory lookup operator can be implemented with simple dot products between the keys and the query. In contrast to existing memory-augmented neural networks, the neural cache model avoids the need to learn the memory lookup operator. Such a cache can thus be added to a pre-trained recurrent neural language model without fine tuning of the parameters, and large cache sizes can be used with negligible impact on the computational cost of a prediction. \paragraph{Neural cache language model.} Following the standard practice in n-gram cache-based language models, the final probability of a word is given by the linear interpolation of the cache language model with the regular language model, obtaining: \begin{equation*} p(w \ | \ h_{1..t}, \ x_{1..t}) = (1 - \lambda) p_{vocab}(w \ | \ h_t) + \lambda p_{cache}(w \ | \ h_{1..t}, x_{1..t})\,.
\end{equation*} Instead of taking a linear interpolation between the two distributions with a fixed $\lambda$, we also consider a global normalization over the two distributions: \begin{equation*} p( w \ | \ h_{1..t}, \ x_{1..t}) \propto \left( \exp(h_t^{\top} o_w) + \sum_{i=1}^{t-1} \mathbbm{1}_{\left\{ w = x_{i+1} \right\}} \exp (\theta h_t^{\top} h_i + \alpha) \right)\,. \end{equation*} This corresponds to taking a softmax over the vocabulary and the words in the cache. The parameter $\alpha$ controls the weight of the cache component, and is the counterpart of the $\lambda$ parameter for linear interpolation. The addition of the neural cache to a recurrent neural language model inherits the advantages of $n$-gram caches in usual cache-based models: the probability distribution over words is updated online depending on the context, and out-of-vocabulary words can be predicted as soon as they have been seen at least once in the recent history. The neural cache also inherits the ability of the hidden states of recurrent neural networks to model longer-term contexts than small $n$-grams, and thus allows for a finer modeling of the current context than, e.g., unigram caches. \paragraph{Training procedure.} We first train the (recurrent) neural network language model, without the cache component. We only apply the cache model at test time, and choose the hyperparameters $\theta$ and $\lambda$ (or $\alpha$) on the validation set. A big advantage of our method is that it is very easy and cheap to apply with already trained neural models. There is no need to perform backpropagation over large contexts, and we can thus apply our method with large cache sizes (larger than one thousand). \section{Experiments} In this section, we evaluate our method on various language modeling datasets, which have different sizes and characteristics. On all datasets, we train a static recurrent neural network language model with LSTM units. We then use the hidden representations from this model to obtain our cache, which is interpolated with the static LSTM model. We also evaluate a unigram cache model interpolated with the static model as another baseline. \subsection{Small scale experiments} \paragraph{Datasets.} In this section, we describe experiments performed on two small datasets: the \texttt{Penn Tree Bank}~\citep{marcus1993building} and the \texttt{wikitext2}~\citep{merity2016pointer} datasets. The \texttt{Penn Tree Bank} dataset is made of articles from the Wall Street Journal, contains 929k training tokens and has a vocabulary size of 10k. The \texttt{wikitext2} dataset is derived from Wikipedia articles, contains 2M training tokens and has a vocabulary size of 33k. These datasets contain non-shuffled documents, therefore requiring models to capture inter-sentence dependencies to perform well. \paragraph{Implementation details.} We train recurrent neural network language models with 1024 LSTM units, regularized with dropout (probability of dropping out units equal to $0.65$). We use the Adagrad algorithm, with a learning rate of $0.2$, a batch size of $20$ and initial weights uniformly sampled in the range~$[-0.05, 0.05]$. We clip the norm of the gradient to $0.1$ and unroll the network for $30$ steps. We consider cache sizes on a logarithmic scale, from $50$ to $10,000$, and fit the cache hyperparameters on the validation set.
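To make the cache distribution and its interpolation with the static model concrete, here is a minimal NumPy sketch of the two quantities defined in the previous section; it assumes the hidden states $h_i$ and the output embeddings $o_w$ come from an already trained model, and all names (\texttt{cache\_h}, \texttt{theta}, \texttt{lam}) are illustrative rather than taken from the authors' code.
\begin{verbatim}
import numpy as np

def cache_distribution(h_t, cache_h, cache_words, vocab_size, theta=0.3):
    # p_cache(w | h_1..t, x_1..t): score each stored hidden state h_i against
    # the current one, then accumulate the scores per cached word x_{i+1}.
    # Assumes a non-empty cache.
    scores = np.exp(theta * (cache_h @ h_t))      # shape: (n_cache,)
    p = np.zeros(vocab_size)
    for word_id, s in zip(cache_words, scores):
        p[word_id] += s
    return p / p.sum()

def vocab_distribution(h_t, output_emb):
    # p_vocab(w | h_t): softmax over h_t . o_w for the static model.
    logits = output_emb @ h_t
    logits -= logits.max()                        # numerical stability
    e = np.exp(logits)
    return e / e.sum()

def neural_cache_prob(h_t, cache_h, cache_words, output_emb, lam=0.1, theta=0.3):
    # Linear interpolation of the static and cache distributions.
    p_v = vocab_distribution(h_t, output_emb)
    p_c = cache_distribution(h_t, cache_h, cache_words, len(p_v), theta)
    return (1.0 - lam) * p_v + lam * p_c
\end{verbatim}
The global-normalization variant would instead add the cache terms $\exp(\theta h_t^{\top} h_i + \alpha)$ directly to the vocabulary logits and take a single softmax over both.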
\paragraph{Results.} We report the perplexity on the validation sets in Figures~\ref{fig:ptb} and \ref{fig:wikitext2}, for various values of hyperparameters, for linear interpolation and global normalization. First, we observe that on both datasets, the linear interpolation method performs slightly better than the global normalization approach. It is also easier to apply in practice, and we thus use this method in the remainder of this paper. In Tables~\ref{tab:ptb} and \ref{tab:wikitext}, we report the test perplexity of our approach and state-of-the-art models. Our approach is competitive with previous models, in particular with the pointer sentinel LSTM model of \citet{merity2016pointer}. On \texttt{Penn Tree Bank}, we note that the improvement over the base model is similar for both methods. On the \texttt{wikitext2} dataset, both methods obtain similar results when using the same cache size ($100$ words). Since our method is computationally cheap, it is easy to increase the cache to larger values ($2,000$ words), leading to dramatic improvements~($30\%$ over the baseline, $12\%$ over a small cache of $100$ words). \subsection{Medium scale experiments} \paragraph{Datasets and implementation details.} In this section, we describe experiments performed on two medium-scale datasets: \texttt{text8} and \texttt{wikitext103}. Both datasets are derived from Wikipedia, but different pre-processing was applied. The \texttt{text8} dataset contains 17M training tokens and has a vocabulary size of 44k words, while the \texttt{wikitext103} dataset has a training set of size 103M, and a vocabulary size of 267k words. We use the same setting as in the previous section, except for the batch size (we use $128$) and dropout parameters (we use $0.45$ for \texttt{text8} and $0.25$ for \texttt{wikitext103}). Since both datasets have large vocabularies, we use the adaptive softmax~\citep{grave2016efficient} for faster training. \paragraph{Results.} We report the test perplexity as a function of the cache size in Figure~\ref{fig:cachesize}, for the neural cache model and a unigram cache baseline. We observe that our approach can exploit larger cache sizes, compared to the baseline. In Table~\ref{tab:wikitext}, we observe that the improvement in perplexity of our method over the LSTM baseline on \texttt{wikitext103} is smaller than for \texttt{wikitext2}~(approx. $16\%$ vs. $30\%$). The fact that improvements obtained with more advanced techniques decrease when the size of training data increases has already been observed by \citet{goodman2001bit}. Since both \texttt{wikitext} datasets share the same test set, we also observe that the LSTM baseline, trained on 103M tokens (\texttt{wikitext103}), strongly outperforms more sophisticated methods, trained on 2M tokens (\texttt{wikitext2}). For these two reasons, we believe that it is important to evaluate and compare methods on relatively large datasets. \subsection{Experiments on the lambada dataset} Finally, we report experiments carried out on the \texttt{lambada} dataset, introduced by \citet{paperno2016lambada}. This is a dataset of short passages extracted from novels. The goal is to predict the last word of the excerpt. This dataset was built so that human subjects solve the task perfectly when given the full context (approx. 4.6 sentences), but fail to do so when only given the sentence with the target word. Thus, most state-of-the-art language models fail on this dataset.
The \texttt{lambada} training set contains approximately 200M tokens and has a vocabulary size of $93,215$. We report results for our method in Table~\ref{tab:text8}, as well as the performance of baselines from \citet{paperno2016lambada}. Adding a neural cache model to the LSTM baseline strongly improves the performance on the \texttt{lambada} dataset. We also observe in Figure~\ref{fig:lambada} that the best interpolation parameter between the static model and the cache is not the same for the development and control sets. This is due to the fact that more than 83\% of passages of the development set include the target word, while this is true for only 14\% of the control set. Ideally, a model should have strong results on both sets. One possible generalization of our model would be to adapt the interpolation parameter based on the current vector representation of the history $h_t$. \section{Language modeling} A language model is a probability distribution over sequences of words. Let $V$ be the size of the vocabulary; each word is represented by a one-hot encoding vector $x$ in $\mathbb{R}^V = \mathcal{V}$, corresponding to its index in the vocabulary. Using the chain rule, the probability assigned to a sequence of words $x_1,\dots, x_T$ can be factorized as \begin{equation*} p(x_1, ..., x_T) = \prod_{t=1}^T p(x_t \ | \ x_{t-1}, ..., x_1). \end{equation*} Language modeling is often framed as learning the conditional probability over words, given the history~\citep{bahl1983maximum}. This conditional probability is traditionally approximated with non-parametric models based on counting statistics~\citep{goodman2001bit}. In particular, smoothed N-gram models~\citep{katz1987estimation,kneser1995improved} achieve good performance in practice~\citep{mikolov2011empirical}. Parametrized alternatives are either maximum entropy language models~\citep{rosenfeld1996maximum}, feedforward networks~\citep{bengio2003neural} or recurrent networks~\citep{mikolov2010recurrent}. In particular, recurrent networks are currently the best solution to approximate this conditional probability, achieving state-of-the-art performance on standard language modeling benchmarks~\citep{jozefowicz2016exploring,zilly2016recurrent}. \paragraph{Recurrent networks.} Assuming that we have a vector $h_t \in \mathbb{R}^d$ encoding the history~$x_t, ..., x_1$, the conditional probability of a word $w$ can be parametrized as \begin{equation*} p_{vocab}( w \ | \ x_t, ..., x_1) \propto \exp(h_t^{\top} o_w). \end{equation*} The history vector $h_t$ is computed by a recurrent network by recursively applying an equation of the form \begin{equation*} h_t = \Phi\left(x_t, h_{t-1} \right), \end{equation*} where $\Phi$ is a function depending on the architecture of the network. Several architectures for recurrent networks have been proposed, such as the Elman network~\citep{elman1990finding}, the long short-term memory (LSTM)~\citep{hochreiter1997long} or the gated recurrent unit~(GRU)~\citep{chung2014empirical}. One of the simplest recurrent networks is the Elman network~\citep{elman1990finding}, where \begin{equation*} h_t = \sigma \left( L x_t + R h_{t-1} \right), \end{equation*} where $\sigma$ is a non-linearity such as the logistic or tanh functions, $L \in \mathbb{R}^{d \times V}$ is a word embedding matrix and $R \in \mathbb{R}^{d \times d}$ is the recurrent matrix.
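As a concrete illustration of the recurrent parametrization above, the sketch below runs a few Elman steps followed by the softmax over the vocabulary; the dimensions, initialization and toy word indices are placeholders, not the configuration used in the experiments.
\begin{verbatim}
import numpy as np

d, V = 64, 1000                                  # illustrative sizes only
rng = np.random.default_rng(0)
L = rng.normal(scale=0.05, size=(d, V))          # word embedding matrix
R = rng.normal(scale=0.05, size=(d, d))          # recurrent matrix
O = rng.normal(scale=0.05, size=(V, d))          # output embeddings o_w

def elman_step(x_onehot, h_prev):
    # h_t = sigma(L x_t + R h_{t-1}), with sigma = tanh
    return np.tanh(L @ x_onehot + R @ h_prev)

def p_vocab(h_t):
    # p(w | history) proportional to exp(h_t . o_w)
    logits = O @ h_t
    logits -= logits.max()
    e = np.exp(logits)
    return e / e.sum()

h = np.zeros(d)
for word_id in [42, 7, 199]:                     # a toy sequence of word indices
    x = np.zeros(V)
    x[word_id] = 1.0
    h = elman_step(x, h)
print(p_vocab(h).argmax())                       # most likely next word id
\end{verbatim}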
The LSTM architecture is particularly interesting in the context of language modelling~\citep{jozefowicz2016exploring} and we refer the reader to \citet{graves2013speech} for details on this architecture. The parameters of recurrent neural network language models are learned by minimizing the negative log-likelihood of the training data. This objective function is usually minimized using stochastic gradient descent or variants such as Adagrad~\citep{duchi2011adaptive}. The gradient is computed using the truncated backpropagation through time algorithm~\citep{werbos1990backpropagation,williams1990efficient}. \paragraph{Cache model.} After a word appears once in a document, it is much more likely to appear again. As an example, the frequency of the word \emph{tiger} on the Wikipedia page of the same name is $2.8\%$, compared to $0.0037\%$ over the whole of Wikipedia. Cache models exploit this simple observation to improve $n$-gram language models by capturing long-range dependencies in documents. More precisely, these models have a cache component, which contains the words that appeared in the recent history (either the document or a fixed number of words). A simple language model, such as a unigram or smoothed bigram model, is fitted on the words of the cache and interpolated with the static language model (trained over a larger dataset). This technique has many advantages. First, this is a very efficient way to adapt a language model to a new domain. Second, such models can predict out-of-vocabulary words (OOV words) after seeing them once. Finally, this helps capture long-range dependencies in documents, in order to generate more coherent text.
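The count-based cache described in the last paragraph can be sketched in a few lines; this is only an illustration of the classical unigram-cache scheme, with made-up names and parameters rather than any published implementation.
\begin{verbatim}
from collections import Counter, deque

class UnigramCache:
    # Unigram cache over the last `size` words, interpolated with a static LM.
    def __init__(self, size=2000, lam=0.1):
        self.window = deque(maxlen=size)
        self.counts = Counter()
        self.lam = lam

    def add(self, word):
        if len(self.window) == self.window.maxlen:
            self.counts[self.window[0]] -= 1     # word about to fall out of the cache
        self.window.append(word)
        self.counts[word] += 1

    def prob(self, word, p_static):
        # (1 - lam) * p_static(word) + lam * p_cache(word)
        p_cache = self.counts[word] / max(len(self.window), 1)
        return (1.0 - self.lam) * p_static + self.lam * p_cache
\end{verbatim}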
Improving Neural Language Models with a Continuous Cache
1612.04426
(a) text8
[ "Model", "Test" ]
[ [ "LSTM-500 (Mikolov et al., 2014 )", "156" ], [ "SCRNN (Mikolov et al., 2014 )", "161" ], [ "MemNN (Sukhbaatar et al., 2015 )", "147" ], [ "LSTM-1024 (our implem.)", "121.8" ], [ "Neural cache model", "99.9" ] ]
Finally, we report experiments carried out on the lambada dataset, introduced by Paperno et al. This is a dataset of short passages extracted from novels. The goal is to predict the last word of the excerpt. This dataset was built so that human subjects solve the task perfectly when given the full context (approx. 4.6 sentences), but fail to do so when only given the sentence with the target word. Thus, most state-of-the-art language models fail on this dataset. The lambada training set contains approximately 200M tokens and has a vocabulary size of 93,215. Adding a neural cache model to the LSTM baseline strongly improves the performance on the lambada dataset. We also observe that the best interpolation parameter between the static model and the cache is not the same for the development and control sets. This is due to the fact that more than 83% of passages of the development set include the target word, while this is true for only 14% of the control set. Ideally, a model should have strong results on both sets. One possible generalization of our model would be to adapt the interpolation parameter based on the current vector representation of the history ht.
Enriching Complex Networks with Word Embeddings for Detecting Mild Cognitive Impairment from Speech Transcripts
1704.08088
Table 7: Classification accuracy achieved on Cinderella dataset manually processed to revise non-grammatical sentences.
[ "Classifier", "CN", "CNE", "LM", "BoW" ]
[ [ "SVM-Linear", "50", "65", "65", "52" ], [ "SVM-RBF", "[BOLD] 57", "[BOLD] 67", "[BOLD] 72", "[BOLD] 55" ], [ "KNN", "42", "47", "55", "50" ], [ "RF", "52", "47", "70", "45" ], [ "G-NB", "52", "65", "62", "45" ], [ "Ensemble", "52", "60", "[BOLD] 72", "45" ] ]
In general, CNE outperforms the approach using only complex networks (CN), while SVM (Linear or RBF kernel) provides higher accuracy than other machine learning algorithms. The results for the three datasets show that characterizing transcriptions into complex networks is competitive with other traditional methods, such as the use of linguistic metrics. In fact, among the three types of features, using enriched networks (CNE) provided the highest accuracies in two datasets (Cookie Theft and original Cinderella). For the ABCD dataset, which contains short narratives, the small length of the transcriptions may have had an effect, since BoW features led to the highest accuracy. In the case of the revised Cinderella dataset, segmented into sentences and capitalized as reported in Aluísio et al., the linguistic metrics (LM) yielded the highest accuracy (72% with SVM-RBF or the ensemble). However, this process of manually removing disfluencies demands time; therefore it is not practical for large-scale assessments. In this study, we employed metrics of topological properties of CN in a machine learning classification approach to distinguish between healthy patients and patients with MCI. To the best of our knowledge, these metrics have never been used to detect MCI in speech transcripts; CN were enriched with word embeddings to better represent short texts produced in neuropsychological assessments. The topological properties of CN outperform traditional linguistic metrics in individual classifiers’ results. Furthermore, we found that combining machine learning and multi-view learning can improve accuracy. The comparison with our results is not straightforward, though, because the databases used in the studies are different. There is a clear need for publicly available datasets to compare different methods, which would optimize the detection of MCI in elderly people.
\documentclass[11pt,a4paper]{article} \usepackage[hyperref]{acl2017} \usepackage[utf8]{inputenc} \aclfinalcopy % Uncomment this line for the final submission \newcommand\BibTeX{B{\sc ib}\TeX} \newcommand\Mark[1]{\textsuperscript#1} \newcommand{\mfar}[1]{ \textcolor{cyan}{\bf\small [#1 --MFAR]} } \newcommand{\supa}[1]{ \textcolor{red}{\bf\small [#1 --SUPA]} } \newcommand{\danr}[1]{ \textcolor{blue}{\bf\small [#1 --DANR]} } \title{Enriching Complex Networks with Word Embeddings for Detecting Mild Cognitive Impairment from Speech Transcripts} \author{ Leandro B. dos Santos\Mark{1}, Edilson A. Corr{\^e}a Jr\Mark{1}, Osvaldo N. Oliveira Jr\Mark{2}, Diego R. Amancio\Mark{1}, \\ \textbf{Letícia L. Mansur\Mark{3}}, \textbf{Sandra M. Aluísio\Mark{1}}\\ \Mark{1} Institute of Mathematics and Computer Science, University of S\~{a}o Paulo, S\~{a}o Carlos, S\~{a}o Paulo, Brazil\\ \Mark{2} S\~{a}o Carlos Institute of Physics, University of S\~{a}o Paulo, S\~{a}o Carlos, S\~{a}o Paulo, Brazil \\ \Mark{3} Department of Physiotherapy, Speech Pathology and Occupational Therapy, \\ University of S\~{a}o Paulo, S\~{a}o Paulo, S\~{a}o Paulo, Brazil\\ {\tt \{leandrobs,edilsonacjr,lamansur\}@usp.br}, {\tt chu@ifsc.usp.br}\\ {\tt \{diego,sandra\}@icmc.usp.br} } \date{} \begin{document} \maketitle \begin{abstract} Mild Cognitive Impairment (MCI) is a mental disorder difficult to diagnose. Linguistic features, mainly from parsers, have been used to detect MCI, but this is not suitable for large-scale assessments. MCI disfluencies produce non-grammatical speech that requires manual or high precision automatic correction of transcripts. In this paper, we modeled transcripts into complex networks and enriched them with word embedding (CNE) to better represent short texts produced in neuropsychological assessments. The network measurements were applied with well-known classifiers to automatically identify MCI in transcripts, in a binary classification task. A comparison was made with the performance of traditional approaches using Bag of Words (BoW) and linguistic features for three datasets: DementiaBank in English, and Cinderella and Arizona-Battery in Portuguese. Overall, CNE provided higher accuracy than using only complex networks, while Support Vector Machine was superior to other classifiers. CNE provided the highest accuracies for DementiaBank and Cinderella, but BoW was more efficient for the Arizona-Battery dataset probably owing to its short narratives. The approach using linguistic features yielded higher accuracy if the transcriptions of the Cinderella dataset were manually revised. Taken together, the results indicate that complex networks enriched with embedding is promising for detecting MCI in large-scale assessments. \end{abstract} \section{Introduction\label{sec:intro}} Mild Cognitive Impairment (MCI) can affect one or multiple cognitive domains (e.g. memory, language, visuospatial skills and executive functions), and may represent a pre-clinical stage of Alzheimer's disease (AD). The impairment that affects memory, referred to as amnestic MCI, is the most frequent, with the highest conversion rate for AD, at 15\% per year versus 1 to 2\% for the general population. Since dementias are chronic and progressive diseases, their early diagnosis ensures a greater chance of success to engage patients in non-pharmacological treatment strategies such as cognitive training, physical activity and socialization \citep{Art:Teixeira:2012:Non:pharmacological}. 
Language is one of the most efficient information sources to assess cognitive functions. Changes in language usage are frequent in patients with dementia and are normally first recognized by the patients themselves or their family members. Therefore, the automatic analysis of discourse production is promising in diagnosing MCI at early stages, which may address potentially reversible factors \citep{Art:Muangpaisan:2012:Prevalence}. Proposals to detect language-related impairment in dementias include machine learning \citep{Inc:Jarrold:2010:Language,Art:Roark:2011:Spoken,Art:Fraser:2014:Automated,Art:Fraser:2015:linguistic}, magnetic resonance imaging \citep{Artc:Dyrba:2015:Predicting}, and data screening tests added to demographic information \citep{Art:Weakley:2015:Neuropsychological}. Discourse production (mainly narratives) is attractive because it allows the analysis of linguistic microstructures, including phonetic-phonological, morphosyntactic and semantic-lexical components, as well as semantic-pragmatic macrostructures. Automated discourse analysis based on Natural Language Processing (NLP) resources and tools to diagnose dementias via machine learning methods has been used for English language \citep{Inp:Lehr:2012:Fully,Inp:Jarrold:2014:Aided,Inp:Orimaye:2014:Learning,Art:Fraser:2015:linguistic,Inp:Davy:2016:Towards} and for Brazilian Portuguese \citep{Inp:Aluisio:2016:Evaluating}. A variety of features are required for this analysis, including Part-of-Speech (PoS), syntactic complexity, lexical diversity and acoustic features. Producing robust tools to extract these features is extremely difficult because speech transcripts used in neuropsychological evaluations contain disfluencies (repetitions, revisions, paraphasias) and patient's comments about the task being evaluated. Another problem in using linguistic knowledge is the high dependence on manually created resources, such as hand-crafted linguistic rules and/or annotated corpora. Even when traditional statistical techniques (Bag of Words or ngrams) are applied, problems still appear in dealing with disfluencies, because mispronounced words will not be counted together. Indeed, other types of disfluencies (repetition, amendments, patient's comments about the task) will be counted, thus increasing the vocabulary. An approach applied successfully to several areas of NLP~\citep{Boo:Mihalcea:2011:Graph:NLP}, which may suffer less from the problems mentioned above, relies on the use of complex networks and graph theory. The word adjacency network model \citep{Art:Cancho:2001:Small,Art:Roxas:2010:prose:poetry,Art:Amancio:2012:Extractive,Art:Amancio:2015:Complex} has provided good results in text classification~\citep{Art:Arruda:2016:Classification:Texts} and related tasks, namely author detection~\citep{Art:Amancio:2015:Authorship}, identification of literary movements~\citep{Art:Amancio:2012:Literary:movements}, authenticity verification~\citep{10.1371/journal.pone.0067310} and word sense discrimination~\citep{0295-5075-98-1-18002}. In this paper, we show that speech transcripts (narratives or descriptions) can be modeled into complex networks that are enriched with word embedding in order to better represent short texts produced in these assessments. When applied to a machine learning classifier, the complex network features were able to distinguish between control participants and mild cognitive impairment participants. 
Discrimination of the two classes could be improved by combining complex networks with linguistic and traditional statistical features. With regard to the task of detecting MCI from transcripts, this paper is, to the best of our knowledge, the first to: a) show that classifiers using features extracted from transcripts modeled into complex networks enriched with word embedding present higher accuracy than using only complex networks for 3 datasets; and b) show that for languages that do not have competitive dependency and constituency parsers to exploit syntactic features, e.g. Brazilian Portuguese, complex networks enriched with word embedding constitute a source to extract new, language independent features from transcripts. \section{Related Work\label{sec:related}} Detection of memory impairment has been based on linguistic, acoustic, and demographic features, in addition to scores of neuropsychological tests. Linguistic and acoustic features were used to automatically detect aphasia \cite{Art:Fraser:2014:Automated}; and AD \cite{Art:Fraser:2015:linguistic} or dementia \cite{Inp:Orimaye:2014:Learning} in the public corpora of DementiaBank\footnote{\url{talkbank.org/DementiaBank/}}. Other studies distinguished different types of dementia \cite{Art:Garrard:2014:ML:WAB,Inp:Jarrold:2014:Aided}, in which speech samples were elicited using the Picnic picture of the Western Aphasia Battery \cite{Book:Kertesz:1982:Western}. \citet{Inp:Davy:2016:Towards} also used the Picnic scene to detect MCI, where the subjects were asked to write (by hand) a detailed description of the scene. As for automatic detection of MCI in narrative speech, \citet{Art:Roark:2011:Spoken} extracted speech features and linguistic complexity measures of speech samples obtained with the Wechsler Logical Memory (WLM) subtest \cite{Book:Wechsler:1997:WLM}, and \citet{Inp:Lehr:2012:Fully} fully automatized the WLM subtest. In this test, the examiner tells a short narrative to a subject, who then retells the story to the examiner, immediately and after a 30-minute delay. WLM scores are obtained by counting the number of story elements recalled. \citet{Inp:Toth:2015:Automatic} and \citet{Inp:Vincze:2016:Detecting} used short animated films to evaluate immediate and delayed recalls in MCI patients who were asked to talk about the first film shown, then about their previous day, and finally about another film shown last. \citet{Inp:Toth:2015:Automatic} adopted automatic speech recognition (ASR) to extract a phonetic level segmentation, which was used to calculate acoustic features. \citet{Inp:Vincze:2016:Detecting} used speech, morphological, semantic, and demographic features collected from their speech transcripts to automatically identify patients suffering from MCI. For the Portuguese language, machine learning algorithms were used to identify subjects with AD and MCI. \citet{Inp:Aluisio:2016:Evaluating} used a variety of linguistic metrics, such as syntactic complexity, idea density \citep{Inp:Cunha:2015:Automatic}, and text cohesion through latent semantics. NLP tools with high precision are needed to compute these metrics, which is a problem for Portuguese since no robust dependency or constituency parsers exist. Therefore, the transcriptions had to be manually revised; they were segmented into sentences, following a semantic-structural criterion and capitalization was applied. The authors also removed disfluencies and inserted omitted subjects when they were hidden, in order to reduce parsing errors. 
This process is obviously expensive, which has motivated us to use complex networks in the present study to model transcriptions and avoid a manual preprocessing step. \section{Modeling and Characterizing Texts as Complex Networks} The theory and concepts of complex networks have been used in several NLP tasks~\citep{Boo:Mihalcea:2011:Graph:NLP,Cong2014598}, such as text classification~\citep{Art:Arruda:2016:Classification:Texts}, summarization~\citep{Antiqueira2009584,Art:Amancio:2012:Extractive} and word sense disambiguation~\citep{0295-5075-98-5-58001}. In this study, we used the word co-occurrence model (also called word adjacency model) because most of the syntactical relations occur among neighboring words~\citep{Art:Cancho:2004:Patterns}. Each distinct word becomes a node and words that are adjacent in the text are connected by an edge. Mathematically, a network is defined as an undirected graph $G = \{V, E\}$, formed by a set $V = \{v_1, v_2, ..., v_n\}$ of nodes (words) and a set $E = \{e_1, e_2, ..., e_m\}$ of edges (co-occurrence) that are represented by an adjacency matrix $A$, whose elements $A_{ij}$ are equal to $1$ whenever there is an edge connecting nodes (words) $i$ and $j$, and equal to $0$ otherwise. Before modeling texts into complex networks, it is often necessary to do some preprocessing in the raw text. Preprocessing starts with tokenization where each document/text is divided into tokens (meaningful elements, e.g., words and punctuation marks) and then \textit{stopwords} and punctuation marks are removed, since they have little semantic meaning. One last step we decided to eliminate from the preprocessing pipeline is lemmatization, which transforms each word into its canonical form. This decision was made based on two factors. First, a recent work has shown that lemmatization has little or no influence when network modeling is adopted in related tasks~\citep{Art:Machicao:2016:Lemma:Influence}. Second, the lemmatization process requires part-of-speech (POS) tagging that may introduce undesirable noises/errors in the text, since the transcriptions in our work contain disfluencies. Another problem with transcriptions in our work is their size. As demonstrated by~\citet{Art:Amancio:2015:Short:Texts}, classification of small texts using networks can be impaired, since short texts have almost linear networks, and the topological measures of these networks have little or no information relevant to classification. To solve this problem, we adapted the approach of inducing language networks from word embeddings, proposed by \citet{Col:Perozzi:2014:Inducing} to enrich the networks with semantic information. In their work, language networks were generated from continuous word representations, in which each word is represented by a dense, real-valued vector obtained by training neural networks in the language model task (or variations, such as context prediction)~\cite{Art:Bengio:2003:Neural,Art:Collobert:2011:NLP,Art:Mikolov:2013:Exploiting,Inp:Mikolov:2013:Distributed}. This structure is known to capture syntactic and semantic information. \citet{Col:Perozzi:2014:Inducing}, in particular, take advantage of word embeddings to build networks where each word is a vertex and edges are defined by similarity between words established by the proximity of the word vectors. 
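As a rough illustration of the construction described above, the sketch below builds a word co-occurrence network with \texttt{networkx} and adds similarity edges from pre-computed word vectors; the vectors, similarity threshold and feature set are placeholders, not the exact resources or measurements used in this work.
\begin{verbatim}
import itertools
import networkx as nx
import numpy as np

def build_enriched_network(tokens, vectors, sim_threshold=0.8):
    # tokens: preprocessed word list (stopwords and punctuation already removed)
    # vectors: dict mapping words to NumPy embedding vectors
    g = nx.Graph()
    g.add_nodes_from(set(tokens))
    for w1, w2 in zip(tokens, tokens[1:]):       # co-occurrence (adjacency) edges
        if w1 != w2:
            g.add_edge(w1, w2, kind="cooccurrence")
    for w1, w2 in itertools.combinations(set(tokens), 2):
        if g.has_edge(w1, w2) or w1 not in vectors or w2 not in vectors:
            continue
        v1, v2 = vectors[w1], vectors[w2]
        cos = float(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2)))
        if cos >= sim_threshold:                 # enrichment edge from embeddings
            g.add_edge(w1, w2, kind="similarity")
    return g

def topological_features(g):
    # a few typical network measurements that could feed a classifier
    return {
        "avg_degree": sum(d for _, d in g.degree()) / g.number_of_nodes(),
        "clustering": nx.average_clustering(g),
        "density": nx.density(g),
    }
\end{verbatim}
Keeping the co-occurrence and similarity edges distinct (the \texttt{kind} attribute) makes it easy to compare plain and enriched networks on the same transcription.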
Following this methodology, in our model we added new edges to the co-occurrence networks considering similarities between words, that is, for all pairs of words in the text that were not connected, an edge was created if their vectors (from word embedding) had a cosine similarity higher than a given threshold. Figure~\ref{fig:complex:network} shows an example of a co-occurrence network enriched by similarity links (the dotted edges). The gain in information by enriching a co-occurrence network with semantic information is readily apparent in Figure ~\ref{fig:complex:network:transcriotion}. \section{Datasets, Features and Methods\label{sec:methods}} \subsection{Datasets} The datasets\footnote{All datasets are made available in the same representations used in this work, upon request to the authors.} used in our study consisted of: (i) manually segmented and transcribed samples from the DementiaBank and Cinderella story and (ii) transcribed samples of Arizona Battery for Communication Disorders of Dementia (ABCD) automatically segmented into sentences, since we are working towards a fully automated system to detect MCI in transcripts and would like to evaluate a dataset which was automatically processed. The DementiaBank dataset is composed of short English descriptions, while the Cinderella dataset contains longer Brazilian Portuguese narratives. ABCD dataset is composed of very short narratives, also in Portuguese. Below, we describe in further detail the datasets, participants, and the task in which they were used. \subsubsection{The Cookie Theft Picture Description Dataset} The clinical dataset used for the English language was created during a longitudinal study conducted by the University of Pittsburgh School of Medicine on Alzheimer’s and related dementia, funded by the National Institute of Aging. To be eligible for inclusion in the study, all participants were required to be above 44 years of age, have at least 7 years of education, no history of nervous system disorders nor be taking neuroleptic medication, have an initial Mini-Mental State Exam (MMSE) score of 10 or greater, and be able to give informed consent. The dataset contains transcripts of verbal interviews with AD and related Dementia patients, including those with MCI (for further details see \cite{Art:Becker:1994:Natural}). We used 43 transcriptions with MCI in addition to another 43 transcriptions sampled from 242 healthy elderly people to be used as the control group. Table \ref{tab:demographic:talkbank} shows the demographic information for the two diagnostic groups. For this dataset, interviews were conducted in English and narrative speech was elicited using the Cookie Theft picture \citep{Book:Goodglass:2001:Assessment} (Figure \ref{fig:book:Cookie:Theft} from \citet{Book:Goodglass:2001:Assessment} in Section \ref{sec:supplemental:example}). During the interview, patients were given the picture and were told to discuss everything they could see happening in the picture. The patients’ verbal utterances were recorded and then transcribed into the CHAT (Codes for the Human Analysis of Transcripts) transcription format \citep{Book:Macwhinney:2000:Childes}. We extracted the word-level transcript patient sentences from the CHAT files and discarded the annotations, as our goal was to create a fully automated system that does not require the input of a human annotator. We automatically removed filled pauses such as \emph{uh}, \emph{um} , \emph{er} , and \emph{ah} (e.g. 
\emph{uh it seems to be summer out}), short false starts (e.g. \emph{just t the ones}), and repetitions (e.g. \emph{mother's finished certain of the the dishes}), as in \cite{Art:Fraser:2015:linguistic}. The control group had an average of 9.58 sentences per narrative, with each sentence having an average of 9.18 words, while the MCI group had an average of 10.97 sentences per narrative, with 10.33 words per sentence on average. \subsubsection{The Cinderella Narrative Dataset} The dataset examined in this study included 20 subjects with MCI and 20 normal elderly control subjects, as diagnosed at the Medical School of the University of S\~ao Paulo (FMUSP). Table \ref{tab:demographic:data} shows the demographic information of the two diagnostic groups, which were also used in \citet{Inp:Aluisio:2016:Evaluating}. The criteria used to diagnose MCI came from \citet{Art:Petersen:2004:MCI}. Diagnoses were made by consensus of a multidisciplinary team consisting of psychiatrists, geriatricians, neurologists, neuropsychologists, speech pathologists, and occupational therapists. The inclusion criteria for the control group were elderly individuals with no cognitive deficits and with preserved functional capacity in everyday life. The exclusion criteria for the control group were: poorly controlled clinical diseases, uncompensated sensory deficits that interfered with test performance, and other neurological or psychiatric diagnoses associated with dementia or cognitive deficits, as well as use of medications in doses that affected cognition. Speech narrative samples were elicited by having participants tell the Cinderella story; participants were given as much time as they needed to examine a picture book illustrating the story (Figure \ref{fig:book:cinderella} in Section \ref{sec:supplemental}). When each participant had finished looking at the pictures, the examiner asked the subject to tell the story in their own words, as in \citet{Art:Saffran:1989:Quantitative}. The time was recorded, but no limit was imposed on the narrative length. If the participant had difficulty initiating or continuing speech, or took a long pause, the evaluator would use the stimulus question ``What happens next?'', seeking to encourage the participant to continue his/her narrative. When the subject was unable to proceed with the narrative, the examiner asked if he/she had finished the story and had something to add. Each speech sample was recorded and then manually transcribed at the word level following the NURC/SP N. 338 EF and 331 D2 transcription norms\footnote{\url{albertofedel.blogspot.com.br/2010_11_01_archive.html}}. Other tests were applied after the narrative, in the following sequence: the phonemic verbal fluency test, the action verbal fluency test, the Camel and Cactus test \citep{Art:Bozeat:2000:Non:Verbal}, and the Boston Naming test \citep{Book:Kaplan:2005:Boston}, in order to diagnose the groups. Since our ultimate goal is to create a fully automated system that does not require the input of a human annotator, we manually segmented sentences to simulate a high-quality ASR transcript with sentence segmentation, and we automatically removed disfluencies following the same guidelines as the TalkBank project. However, other disfluencies (revisions, elaborations, paraphasias, and comments about the task) were kept. The control group had an average of 30.80 sentences per narrative, and each sentence averaged 12.17 words.
As for the MCI group, it had an average of 29.90 sentences per narrative, and each sentence averaged 13.03 words. We also evaluated a different version of the dataset used in \citet{Inp:Aluisio:2016:Evaluating}, where narratives were manually annotated and revised to improve parsing results. The revision process was the following: (i) in the original transcript, segments with hesitations or repetitions of more than one word or segment of a single word were annotated to become a feature and then removed from the narrative to allow the extraction of features from parsing; (ii) empty emissions, which were comments unrelated to the topic of narration or confirmations, such as “\textit{n\'{e}}” (alright), were also annotated and removed; (iii) prolongations of vowels, short pauses and long pauses were also annotated and removed; and (iv) omitted subjects in sentences were inserted. In this revised dataset, the control group had an average of 45.10 sentences per narrative, and each sentence averaged 8.17 words. The MCI group had an average of 31.40 sentences per narrative, with each sentence averaging 10.91 words. \subsubsection{The ABCD Dataset} The subtest of immediate/delayed recall of narratives of the ABCD battery was administered to 23 participants with a diagnosis of MCI and 20 normal elderly control participants, as diagnosed at the Medical School of the University of S\~ao Paulo (FMUSP). MCI subjects produced 46 narratives while the control group produced 39 ones. In order to carry out experiments with a balanced corpus, as with the previous two datasets, we excluded seven transcriptions from the MCI group. We used the automatic sentence segmentation method referred to as DeepBond \cite{Inp:Treviso:2017:DeepBond} in the transcripts. Table \ref{tab:demographic:data:abcd} shows the demographic information. The control group had an average of 5.23 sentences per narrative, with 11 words per sentence on average, and the MCI group had an average of 4.95 sentences per narrative, with an average of 12.04 words per sentence. Interviews were conducted in Portuguese and the subject listened to the examiner read a short narrative. The subject then retold the narrative to the examiner twice: once immediately upon hearing it and again after a 30-minute delay \cite{Book:Bayles:1991:ABCD}. Each speech sample was recorded and then manually transcribed at the word level following the NURC/SP N. 338 EF and 331 D2 transcription norms. \subsection{Features} Features of three distinct natures were used to classify the transcribed texts: topological metrics of co-occurrence networks, linguistic features and bag of words representations. \subsubsection{Topological Characterization of Networks} Each transcription was mapped into a co-occurrence network, and then enriched via word embeddings using the cosine similarity of words. Since the occurrence of out-of-vocabulary words is common in texts of neuropsychological assessments, we used the method proposed by \citet{Art:Bojanowski:2016:Enriching} to generate word embeddings. This method extends the skip-gram model to use character-level information, with each word being represented as a bag of character $n$-grams. It provides some improvement in comparison with the traditional skip-gram model in terms of syntactic evaluation~\cite{Inp:Mikolov:2013:Distributed} but not for semantic evaluation. 
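Building on the co-occurrence sketch above, the following is a minimal sketch of the similarity-based enrichment applied to each transcription network. It assumes the word vectors are available as a plain word-to-array mapping (for instance exported from a subword-aware skip-gram model such as gensim's FastText); the toy graph, toy vectors, and threshold value are illustrative only, the experiments reporting dataset-specific thresholds of 0.7 and 0.4.

\begin{verbatim}
from itertools import combinations
import numpy as np
import networkx as nx

def cosine(u, v):
    """Cosine similarity between two dense word vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def enrich_with_embeddings(g, vectors, threshold):
    """Add an edge between every unconnected pair of words whose embedding
    cosine similarity reaches the threshold; words without a vector are skipped."""
    enriched = g.copy()
    words = [w for w in g.nodes() if w in vectors]
    for w1, w2 in combinations(words, 2):
        if not enriched.has_edge(w1, w2) and cosine(vectors[w1], vectors[w2]) >= threshold:
            enriched.add_edge(w1, w2, kind="similarity")
    return enriched

if __name__ == "__main__":
    # Toy co-occurrence chain and toy 2-d vectors (the experiments use 100-d vectors).
    g = nx.path_graph(["girl", "cookie", "jar", "mother"])
    vectors = {"girl": np.array([1.0, 0.1]), "cookie": np.array([0.2, 1.0]),
               "jar": np.array([0.3, 0.9]), "mother": np.array([0.95, 0.15])}
    enriched = enrich_with_embeddings(g, vectors, threshold=0.9)
    print(sorted(enriched.edges()))  # a girl--mother similarity edge is added
\end{verbatim}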
Once the network has been enriched, we characterize its topology using the following ten measurements: \begin{enumerate} \item \textbf{PageRank:} is a centrality measurement that reflects the relevance of a node based on its connections to other relevant nodes \citep{Inp:Brin:2012:PageRank}; \item \textbf{Betweenness:} is a centrality measurement that considers a node as relevant if it is highly accessed via shortest paths. The betweenness of a node $v$ is defined as the fraction of shortest paths going through node $v$; \item \textbf{Eccentricity:} of a node is calculated by measuring the shortest distance from the node to all other vertices in the graph and taking the maximum; \item \textbf{Eigenvector centrality:} is a measurement that defines the importance of a node based on its connectivity to high-rank nodes; \item \textbf{Average Degree of the Neighbors of a Node:} is the average of the degrees of all its direct neighbors; \item \textbf{Average Shortest Path Length of a Node:} is the average distance between this node and all other nodes of the network; \item \textbf{Degree:} is the number of edges connected to the node; \item \textbf{Assortativity Degree:} or degree correlation measures the tendency of nodes to connect to other nodes that have similar degree; \item \textbf{Diameter:} is defined as the maximum shortest path; \item \textbf{Clustering Coefficient:} measures the probability that two neighbors of a node are connected. \end{enumerate} Most of the measurements described above are local measurements, i.e. each node $i$ possesses a value $X_i$, so we calculated the average $\mu(X)$, standard deviation $\sigma(X)$ and skewness $\gamma(X)$ for each measurement~\citep{Art:Amancio:2015:Complex}. \subsubsection{Linguistic Features} Linguistic features for classification of neuropsychological assessments have been used in several studies \citep{Art:Roark:2011:Spoken,Inp:Jarrold:2014:Aided,Art:Fraser:2014:Automated,Inp:Orimaye:2014:Learning,Art:Fraser:2015:linguistic,Inp:Vincze:2016:Detecting,Inp:Davy:2016:Towards}. We used the Coh-Metrix\footnote{\url{cohmetrix.com}}\citep{Art:Graesser:2004:Coh:Metrix} tool to extract features from English transcripts, resulting in 106 features. The metrics are divided into eleven categories: Descriptive, Text Easability Principal Component, Referential Cohesion, Latent Semantic Analysis (LSA), Lexical Diversity, Connectives, Situation Model, Syntactic Complexity, Syntactic Pattern Density, Word Information, and Readability (Flesch Reading Ease, Flesch-Kincaid Grade Level, Coh-Metrix L2 Readability). For Portuguese, Coh-Metrix-Dementia \citep{Inp:Aluisio:2016:Evaluating} was used. The metrics affected by constituency and dependency parsing were not used because they are not robust with disfluencies. Metrics based on manual annotation (such as proportion short pauses, mean pause duration, mean number of empty words, and others) were also discarded. The metrics of Coh-Metrix-Dementia are divided into twelve categories: Ambiguity, Anaphoras, Basic Counts, Connectives, Co-reference Measures, Content Word Frequencies, Hypernyms, Logic Operators, Latent Semantic Analysis, Semantic Density, Syntactical Complexity, and Tokens. The metrics used are shown in detail in Section \ref{sec:supplemental:metrics}. In total, 58 metrics were used, from the 73 available on the website\footnote{\url{http://143.107.183.175:22380}}. 
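Before moving on to the bag-of-words representation, the sketch below illustrates how the ten topological measurements listed at the start of this section could be extracted with NetworkX and SciPy and summarized per network. It assumes the enriched network is connected (eccentricity and diameter are undefined otherwise), and the feature naming is an illustrative choice rather than the exact implementation used in this work.

\begin{verbatim}
import numpy as np
import networkx as nx
from scipy.stats import skew

def summarize(per_node, name):
    """Local (per-node) measurements are summarized by mean, std and skewness."""
    v = np.array(list(per_node.values()), dtype=float)
    return {name + "_mean": v.mean(), name + "_std": v.std(), name + "_skew": skew(v)}

def topological_features(g):
    """Feature vector for one (assumed connected) enriched co-occurrence network."""
    n = g.number_of_nodes()
    local = {
        "pagerank": nx.pagerank(g),
        "betweenness": nx.betweenness_centrality(g),
        "eccentricity": nx.eccentricity(g),
        "eigenvector": nx.eigenvector_centrality_numpy(g),
        "avg_neighbor_degree": nx.average_neighbor_degree(g),
        "avg_shortest_path": {v: sum(nx.shortest_path_length(g, source=v).values()) / (n - 1)
                              for v in g},
        "degree": dict(g.degree()),
        "clustering": nx.clustering(g),
    }
    features = {}
    for name, values in local.items():
        features.update(summarize(values, name))
    # Global measurements contribute a single value each.
    features["assortativity"] = nx.degree_assortativity_coefficient(g)
    features["diameter"] = nx.diameter(g)
    return features
\end{verbatim}

These per-network feature vectors are what the classifiers described next consume, alongside the linguistic metrics and the BoW representations.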
\subsubsection{Bag of Words} The representation of text collections under the BoW assumption (i.e., with no information relating to word order) has been a robust solution for text classification. In this methodology, transcripts are represented by a table in which the columns represent the terms (or existing words) in the transcripts and the values represent the frequency of a term in a document. \subsection{Classification Algorithms} In order to quantify how well the topological characterization of networks, the linguistic metrics, and the BoW features distinguish subjects with MCI from healthy controls, we employed four machine learning algorithms to induce classifiers from a training set. These techniques were Gaussian Naive Bayes (G-NB), $k$-Nearest Neighbor ($k$-NN), Support Vector Machine (SVM) with linear and radial basis function (RBF) kernels, and Random Forest (RF). We also combined these classifiers through ensemble and multi-view learning. In ensemble learning, multiple models/classifiers are generated and combined using a majority vote or the average of class probabilities to produce a single result~\citep{Book:Zhou:2012:Ensemble}. In multi-view learning, multiple classifiers are trained on different feature spaces and then combined to produce a single result. This approach is an elegant solution in comparison to combining all features in the same vector or space, for two main reasons. First, such a combination is not a straightforward step and may introduce noise, since the feature sets have different natures. Second, using a different classifier for each feature space allows a different weight to be given to each type of feature, and these weights can be learned by a regression method to improve the model. In this work, we used majority voting to combine the different feature spaces. \section{Experiments and Results\label{sec:experiments}} All experiments were conducted using Scikit-learn\footnote{\url{http://scikit-learn.org}} \citep{Art:Pedregosa:scikit-learn:2011}, with classifiers evaluated on the basis of classification accuracy, i.e., the proportion of narratives that were correctly classified. The evaluation was performed using 5-fold cross-validation instead of the more common 10-fold cross-validation because the datasets in our study were small and the test set would have shrunk, leading to less precise measurements of accuracy. The similarity threshold parameter was optimized, with the best values being $0.7$ for the Cookie Theft dataset and $0.4$ for both the Cinderella and ABCD datasets. We used the model proposed by \citet{Art:Bojanowski:2016:Enriching} with default parameters (100-dimensional embeddings, context window equal to 5, and 5 epochs) to generate word embeddings. We trained the models on Portuguese and English Wikipedia dumps from October and November 2016, respectively. Classification accuracies are given in Tables \ref{results:en} through \ref{results:pt:abcd}. CN, CNE, LM, and BoW denote, respectively, complex networks, complex networks enriched with embeddings, linguistic metrics, and Bag of Words; CNE-LM, CNE-BoW, LM-BoW, and CNE-LM-BoW refer to combinations of the feature spaces (multi-view learning) using majority voting. Cells with the ``--'' sign indicate that majority voting could not be applied because only two classifiers were involved. The last line reports the use of an ensemble of machine learning algorithms, in which majority voting was used for both ensemble and multi-view learning.
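As an illustration of the multi-view scheme described above, the following is a minimal sketch with scikit-learn, assuming binary labels (0 = control, 1 = MCI), three feature spaces, and an RBF-kernel SVM for every view; the classifier choice, the feature scaling, and the fold settings are assumptions for the sketch rather than the exact configuration used in this work.

\begin{verbatim}
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def multiview_cv_accuracy(views, y, n_splits=5, seed=42):
    """5-fold CV accuracy of majority voting over one classifier per feature space.
    `views` maps a view name (e.g., 'CNE', 'LM', 'BoW') to an (n_samples, n_features)
    array; `y` holds binary labels. An odd number of views avoids ties."""
    y = np.asarray(y)
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    any_view = next(iter(views.values()))
    accuracies = []
    for train_idx, test_idx in skf.split(any_view, y):
        predictions = []
        for X in views.values():
            clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
            clf.fit(X[train_idx], y[train_idx])
            predictions.append(clf.predict(X[test_idx]))
        # Majority vote: with binary 0/1 labels this reduces to a mean threshold.
        voted = (np.mean(predictions, axis=0) > 0.5).astype(int)
        accuracies.append(np.mean(voted == y[test_idx]))
    return float(np.mean(accuracies))
\end{verbatim}

An ensemble over different learning algorithms on a single feature space can be obtained analogously, for example with scikit-learn's VotingClassifier.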
In general, CNE outperforms the approach using only complex networks (CN), while SVM (linear or RBF kernel) provides higher accuracy than the other machine learning algorithms. The results for the three datasets show that characterizing transcriptions as complex networks is competitive with other traditional methods, such as the use of linguistic metrics. In fact, among the three types of features, using enriched networks (CNE) provided the highest accuracies on two datasets (Cookie Theft and original Cinderella). For the ABCD dataset, which contains short narratives, the short length of the transcriptions may have had an effect, since BoW features led to the highest accuracy. In the case of the revised Cinderella dataset, segmented into sentences and capitalized as reported in \citet{Inp:Aluisio:2016:Evaluating}, Table \ref{results:pt:andre} shows that the manual revision was an important factor, since the highest accuracies were obtained with the approach based on linguistic metrics (LM). However, this process of manually removing disfluencies demands time and therefore is not practical for large-scale assessments. Ensemble and multi-view learning were helpful for the Cookie Theft dataset, in which multi-view learning achieved the highest accuracy (65\% for narrative texts, a 3\% improvement over the best individual classifier). However, neither multi-view nor ensemble learning enhanced accuracy on the Cinderella dataset, where SVM-RBF on the CNE feature space achieved the highest accuracy (65\%). For the ABCD dataset, multi-view CNE-LM-BoW with SVM-RBF and KNN classifiers improved the accuracy by 4\% and 2\%, respectively. Somewhat surprising were the results of SVM with a linear kernel on the BoW feature space (75\% accuracy). \section{Conclusions and Future Work \label{sec:conclusion}} In this study, we employed metrics of the topological properties of CN in a machine learning classification approach to distinguish between healthy subjects and patients with MCI. To the best of our knowledge, these metrics had never been used to detect MCI in speech transcripts; the CN were enriched with word embeddings to better represent the short texts produced in neuropsychological assessments. The topological properties of CN outperform traditional linguistic metrics in the individual classifiers' results. Linguistic features depend on grammatically well-formed texts to yield good results, as can be seen in the results for the manually revised Cinderella dataset (Table \ref{results:pt:andre}). Furthermore, we found that combining classifiers through ensemble and multi-view learning can improve accuracy. The accuracies found here are comparable to the values reported by other authors, ranging from 60\% to 85\% \citep{Inp:Prud'hommeaux:2011:Alignment,Inp:Lehr:2012:Fully,Inp:Toth:2015:Automatic,Inp:Vincze:2016:Detecting}, which indicates that it is not easy to distinguish between healthy subjects and those with cognitive impairments. A direct comparison with our results is not straightforward, though, because the databases used in those studies are different. There is a clear need for publicly available datasets to compare different methods, which would help optimize the detection of MCI in elderly people. In future work, we intend to explore other methods to enrich CN, such as the Recurrent Language Model, and to use other metrics to characterize an adjacency network. The pursuit of these strategies is relevant because language is one of the most efficient information sources for evaluating cognitive functions and is commonly used in neuropsychological assessments.
As this work is ongoing, we will keep collecting new transcriptions of the ABCD retelling subtest to increase the corpus size and obtain more reliable results in our studies. Our final goal is to apply neuropsychological assessment batteries, such as the ABCD retelling subtest, to mobile devices, specifically tablets. This adaptation will enable large-scale applications in hospitals and facilitate the maintenance of application history in longitudinal studies, by storing the results in databases immediately after the test application. \section*{Acknowledgments} This work was supported by CAPES, CNPq, FAPESP, and Google Research Awards in Latin America. We would like to thank NVIDIA for their donation of GPU. \bibliographystyle{acl_natbib} \appendix \section{Supplementary Material} \label{sec:supplemental} Figure \ref{fig:book:Cookie:Theft} is Cookie Theft picture, which was used in DementiaBank project. Figure \ref{fig:book:cinderella} is a sequence of pictures from the Cinderella story, which were used to elicit speech narratives. \subsection{Examples of transcriptions\label{sec:supplemental:example}} Below follows an example of a transcript of the Cookie Theft dataset. You just want me to start talking ? Well the little girl is asking her brother we 'll say for a cookie . Now he 's getting the cookie one for him and one for her . He unbalances the step the little stool and he 's about to fall . And the lid 's off the cookie jar . And the mother is drying the dishes abstractly so she 's left the water running in the sink and it is spilling onto the floor . And there are two there 's look like two cups and a plate on the sink and board . And that boy 's wearing shorts and the little girl is in a short skirt . And the mother has an apron on . And she 's standing at the window . The window 's opened . It must be summer or spring . And the curtains are pulled back . And they have a nice walk around their house . And there 's this nice shrubbery it appears and grass . And there 's a big picture window in the background that has the drapes pulled off . There 's a not pulled off but pulled aside . And there 's a tree in the background . And the house with the kitchen has a lot of cupboard space under the sink board and under the cabinet from which the cookie you know cookies are being removed . Below follows an excerpt of a transcript of the Cinderella dataset. \textbf{Original transcript in Portuguese:} ela morava com a madrasta as irmã né e ela era diferenciada das três era maltratada ela tinha que fazer limpeza na casa toda no castelo alias e as irmãs não faziam nada até que um dia chegou um convite do rei ele ia fazer um baile e a madrasta então é colocou que todas as filhas elas iam menos a cinderela bom como ela não tinha o vestido sapato as coisas tudo então ela mesmo teve que fazer a roupa dela começou a fazer ... \textbf{Translation of the transcript in English:} she lived with the stepmother the sister right and she was differentiated from the three was mistreated she had to do the cleaning in the entire house actually in the castle and the sisters didn’t do anything until one day the king’s invitation arrived he would invite everyone to a ball and then the stepmother is said that all the daughters they would go except for cinderella well since she didn’t have a dress shoes all the things she had to make her own clothes she started to make them ... 
\subsection{Coh-Metrix-Dementia metrics\label{sec:supplemental:metrics}} \begin{enumerate} \item \textbf{Ambiguity}: verb ambiguity, noun ambiguity, adjective ambiguity, adverb ambiguity; \item \textbf{Anaphoras}: adjacent anaphoric references, anaphoric references; \item \textbf{Basic Counts}: Flesch index, number of words, number of sentences, number of paragraphs, words per sentence, sentences per paragraph, syllables per content word, verb incidence, noun incidence, adjective incidence, adverb incidence, pronoun incidence, content word incidence, function word incidence; \item \textbf{Connectives}: connectives incidence, additive positive connectives incidence, additive negative connectives incidence, temporal positive connectives incidence, temporal negative connectives incidence, causal positive connectives incidence, causal negative connectives incidence, logical positive connectives incidence, logical negative connectives incidence; \item \textbf{Co-reference Measures}: adjacent argument overlap, argument overlap, adjacent stem overlap, stem overlap, adjacent content word overlap; \item \textbf{Content Word Frequencies}: content word frequency, minimum content word frequency; \item \textbf{Hypernyms}: mean hypernyms per verb; \item \textbf{Logic Operators}: logic operators incidence, \emph{and} incidence, \emph{or} incidence, \emph{if} incidence, negation incidence; \item \textbf{Latent Semantic Analysis (LSA)}: average and standard deviation of the similarity between pairs of adjacent sentences in the text, average and standard deviation of the similarity between all sentence pairs in the text, average and standard deviation of the similarity between pairs of adjacent paragraphs in the text, average and standard deviation of the givenness of each sentence in the text; \item \textbf{Semantic Density}: content density; \item \textbf{Syntactical Complexity}: cross entropy only; \item \textbf{Tokens}: personal pronoun incidence, type-token ratio, Brunet index, Honoré statistic. \end{enumerate} \end{document}
Learning Symmetric Collaborative Dialogue Agents with Dynamic Knowledge Graph Embeddings
1704.07130
Table 7: Ablations of our model on the dev set show the importance of entity abstraction and message passing (K=2).
[ "Model", "ℓ" ]
[ [ "DynoNet (K = 2)", "[BOLD] 2.16" ], [ "DynoNet (K = 1)", "2.20" ], [ "DynoNet (K = 0)", "2.26" ], [ "DynoNet (K = 2) w/o entity abstraction", "2.21" ] ]
Our model has two novel designs: entity abstraction and message passing for node embeddings. When the number of message passing iterations, K, is reduced from 2 to 0, the loss consistently increases. Removing entity abstraction—meaning adding entity embeddings to node embeddings and the LSTM input embeddings—also degrades performance. This shows that DynoNet benefits from contextually-defined, structural node embeddings rather than ones based on a classic lookup table.
\section{Symmetric Collaborative Dialogue} \label{sec:problem} We begin by introducing a collaborative task between two agents and describe the human-human dialogue collection process. We show that our data exhibits diverse, interesting language phenomena. \subsection{Task Definition} In the symmetric collaborative dialogue setting, there are two agents, A and B, each with a private knowledge base---\kba{} and \kbb{}, respectively. Each knowledge base includes a list of \emph{items}, where each item has a value for each \emph{attribute}. For example, in the \MF{} setting, \reffig{example_dialog}, items are friends and attributes are name, school, etc. There is a shared item that A and B both have; their goal is to converse with each other to determine the shared item and select it. Formally, an agent is a mapping from its private KB and the dialogue thus far (a sequence of utterances) to the next utterance to generate or a selection. A dialogue is considered \textit{successful} when both agents correctly select the shared item. This setting has parallels in human-computer collaboration where each agent has complementary expertise. \input utterance_example \subsection{Data collection} \label{sec:data} We created a schema with 7 attributes and approximately 3K entities (attribute values). To elicit linguistic and strategic variants, we generate a random scenario for each task by varying the number of items (5 to 12), the number of attributes (3 or 4), and the distribution of values for each attribute (skewed to uniform). See \refapp{schema} and \ref{sec:scenario} for details of schema and scenario generation. We crowdsourced dialogues on AMT by randomly pairing up workers to perform the task within 5 minutes.\footnote{If the workers exceed the time limit, the dialogue is marked as unsuccessful (but still logged).} Our chat interface is shown in \reffig{website}. To discourage random guessing, we prevent workers from selecting more than once every 10 seconds. Our task was very popular and we collected 11K dialogues over a period of 13.5 hours.\footnote{Tasks are put up in batches; the total time excludes intervals between batches.} Of these, over 9K dialogues are successful. Unsuccessful dialogues are usually the result of one of the workers leaving the chat prematurely. \subsection{Dataset statistics} \label{sec:data_stat} We show the basic statistics of our dataset in \reftab{gen-statistics}. An utterance is defined as a message sent by one of the agents. The average utterance length is short due to the informality of the chat; however, an agent usually sends multiple utterances in one turn. Some example dialogues are shown in \reftab{human-bot-chats} and \refapp{human-bot-chats}. \footnotetext{Entity names are replaced by their entity types.} We categorize utterances into coarse types---\act{inform}, \act{ask}, \act{answer}, \act{greeting}, \act{apology}---by pattern matching (\refapp{utterance_type}). There are 7.4\% multi-type utterances, and 30.9\% of utterances contain more than one entity. In \reftab{type_example}, we show example utterances with rich semantics that cannot be sufficiently represented by traditional slot-value pairs. Some of the standard ones are also non-trivial due to coreference and logical compositionality. Our dataset also exhibits some interesting communication phenomena. Coreference occurs frequently when people check multiple attributes of one item. Sometimes mentions are dropped, as an utterance simply continues from the partner's utterance.
People occasionally use external knowledge to group items with out-of-schema attributes (e.g., gender based on names, location based on schools). We summarize these phenomena in \reftab{phenomenon_example}. In addition, we find 30\% utterances involve cross-talk where the conversation does not progress linearly (e.g., italic utterances in \reffig{example_dialog}), a common characteristic of online chat~\citep{ivanovic2005dialogue}. %, for example, when a single answer is received in response to multiple questions. One strategic aspect of this task is choosing the order of attributes to mention. We find that people tend to start from attributes with fewer unique values, e.g., \utterance{all my friends like morning} given the \kbb{} in \reftab{human-bot-chats}, as intuitively it would help exclude items quickly given fewer values to check.\footnote{Our goal is to model human behavior thus we do not discuss the optimal strategy here.} We provide a more detailed analysis of strategy in \refsec{eval} and \refapp{strategy}. \section{Introduction} \label{sec:intro} Current task-oriented dialogue systems~\cite{young2013pomdp,wen2017network,dhingra2017information} require a pre-defined dialogue state (e.g., slots such as food type and price range for a restaurant searching task) and a fixed set of dialogue acts (e.g., request, inform). However, human conversation often requires richer dialogue states and more nuanced, pragmatic dialogue acts. Recent open-domain chat systems ~\citep{shang2015neural,serban2015building,sordoni2015neural,li2016persona,lowe2017ubuntu,mei2017coherent} learn a mapping directly from previous utterances to the next utterance. While these models capture open-ended aspects of dialogue, the lack of structured dialogue state prevents them from being directly applied to settings that require interfacing with structured knowledge. \input intro_example In order to bridge the gap between the two types of systems, we focus on a \emph{symmetric collaborative dialogue} setting, which is task-oriented but encourages open-ended dialogue acts. In our setting, two agents, each with a private list of items with attributes, must communicate to identify the unique shared item. Consider the dialogue in \reffig{example_dialog}, in which two people are trying to find their mutual friend. By asking \utterance{do you have anyone who went to columbia?}, B is suggesting that she has some Columbia friends, and that they probably work at Google. Such conversational implicature is lost when interpreting the utterance as simply an information request. In addition, it is hard to define a structured state that captures the diverse semantics in many utterances (e.g., defining ``most of'', ``might be''; see details in \reftab{type_example}). To model both structured and open-ended context, we propose the \emph{\underline{Dy}namic K\underline{no}wledge Graph \underline{Net}work} (\dkg{}), in which the dialogue state is modeled as a knowledge graph with an embedding for each node (\refsec{overview}). Our model is similar to EntNet~\cite{henaff2017tracking} in that node/entity embeddings are updated recurrently given new utterances. The difference is that we structure entities as a knowledge graph; as the dialogue proceeds, new nodes are added and new context is propagated on the graph. An attention-based mechanism~\cite{bahdanau2015neural} over the node embeddings drives generation of new utterances. 
Our model's use of knowledge graphs captures the grounding capability of classic task-oriented systems and the graph embedding provides the representational flexibility of neural models. The naturalness of communication in the symmetric collaborative setting enables large-scale data collection: We were able to crowdsource around 11K human-human dialogues on Amazon Mechanical Turk (AMT) in less than 15 hours.\footnote{The dataset is available publicly at \url{https://stanfordnlp.github.io/cocoa/}.} We show that the new dataset calls for more flexible representations beyond fully-structured states (\refsec{data}). In addition to conducting the third-party human evaluation adopted by most work~\cite{liu2016evaluate,li2016diversity,li2016rl}, we also conduct partner evaluation~\cite{wen2017network} where AMT workers rate their conversational partners (other workers or our models) based on fluency, correctness, cooperation, and human-likeness. We compare \dkg{} with baseline neural models and a strong rule-based system. The results show that \dkg{} can perform the task with humans efficiently and naturally; it also captures some strategic aspects of human-human dialogues. The contributions of this work are: (i) a new symmetric collaborative dialogue setting and a large dialogue corpus that pushes the boundaries of existing dialogue systems; (ii) \dkg{}, which integrates semantically rich utterances with structured knowledge to represent open-ended dialogue states; (iii) multiple automatic metrics based on bot-bot chat and a comparison of third-party and partner evaluation. We study a \emph{symmetric collaborative dialogue} setting in which two agents, each with private knowledge, must strategically communicate to achieve a common goal. The open-ended dialogue state in this setting poses new challenges for existing dialogue systems. We collected a dataset of 11K human-human dialogues, which exhibits interesting lexical, semantic, and strategic elements. To model both structured knowledge and unstructured language, we propose a neural model with dynamic knowledge graph embeddings that evolve as the dialogue progresses. Automatic and human evaluations show that our model is both more effective at achieving the goal and more human-like than baseline neural and rule-based models. \section{Discussion and Related Work} \label{sec:discussion} There has been a recent surge of interest in end-to-end task-oriented dialogue systems, though progress has been limited by the size of available datasets~\cite{serban2015survey}. Most work focuses on information-querying tasks, using Wizard-of-Oz data collection~\cite{williams2016dstc,maluuba2016frames} or simulators~\cite{bordes2017learning,li2016user}, In contrast, collaborative dialogues are easy to collect as natural human conversations, and are also challenging enough given the large number of scenarios and diverse conversation phenomena. There are some interesting strategic dialogue datasets---settlers of Catan~\cite{afantenos2012developing} (2K turns) and the cards corpus~\cite{potts2012cards} (1.3K dialogues), as well as work on dialogue strategies~\cite{keizer2017negotiation,vogel2013emergence}, though no full dialogue system has been built for these datasets. Most task-oriented dialogue systems follow the POMDP-based approach~\cite{williams2007partially,young2013pomdp}. 
Despite their success~\cite{wen2017network,dhingra2017information,su2016continuous}, the requirement for handcrafted slots limits their scalability to new domains and burdens data collection with extra state labeling. To go past this limit, \newcite{bordes2017learning} proposed a Memory-Networks-based approach without domain-specific features. However, the memory is unstructured and interfacing with KBs relies on API calls, whereas our model embeds both the dialogue history and the KB structurally. \newcite{williams2017dialog} use an LSTM to automatically infer the dialogue state, but as they focus on dialogue control rather than the full problem, the response is modeled as a templated action, which restricts the generation of richer utterances. Our network architecture is most similar to EntNet~\cite{henaff2017tracking}, where memories are also updated by input sentences recurrently. The main difference is that our model allows information to be propagated between structured entities, which is shown to be crucial in our setting (\refsec{ablation}). Our work is also related to language generation conditioned on knowledge bases~\cite{mei2016what,kiddon2016globally}. One challenge here is to avoid generating false or contradicting statements, which is currently a weakness of neural models. Our model is mostly accurate when generating facts and answering existence questions about a single entity, but will need a more advanced attention mechanism for generating utterances involving multiple entities, e.g., attending to items or attributes first, then selecting entities; generating high-level concepts before composing them to natural tokens~\cite{serban2017multiresolution}. In conclusion, we believe the symmetric collaborative dialogue setting and our dataset provide unique opportunities at the interface of traditional task-oriented dialogue and open-domain chat. We also offered \dkg{} as a promising means for open-ended dialogue state representation. Our dataset facilitates the study of pragmatics and human strategies in dialogue---a good stepping stone towards learning more complex dialogues such as negotiation. \renewcommand{\bot}[1]{{\bf #1}} \definecolor{LightCyan}{rgb}{0.88,1,1} \documentclass[11pt,a4paper]{article} \usepackage[hyperref]{acl2017} \usepackage[utf8]{inputenc} \newif\ifcomment \aclfinalcopy % Uncomment this line for the final submission \input std-macros \input macros \title{Learning Symmetric Collaborative Dialogue Agents with Dynamic Knowledge Graph Embeddings} \author{ He He \and Anusha Balakrishnan \and Mihail Eric \and Percy Liang \\ Computer Science Department, Stanford University \\ {\tt \{hehe,anusha28,meric,pliang\}@cs.stanford.edu} } \date{} \begin{document} \maketitle \begin{abstract} \input abstract \end{abstract} \input intro \input problem \input approach \input experiments \input discussion \paragraph{Acknowledgments.} This work is supported by DARPA Communicating with Computers (CwC) program under ARO prime contract no. W911NF-15-1-0462. Mike Kayser worked on an early version of the project while he was at Stanford. We also thank members of the Stanford NLP group for insightful discussions. \paragraph{Reproducibility.} All code, data, and experiments for this paper are available on the CodaLab platform: {\footnotesize \url{https://worksheets.codalab.org/worksheets/0xc757f29f5c794e5eb7bfa8ca9c945573}}. 
\bibliographystyle{acl_natbib} \clearpage \appendix \input appendix \end{document} \section{Experiments} \label{sec:experiments} We compare our model with a rule-based system and a baseline neural model. Both automatic and human evaluations are conducted to test the models in terms of fluency, correctness, cooperation, and human-likeness. The results show that \dkg{} is able to converse with humans in a coherent and strategic way. \subsection{Setup} We randomly split the data into train, dev, and test sets (8:1:1). We use a one-layer LSTM with 100 hidden units and 100-dimensional word vectors for both the encoder and the decoder (\refsec{gen}). Each successful dialogue is turned into two examples, each from the perspective of one of the two agents. We maximize the log-likelihood of all utterances in the dialogues. The parameters are optimized by AdaGrad~\cite{duchi10adagrad} with an initial learning rate of 0.5. We train for at least 10 epochs; after that, training stops if there is no improvement on the dev set for 5 epochs. By default, we perform $K=2$ iterations of message passing to compute node embeddings (\refsec{graph_embed}). For decoding, we sequentially sample from the output distribution with a softmax temperature of 0.5.\footnote{ Since selection is a common `utterance' in our dataset and neural generation models are susceptible to over-generating common sentences, we halve its probability during sampling. } Hyperparameters are tuned on the dev set. \ab{Maybe add a reminder of what \dkg{} is (the acronym was introduced and used only once in the intro section) and explicitly show how \skg{} differs from it.} We compare \dkg{} with its static cousin (\skg{}) and a rule-based system (\rl{}). \skg{} uses $G_0$ throughout the dialogue; thus, the dialogue history is completely contained in the LSTM states instead of being injected into the knowledge graph. \rl{} maintains weights for each entity and each item in the KB to decide what to talk about and which item to select. It has a pattern-matching semantic parser, a rule-based policy, and a templated generator. See \refapp{rule} for details. \subsection{Evaluation} \label{sec:eval} We test our systems in two interactive settings: bot-bot chat and bot-human chat. We perform both automatic evaluation and human evaluation. \paragraph{Automatic Evaluation.} First, we compute the cross-entropy ($\ell$) of a model on the test data. As shown in \reftab{auto-eval}, \dkg{} has the lowest test loss. Next, we have a model chat with itself on the scenarios from the test set.\footnote{ We limit the number of turns in bot-bot chat to be the maximum number of turns humans took in the test set (46 turns).} We evaluate the chats with respect to language variation, effectiveness, and strategy. For language variation, we report the average utterance length $L_u$ and the unigram entropy $H$ in \reftab{auto-eval}. Compared to \rl{}, the neural models tend to generate shorter utterances~\cite{li2016diversity,serban2017hierarchical}. However, they are more diverse; for example, questions are asked in multiple ways such as \utterance{Do you have ...}, \utterance{Any friends like ...}, \utterance{What about ...}. At the discourse level, we expect the distribution of a bot's utterance types to match that of humans. We show the percentages of each utterance type in \reftab{auto-eval}.
For \rl{}, the decision about which action to take is written in the rules, while \skg{} and \dkg{} learned to behave in a more human-like way, frequently informing and asking questions. To measure effectiveness, we compute the overall success rate ($C$) and the success rate per turn ($C_T$) and per selection ($C_S$). As shown in \reftab{auto-eval}, humans are the best at this game, followed by \rl{} which is comparable to \dkg{}. Next, we investigate the strategies leading to these results. An agent needs to decide which entity/attribute to check first to quickly reduce the search space. We hypothesize that humans tend to first focus on a majority entity and an attribute with fewer unique values (\refsec{data_stat}). For example, in the scenario in \reftab{human-bot-chats}, \ent{time} and \ent{location} are likely to be mentioned first. We show the average frequency of first-mentioned entities (\#Ent$_1$) and the average number of unique values for first-mentioned attributes ($|\text{Attr}_1|$) in \reftab{auto-eval}.\footnote{ Both numbers are normalized to $[0,1]$ with respect to all entities/attributes in the corresponding KB.} Both \dkg{} and \skg{} successfully match human's starting strategy by favoring entities of higher frequency and attributes of smaller domain size. To examine the overall strategy, we show the average number of attributes (\#Attr) and entities (\#Ent) mentioned during the conversation in \reftab{auto-eval}. Humans and \dkg{} strategically focus on a few attributes and entities, whereas \rl{} needs almost twice entities to achieve similar success rates. This suggests that the effectiveness of \rl{} mainly comes from large amounts of unselective information, which is consistent with comments from their human partners. \input human_bot_example \paragraph{Partner Evaluation.} We generated 200 new scenarios and put up the bots on AMT using the same chat interface that was used for data collection. The bots follow simple turn-taking rules explained in \refapp{turn-taking}. Each AMT worker is randomly paired with \rl{}, \skg{}, \dkg{}, or another human (but the worker doesn't know which), and we make sure that all four types of agents are tested in each scenario at least once. At the end of each dialogue, humans are asked to rate their partner in terms of fluency, correctness, cooperation, and human-likeness from 1 (very bad) to 5 (very good), along with optional comments. We show the average ratings (with significance tests) in \reftab{human-eval} and the histograms in \refapp{histograms}. In terms of fluency, the models have similar performance since the utterances are usually short. Judgment on correctness is a mere guess since the evaluator cannot see the partner's KB; we will analyze correctness more meaningfully in the third-party evaluation below. Noticeably, \dkg{} is more cooperative than the other models. As shown in the example dialogues in \reftab{human-bot-chats}, \dkg{} cooperates smoothly with the human partner, e.g., replying with relevant information about morning/indoor friends when the partner mentioned that all her friends prefer morning and most like indoor. \skg{} starts well but doesn't follow up on the morning friend, presumably because the \ent{morning} node is not updated dynamically when mentioned by the partner. \rl{} follows the partner poorly. In the comments, the biggest complaint about \rl{} was that it was not `listening' or `understanding'. Overall, \dkg{} achieves better partner satisfaction, especially in cooperation. 
\paragraph{Third-party Evaluation.} We also created a \emph{third-party evaluation} task, where an independent AMT worker is shown a conversation and the KB of one of the agents; she is asked to rate the same aspects of the agent as in the partner evaluation and provide justifications. Each agent in a dialogue is rated by at least 5 people. The average ratings and histograms are shown in \reftab{human-eval} and \refapp{histograms}. For correctness, we see that \rl{} has the best performance since it always tells the truth, whereas humans can make mistakes due to carelessness and the neural models can generate false information. For example, in \reftab{human-bot-chats}, \dkg{} `lied' when saying that it has a morning friend who likes outdoor. Surprisingly, there is a discrepancy between the two evaluation modes in terms of cooperation and human-likeness. Manual analysis of the comments indicates that third-party evaluators focus less on the dialogue strategy and more on linguistic features, probably because they were not fully engaged in the dialogue. For example, justification for cooperation often mentions frequent questions and timely answers, less attention is paid to what is asked about though. For human-likeness, partner evaluation is largely correlated with coherence (e.g., not repeating or ignoring past information) and task success, whereas third-party evaluators often rely on informality (e.g., usage of colloquia like ``hiya'', capitalization, and abbreviation) or intuition. Interestingly, third-party evaluators noted most phenomena listed in \reftab{phenomenon_example} as indicators of human-beings, e.g., correcting oneself, making chit-chat other than simply finishing the task. See example comments in \refapp{comments}. \subsection{Ablation Studies} \label{sec:ablation} Our model has two novel designs: entity abstraction and message passing for node embeddings. \reftab{ablation} shows what happens if we ablate these. When the number of message passing iterations, $K$, is reduced from 2 to 0, the loss consistently increases. Removing entity abstraction---meaning adding entity embeddings to node embeddings and the LSTM input embeddings---also degrades performance. This shows that \dkg{} benefits from contextually-defined, structural node embeddings rather than ones based on a classic lookup table. 
\newcommand{\eos}{$||$} \renewcommand{\bot}[1]{{\bf #1}} \definecolor{LightCyan}{rgb}{0.88,1,1} \newcommand\sa{\ensuremath{\mathcal{a}}} \newcommand\sd{\ensuremath{\mathcal{d}}} \newcommand\se{\ensuremath{\mathcal{e}}} \newcommand\sg{\ensuremath{\mathcal{g}}} \newcommand\sh{\ensuremath{\mathcal{h}}} \newcommand\si{\ensuremath{\mathcal{i}}} \newcommand\sj{\ensuremath{\mathcal{j}}} \newcommand\sk{\ensuremath{\mathcal{k}}} \newcommand\sm{\ensuremath{\mathcal{m}}} \newcommand\sn{\ensuremath{\mathcal{n}}} \newcommand\so{\ensuremath{\mathcal{o}}} \newcommand\sq{\ensuremath{\mathcal{q}}} \newcommand\sr{\ensuremath{\mathcal{r}}} \newcommand\st{\ensuremath{\mathcal{t}}} \newcommand\su{\ensuremath{\mathcal{u}}} \newcommand\sv{\ensuremath{\mathcal{v}}} \newcommand\sw{\ensuremath{\mathcal{w}}} \newcommand\sx{\ensuremath{\mathcal{x}}} \newcommand\sy{\ensuremath{\mathcal{y}}} \newcommand\sz{\ensuremath{\mathcal{z}}} \newcommand\sA{\ensuremath{\mathcal{A}}} \newcommand\sB{\ensuremath{\mathcal{B}}} \newcommand\sC{\ensuremath{\mathcal{C}}} \newcommand\sD{\ensuremath{\mathcal{D}}} \newcommand\sE{\ensuremath{\mathcal{E}}} \newcommand\sF{\ensuremath{\mathcal{F}}} \newcommand\sG{\ensuremath{\mathcal{G}}} \newcommand\sH{\ensuremath{\mathcal{H}}} \newcommand\sI{\ensuremath{\mathcal{I}}} \newcommand\sJ{\ensuremath{\mathcal{J}}} \newcommand\sK{\ensuremath{\mathcal{K}}} \newcommand\sL{\ensuremath{\mathcal{L}}} \newcommand\sM{\ensuremath{\mathcal{M}}} \newcommand\sN{\ensuremath{\mathcal{N}}} \newcommand\sO{\ensuremath{\mathcal{O}}} \newcommand\sP{\ensuremath{\mathcal{P}}} \newcommand\sQ{\ensuremath{\mathcal{Q}}} \newcommand\sR{\ensuremath{\mathcal{R}}} \newcommand\sS{\ensuremath{\mathcal{S}}} \newcommand\sT{\ensuremath{\mathcal{T}}} \newcommand\sU{\ensuremath{\mathcal{U}}} \newcommand\sV{\ensuremath{\mathcal{V}}} \newcommand\sW{\ensuremath{\mathcal{W}}} \newcommand\sX{\ensuremath{\mathcal{X}}} \newcommand\sY{\ensuremath{\mathcal{Y}}} \newcommand\sZ{\ensuremath{\mathcal{Z}}} \newcommand\ba{\ensuremath{\mathbf{a}}} \newcommand\bb{\ensuremath{\mathbf{b}}} \newcommand\bc{\ensuremath{\mathbf{c}}} \newcommand\bd{\ensuremath{\mathbf{d}}} \newcommand\be{\ensuremath{\mathbf{e}}} \newcommand\bg{\ensuremath{\mathbf{g}}} \newcommand\bh{\ensuremath{\mathbf{h}}} \newcommand\bi{\ensuremath{\mathbf{i}}} \newcommand\bj{\ensuremath{\mathbf{j}}} \newcommand\bk{\ensuremath{\mathbf{k}}} \newcommand\bl{\ensuremath{\mathbf{l}}} \newcommand\bn{\ensuremath{\mathbf{n}}} \newcommand\bo{\ensuremath{\mathbf{o}}} \newcommand\bp{\ensuremath{\mathbf{p}}} \newcommand\bq{\ensuremath{\mathbf{q}}} \newcommand\br{\ensuremath{\mathbf{r}}} \newcommand\bs{\ensuremath{\mathbf{s}}} \newcommand\bt{\ensuremath{\mathbf{t}}} \newcommand\bu{\ensuremath{\mathbf{u}}} \newcommand\bv{\ensuremath{\mathbf{v}}} \newcommand\bw{\ensuremath{\mathbf{w}}} \newcommand\bx{\ensuremath{\mathbf{x}}} \newcommand\by{\ensuremath{\mathbf{y}}} \newcommand\bz{\ensuremath{\mathbf{z}}} \newcommand\bA{\ensuremath{\mathbf{A}}} \newcommand\bB{\ensuremath{\mathbf{B}}} \newcommand\bC{\ensuremath{\mathbf{C}}} \newcommand\bD{\ensuremath{\mathbf{D}}} \newcommand\bE{\ensuremath{\mathbf{E}}} \newcommand\bF{\ensuremath{\mathbf{F}}} \newcommand\bG{\ensuremath{\mathbf{G}}} \newcommand\bH{\ensuremath{\mathbf{H}}} \newcommand\bI{\ensuremath{\mathbf{I}}} \newcommand\bJ{\ensuremath{\mathbf{J}}} \newcommand\bK{\ensuremath{\mathbf{K}}} \newcommand\bL{\ensuremath{\mathbf{L}}} \newcommand\bM{\ensuremath{\mathbf{M}}} \newcommand\bN{\ensuremath{\mathbf{N}}} \newcommand\bO{\ensuremath{\mathbf{O}}} 
\newcommand\bP{\ensuremath{\mathbf{P}}} \newcommand\bQ{\ensuremath{\mathbf{Q}}} \newcommand\bR{\ensuremath{\mathbf{R}}} \newcommand\bS{\ensuremath{\mathbf{S}}} \newcommand\bT{\ensuremath{\mathbf{T}}} \newcommand\bU{\ensuremath{\mathbf{U}}} \newcommand\bV{\ensuremath{\mathbf{V}}} \newcommand\bW{\ensuremath{\mathbf{W}}} \newcommand\bX{\ensuremath{\mathbf{X}}} \newcommand\bY{\ensuremath{\mathbf{Y}}} \newcommand\bZ{\ensuremath{\mathbf{Z}}} \newcommand\Ba{\ensuremath{\mathbb{a}}} \newcommand\Bb{\ensuremath{\mathbb{b}}} \newcommand\Bc{\ensuremath{\mathbb{c}}} \newcommand\Bd{\ensuremath{\mathbb{d}}} \newcommand\Be{\ensuremath{\mathbb{e}}} \newcommand\Bf{\ensuremath{\mathbb{f}}} \newcommand\Bg{\ensuremath{\mathbb{g}}} \newcommand\Bh{\ensuremath{\mathbb{h}}} \newcommand\Bi{\ensuremath{\mathbb{i}}} \newcommand\Bj{\ensuremath{\mathbb{j}}} \newcommand\Bk{\ensuremath{\mathbb{k}}} \newcommand\Bl{\ensuremath{\mathbb{l}}} \newcommand\Bm{\ensuremath{\mathbb{m}}} \newcommand\Bn{\ensuremath{\mathbb{n}}} \newcommand\Bo{\ensuremath{\mathbb{o}}} \newcommand\Bp{\ensuremath{\mathbb{p}}} \newcommand\Bq{\ensuremath{\mathbb{q}}} \newcommand\Br{\ensuremath{\mathbb{r}}} \newcommand\Bs{\ensuremath{\mathbb{s}}} \newcommand\Bt{\ensuremath{\mathbb{t}}} \newcommand\Bu{\ensuremath{\mathbb{u}}} \newcommand\Bv{\ensuremath{\mathbb{v}}} \newcommand\Bw{\ensuremath{\mathbb{w}}} \newcommand\Bx{\ensuremath{\mathbb{x}}} \newcommand\By{\ensuremath{\mathbb{y}}} \newcommand\Bz{\ensuremath{\mathbb{z}}} \newcommand\BA{\ensuremath{\mathbb{A}}} \newcommand\BB{\ensuremath{\mathbb{B}}} \newcommand\BC{\ensuremath{\mathbb{C}}} \newcommand\BD{\ensuremath{\mathbb{D}}} \newcommand\BE{\ensuremath{\mathbb{E}}} \newcommand\BF{\ensuremath{\mathbb{F}}} \newcommand\BG{\ensuremath{\mathbb{G}}} \newcommand\BH{\ensuremath{\mathbb{H}}} \newcommand\BI{\ensuremath{\mathbb{I}}} \newcommand\BJ{\ensuremath{\mathbb{J}}} \newcommand\BK{\ensuremath{\mathbb{K}}} \newcommand\BL{\ensuremath{\mathbb{L}}} \newcommand\BM{\ensuremath{\mathbb{M}}} \newcommand\BN{\ensuremath{\mathbb{N}}} \newcommand\BO{\ensuremath{\mathbb{O}}} \newcommand\BP{\ensuremath{\mathbb{P}}} \newcommand\BQ{\ensuremath{\mathbb{Q}}} \newcommand\BR{\ensuremath{\mathbb{R}}} \newcommand\BS{\ensuremath{\mathbb{S}}} \newcommand\BT{\ensuremath{\mathbb{T}}} \newcommand\BU{\ensuremath{\mathbb{U}}} \newcommand\BV{\ensuremath{\mathbb{V}}} \newcommand\BW{\ensuremath{\mathbb{W}}} \newcommand\BX{\ensuremath{\mathbb{X}}} \newcommand\BY{\ensuremath{\mathbb{Y}}} \newcommand\BZ{\ensuremath{\mathbb{Z}}} \newcommand\balpha{\ensuremath{\mbox{\boldmath$\alpha$}}} \newcommand\bbeta{\ensuremath{\mbox{\boldmath$\beta$}}} \newcommand\btheta{\ensuremath{\mbox{\boldmath$\theta$}}} \newcommand\bphi{\ensuremath{\mbox{\boldmath$\phi$}}} \newcommand\bpi{\ensuremath{\mbox{\boldmath$\pi$}}} \newcommand\bpsi{\ensuremath{\mbox{\boldmath$\psi$}}} \newcommand\bmu{\ensuremath{\mbox{\boldmath$\mu$}}} \newcommand\fig[1]{\begin{center} \includegraphics{#1} \end{center}} \newcommand\Fig[4]{} \newcommand\FigTop[4]{} \newcommand\FigStar[4]{} \newcommand\FigRight[4]{\begin{wrapfigure}{r}{0.5\textwidth} \begin{center} \includegraphics[width=0.5\textwidth]{#1} \end{center} \caption{\label{fig:#3} #4} \end{wrapfigure}} \newcommand\aside[1]{\quad\text{[#1]}} \newcommand\interpret[1]{\llbracket #1 \rrbracket} % Denotation \DeclareMathOperator*{\tr}{tr} \DeclareMathOperator*{\sign}{sign} \newcommand{\var}{\text{Var}} % Variance \newcommand{\cov}{\text{Cov}} % Covariance \DeclareMathOperator*{\diag}{diag} % Diagonal matrix 
\newcommand\p[1]{\ensuremath{\left( #1 \right)}} % Parenthesis () \newcommand\pa[1]{\ensuremath{\left\langle #1 \right\rangle}} % <> \newcommand\pb[1]{\ensuremath{\left[ #1 \right]}} % [] \newcommand\pc[1]{\ensuremath{\left\{ #1 \right\}}} % {} \newcommand\eval[2]{\ensuremath{\left. #1 \right|_{#2}}} % Integral evaluation \newcommand\inv[1]{\ensuremath{\frac{1}{#1}}} \newcommand\half{\ensuremath{\frac{1}{2}}} \newcommand\R{\ensuremath{\mathbb{R}}} % Real numbers \newcommand\Z{\ensuremath{\mathbb{Z}}} % Integers \newcommand\inner[2]{\ensuremath{\left< #1, #2 \right>}} % Inner product \newcommand\mat[2]{\ensuremath{\left(\begin{array}{#1}#2\end{array}\right)}} % Matrix \newcommand\eqn[1]{\begin{align} #1 \end{align}} % Equation (array) \newcommand\eqnl[2]{\begin{align} \label{eqn:#1} #2 \end{align}} % Equation (array) with label \newcommand\eqdef{\ensuremath{\stackrel{\rm def}{=}}} % Equal by definition \newcommand{\1}{\mathbb{I}} % Indicator (don't use \mathbbm{1} because bbm is not TrueType) \newcommand{\bone}{\mathbf{1}} % for vector one \newcommand{\bzero}{\mathbf{0}} % for vector zero \newcommand\refeqn[1]{(\ref{eqn:#1})} \newcommand\refeqns[2]{(\ref{eqn:#1}) and (\ref{eqn:#2})} \newcommand\refchp[1]{Chapter~\ref{chp:#1}} \newcommand\refsec[1]{Section~\ref{sec:#1}} \newcommand\refsecs[2]{Sections~\ref{sec:#1} and~\ref{sec:#2}} \newcommand\reffig[1]{Figure~\ref{fig:#1}} \newcommand\reffigs[2]{Figures~\ref{fig:#1} and~\ref{fig:#2}} \newcommand\reffigss[3]{Figures~\ref{fig:#1},~\ref{fig:#2}, and~\ref{fig:#3}} \newcommand\reffigsss[4]{Figures~\ref{fig:#1},~\ref{fig:#2},~\ref{fig:#3}, and~\ref{fig:#4}} \newcommand\reftab[1]{Table~\ref{tab:#1}} \newcommand\refapp[1]{Appendix~\ref{sec:#1}} \newcommand\refthm[1]{Theorem~\ref{thm:#1}} \newcommand\refthms[2]{Theorems~\ref{thm:#1} and~\ref{thm:#2}} \newcommand\reflem[1]{Lemma~\ref{lem:#1}} \newcommand\reflems[2]{Lemmas~\ref{lem:#1} and~\ref{lem:#2}} \newcommand\refalg[1]{Algorithm~\ref{alg:#1}} \newcommand\refalgs[2]{Algorithms~\ref{alg:#1} and~\ref{alg:#2}} \newcommand\refex[1]{Example~\ref{ex:#1}} \newcommand\refexs[2]{Examples~\ref{ex:#1} and~\ref{ex:#2}} \newcommand\refprop[1]{Proposition~\ref{prop:#1}} \newcommand\refdef[1]{Definition~\ref{def:#1}} \newcommand\refcor[1]{Corollary~\ref{cor:#1}} \newcommand\Chapter[2]{\chapter{#2}\label{chp:#1}} \newcommand\Section[2]{\section{#2}\label{sec:#1}} \newcommand\Subsection[2]{\subsection{#2}\label{sec:#1}} \newcommand\Subsubsection[2]{\subsubsection{#2}\label{sec:#1}} \ifthenelse{\isundefined{\definition}}{\newtheorem{definition}{Definition}}{} \ifthenelse{\isundefined{\assumption}}{\newtheorem{assumption}{Assumption}}{} \ifthenelse{\isundefined{\hypothesis}}{\newtheorem{hypothesis}{Hypothesis}}{} \ifthenelse{\isundefined{\proposition}}{\newtheorem{proposition}{Proposition}}{} \ifthenelse{\isundefined{\theorem}}{\newtheorem{theorem}{Theorem}}{} \ifthenelse{\isundefined{\lemma}}{\newtheorem{lemma}{Lemma}}{} \ifthenelse{\isundefined{\corollary}}{\newtheorem{corollary}{Corollary}}{} \ifthenelse{\isundefined{\alg}}{\newtheorem{alg}{Algorithm}}{} \ifthenelse{\isundefined{\example}}{\newtheorem{example}{Example}}{} \newcommand\cv{\ensuremath{\to}} % Convergence \newcommand\cvL{\ensuremath{\xrightarrow{\mathcal{L}}}} % Convergence in law \newcommand\cvd{\ensuremath{\xrightarrow{d}}} % Convergence in distribution \newcommand\cvP{\ensuremath{\xrightarrow{P}}} % Convergence in probability \newcommand\cvas{\ensuremath{\xrightarrow{a.s.}}} % Convergence almost surely 
\newcommand\eqdistrib{\ensuremath{\stackrel{d}{=}}} % Equal in distribution \newcommand{\E}{\ensuremath{\mathbb{E}}} % Expectation \newcommand\KL[2]{\ensuremath{\text{KL}\left( #1 \| #2 \right)}} % KL-divergence \ifcomment \newcommand\pl[1]{\textcolor{red}{[PL: #1]}} \newcommand\hh[1]{\textcolor{blue}{[HH: #1]}} \newcommand\me[1]{\textcolor{green}{[ME: #1]}} \newcommand\ab[1]{\textcolor{orange}{[AB: #1]}} \else \newcommand\pl[1]{\textcolor{red}{[PL: #1]}} \newcommand\hh[1]{} \newcommand\me[1]{} \newcommand\ab[1]{} \fi \newcommand{\kba}{$\text{KB}_\text{A}$} \newcommand{\kbb}{$\text{KB}_\text{B}$} \newcommand{\ent}[1]{{\small\texttt{#1}}} \newcommand{\MF}{{MutualFriends}} \newcommand{\utterance}[1]{``#1''} \newcommand{\cond}{\ensuremath{\,|\,}} \newcommand{\sts}{\textsc{seq2seq}} \newcommand{\dkg}{{DynoNet}} \newcommand{\skg}{{StanoNet}} \newcommand{\rl}{{Rule}} \DeclareMathOperator{\dir}{Dir} \newcommand{\fl}{{\small Flnt}} \newcommand{\cor}{{\small Crct}} \newcommand{\co}{{\small Coop}} \newcommand{\str}{{\small Strt}} \newcommand{\hu}{{\small Human}} \newcommand{\act}[1]{{\small{\textsf {#1}}}} \newcolumntype{L}[1]{>{\raggedright\let\newline\\\arraybackslash\hspace{0pt}}m{#1}} \newcolumntype{C}[1]{>{\centering\let\newline\\\arraybackslash\hspace{0pt}}m{#1}} \newcolumntype{R}[1]{>{\raggedleft\let\newline\\\arraybackslash\hspace{0pt}}m{#1}} \newcommand{\ul}[1]{\underline{#1}} \section{Dynamic Knowledge Graph Network} \label{sec:overview} The diverse semantics in our data motivates us to combine unstructured representation of the dialogue history with structured knowledge. Our model consists of three components shown in \reffig{overview}: (i) a dynamic knowledge graph, which represents the agent's private KB and shared dialogue history as a graph (\refsec{kg}), (ii) a graph embedding over the nodes (\refsec{graph_embed}), and (iii) an utterance generator (\refsec{gen}). The knowledge graph represents entities and relations in the agent's private KB, e.g., \ent{item-1}'s \ent{company} is \ent{google}. As the conversation unfolds, utterances are embedded and incorporated into node embeddings of mentioned entities. For instance, in \reffig{overview}, \utterance{anyone went to columbia} updates the embedding of \ent{columbia}. Next, each node recursively passes its embedding to neighboring nodes so that related entities (e.g., those in the same row or column) also receive information from the most recent utterance. In our example, \ent{jessica} and \ent{josh} both receive new context when \ent{columbia} is mentioned. Finally, the utterance generator, an LSTM, produces the next utterance by attending to the node embeddings. % that represent the dialogue state. \subsection{Knowledge Graph} \label{sec:kg} Given a dialogue of $T$ utterances, we construct graphs $(G_t)_{t=1}^T$ over the KB and dialogue history for agent A.\footnote{ It is important to differentiate perspectives of the two agents as they have different KBs. Thereafter we assume the perspective of agent A, i.e., accessing \kba{} for A only, and refer to B as the partner. } There are three types of nodes: item nodes, attribute nodes, and entity nodes. Edges between nodes represent their relations. For example, %\ent{(school, Includes, columbia)} means that \ent{columbia} is a value of attribute \ent{school}; \ent{(item-1, hasSchool, columbia)} means that the first item has attribute \ent{school} whose value is \ent{columbia}. An example graph is shown in \reffig{overview}. 
The graph $G_t$ is updated based on utterance $t$ by taking $G_{t-1}$ and adding a new node for any entity mentioned in utterance $t$ but not in \kba{}.\footnote{ We use a rule-based lexicon to link text spans to entities. See details in \refapp{lexicon}. } \subsection{Graph Embedding} \label{sec:graph_embed} Given a knowledge graph, we are interested in computing a vector representation for each node $v$ that captures both its unstructured context from the dialogue history and its structured context in the KB. A node embedding $V_t(v)$ for each node $v \in G_t$ is built from three parts: structural properties of an entity defined by the KB, embeddings of utterances in the dialogue history, and message passing between neighboring nodes. \paragraph{Node Features.} Simple structural properties of the KB often govern what is talked about; e.g., a high-frequency entity is usually interesting to mention (consider \utterance{All my friends like dancing.}). We represent this type of information as a feature vector $F_t(v)$, which includes the degree and type (item, attribute, or entity type) of node $v$, and whether it has been mentioned in the current turn. Each feature is encoded as a one-hot vector and they are concatenated to form $F_t(v)$. \paragraph{Mention Vectors.} A mention vector $M_t(v)$ contains unstructured context from utterances relevant to node $v$ up to turn $t$. To compute it, we first define the utterance representation $\tilde{u}_t$ and the set of relevant entities $E_t$. Let $u_t$ be the embedding of utterance $t$ (\refsec{gen}). To differentiate between the agent's and the partner's utterances, we represent it as $\tilde{u}_t=\pb{u_t \cdot \mathbbm{1}_{\pc{u_t \in U_{\text{self}}}}, u_t \cdot \mathbbm{1}_{\pc{u_t \in U_{\text{partner}}}}}$, where $U_{\text{self}}$ and $U_{\text{partner}}$ denote sets of utterances generated by the agent and the partner, and $[\cdot,\cdot]$ denotes concatenation. Let $E_t$ be the set of entity nodes mentioned in utterance $t$ if utterance $t$ mentions some entities, or utterance $t-1$ otherwise.\footnote{ Relying on utterance $t-1$ is useful when utterance $t$ answers a question, e.g., \utterance{do you have any google friends?} \utterance{No.} } The mention vector $M_t(v)$ of node $v$ incorporates the current utterance if $v$ is mentioned and inherits $M_{t-1}(v)$ if not: \begin{align} M_t(v) &= \lambda_t M_{t-1}(v) + (1 - \lambda_t) \tilde{u}_t; \\ \lambda_t &= \begin{cases} \sigma\p{W^\text{inc} \pb{M_{t-1}(v), \tilde{u}_t}} & \text{if $v \in E_t$}, \\ 1 & \text{otherwise}. \nonumber \end{cases} \end{align} Here, $\sigma$ is the sigmoid function and $W^\text{inc}$ is a parameter matrix. \paragraph{Recursive Node Embeddings.} We propagate information between nodes according to the structure of the knowledge graph. In \reffig{overview}, given \utterance{anyone went to columbia?}, the agent should focus on her friends who went to Columbia University. Therefore, we want this utterance to be sent to item nodes connected to \ent{columbia}, and one step further to other attributes of these items because they might be mentioned next as relevant information, e.g., \ent{jessica} and \ent{josh}. We compute the node embeddings recursively, analogous to belief propagation: \begin{align} V_t^k(v) =& \max_{v' \in N_t(v)} \tanh \\ &\p{W^\text{mp} \pb{V_t^{k-1}(v'), R(e_{v\rightarrow v'})}}, \nonumber \end{align} where $V_t^k(v)$ is the depth-$k$ node embedding at turn $t$ and $N_t(v)$ denotes the set of nodes adjacent to $v$. 
The message from a neighboring node $v'$ depends on its embedding at depth-$(k-1)$, the edge label $e_{v\rightarrow v'}$ (embedded by a relation embedding function $R$), and a parameter matrix $W^\text{mp}$. Messages from all neighbors are aggregated by $\max$, the element-wise max operation.\footnote{Using sum or mean slightly hurts performance.} Example message passing paths are shown in \reffig{overview}. The final node embedding is the concatenation of embeddings at each depth: \begin{align} V_t(v) = \pb{V_t^0(v), \ldots, V_t^K(v)}, \label{eqn:node_embedding} \end{align} where $K$ is a hyperparameter (we experiment with $K \in \{0,1,2\}$) and $V_t^0(v) = \pb{F_t(v), M_t(v)}$. \subsection{Utterance Embedding and Generation} \label{sec:gen} We embed and generate utterances using Long Short Term Memory (LSTM) networks that take the graph embeddings into account. \paragraph{Embedding.} On turn $t$, upon receiving an utterance consisting of $n_t$ tokens, $x_t = (x_{t,1}, \dots, x_{t,n_t})$, the LSTM maps it to a vector as follows: \begin{align} h_{t,j} = \text{LSTM}_{\text{enc}}(h_{t,j-1}, A_t(x_{t,j})), \end{align} where $h_{t,0} = h_{t-1,n_{t-1}}$, and $A_t$ is an \emph{entity abstraction} function, explained below. The final hidden state $h_{t,n_t}$ is used as the utterance embedding $u_t$, which updates the mention vectors as described in \refsec{graph_embed}. In our dialogue task, the identity of an entity is unimportant. For example, replacing \ent{google} with \ent{alphabet} in \reffig{example_dialog} should make little difference to the conversation. The role of an entity is determined instead by its relation to other entities and relevant utterances. Therefore, we define the abstraction $A_t(y)$ for a word $y$ as follows: if $y$ is linked to an entity $v$, then we represent an entity by its type (\ent{school}, \ent{company} etc.) embedding concatenated with its current node embedding: $A_t(y) = [E_{\text{type}(y)}, V_t(v)]$. Note that $V_t(v)$ is determined only by its structural features and its context. If $y$ is a non-entity, then $A_t(y)$ is the word embedding of $y$ concatenated with a zero vector of the same dimensionality as $V_t(v)$. This way, the representation of an entity only depends on its structural properties given by the KB and the dialogue context, which enables the model to generalize to unseen entities at test time. \paragraph{Generation.} Now, assuming we have embedded utterance $x_{t-1}$ into $h_{t-1,n_{t-1}}$ as described above, we use another LSTM to generate utterance $x_t$. Formally, %the RNN generator is defined as we carry over the last utterance embedding $h_{t,0} = h_{t-1,n_{t-1}}$ and define: \begin{align} h_{t,j} = \text{LSTM}_{\text{dec}}(h_{t,j-1}, \pb{A_t(x_{t,j}), c_{t,j}}), \end{align} where $c_{t,j}$ is a weighted sum of node embeddings in the current turn: $c_{t,j} = \sum_{v \in G_t} \alpha_{t,j,v} V_t(v)$, where $\alpha_{t,j,v}$ are the attention weights over the nodes. Intuitively, high weight should be given to relevant entity nodes as shown in \reffig{overview}. We compute the weights through standard attention mechanism \citep{bahdanau2015neural}: \begin{align*} \alpha_{t,j} &= \text{softmax}(s_{t,j}),\\ s_{t,j,v} &= w^\text{attn} \cdot \tanh\p{W^\text{attn} \pb{h_{t,j-1}, V_t(v)}}, \end{align*} where vector $w^\text{attn}$ and $W^\text{attn}$ are parameters. 
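For concreteness, the computation of the node embeddings $V_t(v)$ that the attention above operates over can be summarised in a short NumPy sketch. This is an illustrative sketch rather than the released implementation: the container names (\texttt{neighbours}, \texttt{edge\_labels}, \texttt{node\_feats}, \texttt{mention\_vecs}, \texttt{rel\_emb}) are hypothetical, a separate message-passing matrix per depth is assumed (whether $W^\text{mp}$ is shared across depths is not specified here), and every node is assumed to have at least one neighbour.
\begin{verbatim}
import numpy as np

def graph_embedding(neighbours, edge_labels, node_feats, mention_vecs,
                    W_mp, rel_emb, K=2):
    """neighbours[v]  : ids of nodes adjacent to v
       edge_labels[v] : relation ids aligned with neighbours[v]
       node_feats[v]  : feature vector F_t(v)
       mention_vecs[v]: mention vector M_t(v)
       W_mp[k]        : message-passing matrix used at depth k+1
                        (assumption: one matrix per depth)
       rel_emb[e]     : embedding R(e) of relation label e
    """
    # depth-0 embedding: structural features concatenated with the mention vector
    V = {v: [np.concatenate([node_feats[v], mention_vecs[v]])] for v in neighbours}
    for k in range(1, K + 1):
        for v in neighbours:
            msgs = [np.tanh(W_mp[k - 1] @ np.concatenate([V[u][k - 1], rel_emb[e]]))
                    for u, e in zip(neighbours[v], edge_labels[v])]
            # aggregate the messages from all neighbours with an element-wise max
            V[v].append(np.max(np.stack(msgs), axis=0))
    # final embedding: concatenation of the embeddings at every depth
    return {v: np.concatenate(V[v]) for v in neighbours}
\end{verbatim}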
Finally, we define a distribution over both words in the vocabulary and nodes in $G_t$ using the copying mechanism of \newcite{jia2016recombination}: \begin{align*} p(x_{t,j+1} &= y \cond G_t, x_{t,\le j}) \propto \exp\p{W^{\text{vocab}} h_{t,j} + b}, \\ p(x_{t,j+1} &= r(v) \cond G_t, x_{t,\le j}) \propto \exp\p{s_{t,j,v}}, \end{align*} where $y$ is a word in the vocabulary, $W^{\text{vocab}}$ and $b$ are parameters, and $r(v)$ is the realization of the entity represented by node $v$, e.g., \ent{google} is realized to ``Google'' during copying.\footnote{ We realize an entity by sampling from the empirical distribution of its surface forms found in the training data. } \section{Knowledge Base Schema} \label{sec:schema} The attribute set $\mathcal{A}$ for the MutualFriends task contains name, school, major, company, hobby, time-of-day preference, and location preference. Each attribute $a$ has a set of possible values (entities) $\mathcal{E}_a$. For name, school, major, company, and hobby, we collected a large set of values from various online sources.\footnote{Names: \url{https://www.ssa.gov/oact/babynames/decades/century.html}\\Schools: \url{http://doors.stanford.edu/~sr/universities.html}\\Majors: \url{http://www.a2zcolleges.com/majors}\\Companies: \url{https://en.wikipedia.org/wiki/List_of_companies_of_the_United_States}\\Hobbies: \url{https://en.wikipedia.org/wiki/List_of_hobbies}} We used three possible values (morning, afternoon, and evening) for the time-of-day preference, and two possible values (indoors and outdoors) for the location preference. \section{Scenario Generation} \label{sec:scenario} We generate scenarios randomly to vary task complexity and elicit linguistic and strategic variants. A scenario $S$ is characterized by the number of items ($N_S$), the attribute set ($\mathcal{A}_S$) whose size is $M_S$, and the values for each attribute $a \in \mathcal{A}_S$ in the two KBs. A scenario is generated as follows. \begin{enumerate} \item Sample $N_S$ and $M_S$ uniformly from $\{5,\dots, 12\}$ and $\{3,4\}$ respectively. \item Generate $\mathcal{A}_S$ by sampling $M_S$ attributes without replacement from $\mathcal{A}$. \item For each attribute $a \in \mathcal{A}_S$, sample the concentration parameter $\alpha_a$ uniformly from the set $\{0.3,1,3\}$. \item Generate two KBs by sampling $N_S$ values for each attribute $a$ from a Dirichlet-multinomial distribution over the value set $\mathcal{E}_a$ with the concentration parameter $\alpha_a$. \end{enumerate} We repeat the last step until the two KBs have one unique common item. \section{Chat Interface} \label{sec:website} In order to collect real-time dialogue between humans, we set up a web server and redirect AMT workers to our website. Visitors are randomly paired up as they arrive. For each pair, we choose a random scenario, and randomly assign a KB to each dialogue participant. We instruct people to play intelligently, to refrain from brute-force tactics (e.g., mentioning every attribute value), and to use grammatical sentences. To discourage random guessing, we prevent users from selecting a friend (item) more than once every 10 seconds. Each worker was paid \$0.35 for a successful dialogue within a 5-minute time limit. We log each utterance in the dialogue along with timing information. \section{Entity Linking and Realization} \label{sec:lexicon} We use a rule-based lexicon to link text spans to entities. 
For every entity in the schema, we compute different variations of its canonical name, including acronyms, strings with a certain edit distance, prefixes, and morphological variants. Given a text span, a set of candidate entities is returned by string matching. A heuristic ranker then scores each candidate (e.g., considering whether the span is a substring of a candidate, the edit distance between the span and a candidate etc.). The highest-scoring candidate is returned. A linked entity is considered as a single token and its surface form is ignored in all models. At generation time, we realize an entity by sampling from the empirical distribution of its surface forms in the training set. \section{Utterance Categorization} \label{sec:utterance_type} We categorize utterances into \act{inform}, \act{ask}, \act{answer}, \act{greeting}, \act{apology} heuristically by pattern matching. \begin{itemize} \item An \act{ask} utterance asks for information regarding the partner's KB. We detect these utterances by checking for the presence of a `?' and/or a question word like ``do", ``does", ``what", etc. \item An \act{inform} utterance provides information about the agent's KB. We define it as an utterances that mentions entities in the KB and is not an \act{ask} utterance. \item An \act{answer} utterance simply provides a positive/negative response to a question, containing words like ``yes", ``no", ``nope", etc. \item A \act{greeting} utterance contains words like ``hi" or ``hello"; it often occurs at the beginning of a dialogue. \item An \act{apology} utterance contains the word ``sorry", which is typically associated with corrections and wrong selections. \end{itemize} See \reftab{phenomenon_example} and \reftab{type_example} for examples of these utterance types. \section{Strategy} \label{sec:strategy} During scenario generation, we varied the number of attributes, the number of items in each KB, and the distribution of values for each attribute. We find that as the number of items and/or attributes grows, the dialogue length and the completion time also increase, indicating that the task becomes harder. We also anticipated that varying the value of $\alpha$ would impact the overall strategy (for example, the order in which attributes are mentioned) since $\alpha$ controls the skewness of the distribution of values for an attribute. \newcommand{\grp}[1]{{\small{\textsf {#1}}}} On examining the data, we find that humans tend to first mention attributes with a more skewed (i.e., less uniform) distribution of values. Specifically, we rank the $\alpha$ values of all attributes in a scenario (see step 3 in \refsec{scenario}), and bin them into 3 distribution groups---\grp{least\_uniform}, \grp{medium}, and \grp{most\_uniform}, according to the ranking where higher $\alpha$ values corresponds to more uniform distributions.\footnote{ For scenarios with 3 attributes, each group contains one attributes. For scenarios with 4 attributes, we put the two attributes with rankings in the middle to \grp{medium}. } In \reffig{attr_stats}, we plot the histogram of the distribution group of the first-mentioned attribute in a dialogues, which shows that skewed attributes are mentioned much more frequently. \section{Rule-based System} \label{sec:rule} The rule-based bot takes the following actions: greeting, informing or asking about a set of entities, answering a question, and selecting an item. The set of entities to inform/ask is sampled randomly given the entity weights. 
Initially, each entity is weighted by its count in the KB. We then increment or decrement weights of entities mentioned by the partner and its related entities (in the same row or column), depending on whether the mention is positive or negative. A negative mention contains words like ``no'', ``none'', ``n't'' etc. Similarly, each item has an initial weight of 1, which is updated depending on the partner's mention of its attributes. If there exists an item with weight larger than 1, the bot selects the highest-weighted item with probability 0.3. If a question is received, the bot informs facts of the entities being asked, e.g., ``anyone went to columbia?'', ``I have 2 friends who went to columbia''. Otherwise, the bot samples an entity set and randomly chooses between informing and asking about the entities. All utterances are generated by sentence templates, and parsing of the partner's utterance is done by entity linking and pattern matching (\refsec{utterance_type}). \section{Turn-taking Rules} \label{sec:turn-taking} Turn-taking is universal in human conversations and the bot needs to decide when to `talk' (send an utterance). To prevent the bot from generating utterances continuously and forming a monologue, we allow it to send at most one utterance if the utterance contains any entity, and two utterances otherwise. When sending more than one utterance in a turn, the bot must wait for 1 to 2 seconds in between. In addition, after an utterance is generated by the model (almost instantly), the bot must hold on for some time to simulate message typing before sending. We used a typing speed of 7 chars / sec and added an additional random delay between 0 to 1.5s after `typing'. The rules are applied to all models. \input human_bot_appendix_example \section{Additional Human-Bot Dialogue} \label{sec:human-bot-chats} We show another set of human-bot/human chats in \reftab{human-bot-chats-2}. In this scenario, the distribution of values are more uniform compared to \reftab{human-bot-chats}. Nevertheless, we see that \skg{} and \dkg{} still learned to start from relatively high-frequency entities. They also appear more cooperative and mentions relevant entities in the dialogue context compared to \rl{}. \section{Histograms of Ratings from Human Evaluations} \label{sec:histograms} The histograms of ratings from partner and third-party evaluations is shown in \reffig{partner-eval-dist} and \reffig{third-eval-dist} respectively. As these figures show, there are some obvious discrepancies between the ratings made by agents who chatted with the bot and those made by an `objective' third party. These ratings provide some interesting insights into how dialogue participants in this task setting perceive their partners, and what constitutes a `human-like' or a `fluent' partner. \section{Example Comments from Partner and Third-party Evaluations} \label{sec:comments} In \reftab{comments}, we show several pairs of ratings and comments on human-likeness for the same dialogue from both the partner evaluation and the third-party evaluation. As a conversation participant, the dialogue partner often judges from the cooperation and strategy perspective, whereas the third-party evaluator relies more on linguistic features (e.g., length, spelling, formality).
The Neural Noisy Channel
1611.02554
Table 4: BLEU scores from different models for the Chinese to English machine translation task.
[ "Model", "BLEU" ]
[ [ "seq2seq w/o attention", "11.19" ], [ "seq2seq w/ attention", "25.27" ], [ "direct (bi)", "23.33" ], [ "direct + LM + bias (bi)", "23.33" ], [ "channel + LM + bias (bi)", "26.28" ], [ "direct + channel + LM + bias (bi)", "[BOLD] 26.44" ] ]
To set benchmarks, we train the vanilla and attentional sequence to sequence models (Sutskever et al., 2014; Bahdanau et al., 2015) using the same parallel data. For direct models, we leverage bidirectional LSTMs as the encoder for this task. We can see that the vanilla sequence to sequence model behaves poorly due to the small amount of parallel data. By contrast, the direct model (SSNT) and the attentional model work relatively well, with the attentional model outperforming the SSNT direct model. Although these models both directly model p(y∣x), this result is unsurprising because the SSNT direct model is most effective when the alignment between sequences is largely monotonic, and Chinese–English translation word orders diverge considerably. However, despite this limitation, the noisy channel model is approximately 3 points higher in BLEU than the direct model, and the combination of the noisy channel and direct models gives an extra boost. Confirming the empirical findings of prior work (and in line with theoretical predictions), the interpolation of the direct model and language model is not effective.
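The configurations in the table differ only in which terms of the decoding objective O = λ1 log p(y|x) + λ2 log p(x|y) + λ3 log p(y) + λ4 |y| (defined in the paper body below) receive non-zero weight. The following is a minimal illustrative sketch of that rescoring step, assuming placeholder score functions and weights; the tuned λ values behind the reported BLEU scores are not given here.

def combined_score(log_direct, log_channel, log_lm, out_len,
                   l1=1.0, l2=1.0, l3=1.0, l4=0.1):
    # Placeholder weights; the actual lambdas are tuned on held-out data.
    # 'direct + channel + LM + bias' uses all four terms; 'channel + LM + bias'
    # corresponds to l1 = 0, and 'direct + LM + bias' to l2 = 0.
    return l1 * log_direct + l2 * log_channel + l3 * log_lm + l4 * out_len

def rerank(candidates, l1, l2, l3, l4):
    # candidates: list of (tokens, log_direct, log_channel, log_lm) tuples
    return max(candidates,
               key=lambda c: combined_score(c[1], c[2], c[3], len(c[0]),
                                            l1, l2, l3, l4))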
\documentclass{article} % For LaTeX2e \newcommand{\cjd}[1]{\textcolor{blue}{\bf \small [#1 --CJD]}} \newcommand{\pb}[1]{\textcolor{orange}{\bf \small [#1 --PB]}} \newcommand{\leiyu}[1]{\textcolor{red}{\bf \small [#1 --Lei]}} \newcommand{\etg}[1]{\textcolor{pink}{\bf \small [#1 --ETG]}} \DeclareMathOperator*{\argmax}{arg\,max} \DeclareMathOperator*{\argmin}{arg\,min} \DeclareMathOperator*{\topk}{topk} \DeclareMathOperator*{\argtopk}{arg\,topk} \DeclareMathOperator*{\candidate}{getCandidateOutputs} \title{The Neural Noisy Channel} \author{Lei Yu$^1$\thanks{Work completed at DeepMind.} , Phil Blunsom$^{1,2}$, Chris Dyer$^{2}$, Edward Grefenstette$^{2}$, and Tom\'{a}\v{s} Ko\v{c}isk\'{y}$^{1,2}$ \\ $^1$University of Oxford and $^2$DeepMind \\ {\tt lei.yu@cs.ox.ac.uk, \{pblunsom,cdyer,etg,tkocisky\}@google.com} } \newcommand{\fix}{\marginpar{FIX}} \newcommand{\new}{\marginpar{NEW}} \iclrfinalcopy % Uncomment for camera-ready version \begin{document} \maketitle \begin{abstract} We formulate sequence to sequence transduction as a noisy channel decoding problem and use recurrent neural networks to parameterise the source and channel models. Unlike direct models which can suffer from explaining-away effects during training, noisy channel models must produce outputs that explain their inputs, and their component models can be trained with not only paired training samples but also unpaired samples from the marginal output distribution. Using a latent variable to control how much of the conditioning sequence the channel model needs to read in order to generate a subsequent symbol, we obtain a tractable and effective beam search decoder. Experimental results on abstractive sentence summarisation, morphological inflection, and machine translation show that noisy channel models outperform direct models, and that they significantly benefit from increased amounts of unpaired output data that direct models cannot easily use. \end{abstract} \section{Introduction} Recurrent neural network sequence to sequence models~\citep{kalchbrenner:2013,sutskever:2014,bahdanau:2015} are excellent models of $p(\textrm{output sequence }\boldsymbol{y} \mid \textrm{input sequence }\boldsymbol{x})$, provided sufficient input--output $(\boldsymbol{x},\boldsymbol{y})$ pairs are available for estimating their parameters. However, in many domains, vastly more unpaired output examples are available than input--output pairs (e.g., transcribed speech is relatively rare although non-spoken texts are abundant; Swahili--English translations are rare although English texts are abundant; etc.). A classic strategy for exploiting both kinds of data is to use Bayes' rule to rewrite $p(\boldsymbol{y} \mid \boldsymbol{x})$ as $p(\boldsymbol{x} \mid \boldsymbol{y}) p(\boldsymbol{y})/p(\boldsymbol{x})$, a factorisation which is called a \textbf{noisy channel model}~\citep{shannon:1948}. 
A noisy channel model thus consists of two component models: the conditional \textbf{channel model}, $p(\boldsymbol{x} \mid \boldsymbol{y})$, which characterizes the \emph{reverse} transduction problem and whose parameters are estimated from the paired $(\boldsymbol{x},\boldsymbol{y})$ samples, and the unconditional \textbf{source model}, $p(\boldsymbol{y})$, whose parameters are estimated from both the paired and (usually much more numerous) unpaired samples.\footnote{We do not model $p(\boldsymbol{x})$ since, in general, we will be interested in finding $\argmax_\mathbf{y}p(\mathbf{y} \mid \mathbf{x})$, and $\argmax_\mathbf{y}p(\mathbf{y} \mid \mathbf{x}) = \argmax_\mathbf{y} \frac{p(\mathbf{x} \mid \mathbf{y})p(\mathbf{y})}{p(\mathbf{x})}= \argmax_\mathbf{y} p(\mathbf{x} \mid \mathbf{y})p(\mathbf{y})$.} Beyond their data omnivorousness, noisy channel models have other benefits. First, the two component models mean that two different aspects of the transduction problem can be addressed independently. For example, in many applications, source models are language models and innovations in these can be leveraged to obtain improvements in any system that uses them as a component. Second, the component models can have complementary strengths, since inference is carried out in the product space; this simplifies design because a single model does not have to get everything perfectly right. Third, the noisy channel operates by selecting outputs that both are \emph{a priori} likely \emph{and} that explain the input well. This addresses a failure mode that can occur in conditional models in which inputs are ``explained away'' by highly predictive output prefixes, resulting in poor training \citep{klein:2001}. Since the noisy channel formulation requires its outputs to explain the observed input, this problem is avoided. In principle, the noisy channel decomposition is straightforward; however, in practice, decoding (i.e., computing $\arg \max_{\boldsymbol{y}} p(\boldsymbol{x} \mid \boldsymbol{y}) p(\boldsymbol{y})$) is a significant computational challenge, and tractability concerns impose restrictions on the form the component models can take. To illustrate, an appealing parameterization would be to use an attentional seq2seq network \citep{bahdanau:2015} to model the channel probability $p(\boldsymbol{x} \mid \boldsymbol{y})$. However, seq2seq models are designed under the assumption that the complete conditioning sequence is available before any prefix probabilities of the output sequence can be computed. This assumption is problematic for channel models since it means that a complete output sequence must be constructed before the channel model can be evaluated (since the channel model conditions on the output). Therefore, to be practical, the channel probability must decompose in terms of prefixes of the conditioning variable, $\boldsymbol{y}$. While the chain rule justifies decomposing output variable probabilities in terms of successive extensions of a partial prefix, no such convenience exists for conditioning variables, and approximations must be introduced. In this work, we use a variant of the newly proposed online seq2seq model of \citet{yu:2016} which uses a latent alignment variable to enable its probabilities to factorize in terms of prefixes of both the input and output, making it an appropriate channel model~(\S\ref{sec:model}). Using this channel model, the decoding problem then becomes similar to the problem faced when decoding with direct models~(\S\ref{sec:decoding}). 
Experiments on abstractive summarization, machine translation, and morphological inflection show that the noisy channel can significantly improve performance and exploit unpaired output training samples and that models that combine the direct model and a noisy channel model offer further improvements still~(\S\ref{sec:experiments}). \section{Background: Segment to Segment Neural Transduction} \label{sec:model} Our model is based on the Segment to Segment Neural Transduction model (SSNT) of Yu et al., 2016. At a high level, the model alternates between encoding more of the input sequence and decoding output tokens from the encoded representation. This presentation deviates from the Yu et al.'s presentation so as to emphasize the incremental construction of the conditioning context that is enabled by the latent variable. \subsection{Model description} Similar to other neural sequence to sequence models, SSNT models the conditional probability $p(\boldsymbol{y} \mid \boldsymbol{x})$ of a output sequence $\boldsymbol{y}$ given a input sequence $\boldsymbol{x}$. To avoid having to observe the complete input sequence $\boldsymbol{x}$ before making a prediction of the beginning of the output sequence, we introduce a latent alignment variable $\boldsymbol{z}$ which indicates when each token of the output sequence is to be generated as the input sequence is being read. Since we assume that the input is read just once from left to right, we restrict $\boldsymbol{z}$ to be a monotonically increasing alignment (i.e., $z_{j+1} \ge z_j$ is true with probability 1), where $z_j = i$ denotes that the output token at position $j$ ($y_j$) is generated when the input sequence up through position $i$ has been read. The SSNT model is: \begin{align} \begin{split} p(\boldsymbol{y} \mid \boldsymbol{x}) & = \sum_{\boldsymbol{z}} p(\boldsymbol{y}, \boldsymbol{z} \mid \boldsymbol{x}) \\ p(\boldsymbol{y}, \boldsymbol{z} \mid \boldsymbol{x}) & \approx \prod_{j=1}^{|\boldsymbol{y}|} \underbrace{p(z_j \mid z_{j-1}, \boldsymbol{x}_{1}^{z_j}, \boldsymbol{y}_1^{j-1})}_{\text{alignment probability}} \underbrace{p(y_j \mid \boldsymbol{x}_{1}^{z_j}, \boldsymbol{y}_1^{j-1})}_{\text{word probability}}. \label{eq:model} \end{split} \end{align} We explain the model in terms of its two components, starting with the word generation term. In the SSNT, the input and output sequences $\boldsymbol{x}$, $\boldsymbol{y}$ are encoded with two separate LSTMs \citep{hochreiter1997long}, resulting in sequences of hidden states representing prefixes of these sequences. In Yu et al.'s formulation, the input sequence encoder (i.e., the conditioning context encoder) can either be a unidirectional or bidirectional LSTM, but here we assume that it is a unidirectional LSTM, which ensures that it will function well as a channel model that can compute probabilities with incomplete conditioning contexts (this is necessary since, at decoding time, we will be constructing the conditioning context incrementally). Let $\mathbf{h}_i$ represent the input sequence encoding for the prefix $\boldsymbol{x}_1^{i}$. Since the final action at timestep $j$ will be to predict $y_j$, it is convenient to let $\mathbf{s}_j$ denote the hidden state that excludes $y_j$, i.e., the encoding of the prefix $\boldsymbol{y}_1^{j-1}$. 
The probability of the next token $y_j$ is calculated by concatenating the aligned hidden state vectors $\mathbf{s}_j$ and $\mathbf{h}_{z_j}$ followed by a softmax layer, \begin{align*} p(y_j \mid \boldsymbol{x}_1^{z_j}, \boldsymbol{y}_1^{j-1}) \propto \text{exp} (\mathbf{W}_w[\mathbf{h}_{z_j};\mathbf{s}_j] + \mathbf{b}_w). \end{align*} The model thus depends on the current alignment position $z_j$, which determines how far into $\boldsymbol{x}$ it has read. We now discuss how the sequence of $z_j$'s are generated. First, we remark that modelling this distribution requires some care so as to avoid conditioning on the entire input sequence. To illustrate why one might induce a dependency on the entire input sequence in this model, it is useful to compare to a standard attention model. Attention models operate by computing a score using a representation of alignment candidate (in our case, the candidates would be every unread token remaining in the input). If we followed this strategy, it would be necessary to observe the full input sequence when making the first alignment decision. We instead model the alignment transition from timestep $j$ to $j+1$ by decomposing it into a sequence of conditionally independent \textsc{shift} and \textsc{emit} operations that progressively decide whether to read another token or stop reading. That is, at input position $i$, the model decides to \textsc{emit}, i.e., to set $z_j=i$ and predict the next output token $y_j$ from the word model, or it decides to \textsc{shift}, i.e., to read one more input token and increment the input position $i \gets i+1$. The probability $p(a_{i,j} = \textsc{emit} \mid \boldsymbol{x}_1^{i}, \boldsymbol{y}_1^{j-1})$ is calculated using the encoder and decoder states defined above as: \begin{align*} p(a_{i,j} = \textsc{emit} \mid \boldsymbol{x}_{1}^{i}, \boldsymbol{y}_1^{j-1}) = \sigma(\text{MLP}(\mathbf{W}_t[\mathbf{h}_{i};\mathbf{s}_j] + b_t)). \end{align*} The probability of \textsc{shift} is simply $1-p(a_{i,j} = \textsc{emit})$. In this formulation, the probabilities of aligning $z_j$ to each alignment candidate $i$ can be computed by reading just $\boldsymbol{x}_1^i$ (rather than the entire sequence). The probabilities are also independent of the contents of the suffix $\boldsymbol{x}_{i+1}^{|\boldsymbol{x}|}$. Using the probabilities of the auxiliary $a_{i,j}$ variables, the alignment probabilities needed in Eq.~\ref{eq:model} are computed as: \begin{align*} p(z_j = i \mid z_{j-1}, \boldsymbol{y}_1^{j-1}, \boldsymbol{x}_{1}^{i}) &= \begin{cases} 0 & \text{if }i < z_{j-1} \\ p(a_{i,j} = \textsc{emit}) & \text{if }i=z_{j-1} \\ \left(\prod_{i'=z_{j-1}}^{i-1} p(a_{i',j} = \textsc{shift}) \right) p(a_{i,j} = \textsc{emit}) & \text{if }i>z_{j-1} \end{cases} \end{align*} \subsection{Inference algorithms} In SSNT, the probability of generating each $y_j$ depends only on the current output position's alignment ($z_j$), the current output prefix ($\boldsymbol{y}_1^{j-1}$), the input prefix up to the current alignment ($\boldsymbol{x}_1^{z_j}$). It does \emph{not} depend on the history of the alignment decisions. Likewise, the alignment decisions at each position are also conditionally independent of the history of alignment decisions. 
Because of these independence assumptions, $\boldsymbol{z}$ can be marginalised using a $O(|\boldsymbol{x}|^2 \cdot |\boldsymbol{y}|)$ time dynamic programming algorithm where each fill in a chart with computing the following marginal probabilities: \begin{align*} \begin{split} \alpha(i,j) & = p(z_j=i, \boldsymbol{y}_1^j \mid \boldsymbol{x}_1^{z_j}) = \sum_{i'=1}^{i} \alpha(i',j-1) \underbrace{p(z_j \mid z_{j-1}, \boldsymbol{x}_{1}^{z_j}, \boldsymbol{y}_1^{j-1})}_{\text{alignment probability}} \underbrace{p(y_j \mid \boldsymbol{x}_{1}^{z_j}, \boldsymbol{y}_1^{j-1})}_{\text{word probability}}. \end{split} \end{align*} The model is trained to minimize the negative log likelihood of the parallel corpus $S$: \begin{equation} \label{loss} \begin{split} \mathcal{L}(\boldsymbol{\theta}) &= - \sum_{(\boldsymbol{x}, \boldsymbol{y}) \in S} \log p(\boldsymbol{y}\ |\ \boldsymbol{x}; \boldsymbol{\theta})\\ &= - \sum_{(\boldsymbol{x}, \boldsymbol{y}) \in S} \log \alpha(|\boldsymbol{x}|, |\boldsymbol{y}|).\\ \end{split} \end{equation} The gradients of this objective with respect to the component probability models can be computed using automatic differentiation or using a secondary dynamic program that computes `backward' probabilities. We refer the reader to Section 3.1 of \citet{yu:2016} for details. In this paper, we use a slightly different objective from the one described in \citet{yu:2016}. Rather than marginalizing over the paths that end in any possible input positions $\sum_{i=1}^I\alpha(i, |\boldsymbol{y}|)$, we require that the full input be consumed when the final output symbol is generated. This constraint biases away from predicting outputs without explaining them using the input sequence. \section{Decoding} \label{sec:decoding} We now turn to the problem of decoding, that is, of computing \begin{align*} \hat{\boldsymbol{y}} = \arg \max_{\boldsymbol{y}} p(\boldsymbol{x} \mid \boldsymbol{y}) p(\boldsymbol{y}), \end{align*} where we are using the SSNT model described in the previous section as the channel model and a language model that delivers prior probabilities of the output sequence in left-to-right order, i.e., $p(y_i \mid \boldsymbol{y}^{i-1})$. Marginalizing the latent variable during search is computationally hard \citep{simaan:1996}, and so we approximate the search problem as \begin{align*} \hat{\boldsymbol{y}} = \arg \max_{\boldsymbol{y}} \max_{\boldsymbol{z}} p(\boldsymbol{x},\boldsymbol{z} \mid \boldsymbol{y}) p(\boldsymbol{y}). \end{align*} However, even with this simplification, the search problem remains nontrivial. On one hand, we must search over the space of all possible outputs with a model that makes no Markovian assumptions. This is similar to the decoding problem faced in standard seq2seq transducers. On the other hand, our model computes the probability of the given input conditional on the predicted output hypothesis. Therefore, instead of just relying on a single softmax to provide a probability for every output word type (as we conveniently can in the direct model), we must loop over each output word type, and run a softmax over the input vocabulary---a computational expense that is quadratic in the size of the vocabulary! To reduce this computational effort, we make use of an auxiliary direct model $q(\boldsymbol{y}, \boldsymbol{z} \mid \boldsymbol{x})$ to explore probable extensions of partial hypotheses, rather than trying to perform an exhaustive search over the vocabulary each time we extend an item on the beam. 
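In essence, the proposal model narrows each extension step from the full output vocabulary down to $K_1$ candidates, which the channel and language models then rescore, keeping the best $K_2$. The following is a simplified, single-step sketch of this propose-and-rescore idea; it ignores the latent alignment variable and the chart over input positions used in the full algorithm, and the score functions are placeholders rather than the actual models.
\begin{verbatim}
import heapq

def extend_beam(prefix, x, vocab, q_score, channel_score, lm_score,
                K1=20, K2=10):
    """prefix: partial output hypothesis (list of tokens); x: input sequence.
       q_score(prefix, y, x)   -> log q(y | prefix, x)   (direct proposal model)
       channel_score(x, cand)  -> log p(x | cand)        (channel model)
       lm_score(cand)          -> log p(cand)            (language model)
       Simplified: ignores the alignment variable z and the (i, j) chart.
    """
    # Step 1: cheap proposal, scoring every vocabulary item with the direct model
    proposals = heapq.nlargest(K1, vocab, key=lambda y: q_score(prefix, y, x))
    # Step 2: expensive rescoring, noisy-channel score on only the K1 candidates
    def noisy_channel(y):
        cand = prefix + [y]
        return channel_score(x, cand) + lm_score(cand)
    return heapq.nlargest(K2, proposals, key=noisy_channel)
\end{verbatim}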
Algorithm~\ref{decode}, in Appendix~\ref{sec:algo_appendix}, describes the decoding algorithm based on a formulation by \citet{tillmann1997dp}. The idea is to create a matrix $Q$ of partial hypotheses. Each hypothesis in cell $(i,j)$ covers the first $i$ words of the input ($\boldsymbol{x}_1^i$) and corresponds to an output hypothesis prefix of length $j$ ($\boldsymbol{y}_1^j$). The hypothesis is associated with a model score. For each cell $(i, j)$, the direct proposal model first calculates the scores of possible extensions of previous cells that could then reach $(i,j)$ by considering every token in the output vocabulary, from all previous candidate cells $(i-1,\le j)$. That gives the top $K_1$ partial output sequences. These partial output sequences are subsequently rescored by the noisy channel model, and the $K_2$ best candidates are kept in the beam and used for further extension. The beam size $K_1$ and $K_2$ are hyperparameters to be tuned in the experiments. \subsection{Model combination} The decoder we have just described makes use of an auxiliary decoding model. This means that, as a generalisation, it is capable of decoding under an objective that is a linear combination of the direct model, channel model, language model and a bias for the output length\footnote{In the experiments, we did not marginalize the probability of the direct model when calculating the general search objective. We found that marginalizing the probability does not give better performance and makes decoding extremely slow.}, \begin{equation} O_{\boldsymbol{x}_1^i, \boldsymbol{y}_1^j} = \lambda_1 \log p(\boldsymbol{y}_1^j\ |\ \boldsymbol{x}_1^i) + \lambda_2 \log p(\boldsymbol{x}_1^i\ |\ \boldsymbol{y}_1^j) + \lambda_3 \log p(\boldsymbol{y}_1^j) + \lambda_4 |\boldsymbol{y}_1^j|. \end{equation} The bias is used to penalize the noisy channel model for generating too-short (or long) sequences. The $\lambda$'s are hyperparameters to be tuned using on a small amount of held-out development data. \section{Experiments} \label{sec:experiments} We evaluate our model on three natural language processing tasks, abstractive sentence summarisation, machine translation and morphological inflection generation. For each task, we compare the performance of the direct model, noisy channel model, and the interpolation of the two models. \subsection{Abstractive Sentence Summarisation} Sentence summarisation is the problem of constructing a shortened version of a sentence while preserving the majority of its meaning. In contrast to extractive summarisation, which can only copy words from the original sentence, abstractive summarisation permits arbitrary rewording of the sentence. The dataset \citep{DBLP:conf/emnlp/RushCW15} that we use is constructed by pairing the first sentence and the headline of each article from the annotated Gigaword corpus \citep{graff2003english,napoles2012annotated}. There are 3.8m, 190k and 381k sentence pairs in the training, validation and test sets, respectively. \cite{yu:2016} filtered the dataset by restricting the lengths of the input and output sentences to be no greater than 50 and 25 tokens, respectively. From the filtered data, they further sampled 1 million sentence pairs for training. We experimented on training the direct model and channel model with both the sampled 1 million and the full 3.8 million parallel data. The language model is trained on the target side of the parallel data, i.e. the headlines. 
We evaluated the generated summaries of 2000 randomly sampled sentence pairs using full length ROUGE F1. This setup is in line with the previous work on this task \citep{DBLP:conf/emnlp/RushCW15,chopra,gulcehre2016pointing,yu:2016}. The same configuration is used to train the direct model and the channel model. The loss (Equation \ref{loss}) is optimized by Adam \citep{DBLP:journals/corr/KingmaB14}, with initial learning rate of 0.001. We use LSTMs with 1 layer for both the encoder and decoders, with hidden units of 256. The mini-batch size is 32, and dropout of 0.2 is applied to the input and output of LSTMs. For the language model, we use a 2-layer LSTM with 1024 hidden units and 0.5 dropout. The learning rate is 0.0001. All the hyperparameters are optimised via grid search on the perplexity of the validation set. During decoding, beam search is employed with the number of proposals generated by the direct model $K_1 = 20$, and the number of best candidates selected by the noisy channel model $K_2 = 10$. Table \ref{test_rg} presents the ROUGE-F1 scores of the test set from the direct model, noisy channel model (channel + LM + bias), the interpolation of the direct model and the noisy channel model (direct + channel + LM + bias), and the interpolation of the direct model and language model (direct + LM + bias) trained on different sizes of data. The noisy channel model with the language model trained on the target side of the 1 million parallel data outperforms the direct model by approximately 1 point. Such improvement indicates that the language model helps improve the quality of the output sequence when no extra unlabelled data is available. Training the language model with all the headlines in the dataset, i.e. 3.8 million sentences, gives a further boost to the ROUGE score. This is in line with our expectation that the model benefits from adding large amounts of unlabelled data. The interpolation of the direct model, channel model, language model and bias of the output length achieves the best results --- the ROUGE score is close to the direct model trained on all the parallel data. Although there is still improvement, when the direct model is trained with more data, the gap between the direct model and the noisy channel model is smaller. No gains is observed if the language model is combined with the direct model. We find that as we increase the weight of the language model, the result is getting worse. Table \ref{prev_work} surveys published results on this task, and places our best models in the context of the current state-of-the-art results. ABS+ \citep{DBLP:conf/emnlp/RushCW15}, RAS-LSTM and RAS-Elman \citep{chopra} are different variations of the attentive models. {\it Pointing the unkown words} uses pointer networks \citep{vinyals2015pointer} to select the output token from the input sequence in order to avoid generating unknown tokens. ASC + FSC \citep{miao2016} is a semi-supervised model based on a variational autoencoder. Trained on 1m paired samples and 3.8m unpaired samples, the noisy channel achieves comparable or better results than (direct) models trained with 3.8m paired samples. Compared to \cite{miao2016}, whose ASC + FSC models is an alternative strategy for using unpaired data, the noisy channel is significantly more effective --- 33.21 versus 31.09 in ROUGE-1. 
Finally, motivated by the qualitative observation that noisy channel model outputs were quite fluent and often used reformulations of the input rather than a strict compression (which would be poorly scored by ROUGE), we carried out a human preference evaluation whose results are summarised in Table~\ref{tab:human}. This confirms that noisy channel summaries are strongly preferred over those of the direct model. \subsection{Machine Translation} We next evaluate our models on a Chinese--English machine translation task. We used parallel data with 184k sentence pairs (from the FBIS corpus, LDC2003E14) and monolingual data with 4.3 million of English sentences (selected from the English Gigaword). The training data is preprocessed by lowercasing the English sentences, replacing digits with `\#' token, and replacing tokens appearing less than 5 times with an UNK token. This results in vocabulary sizes of 30k and 20k for Chinese sentences and English sentences, respectively. The models are trained using Adam \citep{DBLP:journals/corr/KingmaB14} with initial learning rate of 0.001 for the direct model and the channel model, and 0.0001 for the language model. The LSTMs for the direct and channel models have 512 hidden units and 1 layer, and 2 layers with 1024 hidden units per layer for the language model. Dropout of 0.5 on the input and output of LSTMs is set for all the model training. The noisy channel decoding uses $K_1$ = 20 and $K_2$ = 10 as the beam sizes. Table \ref{mt-result} lists the translation performance of different models in BLEU scores. To set benchmarks, we train the vanilla and attentional sequence to sequence models \citep{sutskever:2014,bahdanau:2015} using the same parallel data. For direct models, we leverage bidirectional LSTMs as the encoder for this task. We can see that the vanilla sequence to sequence model behaves poorly due to the small amounts of parallel data. By contrast, the direct model (SSNT) and the attentional model work relatively well, with the attentional model outperforming the SSNT direct model. Although these models both directly model $p(\boldsymbol{y} \mid \boldsymbol{x})$, this result is unsurprising because the SSNT direct model is most effective when the alignment between sequences is largely monotonic, and Chinese--English translation word orders diverge considerably. However, despite this limitation, the noisy channel model is approximately 3 points higher in BLEU than the direct model, and the combination of noisy channel and direct model gives extra boost. Confirming the empirical findings of prior work (and in line with theoretical predictions), the interpolation of the direct model and language model is not effective. \subsection{Morphological Inflection Generation} Morphological inflection is the task of generating a target (inflected form) word from a source word (base form), given a morphological attribute, e.g. number, tense, and person etc.. It is useful for reducing data sparsity issues in translating morphologically rich languages. The transformation from the base form to the inflected form is usually to add prefix or suffix, or to do character replacement. The dataset \citep{DBLP:conf/naacl/DurrettD13} that we use in the experiments is created from Wiktionary, including inflections for German nouns, German verbs, Spanish Verbs, Finnish noun and adjective, and Finnish verbs. 
We only experimented on German nouns and German verbs, as German nouns is the most difficult task\footnote{While state-of-the-art systems can achieve 99\% accuracies on Spanish verbs and Finnish verbs, they can only get 89\% accuracy on German nouns.}, and the direct model does not perform as well as other state-of-the-art systems on German verbs. The train/dev/test split for German nouns is 2364/200/200, and for German verbs is 1617/200/200. There are 8 and 27 inflection types in German nouns and German verbs, respectively. Following previous work, we learn a separate model for each type of inflection independent of the other inflections. We report results on the average accuracy across different inflections. Our language models were trained on word types extracted by running a morphological analysis tool on the WMT 2016 monolingual data and extracting examples of appropriately inflected word forms.\footnote{\url{http://www.statmt.org/wmt16/translation-task.html}} After annotation the number of instances for training the language model ranged from 300k to 3.8m for different inflection types in German nouns, and from 200 to 54k in German verbs. The experimental setup that we use on this task is $K_1$ = 60, $K_2$ = 30, \begin{itemize} \item direct and channel model: 1 layer LSTM with 128 hidden, $\eta = 0.001$, dropout = 0.5. \item language model: 2 layer LSTM with 512 hidden, $\eta = 0.0001$, dropout = 0.5. \end{itemize} Table \ref{morph-result} summarises the results from our models. On both datasets, the noisy channel model (channel + LM + bias) does not perform as well as the direct model, but the interpolation of the direct model and noisy channel model (direct + channel + LM + bias) significantly outperforms the direct model. The interpolation of the direct model and language model (direct + LM + bias) achieves better results than the direct model and the noisy channel model on German nouns, but not on German verbs. For further comparison, we also included the state-of-the-art results as benchmarks. NCK15 \citep{DBLP:conf/naacl/NicolaiCK15} tackles the task based on the three-stage approach: (1) align the source and target word, (2) extract inflection rules, (3) apply the rule to new examples. FTND16 \citep{DBLP:conf/naacl/FaruquiTND16} is based on neural sequence to sequence models. Both models (NCK15+ and FTND16+) rerank the candidate outputs by the scores predicted from n-gram language models, together with other features. \section{Analysis} By observing the output generated by the direct model and noisy channel model, we find (in line with theoretical critiques of conditional models) that the direct model may leave out key information. By contrast, the noisy channel model does seem to avoid this issue. To illustrate, in Example~1 (see Appendix~B) in Table~\ref{example}, the direct model ignores the key phrase `coping with', resulting in incomplete meaning, but the noisy channel model covers it. Similarly, in Example 6, the direct model does not translate the Chinese word corresponding to `investigation'. We also observe that while the direct model mostly copies words from the source sentence, the noisy channel model prefers generating paraphrases. For instance, in Example 2, while the direct model copies the word `accelerate' in the generated output, the noisy channel model generate `speed up' instead. While one might argue that copying is a preferable compression technique than paraphrasing (as long as it produces grammatical outputs), it does show the power of these models. 
\section{Related work} Noisy channel decompositions have been successfully used in a variety of problems, including speech recognition~\citep{jelinek:1998}, machine translation~\citep{brown:1993}, spelling correction~\citep{brill:2000}, and question answering~\citep{echihabi:2003}. The idea of adding language models and monolingual data in machine translation has been explored in earlier work. \cite{gulcehre:2015} propose two strategies of combining a language model with a neural sequence to sequence model. In shallow fusion, during decoding the sequence to sequence model (direct model) proposes candidate outputs and these candidates are reranked based on the scores calculated by a weighted sum of the probability of the translation model and that of the language model. In deep fusion, the language model is integrated into the decoder of the sequence to sequence model by concatenating their hidden state at each time step. \cite{sennrich:2016} incorporate target language unpaired training data by doing back-translation to create synthetic parallel training data. While this technique is quite effective, its practicality seems limited to problems where the inputs and outputs contain roughly the same information (such as translation). \cite{cheng:2016} leverages the abundant monolingual data by doing multitask learning with an autoencoding objective. A number of papers have remarked on the tendency for content to get dropped (or repeated) in translation. \citet{liu:2016} propose translating in both a left-to-right and a left-to-right direction and seeking a consensus. \citet{tu:2016} propose augmenting a direct model's decoding objective with a reverse translation model (similar to our channel model except it conditions on the direct model's output RNN's hidden states rather than the words); however, that work just reranks complete translation hypotheses rather than developing a model that permits an incremental search. Another trend of work that is related to our model is the investigation of making online prediction for machine translation \citep{gu:2016,grissom:2014,sankaran:2010} and speech recognition \citep{hwang:2016,jaitly2015neural}. Our direct model (and channel model) shares the idea of introducing stochastic latent variables to neural networks with several papers and marginalising these during training. Examples include connectionist temporal classification (CTC) \citep{graves2006connectionist} and the more recent segmental recurrent neural networks (SRNN) \citep{kong2015segmental}. Compared to these models, our direct model has the advantage of capturing unbounded dependencies of output words. The direct model is closely related to the sequence transduction model \citep{graves2012sequence} in the way of modeling the probability of predicting output tokens and marginalizing latent variables using dynamic programming. However, rather than modeling the joint distribution over outputs and alignments by inserting null symbols into the output sequence, our direct model defines a separate latent alignment variable, with alignment distribution defined with neural networks. Similar to our work, the model in \citep{alkhoulialignment} is decomposed into the alignment model and the model of word predictions. The two models are trained separately and combined during decoding, with subsequent refinements using a Viterbi-EM approximation. 
By contrast, in our direct and channel models, the latent and observed components of the models are trained jointly using a dynamic program to exactly marginalise the unobserved variables. \section{Conclusion} We have presented and empirically validated a noisy channel transduction model that uses component models based on recurrent neural networks. This formulation lets us use unpaired outputs to estimate the parameters of the source model and input-output pairs to train the channel model. Despite the channel model's ability to condition on long sequences, we are able to maintain tractable decoding by using a latent segmentation variable that breaks the conditioning context up into a series of monotonically growing segments. Our experiments show that this model makes excellent use of unpaired training data. \bibliographystyle{iclr2017_conference} \newpage \appendix \section{Algorithm} \label{sec:algo_appendix} \begin{algorithm*}[ht] \caption{Noisy Channel Decoding} \label{decode} \begin{algorithmic} \State \textbf{Notation: } $Q$ is the Viterbi matrix, bp is the backpointer, $W$ stores the predicted tokens, $\mathcal{V}$ refers to the vocabulary, $I=|\boldsymbol{x}|$, and $J_\text{max}$ denotes the maximum number of output tokens that can be predicted. \State \textbf{Input: } source sequence $\boldsymbol{x}$ \State \textbf{Output: } best output sequence $\boldsymbol{y^*}$ \State \textbf{Initialisation: } $Q \in \mathbb{R}^{I \times J_\text{max}\times K_1}$, bp $\in \mathbb{N}^{I \times J_\text{max}\times K_1}$, $W \in \mathbb{N}^{I \times J_\text{max}\times K_1}$, \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ $Q_{temp} \in \mathbb{R}^{K_1}$, $bp_{temp} \in \mathbb{N}^{K_1}$, $W_{temp} \in \mathbb{N}^{K_1}$ \For{$i \in [1, I]$} \State $Q_{temp} \gets \topk(K_1)_{y \in \mathcal{V}}q(z_1 = i) $ $q(y\ |\ \textsc{start}, z_1, \boldsymbol{x}_{1}^{z_1})$ \Comment Candidates generated by $q(\boldsymbol{y}\ |\ \boldsymbol{x})$. \State $bp_{temp}\gets 0$ \State $W_{temp} \gets \argtopk(K_1)_{y \in \mathcal{V}}q(z_1 = i)$ $q(y\ |\ \textsc{start}, z_1, \boldsymbol{x}_1^{z_1})$ \State $Q[i, 1] \gets \topk(K_2)_{y \in W_{temp}} O_{\boldsymbol{x}_1^i, y}$ \Comment Rerank the candidates by objective ($O$). \State $W[i,1] \gets \argtopk(K_2)_{y \in W_{temp}}O_{\boldsymbol{x}_1^i, y}$ \EndFor \For{$j\in[2, J_\text{max}]$} \For{$i \in [1, I]$} \State $Q_{temp} \gets \topk(K_1)_{y \in \mathcal{V}, k \in [1, i]} Q[k,j-1] \cdot$ $q(z_j = i\ |\ z_{j-1} = k)q(y\ |\ \boldsymbol{y}_1^{j-1}, z_j, \boldsymbol{x}_1^{z_j})$ \State $bp_{temp} , W_{temp} \gets \argtopk(K_1)_{y \in \mathcal{V}, k \in [1, i]} $ $Q[k,j-1]q(z_j = i\ |\ z_{j-1} = k) \cdot$ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ $q(y\ |\ \boldsymbol{y}_1^{j-1}, z_j, \boldsymbol{x})$ \State $Y \gets \candidate(bp_{temp}, W_{temp})$ \Comment Get partial candidate $\boldsymbol{y}_1^j$. \State $Q[i,j] \gets \topk(K_2)_{\boldsymbol{y}_j \in Y} O_{\boldsymbol{x}_1^i, \boldsymbol{y}_1^j}$ \State $bp[i,j] , W[i,j] \gets %\argtopk(K_2)_{\boldsymbol{y}_1^j \in Y}$ $p(\boldsymbol{x}_1^i\ |\ \boldsymbol{y}_1^j) p(\boldsymbol{y}_1^j)$ \argtopk(K_2)_{\boldsymbol{y}_1^j \in Y}$ $O_{\boldsymbol{x}_1^i, \boldsymbol{y}_1^j}$ \EndFor \EndFor \State \Return a sequence of words stored in $W$ by following backpointers starting from $(I,\argmax_j Q[I, j])$. 
\end{algorithmic} \end{algorithm*} \section{Example outputs} \label{sec:outputs} \end{document}
The Neural Noisy Channel
1611.02554
Table 1: ROUGE F1 scores on the sentence summarisation test set. The ‘uni’ and ‘bi’ in parentheses denote whether the encoder of the model proposing candidates is a unidirectional or a bidirectional LSTM. Rows marked with an ∗ denote models that process their input online.
[ "Model", "# Parallel data", "# Data for LM", "RG-1", "RG-2", "RG-L" ]
[ [ "direct (uni)∗", "1.0m", "-", "30.94", "14.20", "28.72" ], [ "direct (bi)", "1.0m", "-", "31.25", "14.52", "29.03" ], [ "direct (bi)", "3.8m", "-", "33.82", "16.66", "31.50" ], [ "channel + LM + bias (uni)∗", "1.0m", "1.0m", "31.92", "14.75", "29.58" ], [ "channel + LM + bias (bi)", "1.0m", "1.0m", "31.96", "14.89", "29.51" ], [ "direct + channel + LM + bias (uni)", "1.0m", "1.0m", "33.07", "15.21", "30.29" ], [ "direct + channel + LM + bias (bi)", "1.0m", "1.0m", "33.18", "15.65", "30.45" ], [ "channel + LM + bias (uni)∗", "1.0m", "3.8m", "32.59", "15.05", "30.06" ], [ "channel + LM + bias (bi)", "1.0m", "3.8m", "32.65", "14.95", "30.23" ], [ "direct + LM + bias (bi)", "1.0m", "3.8m", "31.25", "14.52", "29.03" ], [ "direct + channel + LM + bias (uni)", "1.0m", "3.8m", "33.16", "15.63", "30.53" ], [ "direct + channel + LM + bias (bi)", "1.0m", "3.8m", "33.21", "15.65", "30.60" ], [ "chanel + LM + bias (bi)", "3.8m", "3.8m", "34.12", "16.41", "31.38" ], [ "direct + LM + bias (bi)", "3.8m", "3.8m", "33.82", "16.66", "31.50" ], [ "direct + channel + LM + bias (bi)", "3.8m", "3.8m", "[BOLD] 34.41", "[BOLD] 16.86", "[BOLD] 31.83" ] ]
The noisy channel model with the language model trained on the target side of the 1 million parallel sentence pairs outperforms the direct model by approximately 1 point. This improvement indicates that the language model helps improve the quality of the output sequence even when no extra unlabelled data is available. Training the language model on all the headlines in the dataset, i.e. 3.8 million sentences, gives a further boost to the ROUGE score, in line with our expectation that the model benefits from large amounts of unlabelled data. The interpolation of the direct model, channel model, language model and output length bias achieves the best results: the ROUGE score is close to that of the direct model trained on all the parallel data. The noisy channel model still improves over the direct model when the latter is trained on more data, although the gap between the two becomes smaller. No gains are observed when the language model is combined with the direct model alone; as we increase the weight of the language model, the results get worse.
\documentclass{article} % For LaTeX2e \newcommand{\cjd}[1]{\textcolor{blue}{\bf \small [#1 --CJD]}} \newcommand{\pb}[1]{\textcolor{orange}{\bf \small [#1 --PB]}} \newcommand{\leiyu}[1]{\textcolor{red}{\bf \small [#1 --Lei]}} \newcommand{\etg}[1]{\textcolor{pink}{\bf \small [#1 --ETG]}} \DeclareMathOperator*{\argmax}{arg\,max} \DeclareMathOperator*{\argmin}{arg\,min} \DeclareMathOperator*{\topk}{topk} \DeclareMathOperator*{\argtopk}{arg\,topk} \DeclareMathOperator*{\candidate}{getCandidateOutputs} \title{The Neural Noisy Channel} \author{Lei Yu$^1$\thanks{Work completed at DeepMind.} , Phil Blunsom$^{1,2}$, Chris Dyer$^{2}$, Edward Grefenstette$^{2}$, and Tom\'{a}\v{s} Ko\v{c}isk\'{y}$^{1,2}$ \\ $^1$University of Oxford and $^2$DeepMind \\ {\tt lei.yu@cs.ox.ac.uk, \{pblunsom,cdyer,etg,tkocisky\}@google.com} } \newcommand{\fix}{\marginpar{FIX}} \newcommand{\new}{\marginpar{NEW}} \iclrfinalcopy % Uncomment for camera-ready version \begin{document} \maketitle \begin{abstract} We formulate sequence to sequence transduction as a noisy channel decoding problem and use recurrent neural networks to parameterise the source and channel models. Unlike direct models which can suffer from explaining-away effects during training, noisy channel models must produce outputs that explain their inputs, and their component models can be trained with not only paired training samples but also unpaired samples from the marginal output distribution. Using a latent variable to control how much of the conditioning sequence the channel model needs to read in order to generate a subsequent symbol, we obtain a tractable and effective beam search decoder. Experimental results on abstractive sentence summarisation, morphological inflection, and machine translation show that noisy channel models outperform direct models, and that they significantly benefit from increased amounts of unpaired output data that direct models cannot easily use. \end{abstract} \section{Introduction} Recurrent neural network sequence to sequence models~\citep{kalchbrenner:2013,sutskever:2014,bahdanau:2015} are excellent models of $p(\textrm{output sequence }\boldsymbol{y} \mid \textrm{input sequence }\boldsymbol{x})$, provided sufficient input--output $(\boldsymbol{x},\boldsymbol{y})$ pairs are available for estimating their parameters. However, in many domains, vastly more unpaired output examples are available than input--output pairs (e.g., transcribed speech is relatively rare although non-spoken texts are abundant; Swahili--English translations are rare although English texts are abundant; etc.). A classic strategy for exploiting both kinds of data is to use Bayes' rule to rewrite $p(\boldsymbol{y} \mid \boldsymbol{x})$ as $p(\boldsymbol{x} \mid \boldsymbol{y}) p(\boldsymbol{y})/p(\boldsymbol{x})$, a factorisation which is called a \textbf{noisy channel model}~\citep{shannon:1948}. 
A noisy channel model thus consists of two component models: the conditional \textbf{channel model}, $p(\boldsymbol{x} \mid \boldsymbol{y})$, which characterizes the \emph{reverse} transduction problem and whose parameters are estimated from the paired $(\boldsymbol{x},\boldsymbol{y})$ samples, and the unconditional \textbf{source model}, $p(\boldsymbol{y})$, whose parameters are estimated from both the paired and (usually much more numerous) unpaired samples.\footnote{We do not model $p(\boldsymbol{x})$ since, in general, we will be interested in finding $\argmax_\mathbf{y}p(\mathbf{y} \mid \mathbf{x})$, and $\argmax_\mathbf{y}p(\mathbf{y} \mid \mathbf{x}) = \argmax_\mathbf{y} \frac{p(\mathbf{x} \mid \mathbf{y})p(\mathbf{y})}{p(\mathbf{x})}= \argmax_\mathbf{y} p(\mathbf{x} \mid \mathbf{y})p(\mathbf{y})$.} Beyond their data omnivorousness, noisy channel models have other benefits. First, the two component models mean that two different aspects of the transduction problem can be addressed independently. For example, in many applications, source models are language models and innovations in these can be leveraged to obtain improvements in any system that uses them as a component. Second, the component models can have complementary strengths, since inference is carried out in the product space; this simplifies design because a single model does not have to get everything perfectly right. Third, the noisy channel operates by selecting outputs that both are \emph{a priori} likely \emph{and} that explain the input well. This addresses a failure mode that can occur in conditional models in which inputs are ``explained away'' by highly predictive output prefixes, resulting in poor training \citep{klein:2001}. Since the noisy channel formulation requires its outputs to explain the observed input, this problem is avoided. In principle, the noisy channel decomposition is straightforward; however, in practice, decoding (i.e., computing $\arg \max_{\boldsymbol{y}} p(\boldsymbol{x} \mid \boldsymbol{y}) p(\boldsymbol{y})$) is a significant computational challenge, and tractability concerns impose restrictions on the form the component models can take. To illustrate, an appealing parameterization would be to use an attentional seq2seq network \citep{bahdanau:2015} to model the channel probability $p(\boldsymbol{x} \mid \boldsymbol{y})$. However, seq2seq models are designed under the assumption that the complete conditioning sequence is available before any prefix probabilities of the output sequence can be computed. This assumption is problematic for channel models since it means that a complete output sequence must be constructed before the channel model can be evaluated (since the channel model conditions on the output). Therefore, to be practical, the channel probability must decompose in terms of prefixes of the conditioning variable, $\boldsymbol{y}$. While the chain rule justifies decomposing output variable probabilities in terms of successive extensions of a partial prefix, no such convenience exists for conditioning variables, and approximations must be introduced. In this work, we use a variant of the newly proposed online seq2seq model of \citet{yu:2016} which uses a latent alignment variable to enable its probabilities to factorize in terms of prefixes of both the input and output, making it an appropriate channel model~(\S\ref{sec:model}). Using this channel model, the decoding problem then becomes similar to the problem faced when decoding with direct models~(\S\ref{sec:decoding}). 
Experiments on abstractive summarization, machine translation, and morphological inflection show that the noisy channel can significantly improve performance and exploit unpaired output training samples and that models that combine the direct model and a noisy channel model offer further improvements still~(\S\ref{sec:experiments}). \section{Background: Segment to Segment Neural Transduction} \label{sec:model} Our model is based on the Segment to Segment Neural Transduction model (SSNT) of Yu et al., 2016. At a high level, the model alternates between encoding more of the input sequence and decoding output tokens from the encoded representation. This presentation deviates from the Yu et al.'s presentation so as to emphasize the incremental construction of the conditioning context that is enabled by the latent variable. \subsection{Model description} Similar to other neural sequence to sequence models, SSNT models the conditional probability $p(\boldsymbol{y} \mid \boldsymbol{x})$ of a output sequence $\boldsymbol{y}$ given a input sequence $\boldsymbol{x}$. To avoid having to observe the complete input sequence $\boldsymbol{x}$ before making a prediction of the beginning of the output sequence, we introduce a latent alignment variable $\boldsymbol{z}$ which indicates when each token of the output sequence is to be generated as the input sequence is being read. Since we assume that the input is read just once from left to right, we restrict $\boldsymbol{z}$ to be a monotonically increasing alignment (i.e., $z_{j+1} \ge z_j$ is true with probability 1), where $z_j = i$ denotes that the output token at position $j$ ($y_j$) is generated when the input sequence up through position $i$ has been read. The SSNT model is: \begin{align} \begin{split} p(\boldsymbol{y} \mid \boldsymbol{x}) & = \sum_{\boldsymbol{z}} p(\boldsymbol{y}, \boldsymbol{z} \mid \boldsymbol{x}) \\ p(\boldsymbol{y}, \boldsymbol{z} \mid \boldsymbol{x}) & \approx \prod_{j=1}^{|\boldsymbol{y}|} \underbrace{p(z_j \mid z_{j-1}, \boldsymbol{x}_{1}^{z_j}, \boldsymbol{y}_1^{j-1})}_{\text{alignment probability}} \underbrace{p(y_j \mid \boldsymbol{x}_{1}^{z_j}, \boldsymbol{y}_1^{j-1})}_{\text{word probability}}. \label{eq:model} \end{split} \end{align} We explain the model in terms of its two components, starting with the word generation term. In the SSNT, the input and output sequences $\boldsymbol{x}$, $\boldsymbol{y}$ are encoded with two separate LSTMs \citep{hochreiter1997long}, resulting in sequences of hidden states representing prefixes of these sequences. In Yu et al.'s formulation, the input sequence encoder (i.e., the conditioning context encoder) can either be a unidirectional or bidirectional LSTM, but here we assume that it is a unidirectional LSTM, which ensures that it will function well as a channel model that can compute probabilities with incomplete conditioning contexts (this is necessary since, at decoding time, we will be constructing the conditioning context incrementally). Let $\mathbf{h}_i$ represent the input sequence encoding for the prefix $\boldsymbol{x}_1^{i}$. Since the final action at timestep $j$ will be to predict $y_j$, it is convenient to let $\mathbf{s}_j$ denote the hidden state that excludes $y_j$, i.e., the encoding of the prefix $\boldsymbol{y}_1^{j-1}$. 
The probability of the next token $y_j$ is calculated by concatenating the aligned hidden state vectors $\mathbf{s}_j$ and $\mathbf{h}_{z_j}$ followed by a softmax layer, \begin{align*} p(y_j \mid \boldsymbol{x}_1^{z_j}, \boldsymbol{y}_1^{j-1}) \propto \text{exp} (\mathbf{W}_w[\mathbf{h}_{z_j};\mathbf{s}_j] + \mathbf{b}_w). \end{align*} The model thus depends on the current alignment position $z_j$, which determines how far into $\boldsymbol{x}$ it has read. We now discuss how the sequence of $z_j$'s are generated. First, we remark that modelling this distribution requires some care so as to avoid conditioning on the entire input sequence. To illustrate why one might induce a dependency on the entire input sequence in this model, it is useful to compare to a standard attention model. Attention models operate by computing a score using a representation of alignment candidate (in our case, the candidates would be every unread token remaining in the input). If we followed this strategy, it would be necessary to observe the full input sequence when making the first alignment decision. We instead model the alignment transition from timestep $j$ to $j+1$ by decomposing it into a sequence of conditionally independent \textsc{shift} and \textsc{emit} operations that progressively decide whether to read another token or stop reading. That is, at input position $i$, the model decides to \textsc{emit}, i.e., to set $z_j=i$ and predict the next output token $y_j$ from the word model, or it decides to \textsc{shift}, i.e., to read one more input token and increment the input position $i \gets i+1$. The probability $p(a_{i,j} = \textsc{emit} \mid \boldsymbol{x}_1^{i}, \boldsymbol{y}_1^{j-1})$ is calculated using the encoder and decoder states defined above as: \begin{align*} p(a_{i,j} = \textsc{emit} \mid \boldsymbol{x}_{1}^{i}, \boldsymbol{y}_1^{j-1}) = \sigma(\text{MLP}(\mathbf{W}_t[\mathbf{h}_{i};\mathbf{s}_j] + b_t)). \end{align*} The probability of \textsc{shift} is simply $1-p(a_{i,j} = \textsc{emit})$. In this formulation, the probabilities of aligning $z_j$ to each alignment candidate $i$ can be computed by reading just $\boldsymbol{x}_1^i$ (rather than the entire sequence). The probabilities are also independent of the contents of the suffix $\boldsymbol{x}_{i+1}^{|\boldsymbol{x}|}$. Using the probabilities of the auxiliary $a_{i,j}$ variables, the alignment probabilities needed in Eq.~\ref{eq:model} are computed as: \begin{align*} p(z_j = i \mid z_{j-1}, \boldsymbol{y}_1^{j-1}, \boldsymbol{x}_{1}^{i}) &= \begin{cases} 0 & \text{if }i < z_{j-1} \\ p(a_{i,j} = \textsc{emit}) & \text{if }i=z_{j-1} \\ \left(\prod_{i'=z_{j-1}}^{i-1} p(a_{i',j} = \textsc{shift}) \right) p(a_{i,j} = \textsc{emit}) & \text{if }i>z_{j-1} \end{cases} \end{align*} \subsection{Inference algorithms} In SSNT, the probability of generating each $y_j$ depends only on the current output position's alignment ($z_j$), the current output prefix ($\boldsymbol{y}_1^{j-1}$), the input prefix up to the current alignment ($\boldsymbol{x}_1^{z_j}$). It does \emph{not} depend on the history of the alignment decisions. Likewise, the alignment decisions at each position are also conditionally independent of the history of alignment decisions. 
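To make the alignment probability concrete, consider a small worked example (the indices are chosen purely for illustration). Suppose the previous output token was emitted at input position $z_{j-1}=2$ and the next output token $y_j$ is emitted at position $z_j=4$. The model must \textsc{shift} past positions 2 and 3 and then \textsc{emit} at position 4, so
\begin{align*}
p(z_j = 4 \mid z_{j-1} = 2, \boldsymbol{y}_1^{j-1}, \boldsymbol{x}_{1}^{4}) = p(a_{2,j} = \textsc{shift})\, p(a_{3,j} = \textsc{shift})\, p(a_{4,j} = \textsc{emit}),
\end{align*}
which depends only on the prefix $\boldsymbol{x}_1^4$, exactly the property needed to evaluate the channel model incrementally during decoding.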
Because of these independence assumptions, $\boldsymbol{z}$ can be marginalised using a $O(|\boldsymbol{x}|^2 \cdot |\boldsymbol{y}|)$ time dynamic programming algorithm where each fill in a chart with computing the following marginal probabilities: \begin{align*} \begin{split} \alpha(i,j) & = p(z_j=i, \boldsymbol{y}_1^j \mid \boldsymbol{x}_1^{z_j}) = \sum_{i'=1}^{i} \alpha(i',j-1) \underbrace{p(z_j \mid z_{j-1}, \boldsymbol{x}_{1}^{z_j}, \boldsymbol{y}_1^{j-1})}_{\text{alignment probability}} \underbrace{p(y_j \mid \boldsymbol{x}_{1}^{z_j}, \boldsymbol{y}_1^{j-1})}_{\text{word probability}}. \end{split} \end{align*} The model is trained to minimize the negative log likelihood of the parallel corpus $S$: \begin{equation} \label{loss} \begin{split} \mathcal{L}(\boldsymbol{\theta}) &= - \sum_{(\boldsymbol{x}, \boldsymbol{y}) \in S} \log p(\boldsymbol{y}\ |\ \boldsymbol{x}; \boldsymbol{\theta})\\ &= - \sum_{(\boldsymbol{x}, \boldsymbol{y}) \in S} \log \alpha(|\boldsymbol{x}|, |\boldsymbol{y}|).\\ \end{split} \end{equation} The gradients of this objective with respect to the component probability models can be computed using automatic differentiation or using a secondary dynamic program that computes `backward' probabilities. We refer the reader to Section 3.1 of \citet{yu:2016} for details. In this paper, we use a slightly different objective from the one described in \citet{yu:2016}. Rather than marginalizing over the paths that end in any possible input positions $\sum_{i=1}^I\alpha(i, |\boldsymbol{y}|)$, we require that the full input be consumed when the final output symbol is generated. This constraint biases away from predicting outputs without explaining them using the input sequence. \section{Decoding} \label{sec:decoding} We now turn to the problem of decoding, that is, of computing \begin{align*} \hat{\boldsymbol{y}} = \arg \max_{\boldsymbol{y}} p(\boldsymbol{x} \mid \boldsymbol{y}) p(\boldsymbol{y}), \end{align*} where we are using the SSNT model described in the previous section as the channel model and a language model that delivers prior probabilities of the output sequence in left-to-right order, i.e., $p(y_i \mid \boldsymbol{y}^{i-1})$. Marginalizing the latent variable during search is computationally hard \citep{simaan:1996}, and so we approximate the search problem as \begin{align*} \hat{\boldsymbol{y}} = \arg \max_{\boldsymbol{y}} \max_{\boldsymbol{z}} p(\boldsymbol{x},\boldsymbol{z} \mid \boldsymbol{y}) p(\boldsymbol{y}). \end{align*} However, even with this simplification, the search problem remains nontrivial. On one hand, we must search over the space of all possible outputs with a model that makes no Markovian assumptions. This is similar to the decoding problem faced in standard seq2seq transducers. On the other hand, our model computes the probability of the given input conditional on the predicted output hypothesis. Therefore, instead of just relying on a single softmax to provide a probability for every output word type (as we conveniently can in the direct model), we must loop over each output word type, and run a softmax over the input vocabulary---a computational expense that is quadratic in the size of the vocabulary! To reduce this computational effort, we make use of an auxiliary direct model $q(\boldsymbol{y}, \boldsymbol{z} \mid \boldsymbol{x})$ to explore probable extensions of partial hypotheses, rather than trying to perform an exhaustive search over the vocabulary each time we extend an item on the beam. 
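To illustrate the recursion, the marginalisation over $\boldsymbol{z}$ can be written as a short dynamic program. The sketch below (NumPy) assumes the word and alignment log-probabilities have already been computed from the encoder and decoder states and stored as arrays; the function name and array layout are illustrative assumptions rather than the paper's implementation.
\begin{verbatim}
import numpy as np

def ssnt_log_likelihood(word_logp, align_logp, align_start_logp):
    # word_logp[i, j]     : log p(y_{j+1} | x_{1..i+1}, y_{1..j})
    # align_logp[k, i, j] : log p(z_{j+1} = i+1 | z_j = k+1, ...)
    # align_start_logp[i] : log p(z_1 = i+1)
    # Returns log alpha(|x|, |y|); the objective requires the full
    # input to be consumed when the last output token is emitted.
    I, J = word_logp.shape
    alpha = np.full((I, J), -np.inf)
    alpha[:, 0] = align_start_logp + word_logp[:, 0]
    for j in range(1, J):
        for i in range(I):
            # monotone alignment: the previous position k is at most i
            prev = alpha[:i + 1, j - 1] + align_logp[:i + 1, i, j]
            alpha[i, j] = np.logaddexp.reduce(prev) + word_logp[i, j]
    return alpha[I - 1, J - 1]
\end{verbatim}
The nested loops make the $O(|\boldsymbol{x}|^2 \cdot |\boldsymbol{y}|)$ cost of the marginalisation explicit.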
Algorithm~\ref{decode}, in Appendix~\ref{sec:algo_appendix}, describes the decoding algorithm based on a formulation by \citet{tillmann1997dp}. The idea is to create a matrix $Q$ of partial hypotheses. Each hypothesis in cell $(i,j)$ covers the first $i$ words of the input ($\boldsymbol{x}_1^i$) and corresponds to an output hypothesis prefix of length $j$ ($\boldsymbol{y}_1^j$). The hypothesis is associated with a model score. For each cell $(i, j)$, the direct proposal model first calculates the scores of possible extensions of previous cells that could then reach $(i,j)$ by considering every token in the output vocabulary, from all previous candidate cells $(i-1,\le j)$. That gives the top $K_1$ partial output sequences. These partial output sequences are subsequently rescored by the noisy channel model, and the $K_2$ best candidates are kept in the beam and used for further extension. The beam size $K_1$ and $K_2$ are hyperparameters to be tuned in the experiments. \subsection{Model combination} The decoder we have just described makes use of an auxiliary decoding model. This means that, as a generalisation, it is capable of decoding under an objective that is a linear combination of the direct model, channel model, language model and a bias for the output length\footnote{In the experiments, we did not marginalize the probability of the direct model when calculating the general search objective. We found that marginalizing the probability does not give better performance and makes decoding extremely slow.}, \begin{equation} O_{\boldsymbol{x}_1^i, \boldsymbol{y}_1^j} = \lambda_1 \log p(\boldsymbol{y}_1^j\ |\ \boldsymbol{x}_1^i) + \lambda_2 \log p(\boldsymbol{x}_1^i\ |\ \boldsymbol{y}_1^j) + \lambda_3 \log p(\boldsymbol{y}_1^j) + \lambda_4 |\boldsymbol{y}_1^j|. \end{equation} The bias is used to penalize the noisy channel model for generating too-short (or long) sequences. The $\lambda$'s are hyperparameters to be tuned using on a small amount of held-out development data. \section{Experiments} \label{sec:experiments} We evaluate our model on three natural language processing tasks, abstractive sentence summarisation, machine translation and morphological inflection generation. For each task, we compare the performance of the direct model, noisy channel model, and the interpolation of the two models. \subsection{Abstractive Sentence Summarisation} Sentence summarisation is the problem of constructing a shortened version of a sentence while preserving the majority of its meaning. In contrast to extractive summarisation, which can only copy words from the original sentence, abstractive summarisation permits arbitrary rewording of the sentence. The dataset \citep{DBLP:conf/emnlp/RushCW15} that we use is constructed by pairing the first sentence and the headline of each article from the annotated Gigaword corpus \citep{graff2003english,napoles2012annotated}. There are 3.8m, 190k and 381k sentence pairs in the training, validation and test sets, respectively. \cite{yu:2016} filtered the dataset by restricting the lengths of the input and output sentences to be no greater than 50 and 25 tokens, respectively. From the filtered data, they further sampled 1 million sentence pairs for training. We experimented on training the direct model and channel model with both the sampled 1 million and the full 3.8 million parallel data. The language model is trained on the target side of the parallel data, i.e. the headlines. 
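As a small illustration of how a partial hypothesis is scored under this combined objective, consider the sketch below; the function signature and the example $\lambda$ values are placeholders, since in the experiments the $\lambda$'s are tuned on held-out development data.
\begin{verbatim}
def combination_objective(direct_logp, channel_logp, lm_logp, output_len,
                          lambdas=(1.0, 1.0, 1.0, -0.3)):
    # direct_logp : log p(y_1^j | x_1^i) from the proposal (direct) model
    # channel_logp: log p(x_1^i | y_1^j) from the SSNT channel model
    # lm_logp     : log p(y_1^j) from the language model
    # output_len  : |y_1^j|, used as a length bias
    l1, l2, l3, l4 = lambdas
    return (l1 * direct_logp + l2 * channel_logp
            + l3 * lm_logp + l4 * output_len)
\end{verbatim}
During beam search, the $K_1$ extensions proposed by the direct model for each cell are rescored with this objective and only the $K_2$ best are kept.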
We evaluated the generated summaries of 2000 randomly sampled sentence pairs using full length ROUGE F1. This setup is in line with the previous work on this task \citep{DBLP:conf/emnlp/RushCW15,chopra,gulcehre2016pointing,yu:2016}. The same configuration is used to train the direct model and the channel model. The loss (Equation \ref{loss}) is optimized by Adam \citep{DBLP:journals/corr/KingmaB14}, with initial learning rate of 0.001. We use LSTMs with 1 layer for both the encoder and decoders, with hidden units of 256. The mini-batch size is 32, and dropout of 0.2 is applied to the input and output of LSTMs. For the language model, we use a 2-layer LSTM with 1024 hidden units and 0.5 dropout. The learning rate is 0.0001. All the hyperparameters are optimised via grid search on the perplexity of the validation set. During decoding, beam search is employed with the number of proposals generated by the direct model $K_1 = 20$, and the number of best candidates selected by the noisy channel model $K_2 = 10$. Table \ref{test_rg} presents the ROUGE-F1 scores of the test set from the direct model, noisy channel model (channel + LM + bias), the interpolation of the direct model and the noisy channel model (direct + channel + LM + bias), and the interpolation of the direct model and language model (direct + LM + bias) trained on different sizes of data. The noisy channel model with the language model trained on the target side of the 1 million parallel data outperforms the direct model by approximately 1 point. Such improvement indicates that the language model helps improve the quality of the output sequence when no extra unlabelled data is available. Training the language model with all the headlines in the dataset, i.e. 3.8 million sentences, gives a further boost to the ROUGE score. This is in line with our expectation that the model benefits from adding large amounts of unlabelled data. The interpolation of the direct model, channel model, language model and bias of the output length achieves the best results --- the ROUGE score is close to the direct model trained on all the parallel data. Although there is still improvement, when the direct model is trained with more data, the gap between the direct model and the noisy channel model is smaller. No gains is observed if the language model is combined with the direct model. We find that as we increase the weight of the language model, the result is getting worse. Table \ref{prev_work} surveys published results on this task, and places our best models in the context of the current state-of-the-art results. ABS+ \citep{DBLP:conf/emnlp/RushCW15}, RAS-LSTM and RAS-Elman \citep{chopra} are different variations of the attentive models. {\it Pointing the unkown words} uses pointer networks \citep{vinyals2015pointer} to select the output token from the input sequence in order to avoid generating unknown tokens. ASC + FSC \citep{miao2016} is a semi-supervised model based on a variational autoencoder. Trained on 1m paired samples and 3.8m unpaired samples, the noisy channel achieves comparable or better results than (direct) models trained with 3.8m paired samples. Compared to \cite{miao2016}, whose ASC + FSC models is an alternative strategy for using unpaired data, the noisy channel is significantly more effective --- 33.21 versus 31.09 in ROUGE-1. 
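Gathering the summarisation training settings reported above into one place, a configuration might look like the following sketch (the values are as stated in the text; the dictionary structure and key names are assumptions made for illustration).
\begin{verbatim}
summarisation_config = {
    "direct_and_channel_models": {
        "optimizer": "Adam", "learning_rate": 1e-3,
        "lstm_layers": 1, "hidden_units": 256,
        "batch_size": 32, "dropout": 0.2,
    },
    "language_model": {
        "learning_rate": 1e-4,
        "lstm_layers": 2, "hidden_units": 1024, "dropout": 0.5,
    },
    # K1 proposals from the direct model, K2 kept after rescoring
    "decoding": {"K1": 20, "K2": 10},
}
\end{verbatim}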
Finally, motivated by the qualitative observation that noisy channel model outputs were quite fluent and often used reformulations of the input rather than a strict compression (which would be poorly scored by ROUGE), we carried out a human preference evaluation whose results are summarised in Table~\ref{tab:human}. This confirms that noisy channel summaries are strongly preferred over those of the direct model. \subsection{Machine Translation} We next evaluate our models on a Chinese--English machine translation task. We used parallel data with 184k sentence pairs (from the FBIS corpus, LDC2003E14) and monolingual data with 4.3 million of English sentences (selected from the English Gigaword). The training data is preprocessed by lowercasing the English sentences, replacing digits with `\#' token, and replacing tokens appearing less than 5 times with an UNK token. This results in vocabulary sizes of 30k and 20k for Chinese sentences and English sentences, respectively. The models are trained using Adam \citep{DBLP:journals/corr/KingmaB14} with initial learning rate of 0.001 for the direct model and the channel model, and 0.0001 for the language model. The LSTMs for the direct and channel models have 512 hidden units and 1 layer, and 2 layers with 1024 hidden units per layer for the language model. Dropout of 0.5 on the input and output of LSTMs is set for all the model training. The noisy channel decoding uses $K_1$ = 20 and $K_2$ = 10 as the beam sizes. Table \ref{mt-result} lists the translation performance of different models in BLEU scores. To set benchmarks, we train the vanilla and attentional sequence to sequence models \citep{sutskever:2014,bahdanau:2015} using the same parallel data. For direct models, we leverage bidirectional LSTMs as the encoder for this task. We can see that the vanilla sequence to sequence model behaves poorly due to the small amounts of parallel data. By contrast, the direct model (SSNT) and the attentional model work relatively well, with the attentional model outperforming the SSNT direct model. Although these models both directly model $p(\boldsymbol{y} \mid \boldsymbol{x})$, this result is unsurprising because the SSNT direct model is most effective when the alignment between sequences is largely monotonic, and Chinese--English translation word orders diverge considerably. However, despite this limitation, the noisy channel model is approximately 3 points higher in BLEU than the direct model, and the combination of noisy channel and direct model gives extra boost. Confirming the empirical findings of prior work (and in line with theoretical predictions), the interpolation of the direct model and language model is not effective. \subsection{Morphological Inflection Generation} Morphological inflection is the task of generating a target (inflected form) word from a source word (base form), given a morphological attribute, e.g. number, tense, and person etc.. It is useful for reducing data sparsity issues in translating morphologically rich languages. The transformation from the base form to the inflected form is usually to add prefix or suffix, or to do character replacement. The dataset \citep{DBLP:conf/naacl/DurrettD13} that we use in the experiments is created from Wiktionary, including inflections for German nouns, German verbs, Spanish Verbs, Finnish noun and adjective, and Finnish verbs. 
We only experimented on German nouns and German verbs, as German nouns is the most difficult task\footnote{While state-of-the-art systems can achieve 99\% accuracies on Spanish verbs and Finnish verbs, they can only get 89\% accuracy on German nouns.}, and the direct model does not perform as well as other state-of-the-art systems on German verbs. The train/dev/test split for German nouns is 2364/200/200, and for German verbs is 1617/200/200. There are 8 and 27 inflection types in German nouns and German verbs, respectively. Following previous work, we learn a separate model for each type of inflection independent of the other inflections. We report results on the average accuracy across different inflections. Our language models were trained on word types extracted by running a morphological analysis tool on the WMT 2016 monolingual data and extracting examples of appropriately inflected word forms.\footnote{\url{http://www.statmt.org/wmt16/translation-task.html}} After annotation the number of instances for training the language model ranged from 300k to 3.8m for different inflection types in German nouns, and from 200 to 54k in German verbs. The experimental setup that we use on this task is $K_1$ = 60, $K_2$ = 30, \begin{itemize} \item direct and channel model: 1 layer LSTM with 128 hidden, $\eta = 0.001$, dropout = 0.5. \item language model: 2 layer LSTM with 512 hidden, $\eta = 0.0001$, dropout = 0.5. \end{itemize} Table \ref{morph-result} summarises the results from our models. On both datasets, the noisy channel model (channel + LM + bias) does not perform as well as the direct model, but the interpolation of the direct model and noisy channel model (direct + channel + LM + bias) significantly outperforms the direct model. The interpolation of the direct model and language model (direct + LM + bias) achieves better results than the direct model and the noisy channel model on German nouns, but not on German verbs. For further comparison, we also included the state-of-the-art results as benchmarks. NCK15 \citep{DBLP:conf/naacl/NicolaiCK15} tackles the task based on the three-stage approach: (1) align the source and target word, (2) extract inflection rules, (3) apply the rule to new examples. FTND16 \citep{DBLP:conf/naacl/FaruquiTND16} is based on neural sequence to sequence models. Both models (NCK15+ and FTND16+) rerank the candidate outputs by the scores predicted from n-gram language models, together with other features. \section{Analysis} By observing the output generated by the direct model and noisy channel model, we find (in line with theoretical critiques of conditional models) that the direct model may leave out key information. By contrast, the noisy channel model does seem to avoid this issue. To illustrate, in Example~1 (see Appendix~B) in Table~\ref{example}, the direct model ignores the key phrase `coping with', resulting in incomplete meaning, but the noisy channel model covers it. Similarly, in Example 6, the direct model does not translate the Chinese word corresponding to `investigation'. We also observe that while the direct model mostly copies words from the source sentence, the noisy channel model prefers generating paraphrases. For instance, in Example 2, while the direct model copies the word `accelerate' in the generated output, the noisy channel model generate `speed up' instead. While one might argue that copying is a preferable compression technique than paraphrasing (as long as it produces grammatical outputs), it does show the power of these models. 
\section{Related work} Noisy channel decompositions have been successfully used in a variety of problems, including speech recognition~\citep{jelinek:1998}, machine translation~\citep{brown:1993}, spelling correction~\citep{brill:2000}, and question answering~\citep{echihabi:2003}. The idea of adding language models and monolingual data to machine translation has been explored in earlier work. \cite{gulcehre:2015} propose two strategies for combining a language model with a neural sequence to sequence model. In shallow fusion, the sequence to sequence model (direct model) proposes candidate outputs during decoding, and these candidates are reranked by a weighted sum of the translation model's probability and the language model's probability. In deep fusion, the language model is integrated into the decoder of the sequence to sequence model by concatenating their hidden states at each time step. \cite{sennrich:2016} incorporate target language unpaired training data by doing back-translation to create synthetic parallel training data. While this technique is quite effective, its practicality seems limited to problems where the inputs and outputs contain roughly the same information (such as translation). \cite{cheng:2016} leverage abundant monolingual data through multitask learning with an autoencoding objective. A number of papers have remarked on the tendency for content to get dropped (or repeated) in translation. \citet{liu:2016} propose translating in both a left-to-right and a right-to-left direction and seeking a consensus. \citet{tu:2016} propose augmenting a direct model's decoding objective with a reverse translation model (similar to our channel model except that it conditions on the direct model's output RNN's hidden states rather than the words); however, that work only reranks complete translation hypotheses rather than developing a model that permits an incremental search. Another related line of work investigates online prediction for machine translation \citep{gu:2016,grissom:2014,sankaran:2010} and speech recognition \citep{hwang:2016,jaitly2015neural}. Our direct model (and channel model) shares with several prior papers the idea of introducing stochastic latent variables into neural networks and marginalising them during training. Examples include connectionist temporal classification (CTC) \citep{graves2006connectionist} and the more recent segmental recurrent neural networks (SRNN) \citep{kong2015segmental}. Compared to these models, our direct model has the advantage of capturing unbounded dependencies of output words. The direct model is closely related to the sequence transduction model \citep{graves2012sequence} in how it models the probability of output tokens and marginalises latent variables using dynamic programming. However, rather than modeling the joint distribution over outputs and alignments by inserting null symbols into the output sequence, our direct model defines a separate latent alignment variable, whose distribution is parameterised by neural networks. Similar to our work, the model of \citet{alkhoulialignment} is decomposed into an alignment model and a word prediction model. The two models are trained separately and combined during decoding, with subsequent refinements using a Viterbi-EM approximation.
By contrast, in our direct and channel models, the latent and observed components of the models are trained jointly using a dynamic program to exactly marginalise the unobserved variables. \section{Conclusion} We have presented and empirically validated a noisy channel transduction model that uses component models based on recurrent neural networks. This formulation lets us use unpaired outputs to estimate the parameters of the source model and input-output pairs to train the channel model. Despite the channel model's ability to condition on long sequences, we are able to maintain tractable decoding by using a latent segmentation variable that breaks the conditioning context up into a series of monotonically growing segments. Our experiments show that this model makes excellent use of unpaired training data. \bibliographystyle{iclr2017_conference} \newpage \appendix \section{Algorithm} \label{sec:algo_appendix} \begin{algorithm*}[ht] \caption{Noisy Channel Decoding} \label{decode} \begin{algorithmic} \State \textbf{Notation: } $Q$ is the Viterbi matrix, bp is the backpointer, $W$ stores the predicted tokens, $\mathcal{V}$ refers to the vocabulary, $I=|\boldsymbol{x}|$, and $J_\text{max}$ denotes the maximum number of output tokens that can be predicted. \State \textbf{Input: } source sequence $\boldsymbol{x}$ \State \textbf{Output: } best output sequence $\boldsymbol{y^*}$ \State \textbf{Initialisation: } $Q \in \mathbb{R}^{I \times J_\text{max}\times K_1}$, bp $\in \mathbb{N}^{I \times J_\text{max}\times K_1}$, $W \in \mathbb{N}^{I \times J_\text{max}\times K_1}$, \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ $Q_{temp} \in \mathbb{R}^{K_1}$, $bp_{temp} \in \mathbb{N}^{K_1}$, $W_{temp} \in \mathbb{N}^{K_1}$ \For{$i \in [1, I]$} \State $Q_{temp} \gets \topk(K_1)_{y \in \mathcal{V}}q(z_1 = i) $ $q(y\ |\ \textsc{start}, z_1, \boldsymbol{x}_{1}^{z_1})$ \Comment Candidates generated by $q(\boldsymbol{y}\ |\ \boldsymbol{x})$. \State $bp_{temp}\gets 0$ \State $W_{temp} \gets \argtopk(K_1)_{y \in \mathcal{V}}q(z_1 = i)$ $q(y\ |\ \textsc{start}, z_1, \boldsymbol{x}_1^{z_1})$ \State $Q[i, 1] \gets \topk(K_2)_{y \in W_{temp}} O_{\boldsymbol{x}_1^i, y}$ \Comment Rerank the candidates by objective ($O$). \State $W[i,1] \gets \argtopk(K_2)_{y \in W_{temp}}O_{\boldsymbol{x}_1^i, y}$ \EndFor \For{$j\in[2, J_\text{max}]$} \For{$i \in [1, I]$} \State $Q_{temp} \gets \topk(K_1)_{y \in \mathcal{V}, k \in [1, i]} Q[k,j-1] \cdot$ $q(z_j = i\ |\ z_{j-1} = k)q(y\ |\ \boldsymbol{y}_1^{j-1}, z_j, \boldsymbol{x}_1^{z_j})$ \State $bp_{temp} , W_{temp} \gets \argtopk(K_1)_{y \in \mathcal{V}, k \in [1, i]} $ $Q[k,j-1]q(z_j = i\ |\ z_{j-1} = k) \cdot$ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ $q(y\ |\ \boldsymbol{y}_1^{j-1}, z_j, \boldsymbol{x})$ \State $Y \gets \candidate(bp_{temp}, W_{temp})$ \Comment Get partial candidate $\boldsymbol{y}_1^j$. \State $Q[i,j] \gets \topk(K_2)_{\boldsymbol{y}_j \in Y} O_{\boldsymbol{x}_1^i, \boldsymbol{y}_1^j}$ \State $bp[i,j] , W[i,j] \gets %\argtopk(K_2)_{\boldsymbol{y}_1^j \in Y}$ $p(\boldsymbol{x}_1^i\ |\ \boldsymbol{y}_1^j) p(\boldsymbol{y}_1^j)$ \argtopk(K_2)_{\boldsymbol{y}_1^j \in Y}$ $O_{\boldsymbol{x}_1^i, \boldsymbol{y}_1^j}$ \EndFor \EndFor \State \Return a sequence of words stored in $W$ by following backpointers starting from $(I,\argmax_j Q[I, j])$. 
\end{algorithmic} \end{algorithm*} \section{Example outputs} \label{sec:outputs} \end{document}
The Neural Noisy Channel
1611.02554
Table 2: Overview of results on the abstractive sentence summarisation task. ABS+ (DBLP:conf/emnlp/RushCW15) is the attentive model with a bag-of-words encoder. RAS-LSTM and RAS-Elman (chopra) are sequence to sequence models with attention, with the RNN cell implemented as an LSTM and an Elman architecture (elman1990finding), respectively. Pointing the unknown words (gulcehre2016pointing) uses pointer networks (vinyals2015pointer) to select the output token from the input sequence in order to avoid generating unknown tokens. ASC + FSC (miao2016) is a semi-supervised model based on a variational autoencoder.
[ "Model", "# Parallel data", "# Unpaired data", "RG-1", "RG-2", "RG-L" ]
[ [ "ABS+", "3.8m", "-", "29.55", "11.32", "26.42" ], [ "RAS-LSTM", "3.8m", "-", "32.55", "14.70", "30.03" ], [ "RAS-Elman", "3.8m", "-", "33.78", "15.97", "31.15" ], [ "Pointing unkown words", "3.8m", "-", "[BOLD] 35.19", "16.66", "[BOLD] 32.51" ], [ "ASC + FSC", "1.0m", "3.8m", "31.09", "12.79", "28.97" ], [ "ASC + FSC", "3.8m", "3.8m", "34.17", "15.94", "31.92" ], [ "direct + channel + LM + bias (bi)", "1.0m", "3.8m", "33.21", "15.65", "30.60" ], [ "direct + channel + LM + bias (bi)", "3.8m", "3.8m", "34.41", "[BOLD] 16.86", "31.83" ] ]
ABS+ (DBLP:conf/emnlp/RushCW15), RAS-LSTM and RAS-Elman (chopra) are different variations of attentive models. Pointing the unknown words uses pointer networks (vinyals2015pointer) to select the output token from the input sequence in order to avoid generating unknown tokens. ASC + FSC (miao2016) is a semi-supervised model based on a variational autoencoder. Trained on 1m paired samples and 3.8m unpaired samples, the noisy channel achieves comparable or better results than (direct) models trained with 3.8m paired samples. Compared to miao2016, whose ASC + FSC model is an alternative strategy for using unpaired data, the noisy channel is significantly more effective: 33.21 versus 31.09 in ROUGE-1.
\documentclass{article} % For LaTeX2e \newcommand{\cjd}[1]{\textcolor{blue}{\bf \small [#1 --CJD]}} \newcommand{\pb}[1]{\textcolor{orange}{\bf \small [#1 --PB]}} \newcommand{\leiyu}[1]{\textcolor{red}{\bf \small [#1 --Lei]}} \newcommand{\etg}[1]{\textcolor{pink}{\bf \small [#1 --ETG]}} \DeclareMathOperator*{\argmax}{arg\,max} \DeclareMathOperator*{\argmin}{arg\,min} \DeclareMathOperator*{\topk}{topk} \DeclareMathOperator*{\argtopk}{arg\,topk} \DeclareMathOperator*{\candidate}{getCandidateOutputs} \title{The Neural Noisy Channel} \author{Lei Yu$^1$\thanks{Work completed at DeepMind.} , Phil Blunsom$^{1,2}$, Chris Dyer$^{2}$, Edward Grefenstette$^{2}$, and Tom\'{a}\v{s} Ko\v{c}isk\'{y}$^{1,2}$ \\ $^1$University of Oxford and $^2$DeepMind \\ {\tt lei.yu@cs.ox.ac.uk, \{pblunsom,cdyer,etg,tkocisky\}@google.com} } \newcommand{\fix}{\marginpar{FIX}} \newcommand{\new}{\marginpar{NEW}} \iclrfinalcopy % Uncomment for camera-ready version \begin{document} \maketitle \begin{abstract} We formulate sequence to sequence transduction as a noisy channel decoding problem and use recurrent neural networks to parameterise the source and channel models. Unlike direct models which can suffer from explaining-away effects during training, noisy channel models must produce outputs that explain their inputs, and their component models can be trained with not only paired training samples but also unpaired samples from the marginal output distribution. Using a latent variable to control how much of the conditioning sequence the channel model needs to read in order to generate a subsequent symbol, we obtain a tractable and effective beam search decoder. Experimental results on abstractive sentence summarisation, morphological inflection, and machine translation show that noisy channel models outperform direct models, and that they significantly benefit from increased amounts of unpaired output data that direct models cannot easily use. \end{abstract} \section{Introduction} Recurrent neural network sequence to sequence models~\citep{kalchbrenner:2013,sutskever:2014,bahdanau:2015} are excellent models of $p(\textrm{output sequence }\boldsymbol{y} \mid \textrm{input sequence }\boldsymbol{x})$, provided sufficient input--output $(\boldsymbol{x},\boldsymbol{y})$ pairs are available for estimating their parameters. However, in many domains, vastly more unpaired output examples are available than input--output pairs (e.g., transcribed speech is relatively rare although non-spoken texts are abundant; Swahili--English translations are rare although English texts are abundant; etc.). A classic strategy for exploiting both kinds of data is to use Bayes' rule to rewrite $p(\boldsymbol{y} \mid \boldsymbol{x})$ as $p(\boldsymbol{x} \mid \boldsymbol{y}) p(\boldsymbol{y})/p(\boldsymbol{x})$, a factorisation which is called a \textbf{noisy channel model}~\citep{shannon:1948}. 
A noisy channel model thus consists of two component models: the conditional \textbf{channel model}, $p(\boldsymbol{x} \mid \boldsymbol{y})$, which characterizes the \emph{reverse} transduction problem and whose parameters are estimated from the paired $(\boldsymbol{x},\boldsymbol{y})$ samples, and the unconditional \textbf{source model}, $p(\boldsymbol{y})$, whose parameters are estimated from both the paired and (usually much more numerous) unpaired samples.\footnote{We do not model $p(\boldsymbol{x})$ since, in general, we will be interested in finding $\argmax_\mathbf{y}p(\mathbf{y} \mid \mathbf{x})$, and $\argmax_\mathbf{y}p(\mathbf{y} \mid \mathbf{x}) = \argmax_\mathbf{y} \frac{p(\mathbf{x} \mid \mathbf{y})p(\mathbf{y})}{p(\mathbf{x})}= \argmax_\mathbf{y} p(\mathbf{x} \mid \mathbf{y})p(\mathbf{y})$.} Beyond their data omnivorousness, noisy channel models have other benefits. First, the two component models mean that two different aspects of the transduction problem can be addressed independently. For example, in many applications, source models are language models and innovations in these can be leveraged to obtain improvements in any system that uses them as a component. Second, the component models can have complementary strengths, since inference is carried out in the product space; this simplifies design because a single model does not have to get everything perfectly right. Third, the noisy channel operates by selecting outputs that both are \emph{a priori} likely \emph{and} that explain the input well. This addresses a failure mode that can occur in conditional models in which inputs are ``explained away'' by highly predictive output prefixes, resulting in poor training \citep{klein:2001}. Since the noisy channel formulation requires its outputs to explain the observed input, this problem is avoided. In principle, the noisy channel decomposition is straightforward; however, in practice, decoding (i.e., computing $\arg \max_{\boldsymbol{y}} p(\boldsymbol{x} \mid \boldsymbol{y}) p(\boldsymbol{y})$) is a significant computational challenge, and tractability concerns impose restrictions on the form the component models can take. To illustrate, an appealing parameterization would be to use an attentional seq2seq network \citep{bahdanau:2015} to model the channel probability $p(\boldsymbol{x} \mid \boldsymbol{y})$. However, seq2seq models are designed under the assumption that the complete conditioning sequence is available before any prefix probabilities of the output sequence can be computed. This assumption is problematic for channel models since it means that a complete output sequence must be constructed before the channel model can be evaluated (since the channel model conditions on the output). Therefore, to be practical, the channel probability must decompose in terms of prefixes of the conditioning variable, $\boldsymbol{y}$. While the chain rule justifies decomposing output variable probabilities in terms of successive extensions of a partial prefix, no such convenience exists for conditioning variables, and approximations must be introduced. In this work, we use a variant of the newly proposed online seq2seq model of \citet{yu:2016} which uses a latent alignment variable to enable its probabilities to factorize in terms of prefixes of both the input and output, making it an appropriate channel model~(\S\ref{sec:model}). Using this channel model, the decoding problem then becomes similar to the problem faced when decoding with direct models~(\S\ref{sec:decoding}). 
Experiments on abstractive summarization, machine translation, and morphological inflection show that the noisy channel can significantly improve performance and exploit unpaired output training samples and that models that combine the direct model and a noisy channel model offer further improvements still~(\S\ref{sec:experiments}). \section{Background: Segment to Segment Neural Transduction} \label{sec:model} Our model is based on the Segment to Segment Neural Transduction model (SSNT) of Yu et al., 2016. At a high level, the model alternates between encoding more of the input sequence and decoding output tokens from the encoded representation. This presentation deviates from the Yu et al.'s presentation so as to emphasize the incremental construction of the conditioning context that is enabled by the latent variable. \subsection{Model description} Similar to other neural sequence to sequence models, SSNT models the conditional probability $p(\boldsymbol{y} \mid \boldsymbol{x})$ of a output sequence $\boldsymbol{y}$ given a input sequence $\boldsymbol{x}$. To avoid having to observe the complete input sequence $\boldsymbol{x}$ before making a prediction of the beginning of the output sequence, we introduce a latent alignment variable $\boldsymbol{z}$ which indicates when each token of the output sequence is to be generated as the input sequence is being read. Since we assume that the input is read just once from left to right, we restrict $\boldsymbol{z}$ to be a monotonically increasing alignment (i.e., $z_{j+1} \ge z_j$ is true with probability 1), where $z_j = i$ denotes that the output token at position $j$ ($y_j$) is generated when the input sequence up through position $i$ has been read. The SSNT model is: \begin{align} \begin{split} p(\boldsymbol{y} \mid \boldsymbol{x}) & = \sum_{\boldsymbol{z}} p(\boldsymbol{y}, \boldsymbol{z} \mid \boldsymbol{x}) \\ p(\boldsymbol{y}, \boldsymbol{z} \mid \boldsymbol{x}) & \approx \prod_{j=1}^{|\boldsymbol{y}|} \underbrace{p(z_j \mid z_{j-1}, \boldsymbol{x}_{1}^{z_j}, \boldsymbol{y}_1^{j-1})}_{\text{alignment probability}} \underbrace{p(y_j \mid \boldsymbol{x}_{1}^{z_j}, \boldsymbol{y}_1^{j-1})}_{\text{word probability}}. \label{eq:model} \end{split} \end{align} We explain the model in terms of its two components, starting with the word generation term. In the SSNT, the input and output sequences $\boldsymbol{x}$, $\boldsymbol{y}$ are encoded with two separate LSTMs \citep{hochreiter1997long}, resulting in sequences of hidden states representing prefixes of these sequences. In Yu et al.'s formulation, the input sequence encoder (i.e., the conditioning context encoder) can either be a unidirectional or bidirectional LSTM, but here we assume that it is a unidirectional LSTM, which ensures that it will function well as a channel model that can compute probabilities with incomplete conditioning contexts (this is necessary since, at decoding time, we will be constructing the conditioning context incrementally). Let $\mathbf{h}_i$ represent the input sequence encoding for the prefix $\boldsymbol{x}_1^{i}$. Since the final action at timestep $j$ will be to predict $y_j$, it is convenient to let $\mathbf{s}_j$ denote the hidden state that excludes $y_j$, i.e., the encoding of the prefix $\boldsymbol{y}_1^{j-1}$. 
The probability of the next token $y_j$ is calculated by concatenating the aligned hidden state vectors $\mathbf{s}_j$ and $\mathbf{h}_{z_j}$ followed by a softmax layer, \begin{align*} p(y_j \mid \boldsymbol{x}_1^{z_j}, \boldsymbol{y}_1^{j-1}) \propto \text{exp} (\mathbf{W}_w[\mathbf{h}_{z_j};\mathbf{s}_j] + \mathbf{b}_w). \end{align*} The model thus depends on the current alignment position $z_j$, which determines how far into $\boldsymbol{x}$ it has read. We now discuss how the sequence of $z_j$'s are generated. First, we remark that modelling this distribution requires some care so as to avoid conditioning on the entire input sequence. To illustrate why one might induce a dependency on the entire input sequence in this model, it is useful to compare to a standard attention model. Attention models operate by computing a score using a representation of alignment candidate (in our case, the candidates would be every unread token remaining in the input). If we followed this strategy, it would be necessary to observe the full input sequence when making the first alignment decision. We instead model the alignment transition from timestep $j$ to $j+1$ by decomposing it into a sequence of conditionally independent \textsc{shift} and \textsc{emit} operations that progressively decide whether to read another token or stop reading. That is, at input position $i$, the model decides to \textsc{emit}, i.e., to set $z_j=i$ and predict the next output token $y_j$ from the word model, or it decides to \textsc{shift}, i.e., to read one more input token and increment the input position $i \gets i+1$. The probability $p(a_{i,j} = \textsc{emit} \mid \boldsymbol{x}_1^{i}, \boldsymbol{y}_1^{j-1})$ is calculated using the encoder and decoder states defined above as: \begin{align*} p(a_{i,j} = \textsc{emit} \mid \boldsymbol{x}_{1}^{i}, \boldsymbol{y}_1^{j-1}) = \sigma(\text{MLP}(\mathbf{W}_t[\mathbf{h}_{i};\mathbf{s}_j] + b_t)). \end{align*} The probability of \textsc{shift} is simply $1-p(a_{i,j} = \textsc{emit})$. In this formulation, the probabilities of aligning $z_j$ to each alignment candidate $i$ can be computed by reading just $\boldsymbol{x}_1^i$ (rather than the entire sequence). The probabilities are also independent of the contents of the suffix $\boldsymbol{x}_{i+1}^{|\boldsymbol{x}|}$. Using the probabilities of the auxiliary $a_{i,j}$ variables, the alignment probabilities needed in Eq.~\ref{eq:model} are computed as: \begin{align*} p(z_j = i \mid z_{j-1}, \boldsymbol{y}_1^{j-1}, \boldsymbol{x}_{1}^{i}) &= \begin{cases} 0 & \text{if }i < z_{j-1} \\ p(a_{i,j} = \textsc{emit}) & \text{if }i=z_{j-1} \\ \left(\prod_{i'=z_{j-1}}^{i-1} p(a_{i',j} = \textsc{shift}) \right) p(a_{i,j} = \textsc{emit}) & \text{if }i>z_{j-1} \end{cases} \end{align*} \subsection{Inference algorithms} In SSNT, the probability of generating each $y_j$ depends only on the current output position's alignment ($z_j$), the current output prefix ($\boldsymbol{y}_1^{j-1}$), the input prefix up to the current alignment ($\boldsymbol{x}_1^{z_j}$). It does \emph{not} depend on the history of the alignment decisions. Likewise, the alignment decisions at each position are also conditionally independent of the history of alignment decisions. 
Because of these independence assumptions, $\boldsymbol{z}$ can be marginalised using a $O(|\boldsymbol{x}|^2 \cdot |\boldsymbol{y}|)$ time dynamic programming algorithm where each fill in a chart with computing the following marginal probabilities: \begin{align*} \begin{split} \alpha(i,j) & = p(z_j=i, \boldsymbol{y}_1^j \mid \boldsymbol{x}_1^{z_j}) = \sum_{i'=1}^{i} \alpha(i',j-1) \underbrace{p(z_j \mid z_{j-1}, \boldsymbol{x}_{1}^{z_j}, \boldsymbol{y}_1^{j-1})}_{\text{alignment probability}} \underbrace{p(y_j \mid \boldsymbol{x}_{1}^{z_j}, \boldsymbol{y}_1^{j-1})}_{\text{word probability}}. \end{split} \end{align*} The model is trained to minimize the negative log likelihood of the parallel corpus $S$: \begin{equation} \label{loss} \begin{split} \mathcal{L}(\boldsymbol{\theta}) &= - \sum_{(\boldsymbol{x}, \boldsymbol{y}) \in S} \log p(\boldsymbol{y}\ |\ \boldsymbol{x}; \boldsymbol{\theta})\\ &= - \sum_{(\boldsymbol{x}, \boldsymbol{y}) \in S} \log \alpha(|\boldsymbol{x}|, |\boldsymbol{y}|).\\ \end{split} \end{equation} The gradients of this objective with respect to the component probability models can be computed using automatic differentiation or using a secondary dynamic program that computes `backward' probabilities. We refer the reader to Section 3.1 of \citet{yu:2016} for details. In this paper, we use a slightly different objective from the one described in \citet{yu:2016}. Rather than marginalizing over the paths that end in any possible input positions $\sum_{i=1}^I\alpha(i, |\boldsymbol{y}|)$, we require that the full input be consumed when the final output symbol is generated. This constraint biases away from predicting outputs without explaining them using the input sequence. \section{Decoding} \label{sec:decoding} We now turn to the problem of decoding, that is, of computing \begin{align*} \hat{\boldsymbol{y}} = \arg \max_{\boldsymbol{y}} p(\boldsymbol{x} \mid \boldsymbol{y}) p(\boldsymbol{y}), \end{align*} where we are using the SSNT model described in the previous section as the channel model and a language model that delivers prior probabilities of the output sequence in left-to-right order, i.e., $p(y_i \mid \boldsymbol{y}^{i-1})$. Marginalizing the latent variable during search is computationally hard \citep{simaan:1996}, and so we approximate the search problem as \begin{align*} \hat{\boldsymbol{y}} = \arg \max_{\boldsymbol{y}} \max_{\boldsymbol{z}} p(\boldsymbol{x},\boldsymbol{z} \mid \boldsymbol{y}) p(\boldsymbol{y}). \end{align*} However, even with this simplification, the search problem remains nontrivial. On one hand, we must search over the space of all possible outputs with a model that makes no Markovian assumptions. This is similar to the decoding problem faced in standard seq2seq transducers. On the other hand, our model computes the probability of the given input conditional on the predicted output hypothesis. Therefore, instead of just relying on a single softmax to provide a probability for every output word type (as we conveniently can in the direct model), we must loop over each output word type, and run a softmax over the input vocabulary---a computational expense that is quadratic in the size of the vocabulary! To reduce this computational effort, we make use of an auxiliary direct model $q(\boldsymbol{y}, \boldsymbol{z} \mid \boldsymbol{x})$ to explore probable extensions of partial hypotheses, rather than trying to perform an exhaustive search over the vocabulary each time we extend an item on the beam. 
Algorithm~\ref{decode}, in Appendix~\ref{sec:algo_appendix}, describes the decoding algorithm based on a formulation by \citet{tillmann1997dp}. The idea is to create a matrix $Q$ of partial hypotheses. Each hypothesis in cell $(i,j)$ covers the first $i$ words of the input ($\boldsymbol{x}_1^i$) and corresponds to an output hypothesis prefix of length $j$ ($\boldsymbol{y}_1^j$). The hypothesis is associated with a model score. For each cell $(i, j)$, the direct proposal model first calculates the scores of possible extensions of previous cells that could then reach $(i,j)$ by considering every token in the output vocabulary, from all candidate cells $(k, j-1)$ with $k \le i$. That gives the top $K_1$ partial output sequences. These partial output sequences are subsequently rescored by the noisy channel model, and the $K_2$ best candidates are kept in the beam and used for further extension. The beam sizes $K_1$ and $K_2$ are hyperparameters to be tuned in the experiments. \subsection{Model combination} The decoder we have just described makes use of an auxiliary decoding model. This means that, as a generalisation, it is capable of decoding under an objective that is a linear combination of the direct model, channel model, language model and a bias for the output length\footnote{In the experiments, we did not marginalize the probability of the direct model when calculating the general search objective. We found that marginalizing the probability does not give better performance and makes decoding extremely slow.}, \begin{equation} O_{\boldsymbol{x}_1^i, \boldsymbol{y}_1^j} = \lambda_1 \log p(\boldsymbol{y}_1^j\ |\ \boldsymbol{x}_1^i) + \lambda_2 \log p(\boldsymbol{x}_1^i\ |\ \boldsymbol{y}_1^j) + \lambda_3 \log p(\boldsymbol{y}_1^j) + \lambda_4 |\boldsymbol{y}_1^j|. \end{equation} The bias is used to penalize the noisy channel model for generating sequences that are too short (or too long). The $\lambda$'s are hyperparameters to be tuned on a small amount of held-out development data. \section{Experiments} \label{sec:experiments} We evaluate our model on three natural language processing tasks: abstractive sentence summarisation, machine translation, and morphological inflection generation. For each task, we compare the performance of the direct model, the noisy channel model, and the interpolation of the two models. \subsection{Abstractive Sentence Summarisation} Sentence summarisation is the problem of constructing a shortened version of a sentence while preserving the majority of its meaning. In contrast to extractive summarisation, which can only copy words from the original sentence, abstractive summarisation permits arbitrary rewording of the sentence. The dataset \citep{DBLP:conf/emnlp/RushCW15} that we use is constructed by pairing the first sentence and the headline of each article from the annotated Gigaword corpus \citep{graff2003english,napoles2012annotated}. There are 3.8m, 190k and 381k sentence pairs in the training, validation and test sets, respectively. \cite{yu:2016} filtered the dataset by restricting the lengths of the input and output sentences to be no greater than 50 and 25 tokens, respectively. From the filtered data, they further sampled 1 million sentence pairs for training. We experimented with training the direct model and channel model on both the sampled 1 million and the full 3.8 million parallel sentence pairs. The language model is trained on the target side of the parallel data, i.e. the headlines.
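The length filtering just described is a simple check over tokenised pairs; a sketch (ours, purely illustrative; the function and argument names are not taken from any released code):

\begin{verbatim}
import random

def filter_and_sample(pairs, max_src=50, max_tgt=25, sample=1_000_000, seed=0):
    """Keep (sentence, headline) pairs within the length limits, then subsample.

    pairs: iterable of (source_tokens, target_tokens) lists.
    """
    kept = [(s, t) for s, t in pairs if len(s) <= max_src and len(t) <= max_tgt]
    random.Random(seed).shuffle(kept)
    return kept[:sample]
\end{verbatim}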
We evaluated the generated summaries of 2000 randomly sampled sentence pairs using full-length ROUGE F1. This setup is in line with previous work on this task \citep{DBLP:conf/emnlp/RushCW15,chopra,gulcehre2016pointing,yu:2016}. The same configuration is used to train the direct model and the channel model. The loss (Equation \ref{loss}) is optimized by Adam \citep{DBLP:journals/corr/KingmaB14}, with an initial learning rate of 0.001. We use 1-layer LSTMs with 256 hidden units for both the encoder and the decoders. The mini-batch size is 32, and dropout of 0.2 is applied to the input and output of the LSTMs. For the language model, we use a 2-layer LSTM with 1024 hidden units and 0.5 dropout. The learning rate is 0.0001. All the hyperparameters are optimised via grid search on the perplexity of the validation set. During decoding, beam search is employed with the number of proposals generated by the direct model set to $K_1 = 20$, and the number of best candidates selected by the noisy channel model set to $K_2 = 10$. Table \ref{test_rg} presents the ROUGE-F1 scores on the test set from the direct model, the noisy channel model (channel + LM + bias), the interpolation of the direct model and the noisy channel model (direct + channel + LM + bias), and the interpolation of the direct model and language model (direct + LM + bias) trained on different sizes of data. The noisy channel model with the language model trained on the target side of the 1 million parallel data outperforms the direct model by approximately 1 point. This improvement indicates that the language model helps improve the quality of the output sequence when no extra unlabelled data is available. Training the language model with all the headlines in the dataset, i.e. 3.8 million sentences, gives a further boost to the ROUGE score. This is in line with our expectation that the model benefits from adding large amounts of unlabelled data. The interpolation of the direct model, channel model, language model and the output length bias achieves the best results --- the ROUGE score is close to the direct model trained on all the parallel data. Although there is still an improvement when the direct model is trained with more data, the gap between the direct model and the noisy channel model is smaller. No gains are observed if the language model is combined with the direct model. We find that as we increase the weight of the language model, the results get worse. Table \ref{prev_work} surveys published results on this task, and places our best models in the context of the current state-of-the-art results. ABS+ \citep{DBLP:conf/emnlp/RushCW15}, RAS-LSTM and RAS-Elman \citep{chopra} are different variants of attentive models. {\it Pointing the unknown words} uses pointer networks \citep{vinyals2015pointer} to select the output token from the input sequence in order to avoid generating unknown tokens. ASC + FSC \citep{miao2016} is a semi-supervised model based on a variational autoencoder. Trained on 1m paired samples and 3.8m unpaired samples, the noisy channel achieves comparable or better results than (direct) models trained with 3.8m paired samples. Compared to \cite{miao2016}, whose ASC + FSC model is an alternative strategy for using unpaired data, the noisy channel is significantly more effective --- 33.21 versus 31.09 in ROUGE-1.
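For reference, the training and decoding hyperparameters reported at the beginning of this subsection, collected in one place (the dictionary layout is our own; the values are those stated in the text):

\begin{verbatim}
# Summarisation experiments: values as reported above; structure is illustrative only.
summarisation_config = {
    "direct_and_channel": {"lstm_layers": 1, "hidden_units": 256, "dropout": 0.2,
                           "optimizer": "Adam", "learning_rate": 1e-3, "batch_size": 32},
    "language_model":     {"lstm_layers": 2, "hidden_units": 1024, "dropout": 0.5,
                           "learning_rate": 1e-4},
    "decoding":           {"K1_direct_proposals": 20, "K2_channel_rescored": 10},
}
\end{verbatim}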
Finally, motivated by the qualitative observation that noisy channel model outputs were quite fluent and often used reformulations of the input rather than a strict compression (which would be poorly scored by ROUGE), we carried out a human preference evaluation whose results are summarised in Table~\ref{tab:human}. This confirms that noisy channel summaries are strongly preferred over those of the direct model. \subsection{Machine Translation} We next evaluate our models on a Chinese--English machine translation task. We used parallel data with 184k sentence pairs (from the FBIS corpus, LDC2003E14) and monolingual data consisting of 4.3 million English sentences (selected from the English Gigaword). The training data is preprocessed by lowercasing the English sentences, replacing digits with the `\#' token, and replacing tokens appearing fewer than 5 times with an UNK token. This results in vocabulary sizes of 30k and 20k for Chinese sentences and English sentences, respectively. The models are trained using Adam \citep{DBLP:journals/corr/KingmaB14} with an initial learning rate of 0.001 for the direct model and the channel model, and 0.0001 for the language model. The LSTMs for the direct and channel models have 1 layer with 512 hidden units, while the language model uses 2 layers with 1024 hidden units per layer. Dropout of 0.5 is applied to the input and output of the LSTMs for all model training. The noisy channel decoding uses $K_1$ = 20 and $K_2$ = 10 as the beam sizes. Table \ref{mt-result} lists the translation performance of different models in BLEU scores. To set benchmarks, we train the vanilla and attentional sequence to sequence models \citep{sutskever:2014,bahdanau:2015} using the same parallel data. For direct models, we leverage bidirectional LSTMs as the encoder for this task. We can see that the vanilla sequence to sequence model behaves poorly due to the small amount of parallel data. By contrast, the direct model (SSNT) and the attentional model work relatively well, with the attentional model outperforming the SSNT direct model. Although these models both directly model $p(\boldsymbol{y} \mid \boldsymbol{x})$, this result is unsurprising because the SSNT direct model is most effective when the alignment between sequences is largely monotonic, and Chinese--English translation word orders diverge considerably. However, despite this limitation, the noisy channel model is approximately 3 points higher in BLEU than the direct model, and the combination of the noisy channel and direct models gives a further boost. Confirming the empirical findings of prior work (and in line with theoretical predictions), the interpolation of the direct model and language model is not effective. \subsection{Morphological Inflection Generation} Morphological inflection is the task of generating a target (inflected form) word from a source word (base form), given a morphological attribute, e.g. number, tense, and person. It is useful for reducing data sparsity issues in translating morphologically rich languages. The transformation from the base form to the inflected form usually involves adding a prefix or suffix, or replacing characters. The dataset \citep{DBLP:conf/naacl/DurrettD13} that we use in the experiments is created from Wiktionary, including inflections for German nouns, German verbs, Spanish verbs, Finnish nouns and adjectives, and Finnish verbs.
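To make the data format concrete, each instance can be viewed as a character-level transduction pair from the base form to the inflected form (a sketch of ours; the character granularity and exact representation are assumptions on our part, and, as described below, a separate model is trained per inflection type, so the morphological attribute selects the model rather than appearing in the input):

\begin{verbatim}
def inflection_pair(base_form, inflected_form):
    """One training instance as a character-level transduction pair.

    Illustrative example (German noun, nominative plural): "Haus" -> "Häuser".
    """
    return list(base_form), list(inflected_form)

# inflection_pair("Haus", "Häuser")
# -> (['H', 'a', 'u', 's'], ['H', 'ä', 'u', 's', 'e', 'r'])
\end{verbatim}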
We experimented only on German nouns and German verbs, as German nouns are the most difficult task\footnote{While state-of-the-art systems can achieve 99\% accuracies on Spanish verbs and Finnish verbs, they can only get 89\% accuracy on German nouns.}, and the direct model does not perform as well as other state-of-the-art systems on German verbs. The train/dev/test split for German nouns is 2364/200/200, and for German verbs 1617/200/200. There are 8 and 27 inflection types in German nouns and German verbs, respectively. Following previous work, we learn a separate model for each type of inflection independent of the other inflections. We report results on the average accuracy across different inflections. Our language models were trained on word types extracted by running a morphological analysis tool on the WMT 2016 monolingual data and extracting examples of appropriately inflected word forms.\footnote{\url{http://www.statmt.org/wmt16/translation-task.html}} After annotation, the number of instances for training the language model ranged from 300k to 3.8m for different inflection types in German nouns, and from 200 to 54k in German verbs. The experimental setup that we use on this task is $K_1$ = 60, $K_2$ = 30, and: \begin{itemize} \item direct and channel model: 1-layer LSTM with 128 hidden units, $\eta = 0.001$, dropout = 0.5. \item language model: 2-layer LSTM with 512 hidden units, $\eta = 0.0001$, dropout = 0.5. \end{itemize} Table \ref{morph-result} summarises the results from our models. On both datasets, the noisy channel model (channel + LM + bias) does not perform as well as the direct model, but the interpolation of the direct model and noisy channel model (direct + channel + LM + bias) significantly outperforms the direct model. The interpolation of the direct model and language model (direct + LM + bias) achieves better results than the direct model and the noisy channel model on German nouns, but not on German verbs. For further comparison, we also included the state-of-the-art results as benchmarks. NCK15 \citep{DBLP:conf/naacl/NicolaiCK15} tackles the task based on the three-stage approach: (1) align the source and target word, (2) extract inflection rules, (3) apply the rule to new examples. FTND16 \citep{DBLP:conf/naacl/FaruquiTND16} is based on neural sequence to sequence models. Both models (NCK15+ and FTND16+) rerank the candidate outputs by the scores predicted from n-gram language models, together with other features. \section{Analysis} By observing the output generated by the direct model and noisy channel model, we find (in line with theoretical critiques of conditional models) that the direct model may leave out key information. By contrast, the noisy channel model does seem to avoid this issue. To illustrate, in Example~1 (see Appendix~B) in Table~\ref{example}, the direct model ignores the key phrase `coping with', resulting in an incomplete meaning, but the noisy channel model covers it. Similarly, in Example 6, the direct model does not translate the Chinese word corresponding to `investigation'. We also observe that while the direct model mostly copies words from the source sentence, the noisy channel model prefers generating paraphrases. For instance, in Example 2, while the direct model copies the word `accelerate' in the generated output, the noisy channel model generates `speed up' instead. While one might argue that copying is a preferable compression technique to paraphrasing (as long as it produces grammatical outputs), the paraphrases do demonstrate the flexibility of these models.
\section{Related work} Noisy channel decompositions have been successfully used in a variety of problems, including speech recognition~\citep{jelinek:1998}, machine translation~\citep{brown:1993}, spelling correction~\citep{brill:2000}, and question answering~\citep{echihabi:2003}. The idea of adding language models and monolingual data in machine translation has been explored in earlier work. \cite{gulcehre:2015} propose two strategies for combining a language model with a neural sequence to sequence model. In shallow fusion, during decoding the sequence to sequence model (direct model) proposes candidate outputs and these candidates are reranked based on the scores calculated by a weighted sum of the probability of the translation model and that of the language model. In deep fusion, the language model is integrated into the decoder of the sequence to sequence model by concatenating their hidden states at each time step. \cite{sennrich:2016} incorporate target language unpaired training data by doing back-translation to create synthetic parallel training data. While this technique is quite effective, its practicality seems limited to problems where the inputs and outputs contain roughly the same information (such as translation). \cite{cheng:2016} leverage abundant monolingual data by doing multitask learning with an autoencoding objective. A number of papers have remarked on the tendency for content to get dropped (or repeated) in translation. \citet{liu:2016} propose translating in both a left-to-right and a right-to-left direction and seeking a consensus. \citet{tu:2016} propose augmenting a direct model's decoding objective with a reverse translation model (similar to our channel model except it conditions on the direct model's output RNN's hidden states rather than the words); however, that work just reranks complete translation hypotheses rather than developing a model that permits an incremental search. Another related line of work investigates online prediction for machine translation \citep{gu:2016,grissom:2014,sankaran:2010} and speech recognition \citep{hwang:2016,jaitly2015neural}. Our direct model (and channel model) shares with several other papers the idea of introducing stochastic latent variables into neural networks and marginalising them during training. Examples include connectionist temporal classification (CTC) \citep{graves2006connectionist} and the more recent segmental recurrent neural networks (SRNN) \citep{kong2015segmental}. Compared to these models, our direct model has the advantage of capturing unbounded dependencies of output words. The direct model is closely related to the sequence transduction model \citep{graves2012sequence} in how it models the probability of output tokens and marginalises latent variables using dynamic programming. However, rather than modeling the joint distribution over outputs and alignments by inserting null symbols into the output sequence, our direct model defines a separate latent alignment variable, with the alignment distribution defined by neural networks. Similar to our work, the model of \citet{alkhoulialignment} is decomposed into an alignment model and a model of word predictions. The two models are trained separately and combined during decoding, with subsequent refinements using a Viterbi-EM approximation.
By contrast, in our direct and channel models, the latent and observed components of the models are trained jointly using a dynamic program to exactly marginalise the unobserved variables. \section{Conclusion} We have presented and empirically validated a noisy channel transduction model that uses component models based on recurrent neural networks. This formulation lets us use unpaired outputs to estimate the parameters of the source model and input-output pairs to train the channel model. Despite the channel model's ability to condition on long sequences, we are able to maintain tractable decoding by using a latent segmentation variable that breaks the conditioning context up into a series of monotonically growing segments. Our experiments show that this model makes excellent use of unpaired training data. \bibliographystyle{iclr2017_conference} \newpage \appendix \section{Algorithm} \label{sec:algo_appendix} \begin{algorithm*}[ht] \caption{Noisy Channel Decoding} \label{decode} \begin{algorithmic} \State \textbf{Notation: } $Q$ is the Viterbi matrix, bp is the backpointer, $W$ stores the predicted tokens, $\mathcal{V}$ refers to the vocabulary, $I=|\boldsymbol{x}|$, and $J_\text{max}$ denotes the maximum number of output tokens that can be predicted. \State \textbf{Input: } source sequence $\boldsymbol{x}$ \State \textbf{Output: } best output sequence $\boldsymbol{y^*}$ \State \textbf{Initialisation: } $Q \in \mathbb{R}^{I \times J_\text{max}\times K_1}$, bp $\in \mathbb{N}^{I \times J_\text{max}\times K_1}$, $W \in \mathbb{N}^{I \times J_\text{max}\times K_1}$, $Q_{temp} \in \mathbb{R}^{K_1}$, $bp_{temp} \in \mathbb{N}^{K_1}$, $W_{temp} \in \mathbb{N}^{K_1}$ \For{$i \in [1, I]$} \State $Q_{temp} \gets \topk(K_1)_{y \in \mathcal{V}}q(z_1 = i) $ $q(y\ |\ \textsc{start}, z_1, \boldsymbol{x}_{1}^{z_1})$ \Comment Candidates generated by $q(\boldsymbol{y}\ |\ \boldsymbol{x})$. \State $bp_{temp}\gets 0$ \State $W_{temp} \gets \argtopk(K_1)_{y \in \mathcal{V}}q(z_1 = i)$ $q(y\ |\ \textsc{start}, z_1, \boldsymbol{x}_1^{z_1})$ \State $Q[i, 1] \gets \topk(K_2)_{y \in W_{temp}} O_{\boldsymbol{x}_1^i, y}$ \Comment Rerank the candidates by objective ($O$). \State $W[i,1] \gets \argtopk(K_2)_{y \in W_{temp}}O_{\boldsymbol{x}_1^i, y}$ \EndFor \For{$j\in[2, J_\text{max}]$} \For{$i \in [1, I]$} \State $Q_{temp} \gets \topk(K_1)_{y \in \mathcal{V}, k \in [1, i]} Q[k,j-1] \cdot$ $q(z_j = i\ |\ z_{j-1} = k)q(y\ |\ \boldsymbol{y}_1^{j-1}, z_j, \boldsymbol{x}_1^{z_j})$ \State $bp_{temp} , W_{temp} \gets \argtopk(K_1)_{y \in \mathcal{V}, k \in [1, i]} $ $Q[k,j-1]q(z_j = i\ |\ z_{j-1} = k) \cdot$ $q(y\ |\ \boldsymbol{y}_1^{j-1}, z_j, \boldsymbol{x})$ \State $Y \gets \candidate(bp_{temp}, W_{temp})$ \Comment Get partial candidate $\boldsymbol{y}_1^j$. \State $Q[i,j] \gets \topk(K_2)_{\boldsymbol{y}_1^j \in Y} O_{\boldsymbol{x}_1^i, \boldsymbol{y}_1^j}$ \State $bp[i,j] , W[i,j] \gets \argtopk(K_2)_{\boldsymbol{y}_1^j \in Y}$ $O_{\boldsymbol{x}_1^i, \boldsymbol{y}_1^j}$ \EndFor \EndFor \State \Return a sequence of words stored in $W$ by following backpointers starting from $(I,\argmax_j Q[I, j])$.
\end{algorithmic} \end{algorithm*} \section{Example outputs} \label{sec:outputs} \end{document}
The Neural Noisy Channel
1611.02554
(a)
[ "Model", "Acc." ]
[ [ "NCK15", "88.60" ], [ "FTND16", "88.12" ], [ "NCK15+", "89.90" ], [ "FTND16+", "89.31" ], [ "direct (uni)", "82.25" ], [ "direct (bi)", "87.68" ], [ "channel + LM + bias (uni)", "78.38" ], [ "channel + LM + bias (bi)", "78.13" ], [ "direct + LM + bias (bi)", "90.31" ], [ "direct + channel + LM + bias (uni)", "88.44" ], [ "direct + channel + LM + bias (bi)", "[BOLD] 90.94" ] ]
On both datasets, the noisy channel model (channel + LM + bias) does not perform as well as the direct model, but the interpolation of the direct model and noisy channel model (direct + channel + LM + bias) significantly outperforms the direct model. The interpolation of the direct model and language model (direct + LM + bias) achieves better results than the direct model and the noisy channel model on German nouns, but not on German verbs. For further comparison, we also included the state-of-the-art results as benchmarks. NCK15 (DBLP:conf/naacl/NicolaiCK15) tackles the task based on the three-stage approach: (1) align the source and target word, (2) extract inflection rules, (3) apply the rule to new examples. FTND16 (DBLP:conf/naacl/FaruquiTND16) is based on neural sequence to sequence models. Both models (NCK15+ and FTND16+) rerank the candidate outputs by the scores predicted from n-gram language models, together with other features.
The Neural Noisy Channel
1611.02554
(b)
[ "Model", "Acc." ]
[ [ "NCK15", "97.50" ], [ "FTND16", "[BOLD] 97.92" ], [ "NCK15+", "97.90" ], [ "FTND16+", "97.11" ], [ "direct (uni)", "87.85" ], [ "direct (bi)", "94.83" ], [ "channel + LM + bias (uni)", "84.42" ], [ "channel + LM + bias (bi)", "92.13" ], [ "direct + LM + bias (bi)", "94.83" ], [ "direct + channel + LM + bias (uni)", "92.20" ], [ "direct + channel + LM + bias (bi)", "97.15" ] ]
On both datasets, the noisy channel model (channel + LM + bias) does not perform as well as the direct model, but the interpolation of the direct model and noisy channel model (direct + channel + LM + bias) significantly outperforms the direct model. The interpolation of the direct model and language model (direct + LM + bias) achieves better results than the direct model and the noisy channel model on German nouns, but not on German verbs. For further comparison, we also included the state-of-the-art results as benchmarks. NCK15 (DBLP:conf/naacl/NicolaiCK15) tackles the task based on the three-stage approach: (1) align the source and target word, (2) extract inflection rules, (3) apply the rule to new examples. FTND16 (DBLP: conf/naacl/FaruquiTND16) is based on neural sequence to sequence models. Both models (NCK15+ and FTND16 +) rerank the candidate outputs by the scores predicted from n-gram language models, together with other features.
\documentclass{article} % For LaTeX2e \newcommand{\cjd}[1]{\textcolor{blue}{\bf \small [#1 --CJD]}} \newcommand{\pb}[1]{\textcolor{orange}{\bf \small [#1 --PB]}} \newcommand{\leiyu}[1]{\textcolor{red}{\bf \small [#1 --Lei]}} \newcommand{\etg}[1]{\textcolor{pink}{\bf \small [#1 --ETG]}} \DeclareMathOperator*{\argmax}{arg\,max} \DeclareMathOperator*{\argmin}{arg\,min} \DeclareMathOperator*{\topk}{topk} \DeclareMathOperator*{\argtopk}{arg\,topk} \DeclareMathOperator*{\candidate}{getCandidateOutputs} \title{The Neural Noisy Channel} \author{Lei Yu$^1$\thanks{Work completed at DeepMind.} , Phil Blunsom$^{1,2}$, Chris Dyer$^{2}$, Edward Grefenstette$^{2}$, and Tom\'{a}\v{s} Ko\v{c}isk\'{y}$^{1,2}$ \\ $^1$University of Oxford and $^2$DeepMind \\ {\tt lei.yu@cs.ox.ac.uk, \{pblunsom,cdyer,etg,tkocisky\}@google.com} } \newcommand{\fix}{\marginpar{FIX}} \newcommand{\new}{\marginpar{NEW}} \iclrfinalcopy % Uncomment for camera-ready version \begin{document} \maketitle \begin{abstract} We formulate sequence to sequence transduction as a noisy channel decoding problem and use recurrent neural networks to parameterise the source and channel models. Unlike direct models which can suffer from explaining-away effects during training, noisy channel models must produce outputs that explain their inputs, and their component models can be trained with not only paired training samples but also unpaired samples from the marginal output distribution. Using a latent variable to control how much of the conditioning sequence the channel model needs to read in order to generate a subsequent symbol, we obtain a tractable and effective beam search decoder. Experimental results on abstractive sentence summarisation, morphological inflection, and machine translation show that noisy channel models outperform direct models, and that they significantly benefit from increased amounts of unpaired output data that direct models cannot easily use. \end{abstract} \section{Introduction} Recurrent neural network sequence to sequence models~\citep{kalchbrenner:2013,sutskever:2014,bahdanau:2015} are excellent models of $p(\textrm{output sequence }\boldsymbol{y} \mid \textrm{input sequence }\boldsymbol{x})$, provided sufficient input--output $(\boldsymbol{x},\boldsymbol{y})$ pairs are available for estimating their parameters. However, in many domains, vastly more unpaired output examples are available than input--output pairs (e.g., transcribed speech is relatively rare although non-spoken texts are abundant; Swahili--English translations are rare although English texts are abundant; etc.). A classic strategy for exploiting both kinds of data is to use Bayes' rule to rewrite $p(\boldsymbol{y} \mid \boldsymbol{x})$ as $p(\boldsymbol{x} \mid \boldsymbol{y}) p(\boldsymbol{y})/p(\boldsymbol{x})$, a factorisation which is called a \textbf{noisy channel model}~\citep{shannon:1948}. 
A noisy channel model thus consists of two component models: the conditional \textbf{channel model}, $p(\boldsymbol{x} \mid \boldsymbol{y})$, which characterizes the \emph{reverse} transduction problem and whose parameters are estimated from the paired $(\boldsymbol{x},\boldsymbol{y})$ samples, and the unconditional \textbf{source model}, $p(\boldsymbol{y})$, whose parameters are estimated from both the paired and (usually much more numerous) unpaired samples.\footnote{We do not model $p(\boldsymbol{x})$ since, in general, we will be interested in finding $\argmax_\mathbf{y}p(\mathbf{y} \mid \mathbf{x})$, and $\argmax_\mathbf{y}p(\mathbf{y} \mid \mathbf{x}) = \argmax_\mathbf{y} \frac{p(\mathbf{x} \mid \mathbf{y})p(\mathbf{y})}{p(\mathbf{x})}= \argmax_\mathbf{y} p(\mathbf{x} \mid \mathbf{y})p(\mathbf{y})$.} Beyond their data omnivorousness, noisy channel models have other benefits. First, the two component models mean that two different aspects of the transduction problem can be addressed independently. For example, in many applications, source models are language models and innovations in these can be leveraged to obtain improvements in any system that uses them as a component. Second, the component models can have complementary strengths, since inference is carried out in the product space; this simplifies design because a single model does not have to get everything perfectly right. Third, the noisy channel operates by selecting outputs that both are \emph{a priori} likely \emph{and} that explain the input well. This addresses a failure mode that can occur in conditional models in which inputs are ``explained away'' by highly predictive output prefixes, resulting in poor training \citep{klein:2001}. Since the noisy channel formulation requires its outputs to explain the observed input, this problem is avoided. In principle, the noisy channel decomposition is straightforward; however, in practice, decoding (i.e., computing $\arg \max_{\boldsymbol{y}} p(\boldsymbol{x} \mid \boldsymbol{y}) p(\boldsymbol{y})$) is a significant computational challenge, and tractability concerns impose restrictions on the form the component models can take. To illustrate, an appealing parameterization would be to use an attentional seq2seq network \citep{bahdanau:2015} to model the channel probability $p(\boldsymbol{x} \mid \boldsymbol{y})$. However, seq2seq models are designed under the assumption that the complete conditioning sequence is available before any prefix probabilities of the output sequence can be computed. This assumption is problematic for channel models since it means that a complete output sequence must be constructed before the channel model can be evaluated (since the channel model conditions on the output). Therefore, to be practical, the channel probability must decompose in terms of prefixes of the conditioning variable, $\boldsymbol{y}$. While the chain rule justifies decomposing output variable probabilities in terms of successive extensions of a partial prefix, no such convenience exists for conditioning variables, and approximations must be introduced. In this work, we use a variant of the newly proposed online seq2seq model of \citet{yu:2016} which uses a latent alignment variable to enable its probabilities to factorize in terms of prefixes of both the input and output, making it an appropriate channel model~(\S\ref{sec:model}). Using this channel model, the decoding problem then becomes similar to the problem faced when decoding with direct models~(\S\ref{sec:decoding}). 
Experiments on abstractive summarization, machine translation, and morphological inflection show that the noisy channel can significantly improve performance and exploit unpaired output training samples and that models that combine the direct model and a noisy channel model offer further improvements still~(\S\ref{sec:experiments}). \section{Background: Segment to Segment Neural Transduction} \label{sec:model} Our model is based on the Segment to Segment Neural Transduction model (SSNT) of Yu et al., 2016. At a high level, the model alternates between encoding more of the input sequence and decoding output tokens from the encoded representation. This presentation deviates from the Yu et al.'s presentation so as to emphasize the incremental construction of the conditioning context that is enabled by the latent variable. \subsection{Model description} Similar to other neural sequence to sequence models, SSNT models the conditional probability $p(\boldsymbol{y} \mid \boldsymbol{x})$ of a output sequence $\boldsymbol{y}$ given a input sequence $\boldsymbol{x}$. To avoid having to observe the complete input sequence $\boldsymbol{x}$ before making a prediction of the beginning of the output sequence, we introduce a latent alignment variable $\boldsymbol{z}$ which indicates when each token of the output sequence is to be generated as the input sequence is being read. Since we assume that the input is read just once from left to right, we restrict $\boldsymbol{z}$ to be a monotonically increasing alignment (i.e., $z_{j+1} \ge z_j$ is true with probability 1), where $z_j = i$ denotes that the output token at position $j$ ($y_j$) is generated when the input sequence up through position $i$ has been read. The SSNT model is: \begin{align} \begin{split} p(\boldsymbol{y} \mid \boldsymbol{x}) & = \sum_{\boldsymbol{z}} p(\boldsymbol{y}, \boldsymbol{z} \mid \boldsymbol{x}) \\ p(\boldsymbol{y}, \boldsymbol{z} \mid \boldsymbol{x}) & \approx \prod_{j=1}^{|\boldsymbol{y}|} \underbrace{p(z_j \mid z_{j-1}, \boldsymbol{x}_{1}^{z_j}, \boldsymbol{y}_1^{j-1})}_{\text{alignment probability}} \underbrace{p(y_j \mid \boldsymbol{x}_{1}^{z_j}, \boldsymbol{y}_1^{j-1})}_{\text{word probability}}. \label{eq:model} \end{split} \end{align} We explain the model in terms of its two components, starting with the word generation term. In the SSNT, the input and output sequences $\boldsymbol{x}$, $\boldsymbol{y}$ are encoded with two separate LSTMs \citep{hochreiter1997long}, resulting in sequences of hidden states representing prefixes of these sequences. In Yu et al.'s formulation, the input sequence encoder (i.e., the conditioning context encoder) can either be a unidirectional or bidirectional LSTM, but here we assume that it is a unidirectional LSTM, which ensures that it will function well as a channel model that can compute probabilities with incomplete conditioning contexts (this is necessary since, at decoding time, we will be constructing the conditioning context incrementally). Let $\mathbf{h}_i$ represent the input sequence encoding for the prefix $\boldsymbol{x}_1^{i}$. Since the final action at timestep $j$ will be to predict $y_j$, it is convenient to let $\mathbf{s}_j$ denote the hidden state that excludes $y_j$, i.e., the encoding of the prefix $\boldsymbol{y}_1^{j-1}$. 
The probability of the next token $y_j$ is calculated by concatenating the aligned hidden state vectors $\mathbf{s}_j$ and $\mathbf{h}_{z_j}$ followed by a softmax layer, \begin{align*} p(y_j \mid \boldsymbol{x}_1^{z_j}, \boldsymbol{y}_1^{j-1}) \propto \text{exp} (\mathbf{W}_w[\mathbf{h}_{z_j};\mathbf{s}_j] + \mathbf{b}_w). \end{align*} The model thus depends on the current alignment position $z_j$, which determines how far into $\boldsymbol{x}$ it has read. We now discuss how the sequence of $z_j$'s are generated. First, we remark that modelling this distribution requires some care so as to avoid conditioning on the entire input sequence. To illustrate why one might induce a dependency on the entire input sequence in this model, it is useful to compare to a standard attention model. Attention models operate by computing a score using a representation of alignment candidate (in our case, the candidates would be every unread token remaining in the input). If we followed this strategy, it would be necessary to observe the full input sequence when making the first alignment decision. We instead model the alignment transition from timestep $j$ to $j+1$ by decomposing it into a sequence of conditionally independent \textsc{shift} and \textsc{emit} operations that progressively decide whether to read another token or stop reading. That is, at input position $i$, the model decides to \textsc{emit}, i.e., to set $z_j=i$ and predict the next output token $y_j$ from the word model, or it decides to \textsc{shift}, i.e., to read one more input token and increment the input position $i \gets i+1$. The probability $p(a_{i,j} = \textsc{emit} \mid \boldsymbol{x}_1^{i}, \boldsymbol{y}_1^{j-1})$ is calculated using the encoder and decoder states defined above as: \begin{align*} p(a_{i,j} = \textsc{emit} \mid \boldsymbol{x}_{1}^{i}, \boldsymbol{y}_1^{j-1}) = \sigma(\text{MLP}(\mathbf{W}_t[\mathbf{h}_{i};\mathbf{s}_j] + b_t)). \end{align*} The probability of \textsc{shift} is simply $1-p(a_{i,j} = \textsc{emit})$. In this formulation, the probabilities of aligning $z_j$ to each alignment candidate $i$ can be computed by reading just $\boldsymbol{x}_1^i$ (rather than the entire sequence). The probabilities are also independent of the contents of the suffix $\boldsymbol{x}_{i+1}^{|\boldsymbol{x}|}$. Using the probabilities of the auxiliary $a_{i,j}$ variables, the alignment probabilities needed in Eq.~\ref{eq:model} are computed as: \begin{align*} p(z_j = i \mid z_{j-1}, \boldsymbol{y}_1^{j-1}, \boldsymbol{x}_{1}^{i}) &= \begin{cases} 0 & \text{if }i < z_{j-1} \\ p(a_{i,j} = \textsc{emit}) & \text{if }i=z_{j-1} \\ \left(\prod_{i'=z_{j-1}}^{i-1} p(a_{i',j} = \textsc{shift}) \right) p(a_{i,j} = \textsc{emit}) & \text{if }i>z_{j-1} \end{cases} \end{align*} \subsection{Inference algorithms} In SSNT, the probability of generating each $y_j$ depends only on the current output position's alignment ($z_j$), the current output prefix ($\boldsymbol{y}_1^{j-1}$), the input prefix up to the current alignment ($\boldsymbol{x}_1^{z_j}$). It does \emph{not} depend on the history of the alignment decisions. Likewise, the alignment decisions at each position are also conditionally independent of the history of alignment decisions. 
Because of these independence assumptions, $\boldsymbol{z}$ can be marginalised using a $O(|\boldsymbol{x}|^2 \cdot |\boldsymbol{y}|)$ time dynamic programming algorithm where each fill in a chart with computing the following marginal probabilities: \begin{align*} \begin{split} \alpha(i,j) & = p(z_j=i, \boldsymbol{y}_1^j \mid \boldsymbol{x}_1^{z_j}) = \sum_{i'=1}^{i} \alpha(i',j-1) \underbrace{p(z_j \mid z_{j-1}, \boldsymbol{x}_{1}^{z_j}, \boldsymbol{y}_1^{j-1})}_{\text{alignment probability}} \underbrace{p(y_j \mid \boldsymbol{x}_{1}^{z_j}, \boldsymbol{y}_1^{j-1})}_{\text{word probability}}. \end{split} \end{align*} The model is trained to minimize the negative log likelihood of the parallel corpus $S$: \begin{equation} \label{loss} \begin{split} \mathcal{L}(\boldsymbol{\theta}) &= - \sum_{(\boldsymbol{x}, \boldsymbol{y}) \in S} \log p(\boldsymbol{y}\ |\ \boldsymbol{x}; \boldsymbol{\theta})\\ &= - \sum_{(\boldsymbol{x}, \boldsymbol{y}) \in S} \log \alpha(|\boldsymbol{x}|, |\boldsymbol{y}|).\\ \end{split} \end{equation} The gradients of this objective with respect to the component probability models can be computed using automatic differentiation or using a secondary dynamic program that computes `backward' probabilities. We refer the reader to Section 3.1 of \citet{yu:2016} for details. In this paper, we use a slightly different objective from the one described in \citet{yu:2016}. Rather than marginalizing over the paths that end in any possible input positions $\sum_{i=1}^I\alpha(i, |\boldsymbol{y}|)$, we require that the full input be consumed when the final output symbol is generated. This constraint biases away from predicting outputs without explaining them using the input sequence. \section{Decoding} \label{sec:decoding} We now turn to the problem of decoding, that is, of computing \begin{align*} \hat{\boldsymbol{y}} = \arg \max_{\boldsymbol{y}} p(\boldsymbol{x} \mid \boldsymbol{y}) p(\boldsymbol{y}), \end{align*} where we are using the SSNT model described in the previous section as the channel model and a language model that delivers prior probabilities of the output sequence in left-to-right order, i.e., $p(y_i \mid \boldsymbol{y}^{i-1})$. Marginalizing the latent variable during search is computationally hard \citep{simaan:1996}, and so we approximate the search problem as \begin{align*} \hat{\boldsymbol{y}} = \arg \max_{\boldsymbol{y}} \max_{\boldsymbol{z}} p(\boldsymbol{x},\boldsymbol{z} \mid \boldsymbol{y}) p(\boldsymbol{y}). \end{align*} However, even with this simplification, the search problem remains nontrivial. On one hand, we must search over the space of all possible outputs with a model that makes no Markovian assumptions. This is similar to the decoding problem faced in standard seq2seq transducers. On the other hand, our model computes the probability of the given input conditional on the predicted output hypothesis. Therefore, instead of just relying on a single softmax to provide a probability for every output word type (as we conveniently can in the direct model), we must loop over each output word type, and run a softmax over the input vocabulary---a computational expense that is quadratic in the size of the vocabulary! To reduce this computational effort, we make use of an auxiliary direct model $q(\boldsymbol{y}, \boldsymbol{z} \mid \boldsymbol{x})$ to explore probable extensions of partial hypotheses, rather than trying to perform an exhaustive search over the vocabulary each time we extend an item on the beam. 
Algorithm~\ref{decode}, in Appendix~\ref{sec:algo_appendix}, describes the decoding algorithm based on a formulation by \citet{tillmann1997dp}. The idea is to create a matrix $Q$ of partial hypotheses. Each hypothesis in cell $(i,j)$ covers the first $i$ words of the input ($\boldsymbol{x}_1^i$) and corresponds to an output hypothesis prefix of length $j$ ($\boldsymbol{y}_1^j$). The hypothesis is associated with a model score. For each cell $(i, j)$, the direct proposal model first calculates the scores of possible extensions of previous cells that could then reach $(i,j)$ by considering every token in the output vocabulary, from all previous candidate cells $(i-1,\le j)$. That gives the top $K_1$ partial output sequences. These partial output sequences are subsequently rescored by the noisy channel model, and the $K_2$ best candidates are kept in the beam and used for further extension. The beam size $K_1$ and $K_2$ are hyperparameters to be tuned in the experiments. \subsection{Model combination} The decoder we have just described makes use of an auxiliary decoding model. This means that, as a generalisation, it is capable of decoding under an objective that is a linear combination of the direct model, channel model, language model and a bias for the output length\footnote{In the experiments, we did not marginalize the probability of the direct model when calculating the general search objective. We found that marginalizing the probability does not give better performance and makes decoding extremely slow.}, \begin{equation} O_{\boldsymbol{x}_1^i, \boldsymbol{y}_1^j} = \lambda_1 \log p(\boldsymbol{y}_1^j\ |\ \boldsymbol{x}_1^i) + \lambda_2 \log p(\boldsymbol{x}_1^i\ |\ \boldsymbol{y}_1^j) + \lambda_3 \log p(\boldsymbol{y}_1^j) + \lambda_4 |\boldsymbol{y}_1^j|. \end{equation} The bias is used to penalize the noisy channel model for generating too-short (or long) sequences. The $\lambda$'s are hyperparameters to be tuned using on a small amount of held-out development data. \section{Experiments} \label{sec:experiments} We evaluate our model on three natural language processing tasks, abstractive sentence summarisation, machine translation and morphological inflection generation. For each task, we compare the performance of the direct model, noisy channel model, and the interpolation of the two models. \subsection{Abstractive Sentence Summarisation} Sentence summarisation is the problem of constructing a shortened version of a sentence while preserving the majority of its meaning. In contrast to extractive summarisation, which can only copy words from the original sentence, abstractive summarisation permits arbitrary rewording of the sentence. The dataset \citep{DBLP:conf/emnlp/RushCW15} that we use is constructed by pairing the first sentence and the headline of each article from the annotated Gigaword corpus \citep{graff2003english,napoles2012annotated}. There are 3.8m, 190k and 381k sentence pairs in the training, validation and test sets, respectively. \cite{yu:2016} filtered the dataset by restricting the lengths of the input and output sentences to be no greater than 50 and 25 tokens, respectively. From the filtered data, they further sampled 1 million sentence pairs for training. We experimented on training the direct model and channel model with both the sampled 1 million and the full 3.8 million parallel data. The language model is trained on the target side of the parallel data, i.e. the headlines. 
We evaluated the generated summaries of 2000 randomly sampled sentence pairs using full length ROUGE F1. This setup is in line with the previous work on this task \citep{DBLP:conf/emnlp/RushCW15,chopra,gulcehre2016pointing,yu:2016}. The same configuration is used to train the direct model and the channel model. The loss (Equation \ref{loss}) is optimized by Adam \citep{DBLP:journals/corr/KingmaB14}, with initial learning rate of 0.001. We use LSTMs with 1 layer for both the encoder and decoders, with hidden units of 256. The mini-batch size is 32, and dropout of 0.2 is applied to the input and output of LSTMs. For the language model, we use a 2-layer LSTM with 1024 hidden units and 0.5 dropout. The learning rate is 0.0001. All the hyperparameters are optimised via grid search on the perplexity of the validation set. During decoding, beam search is employed with the number of proposals generated by the direct model $K_1 = 20$, and the number of best candidates selected by the noisy channel model $K_2 = 10$. Table \ref{test_rg} presents the ROUGE-F1 scores of the test set from the direct model, noisy channel model (channel + LM + bias), the interpolation of the direct model and the noisy channel model (direct + channel + LM + bias), and the interpolation of the direct model and language model (direct + LM + bias) trained on different sizes of data. The noisy channel model with the language model trained on the target side of the 1 million parallel data outperforms the direct model by approximately 1 point. Such improvement indicates that the language model helps improve the quality of the output sequence when no extra unlabelled data is available. Training the language model with all the headlines in the dataset, i.e. 3.8 million sentences, gives a further boost to the ROUGE score. This is in line with our expectation that the model benefits from adding large amounts of unlabelled data. The interpolation of the direct model, channel model, language model and bias of the output length achieves the best results --- the ROUGE score is close to the direct model trained on all the parallel data. Although there is still improvement, when the direct model is trained with more data, the gap between the direct model and the noisy channel model is smaller. No gains is observed if the language model is combined with the direct model. We find that as we increase the weight of the language model, the result is getting worse. Table \ref{prev_work} surveys published results on this task, and places our best models in the context of the current state-of-the-art results. ABS+ \citep{DBLP:conf/emnlp/RushCW15}, RAS-LSTM and RAS-Elman \citep{chopra} are different variations of the attentive models. {\it Pointing the unkown words} uses pointer networks \citep{vinyals2015pointer} to select the output token from the input sequence in order to avoid generating unknown tokens. ASC + FSC \citep{miao2016} is a semi-supervised model based on a variational autoencoder. Trained on 1m paired samples and 3.8m unpaired samples, the noisy channel achieves comparable or better results than (direct) models trained with 3.8m paired samples. Compared to \cite{miao2016}, whose ASC + FSC models is an alternative strategy for using unpaired data, the noisy channel is significantly more effective --- 33.21 versus 31.09 in ROUGE-1. 
Finally, motivated by the qualitative observation that noisy channel model outputs were quite fluent and often used reformulations of the input rather than a strict compression (which would be poorly scored by ROUGE), we carried out a human preference evaluation whose results are summarised in Table~\ref{tab:human}. This confirms that noisy channel summaries are strongly preferred over those of the direct model. \subsection{Machine Translation} We next evaluate our models on a Chinese--English machine translation task. We used parallel data with 184k sentence pairs (from the FBIS corpus, LDC2003E14) and monolingual data with 4.3 million of English sentences (selected from the English Gigaword). The training data is preprocessed by lowercasing the English sentences, replacing digits with `\#' token, and replacing tokens appearing less than 5 times with an UNK token. This results in vocabulary sizes of 30k and 20k for Chinese sentences and English sentences, respectively. The models are trained using Adam \citep{DBLP:journals/corr/KingmaB14} with initial learning rate of 0.001 for the direct model and the channel model, and 0.0001 for the language model. The LSTMs for the direct and channel models have 512 hidden units and 1 layer, and 2 layers with 1024 hidden units per layer for the language model. Dropout of 0.5 on the input and output of LSTMs is set for all the model training. The noisy channel decoding uses $K_1$ = 20 and $K_2$ = 10 as the beam sizes. Table \ref{mt-result} lists the translation performance of different models in BLEU scores. To set benchmarks, we train the vanilla and attentional sequence to sequence models \citep{sutskever:2014,bahdanau:2015} using the same parallel data. For direct models, we leverage bidirectional LSTMs as the encoder for this task. We can see that the vanilla sequence to sequence model behaves poorly due to the small amounts of parallel data. By contrast, the direct model (SSNT) and the attentional model work relatively well, with the attentional model outperforming the SSNT direct model. Although these models both directly model $p(\boldsymbol{y} \mid \boldsymbol{x})$, this result is unsurprising because the SSNT direct model is most effective when the alignment between sequences is largely monotonic, and Chinese--English translation word orders diverge considerably. However, despite this limitation, the noisy channel model is approximately 3 points higher in BLEU than the direct model, and the combination of noisy channel and direct model gives extra boost. Confirming the empirical findings of prior work (and in line with theoretical predictions), the interpolation of the direct model and language model is not effective. \subsection{Morphological Inflection Generation} Morphological inflection is the task of generating a target (inflected form) word from a source word (base form), given a morphological attribute, e.g. number, tense, and person etc.. It is useful for reducing data sparsity issues in translating morphologically rich languages. The transformation from the base form to the inflected form is usually to add prefix or suffix, or to do character replacement. The dataset \citep{DBLP:conf/naacl/DurrettD13} that we use in the experiments is created from Wiktionary, including inflections for German nouns, German verbs, Spanish Verbs, Finnish noun and adjective, and Finnish verbs. 
We only experimented on German nouns and German verbs, as German nouns is the most difficult task\footnote{While state-of-the-art systems can achieve 99\% accuracies on Spanish verbs and Finnish verbs, they can only get 89\% accuracy on German nouns.}, and the direct model does not perform as well as other state-of-the-art systems on German verbs. The train/dev/test split for German nouns is 2364/200/200, and for German verbs is 1617/200/200. There are 8 and 27 inflection types in German nouns and German verbs, respectively. Following previous work, we learn a separate model for each type of inflection independent of the other inflections. We report results on the average accuracy across different inflections. Our language models were trained on word types extracted by running a morphological analysis tool on the WMT 2016 monolingual data and extracting examples of appropriately inflected word forms.\footnote{\url{http://www.statmt.org/wmt16/translation-task.html}} After annotation the number of instances for training the language model ranged from 300k to 3.8m for different inflection types in German nouns, and from 200 to 54k in German verbs. The experimental setup that we use on this task is $K_1$ = 60, $K_2$ = 30, \begin{itemize} \item direct and channel model: 1 layer LSTM with 128 hidden, $\eta = 0.001$, dropout = 0.5. \item language model: 2 layer LSTM with 512 hidden, $\eta = 0.0001$, dropout = 0.5. \end{itemize} Table \ref{morph-result} summarises the results from our models. On both datasets, the noisy channel model (channel + LM + bias) does not perform as well as the direct model, but the interpolation of the direct model and noisy channel model (direct + channel + LM + bias) significantly outperforms the direct model. The interpolation of the direct model and language model (direct + LM + bias) achieves better results than the direct model and the noisy channel model on German nouns, but not on German verbs. For further comparison, we also included the state-of-the-art results as benchmarks. NCK15 \citep{DBLP:conf/naacl/NicolaiCK15} tackles the task based on the three-stage approach: (1) align the source and target word, (2) extract inflection rules, (3) apply the rule to new examples. FTND16 \citep{DBLP:conf/naacl/FaruquiTND16} is based on neural sequence to sequence models. Both models (NCK15+ and FTND16+) rerank the candidate outputs by the scores predicted from n-gram language models, together with other features. \section{Analysis} By observing the output generated by the direct model and noisy channel model, we find (in line with theoretical critiques of conditional models) that the direct model may leave out key information. By contrast, the noisy channel model does seem to avoid this issue. To illustrate, in Example~1 (see Appendix~B) in Table~\ref{example}, the direct model ignores the key phrase `coping with', resulting in incomplete meaning, but the noisy channel model covers it. Similarly, in Example 6, the direct model does not translate the Chinese word corresponding to `investigation'. We also observe that while the direct model mostly copies words from the source sentence, the noisy channel model prefers generating paraphrases. For instance, in Example 2, while the direct model copies the word `accelerate' in the generated output, the noisy channel model generate `speed up' instead. While one might argue that copying is a preferable compression technique than paraphrasing (as long as it produces grammatical outputs), it does show the power of these models. 
\section{Related work} Noisy channel decompositions have been successfully used in a variety of problems, including speech recognition~\citep{jelinek:1998}, machine translation~\citep{brown:1993}, spelling correction~\citep{brill:2000}, and question answering~\citep{echihabi:2003}. The idea of adding language models and monolingual data in machine translation has been explored in earlier work. \cite{gulcehre:2015} propose two strategies of combining a language model with a neural sequence to sequence model. In shallow fusion, during decoding the sequence to sequence model (direct model) proposes candidate outputs and these candidates are reranked based on the scores calculated by a weighted sum of the probability of the translation model and that of the language model. In deep fusion, the language model is integrated into the decoder of the sequence to sequence model by concatenating their hidden state at each time step. \cite{sennrich:2016} incorporate target language unpaired training data by doing back-translation to create synthetic parallel training data. While this technique is quite effective, its practicality seems limited to problems where the inputs and outputs contain roughly the same information (such as translation). \cite{cheng:2016} leverages the abundant monolingual data by doing multitask learning with an autoencoding objective. A number of papers have remarked on the tendency for content to get dropped (or repeated) in translation. \citet{liu:2016} propose translating in both a left-to-right and a left-to-right direction and seeking a consensus. \citet{tu:2016} propose augmenting a direct model's decoding objective with a reverse translation model (similar to our channel model except it conditions on the direct model's output RNN's hidden states rather than the words); however, that work just reranks complete translation hypotheses rather than developing a model that permits an incremental search. Another trend of work that is related to our model is the investigation of making online prediction for machine translation \citep{gu:2016,grissom:2014,sankaran:2010} and speech recognition \citep{hwang:2016,jaitly2015neural}. Our direct model (and channel model) shares the idea of introducing stochastic latent variables to neural networks with several papers and marginalising these during training. Examples include connectionist temporal classification (CTC) \citep{graves2006connectionist} and the more recent segmental recurrent neural networks (SRNN) \citep{kong2015segmental}. Compared to these models, our direct model has the advantage of capturing unbounded dependencies of output words. The direct model is closely related to the sequence transduction model \citep{graves2012sequence} in the way of modeling the probability of predicting output tokens and marginalizing latent variables using dynamic programming. However, rather than modeling the joint distribution over outputs and alignments by inserting null symbols into the output sequence, our direct model defines a separate latent alignment variable, with alignment distribution defined with neural networks. Similar to our work, the model in \citep{alkhoulialignment} is decomposed into the alignment model and the model of word predictions. The two models are trained separately and combined during decoding, with subsequent refinements using a Viterbi-EM approximation. 
By contrast, in our direct and channel models, the latent and observed components of the models are trained jointly using a dynamic program to exactly marginalise the unobserved variables. \section{Conclusion} We have presented and empirically validated a noisy channel transduction model that uses component models based on recurrent neural networks. This formulation lets us use unpaired outputs to estimate the parameters of the source model and input-output pairs to train the channel model. Despite the channel model's ability to condition on long sequences, we are able to maintain tractable decoding by using a latent segmentation variable that breaks the conditioning context up into a series of monotonically growing segments. Our experiments show that this model makes excellent use of unpaired training data. \bibliographystyle{iclr2017_conference} \newpage \appendix \section{Algorithm} \label{sec:algo_appendix} \begin{algorithm*}[ht] \caption{Noisy Channel Decoding} \label{decode} \begin{algorithmic} \State \textbf{Notation: } $Q$ is the Viterbi matrix, bp is the backpointer, $W$ stores the predicted tokens, $\mathcal{V}$ refers to the vocabulary, $I=|\boldsymbol{x}|$, and $J_\text{max}$ denotes the maximum number of output tokens that can be predicted. \State \textbf{Input: } source sequence $\boldsymbol{x}$ \State \textbf{Output: } best output sequence $\boldsymbol{y^*}$ \State \textbf{Initialisation: } $Q \in \mathbb{R}^{I \times J_\text{max}\times K_1}$, bp $\in \mathbb{N}^{I \times J_\text{max}\times K_1}$, $W \in \mathbb{N}^{I \times J_\text{max}\times K_1}$, \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ $Q_{temp} \in \mathbb{R}^{K_1}$, $bp_{temp} \in \mathbb{N}^{K_1}$, $W_{temp} \in \mathbb{N}^{K_1}$ \For{$i \in [1, I]$} \State $Q_{temp} \gets \topk(K_1)_{y \in \mathcal{V}}q(z_1 = i) $ $q(y\ |\ \textsc{start}, z_1, \boldsymbol{x}_{1}^{z_1})$ \Comment Candidates generated by $q(\boldsymbol{y}\ |\ \boldsymbol{x})$. \State $bp_{temp}\gets 0$ \State $W_{temp} \gets \argtopk(K_1)_{y \in \mathcal{V}}q(z_1 = i)$ $q(y\ |\ \textsc{start}, z_1, \boldsymbol{x}_1^{z_1})$ \State $Q[i, 1] \gets \topk(K_2)_{y \in W_{temp}} O_{\boldsymbol{x}_1^i, y}$ \Comment Rerank the candidates by objective ($O$). \State $W[i,1] \gets \argtopk(K_2)_{y \in W_{temp}}O_{\boldsymbol{x}_1^i, y}$ \EndFor \For{$j\in[2, J_\text{max}]$} \For{$i \in [1, I]$} \State $Q_{temp} \gets \topk(K_1)_{y \in \mathcal{V}, k \in [1, i]} Q[k,j-1] \cdot$ $q(z_j = i\ |\ z_{j-1} = k)q(y\ |\ \boldsymbol{y}_1^{j-1}, z_j, \boldsymbol{x}_1^{z_j})$ \State $bp_{temp} , W_{temp} \gets \argtopk(K_1)_{y \in \mathcal{V}, k \in [1, i]} $ $Q[k,j-1]q(z_j = i\ |\ z_{j-1} = k) \cdot$ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ $q(y\ |\ \boldsymbol{y}_1^{j-1}, z_j, \boldsymbol{x})$ \State $Y \gets \candidate(bp_{temp}, W_{temp})$ \Comment Get partial candidate $\boldsymbol{y}_1^j$. \State $Q[i,j] \gets \topk(K_2)_{\boldsymbol{y}_j \in Y} O_{\boldsymbol{x}_1^i, \boldsymbol{y}_1^j}$ \State $bp[i,j] , W[i,j] \gets %\argtopk(K_2)_{\boldsymbol{y}_1^j \in Y}$ $p(\boldsymbol{x}_1^i\ |\ \boldsymbol{y}_1^j) p(\boldsymbol{y}_1^j)$ \argtopk(K_2)_{\boldsymbol{y}_1^j \in Y}$ $O_{\boldsymbol{x}_1^i, \boldsymbol{y}_1^j}$ \EndFor \EndFor \State \Return a sequence of words stored in $W$ by following backpointers starting from $(I,\argmax_j Q[I, j])$. 
\end{algorithmic} \end{algorithm*} \section{Example outputs} \label{sec:outputs} \end{document}
Joint Modeling of Content and Discourse Relations in Dialogues
1705.05039
Table 5: Consistency of Understanding (COU) prediction results on AMI-sub. Results that statistically significantly outperform ngrams-based baseline and kim2016improving are highlighted with ∗ (p<0.05, paired t-test). For reference, we also show the prediction performance based on gold-standard discourse relations and phrase selection labels.
[ "[EMPTY]", "[BOLD] Acc", "[BOLD] F1" ]
[ [ "[BOLD] Comparisons", "[EMPTY]", "[EMPTY]" ], [ "Baseline (Majority)", "66.7", "40.0" ], [ "Ngrams (SVM)", "51.2", "50.6" ], [ "kim2016improving", "60.5", "50.5" ], [ "[BOLD] Features from Our Model", "[EMPTY]", "[EMPTY]" ], [ "Consistency Probability (Prob)", "52.7", "52.1" ], [ "Discourse Relation (Disc)", "63.6", "57.1∗" ], [ "Word Entrainment (Ent)", "60.5∗", "57.1∗" ], [ "Prob + Disc+ Ent", "[BOLD] 68.2∗", "[BOLD] 63.1∗" ], [ "[BOLD] Oracles", "[EMPTY]", "[EMPTY]" ], [ "Discourse Relation", "69.8", "62.7" ], [ "Word Entrainment", "61.2", "57.8" ] ]
All SVMs trained with our features surpass the ngrams-based baseline. Especially, the discourse features, word entrainment feature, and the combination of the three, all significantly outperform the state-of-the-art system by \newcitekim2016improving.
Goal-oriented dialogues, such as meetings, negotiations, or customer service transcripts, play an important role in our daily life. Automatically extracting the critical points and important outcomes from dialogues would facilitate generating summaries for complicated conversations, understanding the decision-making process of meetings, or analyzing the effectiveness of collaborations. We are interested in a specific type of dialogues --- spoken meetings, which is a common way for collaboration and idea sharing. Previous work~\cite{kirschner2012visualizing} has shown that discourse structure can be used to capture the main discussion points and arguments put forward during problem-solving and decision-making processes in meetings. Indeed, content of different speaker turns do not occur in isolation, and should be interpreted within the context of discourse. Meanwhile, content can also reflect the purpose of speaker turns, thus facilitate with discourse relation understanding. Take the meeting snippet from AMI corpus~\cite{ami} in Figure~\ref{fig:example_intro} as an example. This discussion is annotated with discourse structure based on the Twente Argumentation Schema (TAS) by~\newcite{so65562}, which focuses on argumentative discourse information. As can be seen, meeting participants evaluate different options by showing doubt (\textsc{uncertain}), bringing up alternative solution (\textsc{option}), or giving feedback. The discourse information helps with the identification of the key discussion point, i.e., ``which type of battery to use", by revealing the discussion flow. To date, most efforts to leverage discourse information to detect salient content from dialogues have focused on encoding gold-standard discourse relations as features for use in classifier training~\cite{murray2006incorporating,Galley:2006:SCR:1610075.1610126,mckeown2007using,Bui:2009:EDM:1708376.1708410}. However, automatic discourse parsing in dialogues is still a challenging problem~\cite{perret-EtAl:2016:N16-1}. Moreover, acquiring human annotation on discourse relations is a time-consuming and expensive process, and does not scale for large datasets. In this paper, we propose \textit{a joint modeling approach to select salient phrases reflecting key discussion points as well as label the discourse relations between speaker turns in spoken meetings}. We hypothesize that leveraging the interaction between content and discourse has the potential to yield better prediction performance on both \textit{phrase-based content selection} and \textit{discourse relation prediction}. Specifically, we utilize argumentative discourse relations as defined in Twente Argument Schema (TAS)~\cite{so65562}, where discussions are organized into tree structures with discourse relations labeled between nodes (as shown in Figure~\ref{fig:example_intro}). Algorithms for joint learning and joint inference are proposed for our model. We also present a variation of our model to treat discourse relations as latent variables when true labels are not available for learning. We envision that the extracted salient phrases by our model can be used as input to abstractive meeting summarization systems~\cite{wang-cardie:2013:ACL2013,mehdad-carenini-ng:2014:P14-1}. Combined with the predicted discourse structure, a visualization tool can be exploited to display conversation flow to support intelligent meeting assistant systems. To the best of our knowledge, our work is the first to jointly model content and discourse relations in meetings. 
We test our model with two meeting corpora --- the AMI corpus~\cite{ami} and the ICSI corpus~\cite{icsi}. Experimental results show that our model yields an accuracy of 63.2 on phrase selection, which is significantly better than a classifier based on Support Vector Machines (SVM). Our discourse prediction component also obtains better accuracy than a state-of-the-art neural network-based approach (59.2 vs. 54.2). Moreover, our model trained with latent discourse outperforms SVMs on both AMI and ICSI corpora for phrase selection. We further evaluate the usage of selected phrases as extractive meeting summaries. Results evaluated by ROUGE~\cite{Lin:2003:AES:1073445.1073465} demonstrate that our system summaries obtain a ROUGE-SU4 F1 score of 21.3 on AMI corpus, which outperforms non-trivial extractive summarization baselines and a keyword selection algorithm proposed in~\newcite{liu2009unsupervised}. Moreover, since both content and discourse structure are critical for building shared understanding among participants~\cite{mulder2002assessing,mercer2004sociocultural}, we further investigate whether our learned model can be utilized to predict the consistency among team members' understanding of their group decisions. This task is first defined as \textit{consistency of understanding} (COU) prediction by~\newcite{kim2016improving}, who have labeled a portion of AMI discussions with consistency or inconsistency labels. We construct features from our model predictions to capture different discourse patterns and word entrainment scores for discussion with different COU level. Results on AMI discussions show that SVM classifiers trained with our features significantly outperform the state-of-the-art results~\cite{kim2016improving} (F1: 63.1 vs. 50.5) and non-trivial baselines. The rest of the paper is structured as follows: we first summarize related work in Section~\ref{sec:related}. The joint model is presented in Section~\ref{sec:model}. Datasets and experimental setup are described in Section~\ref{sec:data}, which is followed by experimental results (Section~\ref{sec:result}). We then study the usage of our model for predicting consistency of understanding in groups in Section~\ref{sec:consistency}. We finally conclude in Section~\ref{sec:conclusion}. We presented a joint model for performing phrase-level content selection and discourse relation prediction in spoken meetings. Experimental results on AMI and ICSI meeting corpora showed that our model can outperform state-of-the-art methods for both tasks. Further evaluation on the task of predicting consistency-of-understanding in meetings demonstrated that classifiers trained with features constructed from our model output produced superior performance compared to the state-of-the-art model. This provides an evidence of our model being successfully applied in other prediction tasks in spoken meetings.\begin{abstract} \fontsize{10}{12}\selectfont We present a joint modeling approach to identify salient discussion points in spoken meetings as well as to label the discourse relations between speaker turns. A variation of our model is also discussed when discourse relations are treated as latent variables. Experimental results on two popular meeting corpora show that our joint model can outperform state-of-the-art approaches for both phrase-based content selection and discourse relation prediction tasks. We also evaluate our model on predicting the consistency among team members' understanding of their group decisions. 
Classifiers trained with features constructed from our model achieve significant better predictive performance than the state-of-the-art. \end{abstract} \documentclass[11pt,a4paper]{article} \usepackage[hyperref]{acl2017} \usepackage[noend]{algpseudocode} \usepackage[lined,boxed,commentsnumbered]{algorithm2e} \newcolumntype{C}[1]{>{\centering\let\newline\\\arraybackslash\hspace{-5pt}}m{#1}} \aclfinalcopy % Uncomment this line for the final submission \def\aclpaperid{***} % Enter the acl Paper ID here \newcommand{\kechen}[1]{\textcolor{blue}{[Kechen]: {#1}}} \newcommand{\lu}[1]{\textcolor{magenta}{[Lu]: {#1}}} \newcommand{\joe}[1]{\textcolor{cyan}{[Joe]: {#1}}} \title{Joint Modeling of Content and Discourse Relations in Dialogues} \author{Kechen Qin$^{1}$ ~~~~ Lu Wang$^{1}$ ~~~~ Joseph Kim$^{2}$\\ $^{1}$College of Computer and Information Science, Northeastern University\\ $^{2}$Computer Science and Artificial Intelligence Laboratory, \\ Massachusetts Institute of Technology\\ {\tt $^{1}$qin.ke@husky.neu.edu},~{\tt luwang@ccs.neu.edu}\\ {\tt $^{2}$joseph\_kim@csail.mit.edu} \\} \begin{document} \maketitle \input{abstract.tex} \section{Introduction} \input{intro.tex} \section{Related Work} \label{sec:related} \input{related.tex} \section{The Joint Model of Content and Discourse Relations} \label{sec:model} \input{model.tex} \section{Datasets and Experimental Setup} \label{sec:data} \input{data.tex} \section{Experimental Results} \label{sec:result} \input{result.tex} \section{Predicting Consistency of Understanding} \label{sec:consistency} \input{consistency.tex} \section{Conclusion} \label{sec:conclusion} \input{conclusion.tex} \section*{Acknowledgments} This work was supported in part by National Science Foundation Grant IIS-1566382 and a GPU gift from Nvidia. We thank three anonymous reviewers for their valuable suggestions on various aspects of this work. \bibliographystyle{acl_natbib} \appendix \end{document} \noindent \textbf{Meeting Corpora.} We evaluate our joint model on two meeting corpora with rich annotations: the AMI meeting corpus~\cite{ami} and the ICSI meeting corpus~\cite{icsi}. AMI corpus consists of 139 scenario-driven meetings, and ICSI corpus contains 75 naturally occurring meetings. Both of the corpora are annotated with dialogue acts, adjacency pairs, and topic segmentation. We treat each topic segment as one discussion, and remove discussions with less than 10 turns or labeled as ``opening" and ``chitchat". %There are 52.1 turns per discussion for AMI and 50.8 turns for ICSI on average. 694 discussions from AMI and 1139 discussions from ICSI are extracted, and these two datasets are henceforth referred as \textsc{AMI-full} and \textsc{ICSI-full}. \noindent \textbf{Acquiring Gold-Standard Labels.} Both corpora contain human constructed abstractive summaries and extractive summaries on meeting level. Short abstracts, usually in one sentence, are constructed by meeting participants --- \textit{participant summaries}, and external annotators --- \textit{abstractive summaries}. Dialogue acts that contribute to important output of the meeting, e.g. decisions, are identified and used as extractive summaries, and some of them are also linked to the corresponding abstracts. Since the corpora do not contain phrase-level importance annotation, we induce gold-standard labels for candidate phrases based on the following rule. A candidate phrase is considered as a positive sample if its head word is contained in any abstractive summary or participant summary. 
Joint Modeling of Content and Discourse Relations in Dialogues
1705.05039
Table 4: ROUGE scores for phrase-based extractive summarization evaluated against human-constructed utterance-level extractive summaries and abstractive summaries. Our models that statistically significantly outperform SVM and liu2009unsupervised are highlighted with ∗ (p<0.05, paired t-test). Best ROUGE score for each column is in bold.
[ "[ITALIC] Extractive Summaries as Gold-Standard", "[ITALIC] Extractive Summaries as Gold-Standard", "[ITALIC] Extractive Summaries as Gold-Standard ROUGE-1", "[ITALIC] Extractive Summaries as Gold-Standard ROUGE-1", "[ITALIC] Extractive Summaries as Gold-Standard ROUGE-1", "[ITALIC] Extractive Summaries as Gold-Standard ROUGE-SU4", "[ITALIC] Extractive Summaries as Gold-Standard ROUGE-SU4", "[ITALIC] Extractive Summaries as Gold-Standard ROUGE-SU4" ]
[ [ "[EMPTY]", "Len", "Prec", "Rec", "F1", "Prec", "Rec", "F1" ], [ "Longest DA", "30.9", "64.4", "15.0", "23.1", "58.6", "9.3", "15.3" ], [ "Centroid DA", "17.5", "[BOLD] 73.9", "13.4", "20.8", "[BOLD] 62.5", "6.9", "11.3" ], [ "SVM", "49.8", "47.1", "24.1", "27.5", "22.7", "10.7", "11.8" ], [ "liu2009unsupervised", "62.4", "40.4", "39.2", "36.2", "15.5", "15.2", "13.5" ], [ "Our Model", "66.6", "45.4", "44.7", "41.1∗", "24.1∗", "23.4∗", "20.9∗" ], [ "Our Model-latent", "85.9", "42.9", "[BOLD] 49.3", "[BOLD] 42.4∗", "21.6", "[BOLD] 25.7∗", "[BOLD] 21.3∗" ], [ "[ITALIC] Abstractive Summaries as Gold-Standard", "[ITALIC] Abstractive Summaries as Gold-Standard", "[ITALIC] Abstractive Summaries as Gold-Standard", "[ITALIC] Abstractive Summaries as Gold-Standard", "[ITALIC] Abstractive Summaries as Gold-Standard", "[ITALIC] Abstractive Summaries as Gold-Standard", "[ITALIC] Abstractive Summaries as Gold-Standard", "[ITALIC] Abstractive Summaries as Gold-Standard" ], [ "[EMPTY]", "[EMPTY]", "ROUGE1", "ROUGE1", "ROUGE1", "ROUGE-SU4", "ROUGE-SU4", "ROUGE-SU4" ], [ "[EMPTY]", "Len", "Prec", "Rec", "F1", "Prec", "Rec", "F1" ], [ "Longest DA", "30.9", "14.8", "5.5", "7.4", "4.8", "1.4", "1.9" ], [ "Centroid DA", "17.5", "[BOLD] 24.9", "5.6", "8.5", "[BOLD] 11.6", "1.4", "2.2" ], [ "SVM", "49.8", "13.3", "9.7", "9.5", "4.4", "2.4", "2.4" ], [ "liu2009unsupervised", "62.4", "10.3", "16.7", "11.3", "2.7", "4.5", "2.8" ], [ "Our Model", "66.6", "12.6", "18.9", "[BOLD] 13.1∗", "3.8", "5.5∗", "[BOLD] 3.7∗" ], [ "Our Model-latent", "85.9", "11.4", "[BOLD] 20.0", "12.4∗", "3.3", "[BOLD] 6.1∗", "3.5∗" ] ]
Meanwhile, our system significantly outperforms the SVM-based classifiers when evaluated on ROUGE recall and F1, while achieving comparable precision. Compared to \newcite{liu2009unsupervised}, our system also yields better results on all metrics.
Goal-oriented dialogues, such as meetings, negotiations, or customer service transcripts, play an important role in our daily life. Automatically extracting the critical points and important outcomes from dialogues would facilitate generating summaries for complicated conversations, understanding the decision-making process of meetings, or analyzing the effectiveness of collaborations. We are interested in a specific type of dialogue --- spoken meetings, which are a common medium for collaboration and idea sharing. Previous work~\cite{kirschner2012visualizing} has shown that discourse structure can be used to capture the main discussion points and arguments put forward during problem-solving and decision-making processes in meetings. Indeed, the content of different speaker turns does not occur in isolation, and should be interpreted within the context of discourse. Meanwhile, content can also reflect the purpose of speaker turns, thus facilitating discourse relation understanding. Take the meeting snippet from the AMI corpus~\cite{ami} in Figure~\ref{fig:example_intro} as an example. This discussion is annotated with discourse structure based on the Twente Argumentation Schema (TAS) by~\newcite{so65562}, which focuses on argumentative discourse information. As can be seen, meeting participants evaluate different options by showing doubt (\textsc{uncertain}), bringing up an alternative solution (\textsc{option}), or giving feedback. The discourse information helps with the identification of the key discussion point, i.e., ``which type of battery to use", by revealing the discussion flow. To date, most efforts to leverage discourse information to detect salient content from dialogues have focused on encoding gold-standard discourse relations as features for use in classifier training~\cite{murray2006incorporating,Galley:2006:SCR:1610075.1610126,mckeown2007using,Bui:2009:EDM:1708376.1708410}. However, automatic discourse parsing in dialogues is still a challenging problem~\cite{perret-EtAl:2016:N16-1}. Moreover, acquiring human annotation on discourse relations is a time-consuming and expensive process, and does not scale to large datasets. In this paper, we propose \textit{a joint modeling approach to select salient phrases reflecting key discussion points as well as label the discourse relations between speaker turns in spoken meetings}. We hypothesize that leveraging the interaction between content and discourse has the potential to yield better prediction performance on both \textit{phrase-based content selection} and \textit{discourse relation prediction}. Specifically, we utilize argumentative discourse relations as defined in the Twente Argument Schema (TAS)~\cite{so65562}, where discussions are organized into tree structures with discourse relations labeled between nodes (as shown in Figure~\ref{fig:example_intro}). Algorithms for joint learning and joint inference are proposed for our model. We also present a variation of our model that treats discourse relations as latent variables when true labels are not available for learning. We envision that the salient phrases extracted by our model can be used as input to abstractive meeting summarization systems~\cite{wang-cardie:2013:ACL2013,mehdad-carenini-ng:2014:P14-1}. Combined with the predicted discourse structure, they can also drive a visualization tool that displays the conversation flow to support intelligent meeting assistant systems. To the best of our knowledge, our work is the first to jointly model content and discourse relations in meetings. 
We test our model with two meeting corpora --- the AMI corpus~\cite{ami} and the ICSI corpus~\cite{icsi}. Experimental results show that our model yields an accuracy of 63.2 on phrase selection, which is significantly better than a classifier based on Support Vector Machines (SVMs). Our discourse prediction component also obtains better accuracy than a state-of-the-art neural network-based approach (59.2 vs. 54.2). Moreover, our model trained with latent discourse outperforms SVMs on both the AMI and ICSI corpora for phrase selection. We further evaluate the use of the selected phrases as extractive meeting summaries. Results evaluated by ROUGE~\cite{Lin:2003:AES:1073445.1073465} demonstrate that our system summaries obtain a ROUGE-SU4 F1 score of 21.3 on the AMI corpus, which outperforms non-trivial extractive summarization baselines and a keyword selection algorithm proposed in~\newcite{liu2009unsupervised}. Moreover, since both content and discourse structure are critical for building shared understanding among participants~\cite{mulder2002assessing,mercer2004sociocultural}, we further investigate whether our learned model can be utilized to predict the consistency among team members' understanding of their group decisions. This task is first defined as \textit{consistency of understanding} (COU) prediction by~\newcite{kim2016improving}, who have labeled a portion of AMI discussions with consistency or inconsistency labels. We construct features from our model predictions to capture different discourse patterns and word entrainment scores for discussions with different COU levels. Results on AMI discussions show that SVM classifiers trained with our features significantly outperform the state-of-the-art results~\cite{kim2016improving} (F1: 63.1 vs. 50.5) and non-trivial baselines. The rest of the paper is structured as follows: we first summarize related work in Section~\ref{sec:related}. The joint model is presented in Section~\ref{sec:model}. Datasets and experimental setup are described in Section~\ref{sec:data}, which is followed by experimental results (Section~\ref{sec:result}). We then study the use of our model for predicting consistency of understanding in groups in Section~\ref{sec:consistency}. We finally conclude in Section~\ref{sec:conclusion}. We presented a joint model for performing phrase-level content selection and discourse relation prediction in spoken meetings. Experimental results on the AMI and ICSI meeting corpora showed that our model can outperform state-of-the-art methods for both tasks. Further evaluation on the task of predicting consistency-of-understanding in meetings demonstrated that classifiers trained with features constructed from our model output produced superior performance compared to the state-of-the-art model. This provides evidence that our model can be successfully applied to other prediction tasks in spoken meetings.\begin{abstract} \fontsize{10}{12}\selectfont We present a joint modeling approach to identify salient discussion points in spoken meetings as well as to label the discourse relations between speaker turns. A variation of our model is also discussed when discourse relations are treated as latent variables. Experimental results on two popular meeting corpora show that our joint model can outperform state-of-the-art approaches for both phrase-based content selection and discourse relation prediction tasks. We also evaluate our model on predicting the consistency among team members' understanding of their group decisions. 
Classifiers trained with features constructed from our model achieve significantly better predictive performance than the state-of-the-art. \end{abstract} \documentclass[11pt,a4paper]{article} \usepackage[hyperref]{acl2017} \usepackage[noend]{algpseudocode} \usepackage[lined,boxed,commentsnumbered]{algorithm2e} \newcolumntype{C}[1]{>{\centering\let\newline\\\arraybackslash\hspace{-5pt}}m{#1}} \aclfinalcopy % Uncomment this line for the final submission \def\aclpaperid{***} % Enter the acl Paper ID here \newcommand{\kechen}[1]{\textcolor{blue}{[Kechen]: {#1}}} \newcommand{\lu}[1]{\textcolor{magenta}{[Lu]: {#1}}} \newcommand{\joe}[1]{\textcolor{cyan}{[Joe]: {#1}}} \title{Joint Modeling of Content and Discourse Relations in Dialogues} \author{Kechen Qin$^{1}$ ~~~~ Lu Wang$^{1}$ ~~~~ Joseph Kim$^{2}$\\ $^{1}$College of Computer and Information Science, Northeastern University\\ $^{2}$Computer Science and Artificial Intelligence Laboratory, \\ Massachusetts Institute of Technology\\ {\tt $^{1}$qin.ke@husky.neu.edu},~{\tt luwang@ccs.neu.edu}\\ {\tt $^{2}$joseph\_kim@csail.mit.edu} \\} \begin{document} \maketitle \input{abstract.tex} \section{Introduction} \input{intro.tex} \section{Related Work} \label{sec:related} \input{related.tex} \section{The Joint Model of Content and Discourse Relations} \label{sec:model} \input{model.tex} \section{Datasets and Experimental Setup} \label{sec:data} \input{data.tex} \section{Experimental Results} \label{sec:result} \input{result.tex} \section{Predicting Consistency of Understanding} \label{sec:consistency} \input{consistency.tex} \section{Conclusion} \label{sec:conclusion} \input{conclusion.tex} \section*{Acknowledgments} This work was supported in part by National Science Foundation Grant IIS-1566382 and a GPU gift from Nvidia. We thank three anonymous reviewers for their valuable suggestions on various aspects of this work. \bibliographystyle{acl_natbib} \appendix \end{document} \noindent \textbf{Meeting Corpora.} We evaluate our joint model on two meeting corpora with rich annotations: the AMI meeting corpus~\cite{ami} and the ICSI meeting corpus~\cite{icsi}. The AMI corpus consists of 139 scenario-driven meetings, and the ICSI corpus contains 75 naturally occurring meetings. Both of the corpora are annotated with dialogue acts, adjacency pairs, and topic segmentation. We treat each topic segment as one discussion, and remove discussions with fewer than 10 turns or those labeled as ``opening" and ``chitchat". %There are 52.1 turns per discussion for AMI and 50.8 turns for ICSI on average. 694 discussions from AMI and 1139 discussions from ICSI are extracted, and these two datasets are henceforth referred to as \textsc{AMI-full} and \textsc{ICSI-full}. \noindent \textbf{Acquiring Gold-Standard Labels.} Both corpora contain human-constructed abstractive summaries and extractive summaries at the meeting level. Short abstracts, usually one sentence long, are constructed by meeting participants (\textit{participant summaries}) and by external annotators (\textit{abstractive summaries}). Dialogue acts that contribute to important outputs of the meeting, e.g., decisions, are identified and used as extractive summaries, and some of them are also linked to the corresponding abstracts. Since the corpora do not contain phrase-level importance annotation, we induce gold-standard labels for candidate phrases based on the following rule. A candidate phrase is considered a positive sample if its head word is contained in any abstractive summary or participant summary. 
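To make the labeling rule concrete, the following is a minimal sketch of the induction step. The tokenizer and the precomputed head word per candidate are simplifying assumptions; the paper does not prescribe these implementation details.
\begin{verbatim}
# Minimal sketch of the gold-label induction rule described above: a candidate
# phrase is labeled positive when its head word appears in any abstractive or
# participant summary. Tokenization and the 'head_word' field are assumptions.
import re

def tokenize(text):
    """Lowercase word tokenizer (assumption; any reasonable tokenizer works)."""
    return set(re.findall(r"[a-z']+", text.lower()))

def induce_gold_labels(candidates, summaries):
    """candidates: list of dicts with a 'head_word' key.
    summaries: list of abstractive/participant summary strings.
    Returns a list of 0/1 labels, one per candidate."""
    summary_vocab = set()
    for s in summaries:
        summary_vocab |= tokenize(s)
    return [1 if c["head_word"].lower() in summary_vocab else 0 for c in candidates]

# Example usage with toy data:
cands = [{"head_word": "battery"}, {"head_word": "kinetic"}, {"head_word": "thing"}]
abstracts = ["The group decided to use a kinetic battery for the remote."]
print(induce_gold_labels(cands, abstracts))  # [1, 1, 0]
\end{verbatim}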
On average, 71.9 candidate phrases are identified per discussion for \textsc{AMI-full} with 31.3\% labeled as positive, and 73.4 for \textsc{ICSI-full} with 24.0\% of them as positive samples. Furthermore, a subset of discussions in \textsc{AMI-full} are annotated with discourse structure and relations based on Twente Argumentation Schema (TAS) by~\newcite{so65562}\footnote{There are 9 types of relations in TAS: \textsc{positive}, \textsc{negative}, \textsc{uncertain}, \textsc{request}, \textsc{specialization}, \textsc{elaboration}, \textsc{option}, \textsc{option exclusion}, and \textsc{subject-to}.}. A tree-structured argument diagram (as shown in Figure~\ref{fig:example_intro}) is created for each discussion or a part of the discussion. The nodes of the tree contain partial or complete speaker turns, and discourse relation types are labeled on the links between the nodes. In total, we have 129 discussions annotated with discourse labels. %, with 20.0 turns per discussion. This dataset is called \textsc{AMI-sub} hereafter. \noindent \textbf{Experimental Setup.} 5-fold cross validation is used for all experiments. All real-valued features are uniformly normalized to [0,1]. For the joint learning algorithm, we use 10 epochs and carry out 50 sampling rounds of MCMC for each training sample. The learning rate is set to 0.01. We run the learning algorithm 20 times, and use the average of the learned weights as the final parameter values. For models trained with latent discourse relations, we fix the number of relations to $9$. \noindent \textbf{Baselines and Comparisons.} For both the phrase-based content selection and discourse relation prediction tasks, we consider a baseline that always predicts the majority label (Majority). Previous work has shown that classifiers based on Support Vector Machines (SVMs) achieve state-of-the-art performance for keyphrase selection in meetings~\cite{FernandezFDAEP08,wang-cardie:2013:ACL2013} and discourse parsing for formal text~\cite{HernaultPdI10}. Therefore, we compare with linear SVM-based classifiers, trained with the same set of content features or discourse features. We fix the trade-off parameter to $1.0$ for all SVM-based experiments. For discourse relation prediction, we use a one-vs-rest strategy to build multiple binary classifiers.\footnote{A multi-class classifier was also experimented with, but gave inferior performance.} We also compare with a state-of-the-art discourse parser~\cite{ji-haffari-eisenstein:2016:N16-1}, which employs a neural language model to predict discourse relations. Our model is inspired by research that leverages discourse structure for identifying salient content in conversations, which is still largely reliant on features derived from gold-standard discourse labels~\cite{mckeown2007using,Murray:2010:GVA:1873738.1873753,bokaei2016extractive}. For instance, adjacency pairs, which are paired utterances with question-answer or offer-accept relations, are found to frequently appear together in meeting summaries and are thus utilized to extract summary-worthy utterances by~\newcite{Galley:2006:SCR:1610075.1610126}. There is much less work that jointly predicts the importance of content along with the discourse structure in dialogues. \newcite{oya2014extractive} employ a Dynamic Conditional Random Field to recognize sentences in email threads for use in summaries as well as their dialogue acts. Only local discourse structures from adjacent utterances are considered. 
Our model is built on tree structures, which capture more global information. Our work is also in line with keyphrase identification or phrase-based summarization for conversations. Due to the noisy nature of dialogues, recent work focuses on identifying summary-worthy phrases from meetings~\cite{FernandezFDAEP08,Riedhammer:2010:LSS:1837521.1837625} or email threads~\cite{loza2014building}. For instance, \newcite{wang-cardie:2012:SIGDIAL20122} treat the problem as an information extraction task, where summary-worthy content represented as indicator and argument pairs is identified by an unsupervised latent variable model. Our work also aims at detecting salient phrases from meetings, but focuses on the joint modeling of critical discussion points and the discourse relations held between them. In the area of discourse analysis in dialogues, a significant amount of work has been done on predicting local discourse structures, such as recognizing dialogue acts or social acts of adjacent utterances from phone conversations~\cite{stolcke2000dialogue,kalchbrenner2013recurrent,ji-haffari-eisenstein:2016:N16-1}, spoken meetings~\cite{dielmann2008recognition}, or emails~\cite{Cohen04learningto}. %, or discussion forums~\cite{wang2011predicting,bender2011annotating}. Although discourse information from non-adjacent turns has been studied in the context of online discussion forums~\cite{ghosh2014analyzing} and meetings~\cite{hakkani2009towards}, none of this work models the effect of discourse structure on content selection, which is the gap that this work fills. \subsection{Phrase Selection and Discourse Labeling} Here we present the experimental results on phrase-based content selection and discourse relation prediction. We experiment with two variations of our joint model: one is trained on gold-standard discourse relations, the other is trained by treating discourse relations as latent variables as described in Section~\ref{sec:modeldescription}. Since we have gold-standard argument diagrams for the \textsc{AMI-sub} dataset, we can conduct experiments for the latent versions by assuming the \textit{True Attachment Structure} is given. When argument diagrams are not available, we build a tree among the turns in each discussion as follows. Two turns are attached if there is any adjacency pair between them. If one turn is attached to more than one previous turn, only the closest one is kept. The remaining turns are attached to their preceding turn. This construction is applied to \textsc{AMI-full} and \textsc{ICSI-full}. We also investigate whether joint learning and joint inference can produce better prediction performance. We consider joint learning with separate inference, where only content features or discourse features are used for prediction (Separate-Inference). We further study learning separate classifiers for content selection and discourse relations without joint features (Separate-Learn). We first show the phrase selection and discourse relation prediction results on \textsc{AMI-sub} in Tables~\ref{tab:ami_disc_phrase} and~\ref{tab:ami_disc_discourse}. As shown in Table~\ref{tab:ami_disc_phrase}, our models, trained with gold-standard discourse relations or latent ones with true attachment structure, yield significantly better accuracy and F1 scores than SVM-based classifiers trained with the same feature sets for phrase selection (paired $t$-test, $p<0.05$). 
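For concreteness, here is a hedged sketch of the fallback attachment-tree construction described above for \textsc{AMI-full} and \textsc{ICSI-full}: attach a turn to the closest earlier turn that it shares an adjacency pair with, otherwise to the immediately preceding turn. The data structures (turn indices and an adjacency-pair list) are assumptions made for illustration.
\begin{verbatim}
def build_attachment_tree(num_turns, adjacency_pairs):
    """num_turns: number of turns 0..num_turns-1 in chronological order.
    adjacency_pairs: iterable of (earlier_turn, later_turn) index pairs.
    Returns parent[i] for each turn (parent[0] is None for the root)."""
    linked_earlier = {i: set() for i in range(num_turns)}
    for a, b in adjacency_pairs:
        earlier, later = min(a, b), max(a, b)
        linked_earlier[later].add(earlier)

    parent = [None] * num_turns
    for i in range(1, num_turns):
        if linked_earlier[i]:
            parent[i] = max(linked_earlier[i])  # closest previous turn with an adjacency pair
        else:
            parent[i] = i - 1                   # default: attach to the preceding turn
    return parent

# Example: turn 3 has adjacency pairs with turns 0 and 2 -> attach to turn 2.
print(build_attachment_tree(5, [(0, 3), (2, 3)]))  # [None, 0, 1, 2, 3]
\end{verbatim}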
Our joint learning model with separate inference also outperforms the neural network-based discourse parsing model~\cite{ji-haffari-eisenstein:2016:N16-1} in Table~\ref{tab:ami_disc_discourse}. Moreover, Tables~\ref{tab:ami_disc_phrase} and~\ref{tab:ami_disc_discourse} demonstrate that joint learning usually produces superior performance on both tasks compared to separate learning. Combined with joint inference, our model obtains the best accuracy and F1 on phrase selection. This indicates that leveraging the interplay between content and discourse boosts prediction performance. Similar results are achieved on \textsc{AMI-full} and \textsc{ICSI-full} in Table~\ref{tab:ami_topic}, where latent discourse relations without true attachment structure are employed for training. \subsection{Phrase-Based Extractive Summarization} We further evaluate whether the predictions of the content selection component can be used for summarizing the key points at the discussion level. For each discussion, salient phrases identified by our model are concatenated in sequence for use as the summary. We consider two types of gold-standard summaries. One is the utterance-level extractive summary, which consists of human-labeled summary-worthy utterances. The other is the abstractive summary, for which we collect human abstracts that are linked to at least one summary-worthy utterance. We calculate scores based on ROUGE~\cite{Lin:2003:AES:1073445.1073465}, which is a popular tool for evaluating text summarization~\cite{gillick2009global,liu2010using}. ROUGE-1 (unigrams) and ROUGE-SU4 (skip-bigrams with at most 4 words in between) are used. Following previous work on meeting summarization~\cite{Riedhammer:2010:LSS:1837521.1837625,wang-cardie:2013:ACL2013}, we consider two dialogue act-level summarization baselines: (1) \textsc{longest DA}, where the longest DA in each discussion is selected as the summary, and (2) \textsc{centroid DA}, the DA with the highest TF-IDF similarity with all DAs in the discussion. We also compare with an unsupervised keyword extraction approach by \newcite{liu2009unsupervised}, where word importance is estimated by its TF-IDF score, POS tag, and the salience of its corresponding sentence. With the same candidate phrases as in our model, we extend \newcite{liu2009unsupervised} by scoring each phrase with the average score of its words. The top-ranked phrases, matching the number of phrases output by our model, are included in the summaries. Finally, we compare with summaries consisting of salient phrases predicted by an SVM classifier trained with our content features. From the results in Table~\ref{tab:summ}, we can see that phrase-based extractive summarization methods can yield better ROUGE recall and F1 scores than baselines that extract whole sentences. Meanwhile, our system significantly outperforms the SVM-based classifiers when evaluated on ROUGE recall and F1, while achieving comparable precision. Compared to \newcite{liu2009unsupervised}, our system also yields better results on all metrics. Sample summaries by our model along with two baselines are displayed in Figure~\ref{fig:example_summary}. Utterance-level extract-based baselines unavoidably contain disfluencies and unnecessary details. Our phrase-based extractive summary is able to capture the key points from both the argumentation process and the important outcomes of the conversation. This implies that our model output can be used as input for an abstractive summarization system. It can also facilitate the visualization of decision-making processes. 
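As a simplified illustration of the evaluation above, the sketch below concatenates the predicted salient phrases into a summary and scores it against a gold summary with plain unigram overlap. This is a bare-bones ROUGE-1; the official ROUGE toolkit additionally handles stemming, stopword options, and ROUGE-SU4, all of which are omitted here.
\begin{verbatim}
from collections import Counter

def rouge_1(system_summary, gold_summary):
    sys_counts = Counter(system_summary.lower().split())
    gold_counts = Counter(gold_summary.lower().split())
    overlap = sum((sys_counts & gold_counts).values())
    prec = overlap / max(sum(sys_counts.values()), 1)
    rec = overlap / max(sum(gold_counts.values()), 1)
    f1 = 2 * prec * rec / (prec + rec) if prec + rec > 0 else 0.0
    return prec, rec, f1

predicted_phrases = ["use a kinetic battery", "rubber buttons"]
system = " ".join(predicted_phrases)   # phrase-based summary for one discussion
gold = "the group decided to use a kinetic battery and rubber buttons"
print(rouge_1(system, gold))
\end{verbatim}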
\subsection{Further Analysis and Discussions} \noindent \textbf{Feature Analysis.} We first discuss salient features with top weights learned by our joint model. For content features, the main speaker tends to utter more salient content. Higher TF-IDF scores also indicate important phrases. If a phrase is mentioned in a previous turn and repeated in the current turn, it is likely to be a key point. For discourse features, structure features matter the most. For instance, jointly modeling the discourse relation of the parent node along with the current node can lead to better inference. An example is that giving more details on a proposal (\textsc{elaboration}) tends to lead to \textsc{positive} feedback. Moreover, \textsc{request} usually appears close to the root of the argument diagram tree, while \textsc{positive} feedback is usually observed on leaves. Adjacency pairs also play an important role for discourse prediction. For joint features, those that combine ``phrase mentioned in previous turn" with the relation \textsc{positive} feedback or \textsc{request} receive higher weights, and serve as indicators for both key phrases and discourse relations. We also find that main speaker information combined with \textsc{elaboration} and \textsc{uncertain} is associated with high weights. \noindent \textbf{Error Analysis and Potential Directions.} Taking a closer look at our prediction results, one major source of incorrect phrase selection predictions is that similar concepts may be expressed in different ways, and our model predicts inconsistently across these variations. For example, participants use both ``thick" and ``two centimeters" to talk about the desired shape of a remote control. However, our model does not group them into the same cluster and later makes different predictions. For future work, semantic similarity with context information can be leveraged to produce better clustering results. Furthermore, identifying discourse relations in dialogues is still a challenging task. For instance, ``I wouldn't choose a plastic case" should be labeled as \textsc{option exclusion} if the previous turns talk about different options. Otherwise, it can be labeled as \textsc{negative}. Therefore, models that better handle semantics and context need to be considered. In this section, we first present our joint model in Section~\ref{sec:modeldescription}. The algorithms for learning and inference are described in Sections~\ref{sec:learning} and~\ref{sec:inference}, followed by feature description (Section~\ref{sec:feature}). \subsection{Model Description} \label{sec:modeldescription} Our proposed model learns to jointly perform phrase-based content selection and discourse relation prediction by making use of the interaction between the two sources of information. Assume that a meeting discussion is denoted as $\mathbf{x}$, where $\mathbf{x}$ consists of a sequence of discourse units $\mathbf{x}=\{x_{1}, x_{2}, \cdots ,x_{n}\}$. Each discourse unit can be a complete speaker turn or a part of it. As demonstrated in Figure~\ref{fig:example_intro}, a tree-structured discourse diagram is constructed for each discussion with each discourse unit $x_{i}$ as a node of the tree. In this work, we consider the argumentative discourse structure defined by the Twente Argument Schema (TAS)~\cite{so65562}. 
Each node $x_{i}$ is attached to another node $x_{i^\prime}$ ($i^\prime<i$) in the discussion, and a discourse relation $d_{i}$ holds on the link $\langle x_{i}, x_{i^\prime} \rangle$ ($d_{i}$ is empty if $x_{i}$ is the root). Let $\mathbf{t}$ denote the set of links $\langle x_{i}, x_{i^\prime} \rangle$ in $\mathbf{x}$. Following previous work on discourse analysis in meetings~\cite{so65562,hakkani2009towards}, we assume that the attachment structure between discourse units is given during both training and testing. A set of candidate phrases is extracted from each discourse unit $x_{i}$, from which salient phrases that carry the gist information are identified. We obtain constituent and dependency parses for utterances using the Stanford parser~\cite{Klein:2003:AUP:1075096.1075150}. We restrict an eligible candidate to be a noun phrase (NP), verb phrase (VP), prepositional phrase (PP), or adjective phrase (ADJP) with at most 5 words, whose head word is not a stop word.\footnote{Other methods for mining candidate phrases, such as frequency-based methods~\cite{liu2015mining}, will be studied in future work.} If a candidate is a parent of another candidate in the constituent parse tree, we only keep the parent. We further merge a verb and a candidate noun phrase into one candidate if the latter is the direct object or subject of the verb. For example, from the utterance ``let's use a rubber case as well as rubber buttons", we can identify the candidates ``use a rubber case" and ``rubber buttons". For $x_{i}$, the set of candidate phrases is denoted as $c_{i}=\{c_{i,1},c_{i,2},\cdots,c_{i,m_i}\}$, where $m_i$ is the number of candidates. $c_{i,j}$ takes a value of $1$ if the corresponding candidate is selected as a salient phrase; otherwise, $c_{i,j}$ is equal to $0$. All candidate phrases in discussion $\mathbf{x}$ are represented as $\mathbf{c}$. We then define a log-linear model with feature parameters $\mathbf{w}$ for the candidate phrases $\mathbf{c}$ and discourse relations $\mathbf{d}$ in $\mathbf{x}$ as: {\fontsize{10}{10}\selectfont \begin{equation} \begin{split} & p(\mathbf{c}, \mathbf{d}|\mathbf{x}, \mathbf{w}) \propto \exp [\mathbf{w}\cdot \Phi (\mathbf{c}, \mathbf{d}, \mathbf{x})]\\ & \propto \exp [\mathbf{w}\cdot \sum_{i=1, <x_i, x_{i^\prime}>\in \mathbf{t}}^{n} \phi (c_{i}, d_{i}, d_{i^\prime}, \mathbf{x})]\\ & \propto \exp [\sum_{i=1,<x_i, x_{i^\prime}>\in \mathbf{t}}^{n} ( \mathbf{w_c}\cdot \sum_{j=1}^{m_i} \phi_c (c_{i,j}, \mathbf{x})\\ &+ \mathbf{w_d}\cdot \phi_d (d_{i},d_{i^\prime}, \mathbf{x}) + \mathbf{w_{cd}}\cdot \sum_{j=1}^{m_i} \phi_{cd} (c_{i,j}, d_{i}, \mathbf{x}) ) ]\\ \end{split} \end{equation} \label{eq:objective} } Here $\Phi(\cdot)$ and $\phi(\cdot)$ denote feature vectors. We utilize three types of feature functions: (1) content-only features $\phi_c(\cdot)$, which capture the importance of phrases, (2) discourse-only features $\phi_d(\cdot)$, which characterize the (potentially higher-order) discourse relations, and (3) joint features of content and discourse $\phi_{cd}(\cdot)$, which model the interaction between the two. $\mathbf{w_c}$, $\mathbf{w_d}$, and $\mathbf{w_{cd}}$ are the corresponding feature parameters. Detailed feature descriptions can be found in Section~\ref{sec:feature}. \noindent \textbf{Discourse Relations as Latent Variables.} As we mentioned in the introduction, acquiring labeled training data for discourse relations is a time-consuming process since it would require human annotators to inspect the full discussions. 
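The following is a minimal sketch of the unnormalized log-linear score $\mathbf{w}\cdot \Phi (\mathbf{c}, \mathbf{d}, \mathbf{x})$ defined above, summing content, discourse, and joint feature contributions over the tree links. The feature functions here are stand-ins that return dictionaries of feature-name to value; the actual features are those listed in the Features subsection.
\begin{verbatim}
def dot(weights, feats):
    """Sparse dot product between a weight dict and a feature dict."""
    return sum(weights.get(name, 0.0) * value for name, value in feats.items())

def score_configuration(units, parent, c, d, w_c, w_d, w_cd, phi_c, phi_d, phi_cd):
    """units: discourse units x_1..x_n; parent[i]: parent index of x_i (None for root).
    c[i][j]: 0/1 label for candidate j of unit i; d[i]: relation on x_i's incoming link.
    Returns w . Phi(c, d, x); exp of this value is proportional to p(c, d | x, w)."""
    total = 0.0
    for i in range(len(units)):
        if parent[i] is None:
            continue                      # the root has no incoming link in t
        d_parent = d[parent[i]]           # relation on the parent's incoming link (may be None)
        total += dot(w_d, phi_d(d[i], d_parent, units, i))
        for j, label in enumerate(c[i]):
            total += dot(w_c, phi_c(units, i, j, label))
            total += dot(w_cd, phi_cd(units, i, j, label, d[i]))
    return total

# Toy usage with stub features: one content feature and one joint feature.
phi_c = lambda units, i, j, label: {"selected": float(label)}
phi_d = lambda rel, parent_rel, units, i: {"rel=" + str(rel): 1.0}
phi_cd = lambda units, i, j, label, rel: {"selected|" + str(rel): float(label)}
w_c, w_d, w_cd = {"selected": 0.7}, {"rel=POSITIVE": 0.3}, {"selected|POSITIVE": 0.5}
print(score_configuration(["x1", "x2"], [None, 0], [[0], [1]], [None, "POSITIVE"],
                          w_c, w_d, w_cd, phi_c, phi_d, phi_cd))  # 0.3 + 0.7 + 0.5 = 1.5
\end{verbatim}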
Therefore, we further propose a variation of our model that treats the discourse relations as latent variables, so that $p(\mathbf{c} |\mathbf{x}, \mathbf{w})=\sum_{\mathbf{d}} p(\mathbf{c}, \mathbf{d}|\mathbf{x}, \mathbf{w})$. Its learning algorithm is slightly different, as described in the next section. \subsection{Joint Learning for Parameter Estimation} \label{sec:learning} For learning the model parameters $\mathbf{w}$, we employ an algorithm based on SampleRank~\cite{rohanimanesh2011samplerank}, which is a stochastic structure learning method. In general, the learning algorithm constructs a sequence of configurations for the sample labels as a Markov chain Monte Carlo (MCMC) chain based on a task-specific loss function, where stochastic gradients are distributed across the chain. %This is suitable for our learning problem because we aim to optimize the prediction performance for both phrase selection and discourse relations with various types of features. The full learning procedure is described in Algorithm~\ref{alg:learning}. To start with, the feature weights $\mathbf{w}$ are initialized with each value randomly drawn from $[-1, 1]$. Multiple epochs are run through all samples. For each sample, we randomly initialize the assignment of candidate phrase labels $\mathbf{c}$ and discourse relations $\mathbf{d}$. Then an MCMC chain is constructed with a series of configurations $\sigma =(\mathbf{c}$, $\mathbf{d})$: at each step, it first samples a discourse structure $\mathbf{d}$ based on the proposal distribution $q(\mathbf{d^\prime} |\mathbf{d},\mathbf{x})$, and then samples phrase labels conditioned on the new discourse relations and the previous phrase labels based on $q(\mathbf{c^\prime} |\mathbf{c}, \mathbf{d^\prime},\mathbf{x})$. Local search is used for both proposal distributions.\footnote{For future work, we can explore other proposal distributions that utilize the conditional distribution of salient phrases given sampled discourse relations.} The new configuration is accepted if it improves the score, i.e., if $\omega (\sigma ^\prime) > \omega (\sigma)$. The parameters $\mathbf{w}$ are updated accordingly. For the scorer $\omega$, we use a weighted combination of the F1 scores of phrase selection ($F1_{c}$) and discourse relation prediction ($F1_{d}$): $\omega (\sigma)= \alpha \cdot F1_{c} + (1-\alpha) \cdot F1_{d}$. We fix $\alpha$ to $0.1$.%\footnote{We give discourse a higher weight since it is a more difficult task.} When discourse relations are treated as latent, we initialize the discourse relation of each unit with a label in $\{1, 2, \ldots, K\}$, where $K$ is the pre-specified number of relations, and we only use $F1_{c}$ as the scorer. 
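A small sketch of the configuration scorer $\omega$ used by SampleRank is given below: a weighted combination of the phrase-selection F1 and the discourse-relation F1 against the gold labels of the training sample ($\alpha=0.1$ as above). The macro-averaging over relation types is one reasonable choice, not spelled out in the paper.
\begin{verbatim}
def f1_binary(pred, gold):
    tp = sum(1 for p, g in zip(pred, gold) if p == 1 and g == 1)
    fp = sum(1 for p, g in zip(pred, gold) if p == 1 and g == 0)
    fn = sum(1 for p, g in zip(pred, gold) if p == 0 and g == 1)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

def f1_macro(pred, gold, labels):
    # Macro-averaged F1 over relation types (an assumed averaging scheme).
    return sum(f1_binary([int(p == l) for p in pred], [int(g == l) for g in gold])
               for l in labels) / len(labels)

def omega(pred_c, gold_c, pred_d, gold_d, relation_labels, alpha=0.1):
    return alpha * f1_binary(pred_c, gold_c) + (1 - alpha) * f1_macro(pred_d, gold_d, relation_labels)

print(omega([1, 0, 1], [1, 1, 1], ["pos", "neg"], ["pos", "pos"], ["pos", "neg"]))
\end{verbatim}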
\begin{algorithm}[t] { \setstretch{0.3} \fontsize{9}{10}\selectfont \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \Input{$\mathbf{X}=\{\mathbf{x}\}$: discussions in the training set, \\ $\eta$: learning rate, $\epsilon$: number of epochs, \\ $\delta$: number of sampling rounds, \\ $\omega (\cdot)$: scoring function, $\Phi (\cdot)$: feature functions } \Output{feature weights $\frac{1}{|\mathcal{W}|}\sum_{\mathbf{w}\in \mathcal{W}} \mathbf{w}$} \BlankLine Initialize $\mathbf{w}$\; $\mathcal{W} \leftarrow \{\mathbf{w} \}$\; \For {$e=1$ to $\epsilon$} { \For {$\mathbf{x}$ in $\mathbf{X}$} { \tcp{Initialize configuration for $\mathbf{x}$} Initialize $\mathbf{c}$ and $\mathbf{d}$\; $\sigma=(\mathbf{c}, \mathbf{d})$\; \For {$s=1$ to $\delta$} { \tcp{New configuration via local search} $\mathbf{d^\prime} \sim q_d(\cdot | \mathbf{x}, \mathbf{d})$\; $\mathbf{c^\prime} \sim q_c(\cdot | \mathbf{x}, \mathbf{c}, \mathbf{d^\prime})$\; $\sigma^\prime=(\mathbf{c^\prime}, \mathbf{d^\prime})$\; $\sigma^+=\arg \max_{\tilde{\sigma} \in \{\sigma, \sigma^\prime \}} \omega (\tilde{\sigma}) $\; $\sigma^-=\arg \min_{\tilde{\sigma} \in \{\sigma, \sigma^\prime \}} \omega (\tilde{\sigma}) $\; $\hat{\nabla}=\Phi (\sigma^+)- \Phi (\sigma^-)$\; $\Delta \omega= \omega (\sigma^+)- \omega (\sigma^-)$\; \tcp{Update parameters} \If {$\mathbf{w}\cdot \hat{\nabla} < \Delta \omega$ \& $\Delta \omega\neq 0$}{ $\mathbf{w} \leftarrow \mathbf{w}+\eta \cdot \hat{\nabla}$\; Add $\mathbf{w}$ in $\mathcal{W}$\; } \tcp{Accept or reject new configuration} \If {$\sigma^+ == \sigma^\prime$}{ $\sigma=\sigma^\prime$ } } } } } \caption{\fontsize{10}{12}\selectfont SampleRank-based joint learning.} \label{alg:learning} \end{algorithm} \subsection{Joint Inference for Prediction} \label{sec:inference} Given a new sample $\mathbf{x}$ and learned parameters $\mathbf{w}$, we predict phrase labels and discourse relations as $\arg \max_{\mathbf{c}, \mathbf{d}} p(\mathbf{c}, \mathbf{d}|\mathbf{x}, \mathbf{w})$. Dynamic programming could be employed to carry out joint inference; however, it would be time-consuming since our objective function has a large search space over both content and discourse labels. Hence we propose an alternating optimization algorithm to search for $\mathbf{c}$ and $\mathbf{d}$ iteratively. Concretely, in each iteration, we first optimize over $\mathbf{d}$ by maximizing $\sum_{i=1,<x_i, x_{i^\prime}>\in \mathbf{t}}^{n} (\mathbf{w_d}\cdot \phi_d (d_{i},d_{i^\prime}, \mathbf{x}) + \mathbf{w_{cd}}\cdot \sum_{j=1}^{m_i} \phi_{cd} (c_{i,j}, d_{i}, \mathbf{x}))$. Message-passing~\cite{smith2008dependency} is used to find the best $\mathbf{d}$. In the second step, we search for $\mathbf{c}$ that maximizes $\sum_{i=1,<x_i, x_{i^\prime}>\in \mathbf{t}}^{n} (\mathbf{w_c}\cdot \sum_{j=1}^{m_i} \phi_c (c_{i,j}, \mathbf{x}) + \mathbf{w_{cd}}\cdot \sum_{j=1}^{m_i} \phi_{cd} (c_{i,j}, d_{i}, \mathbf{x}) )$. We believe that candidate phrases based on the same concepts should have the same predicted label. Therefore, candidates of the same phrase type and sharing the same head word are grouped into one cluster. We then cast our task as an integer linear programming problem.\footnote{We use lpsolve: \url{http://lpsolve.sourceforge.net/5.5/}.} We optimize our objective function under the constraints: (1) $c_{i,j}=c_{i^\prime, j^\prime}$ if $c_{i,j}$ and $c_{i^\prime, j^\prime}$ are in the same cluster, and (2) $c_{i,j}\in \{0, 1\} $, $\forall i, j$. The inference process is the same for models trained with latent discourse relations. 
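Below is a sketch of the second step of the alternating inference, choosing the phrase labels $\mathbf{c}$ with the discourse relations $\mathbf{d}$ held fixed. Under the constraints as stated above (binary variables, and candidates in the same cluster sharing a label), the integer linear program decomposes into one independent decision per cluster, so this sketch resolves each cluster directly rather than calling lpsolve; the callable score(i, j, label) stands in for the candidate's contribution from the content and joint features.
\begin{verbatim}
def select_phrases(clusters, score):
    """clusters: dict cluster_id -> list of (unit_index, candidate_index).
    Returns a dict mapping (unit_index, candidate_index) -> 0/1."""
    labels = {}
    for members in clusters.values():
        gain_selected = sum(score(i, j, 1) for i, j in members)
        gain_unselected = sum(score(i, j, 0) for i, j in members)
        chosen = 1 if gain_selected >= gain_unselected else 0
        for i, j in members:
            labels[(i, j)] = chosen
    return labels

# Toy usage: candidates of unit 0 score positively when selected, others do not.
toy_clusters = {"battery": [(0, 0), (2, 1)], "case": [(1, 0)]}
toy_score = lambda i, j, label: (1.0 if i == 0 else -0.5) if label == 1 else 0.0
print(select_phrases(toy_clusters, toy_score))  # {(0, 0): 1, (2, 1): 1, (1, 0): 0}
\end{verbatim}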
\subsection{Features} \label{sec:feature} We use features that characterize content, discourse relations, and the combination of both. \noindent \textbf{Content Features.} For modeling the salience of content, we calculate the minimum, maximum, and average of \texttt{TF-IDF} scores of words and \texttt{number of content words} in each phrase based on the intuition that important phrases tend to have more content words with high TF-IDF scores~\cite{FernandezFDAEP08}. We also consider whether the head word of the phrase has been \texttt{mentioned in preceding turn}, which implies the focus of a discussion. The \texttt{size of the cluster} each phrase belongs to is also included. \texttt{Number of POS tags} and \texttt{phrase types} are counted to characterize the syntactic structure. Previous work~\cite{wang-cardie:2012:SIGDIAL20122} has found that a discussion usually ends with decision-relevant information. We thus identify the \texttt{absolute and relative positions} of the turn containing the candidate phrase in the discussion. Finally, we record whether the candidate phrase is \texttt{uttered by the main speaker}, who speakers the most words in the discussion. \noindent \textbf{Discourse Features.} For each discourse unit, we collect the \texttt{dialogue act types} of the current unit and its parent node in discourse tree, whether there is any \texttt{adjacency pair} held between the two nodes~\cite{hakkani2009towards}, and the \texttt{Jaccard similarity} between them. We record whether two turns are \texttt{uttered by the same speaker}, for example, \textsc{elaboration} is commonly observed between the turns from the same participant. We also calculate the \texttt{number of candidate phrases} based on the observation that \textsc{option} and \textsc{specialization} tend to contain more informative words than \textsc{positive} feedback. Length of the discourse unit is also relevant. Therefore, we compute the \texttt{time span} and \texttt{number of words}. To incorporate global structure features, we encode the \texttt{depth of the node} in the discourse tree and \texttt{the number of its siblings}. Finally, we include an \texttt{order-2 discourse relation} feature that encodes the relation between current discourse unit and its parent, and the relation between the parent and its grandparent if it exists. \noindent \textbf{Joint Features.} For modeling the interaction between content and discourse, the discourse relation is added to each content feature to compose a joint feature. For example, if candidate $c$ in discussion $x$ has a content feature $\phi_{[avg-TFIDF]} (c, \mathbf{x})$ with a value of $0.5$, and its discourse relation $d$ is \textsc{positive}, then the joint feature takes the form of $\phi_{[avg-TFIDF, Positive]} (c, d, \mathbf{x})=0.5$. As discussed in previous work~\cite{mulder2002assessing,mercer2004sociocultural}, both content and discourse structure are critical for building shared understanding among discussants. In this section, we test whether our joint model can be utilized to predict the consistency among team members' understanding of their group decisions, which is defined as consistency of understanding (COU) in \newcite{kim2016improving}. %with an objective to develop intelligent systems that can suggest review of topics potentially resulting in inconsistent understandings among participants. 
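Before turning to that experiment, the joint feature composition described above can be made concrete with a short sketch: every content feature is copied once per discourse relation while keeping its value. The dictionary-based feature map below is purely illustrative.
\begin{verbatim}
def joint_features(content_feats, relation):
    """Compose (content feature, discourse relation) joint features.

    content_feats: e.g. {"avg-TFIDF": 0.5, "num-content-words": 3}
    relation: discourse relation of the unit, e.g. "POSITIVE"
    """
    return {(name, relation): value for name, value in content_feats.items()}

# A candidate with avg-TFIDF 0.5 under a POSITIVE relation fires the
# joint feature ("avg-TFIDF", "POSITIVE") with value 0.5.
print(joint_features({"avg-TFIDF": 0.5}, "POSITIVE"))
\end{verbatim}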
\newcite{kim2016improving} establish gold-standard COU labels on a portion of AMI discussions, by comparing participant summaries to determine whether participants report the same decisions. If all decision points are consistent, the associated topic discussion is labeled as \textit{consistent}; otherwise, the discussion is identified as \textit{inconsistent}. Their annotation covers the \textsc{AMI-sub} dataset. Therefore, we run the prediction experiments on \textsc{AMI-sub} by using the same annotation. Out of total 129 discussions in \textsc{AMI-sub}, 86 discussions are labeled as consistent and 43 are inconsistent. We construct three types of features by using our model's predicted labels. Firstly, we learn two versions of our model based on the ``consistent" discussions and the ``inconsistent" ones in the training set, with learned parameters $\mathbf{w_{con}}$ and $\mathbf{w_{incon}}$. For a discussion in the test set, these two models output two probabilities $p_{con}=\max_{\mathbf{c}, \mathbf{d}} P(\mathbf{c}, \mathbf{d}|\mathbf{x}, \mathbf{w_{con}})$ and $p_{incon}=\max_{\mathbf{c}, \mathbf{d}} P(\mathbf{c}, \mathbf{d}|\mathbf{x}, \mathbf{w_{incon}})$. We use $p_{con}-p_{incon}$ as a feature. Furthermore, we consider discourse relations of length one and two from the discourse structure tree. Intuitively, some discourse relations, e.g., \textsc{elaboration} followed by multiple \textsc{positive} feedback, imply consistent understanding. The third feature is based on word entrainment, which has been shown to correlate with task success for groups~\cite{nenkova2008high}. Using the formula in~\newcite{nenkova2008high}, we compute the average word entrainment between the main speaker who utters the most words and all the other participants. The content words in the salient phrases predicted by our model is considered for entrainment computation. \noindent \textbf{Results.} Leave-one-out is used for experiments. For training, our features are constructed from gold-standard phrase and discourse labels. Predicted labels by our model is used for constructing features during testing. SVM-based classifier is used for experimenting with different sets of features output by our model. A majority class baseline is constructed as well. We also consider an SVM classifier trained with ngram features (unigrams and bigrams). Finally, we compare with the state-of-the-art method in~\newcite{kim2016improving}, where discourse-relevant features and head gesture features are utilized in Hidden Markov Models to predict the consistency label. The results are displayed in Table~\ref{tab:consistency}. All SVMs trained with our features surpass the ngrams-based baseline. Especially, the discourse features, word entrainment feature, and the combination of the three, all significantly outperform the state-of-the-art system by \newcite{kim2016improving}.\footnote{We also experiment with other popular classifiers, e.g. logistic regression or decision tree, and similar trend is respected.}
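For reference, the word entrainment feature can be approximated as below. The exact formulation follows Nenkova et al. (2008); the sketch (negative summed absolute difference of relative frequencies, restricted to the content words of the predicted salient phrases) should be read as an assumption about that measure rather than the implementation used here.
\begin{verbatim}
from collections import Counter

def word_entrainment(main_speaker_words, other_words, vocab):
    """Approximate word entrainment between the main speaker and the rest of
    the group, restricted to vocab (here: content words of the predicted
    salient phrases). Values closer to 0 indicate stronger entrainment.
    """
    c1, c2 = Counter(main_speaker_words), Counter(other_words)
    n1, n2 = max(len(main_speaker_words), 1), max(len(other_words), 1)
    return -sum(abs(c1[w] / n1 - c2[w] / n2) for w in vocab)
\end{verbatim}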
Latent Sequence Decompositions
1610.03035
Table 2: Wall Street Journal test eval92 Word Error Rate (WER) results across Connectionist Temporal Classification (CTC) and Sequence-to-sequence (seq2seq) models. The Latent Sequence Decomposition (LSD) models use an n=4 word piece vocabulary (LSD4). The Convolutional Neural Network (CNN) model uses deep residual connections, batch normalization and convolutions. The best end-to-end model is seq2seq + LSD + CNN at 9.6% WER.
[ "[BOLD] Model", "[BOLD] WER" ]
[ [ "Graves & Jaitly ( 2014 )", "[EMPTY]" ], [ "CTC", "30.1" ], [ "CTC + WER", "27.3" ], [ "Hannun et al. ( 2014 )", "[EMPTY]" ], [ "CTC", "35.8" ], [ "Bahdanau et al. ( 2016a )", "[EMPTY]" ], [ "seq2seq", "18.6" ], [ "Bahdanau et al. ( 2016b )", "[EMPTY]" ], [ "seq2seq + TLE", "18.0" ], [ "Zhang et al. ( 2017 )", "[EMPTY]" ], [ "seq2seq + CNN 22footnotemark: 2", "11.8" ], [ "Our Work", "[EMPTY]" ], [ "seq2seq", "14.8" ], [ "seq2seq + LSD4", "12.9" ], [ "seq2seq + LSD4 + CNN", "9.6" ] ]
The previously best reported basic seq2seq model on WSJ achieved 18.0% WER. Our baseline, also a seq2seq model, achieved 14.8% WER. The main differences between the models are that we did not use convolutional location-based priors and that we used weight noise during training.
\section{Conclusion} We presented the Latent Sequence Decompositions (LSD) framework. LSD allows us to learn decompositions of sequences that are a function of both the input and output sequence. We presented a biased training algorithm based on sampling valid extensions with an $\epsilon$-greedy strategy, and an approximate decoding algorithm. On the Wall Street Journal speech recognition task, the sequence-to-sequence character model baseline achieves 14.8\% WER while the LSD model achieves 12.9\%. Using a a deep convolutional neural network on the encoder with LSD, we achieve 9.6\% WER. \section{Latent Sequence Decompositions} \label{sec:lsd} In this section, we describe LSD more formally. Let $\mathbf x$ be our input sequence, $\mathbf y$ be our output sequence and $\mathbf z$ be a latent sequence decomposition of $\mathbf y$. The latent sequence decomposition $\mathbf z$ consists of a sequence of $z_i \in \mathcal{Z}$ where $\mathcal{Z}$ is the constructed token space. Each token $z_i$ need not be the same length, but rather in our framework, we expect the tokens to have different lengths. Specifically, $\mathcal Z \subseteq \cup_{i=1}^n \mathcal C^i$ where $\mathcal C$ is the set of singleton tokens and $n$ is the length of the largest output token. In ASR , $\mathcal C$ would typically be the set of English characters, while $\mathcal Z$ would be word pieces (i.e., $n$-grams of characters). To give a concrete example, consider a set of tokens $\left\{\textrm{``a''}, \textrm{``b''}, \textrm{``c''}, \textrm{``at''}, \textrm{``ca''}, \textrm{``cat''}\right\}$. With this set of tokens, the word ``cat'' may be represented as the sequence ``c'', ``a'', ``t'', or the sequence ``ca'', ``t'', or alternatively as the single token ``cat''. Since the appropriate decomposition of the word ``cat'' is not known a priori, the decomposition itself is latent. Note that the length $|\mathbf z_a|$ of a decomposition $\mathbf z_a$ need not be the same as the length of the output sequence, $|\mathbf y|$ (for example ``ca'', ``t'' has a length of 2, whereas the sequence is 3 characters long). Similarly, a different decomposition $\mathbf z_b$ (for example the 3-gram token ``cat'') of the same sequence may be of a different length (in this case 1). Each decomposition, collapses to the target output sequence using a trivial collapsing function $\mathbf y = \mathrm{collapse}(\mathbf z)$. Clearly, the set of decompositions, $\{\mathbf z : \mathrm{collapse}(\mathbf z) = \mathbf y\}$, of a sequence, $\mathbf y$, using a non-trivial token set, $\mathcal Z$, can be combinatorially large. If there was a known, unique, correct segmentation $\mathbf z^*$ for a given pair, $(\mathbf x, \mathbf y)$, one could simply train the model to output the fixed deterministic decomposition $\mathbf z^*$. However, in most problems, we do not know the best possible decomposition $\mathbf z^*$; indeed it may be possible that the output can be correctly decomposed into multiple alternative but valid segmentations. For example, in end-to-end ASR we typically use characters as the output unit of choice \citep{chan-icassp-2016, bahdanau-icassp-2016} but word pieces may be better units as they more closely align with the acoustic entities such as syllables. However, the most appropriate decomposition $\mathbf z^*$ for a given is $(\mathbf x, \mathbf y)$ pair is often unknown. Given a particular $\mathbf y$, the best $\mathbf z^*$ could even change depending on the input sequence $\mathbf x$ (i.e., speaking style). 
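To make the running example concrete, the collapsing function, the enumeration of latent decompositions and the notion of a valid extension (used later during training) can be sketched as follows. The singleton character ``t'' is added to the example token set so that the pure character decomposition is representable; this is an illustrative sketch only.
\begin{verbatim}
def collapse(z):
    """Concatenate the tokens of a decomposition back into the output string."""
    return "".join(z)

def decompositions(y, tokens):
    """Enumerate every decomposition z with collapse(z) == y."""
    if y == "":
        return [[]]
    out = []
    for t in sorted(tokens):
        if y.startswith(t):
            out += [[t] + rest for rest in decompositions(y[len(t):], tokens)]
    return out

def valid_extensions(prefix, y, tokens):
    """Tokens that can extend a partial decomposition and stay consistent with y."""
    remaining = y[len(collapse(prefix)):]
    return [t for t in sorted(tokens) if remaining.startswith(t)]

tokens = {"a", "c", "t", "at", "ca", "cat"}
print(decompositions("cat", tokens))           # c|a|t, c|at, ca|t and cat
print(valid_extensions(["c"], "cat", tokens))  # ['a', 'at']
\end{verbatim}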
In LSD, we want to learn a probabilistic segmentation mapping from $\mathbf x \rightarrow \mathbf z \rightarrow \mathbf y$. The model produces a distribution of decompositions, $\mathbf z$, given an input sequence $\mathbf x$, and the objective is to maximize the log-likelihood of the ground truth sequence $\mathbf y$. We can accomplish this by factorizing and marginalizing over all possible $\mathbf z$ latent sequence decompositions under our model $p(\mathbf z | \mathbf x; \theta)$ with parameters $\theta$: \begin{align} \log p(\mathbf y | \mathbf x; \theta) &= \log \sum_{\mathbf z} p(\mathbf y, \mathbf z | \mathbf x; \theta) \\ &= \log \sum_{\mathbf z} p(\mathbf y | \mathbf z, \mathbf x) p(\mathbf z | \mathbf x; \theta) \\ &= \log \sum_{\mathbf z} p(\mathbf y | \mathbf z) p(\mathbf z | \mathbf x; \theta) \end{align} where $p(\mathbf y | \mathbf z) = \mathbbm{1}(\mathrm{collapse}(\mathbf z) = \mathbf y)$ captures path decompositions $\mathbf z$ that collapses to $\mathbf y$. Due to the exponential number of decompositions of $\mathbf y$, exact inference and search is intractable for any non-trivial token set $\mathcal Z$ and sequence length $|\mathbf y|$. We describe a beam search algorithm to do approximate inference decoding in Section \ref{sec:decoding}. Similarly, computing the exact gradient is intractable. However, we can derive a gradient estimator by differentiating w.r.t. to $\theta$ and taking its expectation: \begin{align} \frac{\partial}{\partial \theta} \log p(\mathbf y | \mathbf x; \theta) &= \frac{1}{p(\mathbf y | \mathbf x; \theta)} \frac{\partial}{\partial \theta} \sum_z p(\mathbf y | \mathbf x, \mathbf z) p(\mathbf z | \mathbf x; \theta) \\ &= \frac{1}{p(\mathbf y | \mathbf x; \theta)} \sum_z p(\mathbf y | \mathbf x, \mathbf z) \nabla_\theta p(\mathbf z | \mathbf x; \theta) \\ &= \frac{1}{p(\mathbf y | \mathbf x; \theta)} \sum_z p(\mathbf y | \mathbf x, \mathbf z) p(\mathbf z | \mathbf x; \theta) \nabla_\theta \log p(\mathbf z | \mathbf x; \theta) \label{eqn:logtrick} \\ &= \mathbb{E}_{\mathbf z \sim p(\mathbf z | \mathbf x, \mathbf y; \theta)} \left[\nabla_\theta \log p(\mathbf z | \mathbf x; \theta)\right] \label{eqn:gradient-estimator} \end{align} Equation \ref{eqn:logtrick} uses the identity $\nabla_\theta f_\theta(x) = f_\theta(x) \nabla_\theta \log f_\theta(x)$ assuming $f_\theta(x) \neq 0 \; \forall \; x$. Equation \ref{eqn:gradient-estimator} gives us an unbiased estimator of our gradient. It tells us to sample some latent sequence decomposition $\mathbf z \sim p(\mathbf z | \mathbf y, \mathbf x; \theta)$ under our model's posterior, where $\mathbf z$ is constraint to be a valid sequence that collapses to $\mathbf y$, i.e. $\mathbf z \in \{\mathbf z' : \mathrm{collapse}(\mathbf z') = \mathbf y\}$. To train the model, we sample $\mathbf z \sim p(\mathbf z | \mathbf y, \mathbf x; \theta)$ and compute the gradient of $\nabla_\theta \log p(\mathbf z | \mathbf x; \theta)$ using backpropagation. However, sampling $\mathbf z \sim p(\mathbf z | \mathbf y, \mathbf x; \theta)$ is difficult. Doing this exactly is computationally expensive, because it would require sampling correctly from the posterior -- it would be possible to do this using a particle filtering like algorithm, but would require a full forward pass through the output sequence to do this. Instead, in our implementation we use a heuristic to sample $\mathbf z \sim p(\mathbf z | \mathbf y, \mathbf x; \theta)$. 
At each output time step $t$ when producing tokens $z_1, z_2 \cdots z_{\left(t-1\right)}$, we sample from $z_t \sim p\left(z_t | \mathbf x, \mathbf y, \mathbf z_{<t}, \theta\right)$ in a left-to-right fashion. In other words, we sample valid extensions at each time step $t$. At the start of the training, this left-to-right sampling procedure is not a good approximation to the posterior, since the next step probabilities at a time step include probabilities of all future paths from that point. For example, consider the case when the target word is ``cat'', and the vocabulary includes all possible characters and the tokens ``ca'', and ``cat''. At time step 1, when the valid next step options are ``c'', ``ca'', ``cat'', their relative probabilities reflect all possible sequences ``c*'', ``ca*'', ``cat*'' respectively, that start from the first time step of the model. These sets of sequences include sequences other than the target sequence ``cat''. Thus sampling from the distribution at step 1 is a biased procedure. However, as training proceeds the model places more and more mass only on the correct hypotheses, and the relative probabilities that the model produces between valid extensions gets closer to the posterior. In practice, we find that the when the model is trained with this method, it quickly collapses to using single character targets, and never escapes from this local minima\footnote{One notable exception was the word piece ``qu'' (``u'' is almost always followed by ``q'' in English). The model does learn to consistently emit ``qu'' as one token and never produce ``q'' + ``u'' as separate tokens.}. Thus, we follow an $\epsilon$-greedy exploration strategy commonly found in reinforcement learning literature \citep{sutton-1998} -- we sample $z_t$ from a mixture of a uniform distribution over valid next tokens and $p\left(z_t | \mathbf x, \mathbf y, \mathbf z_{<t}, \theta\right)$. The relative probability of using a uniform distribution vs. $p\left( \cdot | \mathbf x, \mathbf y, \mathbf z_{<t}, \theta\right)$ is varied over training. With this modification the model learns to use the longer n-grams of characters appropriately, as shown in later sections. \section{Related Work} \cite{singh-ieeetaslp,mccgraw-ieeetaslp,lu-asru-2013} built probabilistic pronunciation models for Hidden Markov Model (HMM) based systems. However, such models are still constraint to the conditional independence and Markovian assumptions of HMM-based systems. Connectionist Temporal Classification (CTC) \citep{graves-icml-2006,graves-icml-2014} based models assume conditional independence, and can rely on dynamic programming for exact inference. Similarly, \cite{ling-acl-2016} use latent codes to generate text, and also assume conditional independence and leverage on dynamic programming for exact maximum likelihood gradients. Such models can not learn the output language if the language distribution is multimodal. Our seq2seq models makes no such Markovian assumptions and can learn multimodal output distributions. \cite{collobert-arxiv-2016} and \cite{zweig-arxiv-2016} developed extensions of CTC where they used some word pieces. However, the word pieces are only used in repeated characters and the decompositions are fixed. Word piece models with seq2seq have also been recently used in machine translation. \cite{sennirch-acl-2016} used word pieces in rare words, while \cite{wu-arxiv-2016} used word pieces for all the words, however the decomposition is fixed and defined by heuristics or another model. 
The decompositions in these models are also only a function of the output sequence, while in LSD the decomposition is a function of both the input and output sequence. The LSD framework allows us to learn a distribution of decompositions rather than learning just one decomposition defined by a priori. \cite{vinyals-iclr-2016} used seq2seq to outputs sets, the output sequence is unordered and used fixed length output units; in our decompositions we maintain ordering use variable lengthed output units. Reinforcement learning (i.e., REINFORCE and other task loss estimators) \citep{sutton-1998,graves-icml-2014,ranzato-iclr-2016} learn different output sequences can yield different task losses. However, these methods don't directly learn different decompositions of the same sequence. Future work should incorporate LSD with task loss optimization methods. \section{Introduction} Sequence-to-sequence (seq2seq) models \citep{sutskever-nips-2014,cho-emnlp-2014} with attention \citep{bahdanau-iclr-2015} have been successfully applied to many applications including machine translation \citep{luong-acl-2015,jean-acl-2015}, parsing \citep{vinyals-nips-2015}, image captioning \citep{vinyals-cvpr-2014,xu-icml-2015} and Automatic Speech Recognition (ASR) \citep{chan-icassp-2016,bahdanau-icassp-2016}. Previous work has assumed a fixed deterministic decomposition for each output sequence. The output representation is usually a fixed sequence of words \citep{sutskever-nips-2014,cho-emnlp-2014}, phonemes \citep{chorowski-nips-2015}, characters \citep{chan-icassp-2016,bahdanau-icassp-2016} or even a mixture of characters and words \citep{thang-acl-2016}. However, in all these cases, the models are trained towards one fixed decomposition for each output sequence. We argue against using fixed deterministic decompositions of a sequence that has been defined a priori. Word segmented models \citep{luong-acl-2015,jean-acl-2015} often have to deal with large softmax sizes, rare words and Out-of-Vocabulary (OOV) words. Character models \citep{chan-icassp-2016,bahdanau-icassp-2016} overcome the OOV problem by modelling the smallest output unit, however this typically results in long decoder lengths and computationally expensive inference. And even with mixed (but fixed) character-word models \citep{thang-acl-2016}, it is unclear whether such a predefined segmentation is optimal. In all these examples, the output decomposition is only a function of the output sequence. This may be acceptable for problems such as translations, but inappropriate for tasks such as speech recognition, where segmentation should also be informed by the characteristics of the inputs, such as audio. We want our model to have the capacity and flexibility to learn a distribution of sequence decompositions. Additionally, the decomposition should be a sequence of variable length tokens as deemed most probable. For example, language may be more naturally represented as word pieces \citep{schuster-icassp-2012} rather than individual characters. In many speech and language tasks, it is probably more efficient to model ``qu'' as one output unit rather than ``q'' + ``u'' as separate output units (since in English, ``q'' is almost always followed by ``u''). Word piece models also naturally solve rare word and OOV problems similar to character models. The output sequence decomposition should be a function of both the input sequence and the output sequence (rather than output sequence alone). 
For example, in speech, the choice of emitting ``ing'' as one word piece or as separate tokens of ``i'' + ``n'' + ``g'' should be a function of the current output word as well as the audio signal (i.e., speaking style). We present the Latent Sequence Decompositions (LSD) framework. LSD does not assume a fixed decomposition for an output sequence, but rather learns to decompose sequences as function of both the input and the output sequence. Each output sequence can be decomposed to a set of latent sequence decompositions using a dictionary of variable length output tokens. The LSD framework produces a distribution over the latent sequence decompositions and marginalizes over them during training. During test inference, we find the best decomposition and output sequence, by using beam search to find the most likely output sequence from the model. \section{Decoding} \label{sec:decoding} During inference we want to find the most likely word sequence given the input acoustics: \begin{align} \hat{\mathbf y} = \arg \max_{\mathbf y} \sum_{\mathbf z} \log p(\mathbf y | \mathbf z) p(\mathbf z | \mathbf x) \end{align} however this is obviously intractable for any non-trivial token space and sequence lengths. We simply approximate this by decoding for the best word piece sequence $\hat{\mathbf z}$ and then collapsing it to its corresponding word sequence $\hat{\mathbf y}$: \begin{align} \hat{\mathbf z} &= \arg \max_{\mathbf z} \log p(\mathbf z | \mathbf x) \\ \hat{\mathbf y} &= \mathrm{collapse}(\hat{\mathbf z}) \end{align} We approximate for the best $\hat{\mathbf z}$ sequence by doing a left-to-right beam search \citep{chan-icassp-2016}. \section{Experiments} We experimented with the Wall Street Journal (WSJ) ASR task. We used the standard configuration of train si284 dataset for training, dev93 for validation and eval92 for test evaluation. Our input features were 80 dimensional filterbanks computed every 10ms with delta and delta-delta acceleration normalized with per speaker mean and variance as generated by Kaldi \citep{povey-asru-2011}. The $\mathrm{EncodeRNN}$ function is a 3 layer BLSTM with 256 LSTM units per-direction (or 512 total) and $4=2^2$ time factor reduction. The $\mathrm{DecodeRNN}$ is a 1 layer LSTM with 256 LSTM units. All the weight matrices were initialized with a uniform distribution $\mathcal{U}(-0.075, 0.075)$ and bias vectors to $0$. Gradient norm clipping of $1$ was used, gaussian weight noise $\mathcal{N}(0, 0.075)$ and L2 weight decay $1\mathrm{e}{-5}$ \citep{graves-nips-2011}. We used ADAM with the default hyperparameters described in \citep{kingma-iclr-2015}, however we decayed the learning rate from $1\mathrm{e}{-3}$ to $1\mathrm{e}{-4}$. We used 8 GPU workers for asynchronous SGD under the TensorFlow framework \citep{tensorflow2015-whitepaper}. We monitor the dev93 Word Error Rate (WER) until convergence and report the corresponding eval92 WER. The models took around 5 days to converge. We created our token vocabulary $\mathcal Z$ by looking at the $n$-gram character counts of the training dataset. We explored $n \in \{2, 3, 4, 5\}$ and took the top $\{256, 512, 1024\}$ tokens based on their count frequencies (since taking the full $n$-cartesian exponent of the unigrams would result in an intractable number of tokens for $n > 2$). We found very minor differences in WER based on the vocabulary size, for our $n=\{2,3\}$ word piece experiments we used a vocabulary size of 256 while our $n=\{4,5\}$ word piece experiments used a vocabulary size of 512. 
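A minimal sketch of this vocabulary construction is given below. Keeping every singleton character and excluding word pieces that cross word boundaries (the space restriction discussed next) are assumptions about details not fully specified here.
\begin{verbatim}
from collections import Counter

def build_wordpiece_vocab(sentences, max_n=4, vocab_size=512):
    """Token vocabulary from character n-gram counts of the training text."""
    counts, chars = Counter(), set()
    for s in sentences:
        chars.update(s)                          # always keep singleton characters
        for n in range(2, max_n + 1):
            for i in range(len(s) - n + 1):
                ngram = s[i:i + n]
                if " " not in ngram:             # <space> stays a unigram token
                    counts[ngram] += 1
    vocab = set(chars)
    for ngram, _ in counts.most_common():
        if len(vocab) >= vocab_size:
            break
        vocab.add(ngram)
    return vocab

print(sorted(build_wordpiece_vocab(["the cat sat", "the cats"], max_n=3, vocab_size=12)))
\end{verbatim}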
Additionally, we restrict $\langle \mathrm{space} \rangle$ to be a unigram token and not included in any other word pieces, this forces the decompositions to break on word boundaries. Table \ref{tab:wordpiece} compares the effect of varying the $n$ sized word piece vocabulary. The Latent Sequence Decompositions (LSD) models were trained with the framework described in Section \ref{sec:lsd} and the (Maximum Extension) MaxExt decomposition is a fixed decomposition. MaxExt is generated in a left-to-right fashion, where at each step the longest word piece extension is selected from the vocabulary. The MaxExt decomposition is not the shortest $|\mathbf z|$ possible sequence, however it is a deterministic decomposition that can be easily generated in linear time on-the-fly. We decoded these models with simple n-best list beam search without any external dictionary or Language Model (LM). The baseline model is simply the unigram or character model and achieves 14.76\% WER. We find the LSD $n=4$ word piece vocabulary model to perform the best at 12.88\% WER or yielding a 12.7\% relative improvement over the baseline character model. None of our MaxExt models beat our character model baseline, suggesting the maximum extension decomposition to be a poor decomposition choice. However, all our LSD models perform better than the baseline suggesting the LSD framework is able to learn a decomposition better than the baseline character decomposition. We also look at the distribution of the characters covered based on the word piece lengths during inference across different $n$ sized word piece vocabulary used in training. We define the distribution of the characters covered as the percentage of characters covered by the set of word pieces with the same length across the test set, and we exclude $\langle \mathrm{space} \rangle$ in this statistic. Figure \ref{fig:charactercoverage} plots the distribution of the $\{1,2,3,4,5\}$-ngram word pieces the model decides to use to decompose the sequences. When the model is trained to use the bigram word piece vocabulary, we found the model to prefer bigrams (55\% of the characters emitted) over characters (45\% of the characters emitted) in the LSD decomposition. This suggest that a character only vocabulary may not be the best vocabulary to learn from. Our best model, LSD with $n=4$ word piece vocabulary, covered the word characters with 42.16\%, 39.35\%, 14.83\% and 3.66\% of the time using 1, 2, 3, 4 sized word pieces respectively. In the $n=5$ word piece vocabulary model, the LSD model uses the $n=5$ sized word pieces to cover approximately 2\% of the characters. We suspect if we used a larger dataset, we could extend the vocabulary to cover even larger $n \geq 5$. The MaxExt model were trained to greedily emit the longest possible word piece, consequently this prior meant the model will prefer to emit long word pieces over characters. While this decomposition results in the shorter $|\mathbf z|$ length, the WER is slightly worse than the character baseline. This suggest the much shorter decompositions generated by the MaxExt prior may not be best decomposition. This falls onto the principle that the best $\mathbf z^*$ decomposition is not only a function of $\mathbf y^*$ but as a function of $(\mathbf x, \mathbf y^*)$. In the case of ASR, the segmentation is a function of the acoustics as well as the text. Table \ref{tab:compare} compares our WSJ results with other published end-to-end models. 
The best CTC model achieved 27.3\% WER with REINFORCE optimization on WER \citep{graves-icml-2014}. The previously best reported basic seq2seq model on WSJ WER achieved 18.0\% WER \citep{bahdanau-iclr-2016} with Task Loss Estimation (TLE). Our baseline, also a seq2seq model, achieved 14.8\% WER. Main differences between our models is that we did not use convolutional locational-based priors and we used weight noise during training. The deep CNN model with residual connections, batch normalization and convolutions achieved a WER of 11.8\% \citep{yu-icassp-2016} \footnote{\label{cnn} For our CNN architectures, we use and compare to the ``(C (3 $\times$ 3) / 2) $\times$ 2 + NiN'' architecture from Table 2 line 4.}. Our LSD model using a $n=4$ word piece vocabulary achieves a WER of 12.9\% or 12.7\% relatively better over the baseline seq2seq model. If we combine our LSD model with the CNN \citep{yu-icassp-2016} model, we achieve a combined WER of 9.6\% WER or 35.1\% relatively better over the baseline seq2seq model. These numbers are all reported without the use of any language model. Please see Appendix \ref{appendix} for the decompositions generated by our model. The LSD model learns multiple word piece decompositions for the same word sequence. \nonstopmode \documentclass{article} % For LaTeX2e \usepackage[T1]{fontenc} \title{Latent Sequence Decompositions} \author{William Chan\thanks{Work done at Google Brain.} \\ Carnegie Mellon University \\ \texttt{williamchan@cmu.edu} \\ \AND Yu Zhang\footnotemark[1] \\ Massachusetts Institute of Technology \\ \texttt{yzhang87@mit.edu} \AND Quoc V. Le, Navdeep Jaitly \\ Google Brain \\ \texttt{\{qvl,ndjaitly\}@google.com} } \newcommand{\fix}{\marginpar{FIX}} \newcommand{\new}{\marginpar{NEW}} \iclrfinalcopy % Uncomment for camera-ready version \begin{document} \maketitle \begin{abstract} Sequence-to-sequence models rely on a fixed decomposition of the target sequences into a sequence of tokens that may be words, word-pieces or characters. The choice of these tokens and the decomposition of the target sequences into a sequence of tokens is often static, and independent of the input, output data domains. This can potentially lead to a sub-optimal choice of token dictionaries, as the decomposition is not informed by the particular problem being solved. In this paper we present Latent Sequence Decompositions (LSD), a framework in which the decomposition of sequences into constituent tokens is learnt during the training of the model. The decomposition depends both on the input sequence and on the output sequence. In LSD, during training, the model samples decompositions incrementally, from left to right by locally sampling between valid extensions. We experiment with the Wall Street Journal speech recognition task. Our LSD model achieves 12.9\% WER compared to a character baseline of 14.8\% WER. When combined with a convolutional network on the encoder, we achieve a WER of 9.6\%. \end{abstract} \input{introduction.tex} \input{lsd.tex} \input{model.tex} \input{decoder.tex} \input{experiments.tex} \input{relatedwork.tex} \input{conclusion.tex} \subsubsection*{Acknowledgments} We thank Ashish Agarwal, Philip Bachman, Dzmitry Bahdanau, Eugene Brevdo, Jan Chorowski, Jeff Dean, Chris Dyer, Gilbert Leung, Mohammad Norouzi, Noam Shazeer, Xin Pan, Luke Vilnis, Oriol Vinyals and the Google Brain team for many insightful discussions and technical assistance. 
\bibliographystyle{iclr2017_conference} \input{appendix.tex} \end{document} \clearpage \appendix \section{Learning the Decompositions} \label{appendix} We give the top 8 hypothesis generated by a baseline seq2seq character model, a Latent Sequence Decompositions (LSD) word piece model and a Maximum Extension (MaxExt) word piece model. We note that ``shamrock's'' is an out-of-vocabulary word while ``shamrock'' is in-vocabulary. The ground truth is ``shamrock's pretax profit from the sale was one hundred twenty five million dollars a spokeswoman said''. Note how the LSD model generates multiple decompostions for the same word sequence, this does not happen with the MaxExt model. \begin{sidewaystable}[h] \centering \small \caption{Top hypothesis comparsion between seq2seq character model, LSD word piece model and MaxExt word piece model.} \begin{tabular}{llr} \toprule $n$ & \bfseries Hypothesis & \bfseries LogProb \\ \midrule \multicolumn{3}{l}{\bfseries Reference} \\ - & shamrock's pretax profit from the sale was one hundred twenty five million dollars a spokeswoman said & - \\ \midrule \multicolumn{3}{l}{\bfseries Character seq2seq} \\ 1 & c|h|a|m|r|o|c|k|'|s| |p|r|e|t|a|x| |p|r|o|f|i|t| |f|r|o|m| |t|h|e| |s|a|l|e| |w|a|s| |o|n|e| |h|u|n|d|r|e|d| |t|w|e|n|t|y| |f|i|v|e| |m|i|l|l|i|o|n| |d|o|l|l|a|r|s| |a| |s|p|o|k|e|s|w|o|m|a|n| |s|a|i|d & -1.373868 \\ 2 & c|h|a|m|r|o|x| |p|r|e|t|a|x| |p|r|o|f|i|t| |f|r|o|m| |t|h|e| |s|a|l|e| |w|a|s| |o|n|e| |h|u|n|d|r|e|d| |t|w|e|n|t|y| |f|i|v|e| |m|i|l|l|i|o|n| |d|o|l|l|a|r|s| |a| |s|p|o|k|e|s|w|o|m|a|n| |s|a|i|d & -2.253581 \\ 3 & c|h|a|m|r|o|c|k|s| |p|r|e|t|a|x| |p|r|o|f|i|t| |f|r|o|m| |t|h|e| |s|a|l|e| |w|a|s| |o|n|e| |h|u|n|d|r|e|d| |t|w|e|n|t|y| |f|i|v|e| |m|i|l|l|i|o|n| |d|o|l|l|a|r|s| |a| |s|p|o|k|e|s|w|o|m|a|n| |s|a|i|d & -3.482713 \\ 4 & c|h|a|m|r|o|c|k|'|s| |p|r|e|t|a|x| |p|r|o|f|i|t| |f|r|o|m| |t|h|e| |s|a|l|e| |w|a|s| |o|n|e| |h|u|n|d|r|e|d| |t|w|e|n|t|y| |f|i|v|e| |m|i|l|l|i|o|n| |d|o|l|l|a|r|s| |o|f| |s|p|o|k|e|s|w|o|m|a|n| |s|a|i|d & -3.493957 \\ 5 & c|h|a|m|r|o|d|'|s| |p|r|e|t|a|x| |p|r|o|f|i|t| |f|r|o|m| |t|h|e| |s|a|l|e| |w|a|s| |o|n|e| |h|u|n|d|r|e|d| |t|w|e|n|t|y| |f|i|v|e| |m|i|l|l|i|o|n| |d|o|l|l|a|r|s| |a| |s|p|o|k|e|s|w|o|m|a|n| |s|a|i|d & -3.885185 \\ 6 & c|h|a|m|r|o|x| |p|r|e|t|a|x| |p|r|o|f|i|t| |f|r|o|m| |t|h|e| |s|a|l|e| |w|a|s| |o|n|e| |h|u|n|d|r|e|d| |t|w|e|n|t|y| |f|i|v|e| |m|i|l|l|i|o|n| |d|o|l|l|a|r|s| |o|f| |s|p|o|k|e|s|w|o|m|a|n| |s|a|i|d & -4.373687 \\ 6 & c|h|a|m|r|o|c|'|s| |p|r|e|t|a|x| |p|r|o|f|i|t| |f|r|o|m| |t|h|e| |s|a|l|e| |w|a|s| |o|n|e| |h|u|n|d|r|e|d| |t|w|e|n|t|y| |f|i|v|e| |m|i|l|l|i|o|n| |d|o|l|l|a|r|s| |a| |s|p|o|k|e|s|w|o|m|a|n| |s|a|i|d & -5.148484 \\ 8 & c|h|a|m|r|o|c|k|s| |p|r|e|t|a|x| |p|r|o|f|i|t| |f|r|o|m| |t|h|e| |s|a|l|e| |w|a|s| |o|n|e| |h|u|n|d|r|e|d| |t|w|e|n|t|y| |f|i|v|e| |m|i|l|l|i|o|n| |d|o|l|l|a|r|s| |o|f| |s|p|o|k|e|s|w|o|m|a|n| |s|a|i|d & -5.602793 \\ \midrule \multicolumn{3}{l}{\bfseries Word Piece Model Maximum Extension} \\ 1 & sh|am|ro|ck|'s| |pre|ta|x| |pro|fi|t| |from| |the| |sa|le| |was| |one| |hu|nd|red| |tw|ent|y| |five| |mil|lion| |doll|ars| |a| |sp|ok|es|wo|man| |said & -1.155203 \\ 2 & sh|am|ro|x| |pre|ta|x| |pro|fi|t| |from| |the| |sa|le| |was| |one| |hu|nd|red| |tw|ent|y| |five| |mil|lion| |doll|ars| |a| |sp|ok|es|wo|man| |said & -3.031330 \\ 3 & sh|ar|ro|x| |pre|ta|x| |pro|fi|t| |from| |the| |sa|le| |was| |one| |hu|nd|red| |tw|ent|y| |five| |mil|lion| |doll|ars| |a| |sp|ok|es|wo|man| |said & -3.074762 \\ 4 & sh|e| |m| |ro|x| |pre|ta|x| |pro|fi|t| |from| |the| |sa|le| 
|was| |one| |hu|nd|red| |tw|ent|y| |five| |mil|lion| |doll|ars| |a| |sp|ok|es|wo|man| |said & -3.815662 \\ 5 & sh|e| |mar|x| |pre|ta|x| |pro|fi|t| |from| |the| |sa|le| |was| |one| |hu|nd|red| |tw|ent|y| |five| |mil|lion| |doll|ars| |a| |sp|ok|es|wo|man| |said & -3.880760 \\ 6 & sh|ar|ro|ck|s| |pre|ta|x| |pro|fi|t| |from| |the| |sa|le| |was| |one| |hu|nd|red| |tw|ent|y| |five| |mil|lion| |doll|ars| |a| |sp|ok|es|wo|man| |said & -4.083274 \\ 7 & sh|e| |m| |ro|ck|ed| |pre|ta|x| |pro|fi|t| |from| |the| |sa|le| |was| |one| |hu|nd|red| |tw|ent|y| |five| |mil|lion| |doll|ars| |a| |sp|ok|es|wo|man| |said & -4.878025 \\ 8 & sh|e| |m| |ro|ck|s| |pre|ta|x| |pro|fi|t| |from| |the| |sa|le| |was| |one| |hu|nd|red| |tw|ent|y| |five| |mil|lion| |doll|ars| |a| |sp|ok|es|wo|man| |said & -5.121490 \\ \midrule \multicolumn{3}{l}{\bfseries Word Piece Model Latent Sequence Decompositions} \\ 1 & sh|a|m|ro|c|k|'s| |pre|ta|x| |pro|fi|t| |fro|m| |t|h|e| |sa|l|e| |was| |on|e| |hu|n|dr|e|d| |t|we|nt|y| |fiv|e| |mil|lio|n| |doll|a|r|s| |a| |sp|ok|e|s|wo|ma|n| |said & -28.111485 \\ 2 & sh|a|m|ro|c|k|'s| |pre|ta|x| |pro|fi|t| |fro|m| |t|h|e| |sa|l|e| |was| |on|e| |hu|n|dr|e|d| |t|we|nt|y| |fiv|e| |mil|li|o|n| |doll|ar|s| |a| |sp|ok|e|s|wo|ma|n| |said & -28.172878 \\ 3 & sh|a|m|ro|c|k|'s| |pre|ta|x| |pro|fi|t| |fro|m| |t|h|e| |sa|l|e| |was| |on|e| |hu|n|dr|e|d| |t|we|nt|y| |fiv|e| |mil|lio|n| |doll|a|r|s| |a| |sp|ok|e|s|w|om|a|n| |said & -28.453381 \\ 4 & sh|a|m|ro|c|k|'s| |pre|ta|x| |pro|fi|t| |fro|m| |t|h|e| |sa|l|e| |was| |on|e| |hu|n|dr|e|d| |t|we|nt|y| |fiv|e| |mil|li|o|n| |doll|a|r|s| |a| |sp|ok|e|s|w|om|a|n| |said & -29.103184 \\ 5 & sh|a|m|ro|c|k|'s| |pre|ta|x| |pro|fi|t| |fro|m| |t|h|e| |sa|l|e| |was| |on|e| |hu|n|dr|e|d| |t|we|nt|y| |fiv|e| |mil|lio|n| |doll|a|r|s| |a| |sp|ok|e|s|w|om|a|n| |sa|id & -29.159660 \\ 6 & sh|a|m|ro|c|k|'s| |pre|ta|x| |pro|fi|t| |fro|m| |t|h|e| |sa|l|e| |was| |on|e| |hu|n|dr|e|d| |t|we|nt|y| |fiv|e| |mil|lio|n| |doll|a|r|s| |a| |sp|o|k|e|s|w|o|ma|n| |said & -29.164141 \\ 7 & sh|a|m|ro|c|k|'s| |pre|ta|x| |pro|fi|t| |fro|m| |t|h|e| |sa|l|e| |was| |on|e| |hu|n|dr|e|d| |t|we|nt|y| |fiv|e| |mil|li|o|n| |doll|a|r|s| |a| |sp|ok|e|s|w|om|a|n| |sai|d & -29.169310 \\ 8 & sh|a|m|ro|c|k|'s| |pre|ta|x| |pro|fi|t| |fro|m| |t|h|e| |sa|l|e| |was| |on|e| |hu|n|dr|e|d| |t|we|nt|y| |fiv|e| |mil|li|o|n| |doll|a|r|s| |a| |sp|ok|e|s|w|om|a|n| |sa|id & -29.809937 \\ \bottomrule \end{tabular} \end{sidewaystable} \section{Model} In this work, we model the latent sequence decompositions $p(\mathbf z | \mathbf x)$ with an attention-based seq2seq model \citep{bahdanau-iclr-2015}. Each output token $z_i$ is modelled as a conditional distribution over all previously emitted tokens $\mathbf z_{<i}$ and the input sequence $\mathbf x$ using the chain rule: \begin{align} p(\mathbf z | \mathbf x; \theta) = \prod_i p(z_i | \mathbf x, \mathbf z_{<i}) \end{align} The input sequence $\mathbf x$ is processed through an $\mathrm{EncodeRNN}$ network. The $\mathrm{EncodeRNN}$ function transforms the features $\mathbf x$ into some higher level representation $\mathbf h$. 
In our experimental implementation $\mathrm{EncodeRNN}$ is a stacked Bidirectional LSTM (BLSTM) \citep{schuster-ieeetsp-1997,graves-asru-2013} with hierarchical subsampling \citep{hihi-nips-1996,koutnik-icml-2014}: \begin{align} \mathbf h = \mathrm{EncodeRNN}(\mathbf x) \end{align} The output sequence $\mathbf z$ is generated with an attention-based transducer \citep{bahdanau-iclr-2015} one $z_i$ token at a time: \begin{align} s_i &= \mathrm{DecodeRNN}([z_{i - 1}, c_{i - 1}], s_{i - 1}) \\ c_i &= \mathrm{AttentionContext}(s_i, \mathbf h) \\ p(z_i | \mathbf{x}, \mathbf z_{<i}) &= \mathrm{TokenDistribution}(s_i, c_i) \end{align} The $\mathrm{DecodeRNN}$ produces a transducer state $s_i$ as a function of the previously emitted token $z_{i-1}$, previous attention context $c_{i - 1}$ and previous transducer state $s_{i - 1}$. In our implementation, $\mathrm{DecodeRNN}$ is a LSTM \citep{hochreiter-neuralcomputation-1997} function without peephole connections. The $\mathrm{AttentionContext}$ function generates $c_i$ with a content-based MLP attention network \citep{bahdanau-iclr-2015}. Energies $e_i$ are computed as a function of the encoder features $\mathbf h$ and current transducer state $s_i$. The energies are normalized into an attention distribution $\alpha_i$. The attention context $c_i$ is created as a $\alpha_i$ weighted linear sum over $\mathbf h$: \begin{align} e_{i, j} &= \langle v, \tanh(\phi(s_i, h_j)) \rangle \\ \alpha_{i, j} &= \frac{\exp(e_{i, j})}{\sum_{j'} \exp(e_{i,j'})} \\ c_i &= \sum_j \alpha_{i,j} h_j \end{align} where $\phi$ is linear transform function. $\mathrm{TokenDistribution}$ is a MLP function with softmax outputs modelling the distribution $p(z_i | \mathbf{x}, \mathbf z_{<i})$.
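A small numerical sketch of the attention computation above is given below, using one standard parameterisation of the linear transform $\phi(s_i, h_j)$ as two projection matrices; shapes and values are toy examples rather than the trained model.
\begin{verbatim}
import numpy as np

def attention_context(s_i, h, W_s, W_h, v):
    """e_ij = <v, tanh(W_s s_i + W_h h_j)>, alpha = softmax(e), c_i = sum_j alpha_ij h_j."""
    e = np.tanh(h @ W_h.T + s_i @ W_s.T) @ v    # (T,) energies over encoder states
    alpha = np.exp(e - e.max())
    alpha /= alpha.sum()                        # attention distribution
    return alpha @ h, alpha                     # context vector and weights

rng = np.random.default_rng(0)
h = rng.normal(size=(5, 8))                     # T=5 encoder states of size 8
s_i = rng.normal(size=6)                        # decoder state of size 6
W_h, W_s, v = rng.normal(size=(4, 8)), rng.normal(size=(4, 6)), rng.normal(size=4)
c_i, alpha = attention_context(s_i, h, W_s, W_h, v)
print(c_i.shape, round(float(alpha.sum()), 6))  # (8,) 1.0
\end{verbatim}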
Reading Wikipedia to Answer Open-Domain Questions
1704.00051
Table 6: Full Wikipedia results. Top-1 exact-match accuracy (in %, using SQuAD eval script). +Fine-tune (DS): Document Reader models trained on SQuAD and fine-tuned on each DS training set independently. +Multitask (DS): Document Reader single model trained on SQuAD and all the distant supervision (DS) training sets jointly. YodaQA results are extracted from https://github.com/brmson/yodaqa/wiki/Benchmarks and use additional resources such as Freebase and DBpedia, see Section 2.
[ "[BOLD] Dataset", "[BOLD] YodaQA", "[BOLD] DrQA SQuAD", "[BOLD] DrQA +Fine-tune (DS)", "[BOLD] DrQA +Multitask (DS)" ]
[ [ "SQuAD  [ITALIC] (All Wikipedia)", "[EMPTY]", "27.1", "28.4", "29.8" ], [ "CuratedTREC", "31.3", "19.7", "25.7", "25.4" ], [ "WebQuestions", "39.8", "11.8", "19.5", "20.7" ], [ "WikiMovies", "[EMPTY]", "24.5", "34.3", "36.5" ] ]
Despite the difficulty of the task compared to machine comprehension (where you are given the right paragraph) and unconstrained QA (using redundant resources), DrQA still provides reasonable performance across all four datasets.
\documentclass[11pt,a4paper]{article} \usepackage[hyperref]{acl2017} \aclfinalcopy % Uncomment this line for the final submission \def\aclpaperid{715} % Enter the acl Paper ID here \newcommand\BibTeX{B{\sc ib}\TeX} \newcommand\squad{SQuAD\xspace} \newcommand\curq{CuratedTREC-S\xspace} \newcommand\sq{SimpleQuestions\xspace} \newcommand\lcurq{CuratedTREC\xspace} \newcommand\trecz{TREC'00\xspace} \newcommand\trecn{TREC'99\xspace} \newcommand\trect{TREC'02\xspace} \newcommand\wikim{WikiMovies\xspace} \newcommand\wq{WebQuestions\xspace} \newcommand\us{DrQA\xspace} \newcommand\usr{Document Retriever\xspace} \newcommand\usp{Document Reader\xspace} \newcommand{\finalem}{70.0} \newcommand{\finalf}{79.0} \title{Reading Wikipedia to Answer Open-Domain Questions} \author{Danqi Chen\thanks{\hspace{0.3em} Most of this work was done while DC was with Facebook AI Research.} \\ Computer Science\\ Stanford University\\ Stanford, CA 94305, USA\\ {\tt danqi@cs.stanford.edu} \\\And Adam Fisch, Jason Weston \& Antoine Bordes\\ Facebook AI Research\\ 770 Broadway\\ New York, NY 10003, USA\\ {\tt \{afisch,jase,abordes\}@fb.com} \\} \date{} \begin{document} \maketitle \begin{abstract} This paper proposes to tackle open-domain question answering using Wikipedia as the unique knowledge source: the answer to any factoid question is a text span in a Wikipedia article. This task of {\it machine reading at scale} combines the challenges of document retrieval (finding the relevant articles) with that of machine comprehension of text (identifying the answer spans from those articles). Our approach combines a search component based on bigram hashing and TF-IDF matching with a multi-layer recurrent neural network model trained to detect answers in Wikipedia paragraphs. Our experiments on multiple existing QA datasets indicate that (1) both modules are highly competitive with respect to existing counterparts and (2) multitask learning using distant supervision on their combination is an effective complete system on this challenging task. \end{abstract} \section{Introduction} This paper considers the problem of answering factoid questions in an open-domain setting using Wikipedia as the unique knowledge source, such as one does when looking for answers in an encyclopedia. Wikipedia is a constantly evolving source of detailed information that could facilitate intelligent machines --- if they are able to leverage its power. Unlike knowledge bases (KBs) such as Freebase~\cite{bollacker2008freebase} or DBPedia~\cite{auer2007dbpedia}, which are easier for computers to process but too sparsely populated for open-domain question answering \cite{miller2016key}, Wikipedia contains up-to-date knowledge that humans are interested in. It is designed, however, for humans -- not machines -- to read. Using Wikipedia articles as the knowledge source causes the task of question answering (QA) to combine the challenges of both large-scale open-domain QA and of machine comprehension of text. In order to answer any question, one must first retrieve the few relevant articles among more than 5 million items, and then scan them carefully to identify the answer. We term this setting, {\it machine reading at scale} (MRS). Our work treats Wikipedia as a collection of articles and does not rely on its internal graph structure. As a result, our approach is generic and could be switched to other collections of documents, books, or even daily updated newspapers. 
Large-scale QA systems like IBM's DeepQA ~\cite{ferrucci2010building} rely on multiple sources to answer: besides Wikipedia, it is also paired with KBs, dictionaries, and even news articles, books, etc. As a result, such systems heavily rely on information redundancy among the sources to answer correctly. Having a single knowledge source forces the model to be very precise while searching for an answer as the evidence might appear only once. This challenge thus encourages research in the ability of a machine to read, a key motivation for the machine comprehension subfield and the creation of datasets such as SQuAD~\cite{rajpurkar2016squad}, CNN/Daily Mail~\cite{nips2015hermann} and CBT~\cite{hill2015goldilocks}. However, those machine comprehension resources typically assume that a short piece of relevant text is already identified and given to the model, which is not realistic for building an open-domain QA system. In sharp contrast, methods that use KBs or information retrieval over documents have to employ search as an integral part of the solution. Instead MRS is focused on simultaneously maintaining the challenge of machine comprehension, which requires the deep understanding of text, while keeping the realistic constraint of searching over a large open resource. In this paper, we show how multiple existing QA datasets can be used to evaluate MRS by requiring an open-domain system to perform well on all of them at once. We develop \us, a strong system for question answering from Wikipedia composed of: (1) \usr, a module using bigram hashing and TF-IDF matching designed to, given a question, efficiently return a subset of relevant articles and (2) \usp, a multi-layer recurrent neural network machine comprehension model trained to detect answer spans in those few returned documents. Figure~\ref{fig:pip} gives an illustration of \us. Our experiments show that \usr outperforms the built-in Wikipedia search engine and that \usp reaches state-of-the-art results on the very competitive SQuAD benchmark \cite{rajpurkar2016squad}. Finally, our full system is evaluated using multiple benchmarks. In particular, we show that performance is improved across all datasets through the use of multitask learning and distant supervision compared to single task training. %We arrive at a single system where the aim is to correctly respond to any question where Wikipedia can provide the answer. \section{Related Work} \label{sec:rwork} Open-domain QA was originally defined as finding answers in collections of unstructured documents, following the setting of the annual TREC competitions\footnote{\url{http://trec.nist.gov/data/qamain.html}}. With the development of KBs, many recent innovations have occurred in the context of QA from KBs with the creation of resources like WebQuestions \cite{berant2013semantic} and SimpleQuestions \cite{bordes2015large} based on the Freebase KB \cite{bollacker2008freebase}, or on automatically extracted KBs, e.g., OpenIE triples and NELL \cite{fader2014open}. However, KBs have inherent limitations (incompleteness, fixed schemas) that motivated researchers to return to the original setting of answering from raw text. A second motivation to cast a fresh look at this problem is that of machine comprehension of text, i.e., answering questions after reading a short text or story. 
That subfield has made considerable progress recently thanks to new deep learning architectures like attention-based and memory-augmented neural networks \cite{bahdanau2015neural,weston2015memory,graves2014neural} and release of new training and evaluation datasets like QuizBowl \cite{iyyer2014neural}, CNN/Daily Mail based on news articles \cite{nips2015hermann}, CBT based on children books \cite{hill2015goldilocks}, or SQuAD \cite{rajpurkar2016squad} and WikiReading \cite{hewlett2016wiki}, both based on Wikipedia. An objective of this paper is to test how such new methods can perform in an open-domain QA framework. QA using Wikipedia as a resource has been explored previously. \citet{ryu2014open} perform open-domain QA using a Wikipedia-based knowledge model. They combine article content with multiple other answer matching modules based on different types of semi-structured knowledge such as infoboxes, article structure, category structure, and definitions. Similarly, \citet{Ahn04} also combine Wikipedia as a text resource with other resources, in this case with information retrieval over other documents. \citet{buscaldi2006mining} also mine knowledge from Wikipedia for QA. Instead of using it as a resource for seeking answers to questions, they focus on validating answers returned by their QA system, and use Wikipedia categories for determining a set of patterns that should fit with the expected answer. In our work, we consider the comprehension of text only, and use Wikipedia text documents as the sole resource in order to emphasize the task of machine reading at scale, as described in the introduction. There are a number of highly developed full pipeline QA approaches using either the Web, as does QuASE \cite{sun2015open}, or Wikipedia as a resource, as do Microsoft's AskMSR \cite{brill2002askmsr}, IBM's DeepQA \cite{ferrucci2010building} and YodaQA \cite{baudivs2015yodaqa,baudivs2015modeling} --- the latter of which is open source and hence reproducible for comparison purposes. AskMSR is a search-engine based QA system that relies on ``data redundancy rather than sophisticated linguistic analyses of either questions or candidate answers'', i.e., it does not focus on machine comprehension, as we do. DeepQA is a very sophisticated system that relies on both unstructured information including text documents as well as structured data such as KBs, databases and ontologies to generate candidate answers or vote over evidence. YodaQA is an open source system modeled after DeepQA, similarly combining websites, information extraction, databases and Wikipedia in particular. Our comprehension task is made more challenging by only using a single resource. Comparing against these methods provides a useful datapoint for an ``upper bound'' benchmark on performance. Multitask learning \cite{caruana1998multitask} and task transfer have a rich history in machine learning (e.g., using ImageNet in the computer vision community \cite{huh2016makes}), as well as in NLP in particular \cite{Collobert08}. Several works have attempted to combine multiple QA training datasets via multitask learning to (i) achieve improvement across the datasets via task transfer; and (ii) provide a single general system capable of asking different kinds of questions due to the inevitably different data distributions across the source datasets. \citet{fader2014open} used WebQuestions, TREC and WikiAnswers with four KBs as knowledge sources and reported improvement on the latter two datasets through multitask learning. 
\citet{bordes2015large} combined WebQuestions and SimpleQuestions using distant supervision with Freebase as the KB to give slight improvements on both datasets, although poor performance was reported when training on only one dataset and testing on the other, showing that task transfer is indeed a challenging subject; see also \cite{kadlecparticular} for a similar conclusion. Our work follows similar themes, but in the setting of having to retrieve and then read text documents, rather than using a KB, with positive results. \section{Our System: \us} \label{sec:model} In the following we describe our system \us for MRS which consists of two components: (1) the Document Retriever module for finding relevant articles and (2) a machine comprehension model, \usp, for extracting answers from a single document or a small collection of documents. \subsection{\usr} \label{sec:irmodel} Following classical QA systems, we use an efficient (non-machine learning) document retrieval system to first narrow our search space and focus on reading only articles that are likely to be relevant. A simple inverted index lookup followed by term vector model scoring performs quite well on this task for many question types, compared to the built-in ElasticSearch based Wikipedia Search API \cite{gormley2015elasticsearch}. Articles and questions are compared as TF-IDF weighted bag-of-word vectors. We further improve our system by taking local word order into account with n-gram features. Our best performing system uses bigram counts while preserving speed and memory efficiency by using the hashing of \cite{weinberger2009feature} to map the bigrams to $2^{24}$ bins with an unsigned \emph{murmur3} hash. We use \usr as the first part of our full model, by setting it to return 5 Wikipedia articles given any question. Those articles are then processed by \usp. \subsection{Document Reader} Our \usp model is inspired by the recent success of neural network models on machine comprehension tasks, in a similar spirit to the \emph{AttentiveReader} described in \cite{nips2015hermann,chen2016thorough}. Given a question $q$ consisting of $l$ tokens $\{q_1, \ldots, q_l\}$ and a document or a small set of documents of $n$ paragraphs where a single paragraph $p$ consists of $m$ tokens $\{p_1, \ldots, p_m\}$, we develop an RNN model that we apply to each paragraph in turn and then finally aggregate the predicted answers. Our method works as follows: \paragraph{\textbf{Paragraph encoding}} We first represent all tokens $p_i$ in a paragraph $p$ as a sequence of feature vectors $\tilde{\mathbf{p}}_i \in \mathbb{R}^d$ and pass them as the input to a recurrent neural network and thus obtain: \begin{equation*} \{\mathbf{p}_1, \ldots, \mathbf{p}_m\} = \text{RNN}(\{\tilde{\mathbf{p}}_1, \ldots, \tilde{\mathbf{p}}_m \}), \end{equation*} where $\mathbf{p}_i$ is expected to encode useful context information around token $p_i$. Specifically, we choose to use a multi-layer bidirectional long short-term memory network (LSTM), and take $\mathbf{p}_i$ as the concatenation of each layer's hidden units in the end. The feature vector $\tilde{\mathbf{p}}_i$ is comprised of the following parts: \begin{itemize} \item \emph{Word embeddings}: $f_{emb}(p_i) = \mathbf{E}(p_i)$. We use the 300-dimensional Glove word embeddings trained from 840B Web crawl data \cite{pennington2014glove}. 
We keep most of the pre-trained word embeddings fixed and only fine-tune the $1000$ most frequent question words because the representations of some key words such as \textit{what}, \textit{how}, \textit{which}, \textit{many} could be crucial for QA systems. \item \emph{Exact match}: $f_{exact\_match}(p_i) = \mathbb{I}(p_i \in q)$. We use three simple binary features, indicating whether $p_i$ can be exactly matched to one question word in $q$, either in its original, lowercase or lemma form. These simple features turn out to be extremely helpful, as we will show in Section~\ref{sec:exp}. \item \emph{Token features}: \\ $f_{token}(p_i) = (\text{POS}(p_i), \text{NER}(p_i), \text{TF}(p_i))$. We also add a few manual features which reflect some properties of token $p_i$ in its context, which include its part-of-speech (POS) and named entity recognition (NER) tags and its (normalized) term frequency (TF). \item \emph{Aligned question embedding}: \\ Following \cite{lee2016learning} and other recent works, the last part we incorporate is an aligned question embedding $f_{align}(p_i) = \sum_j{a_{i, j} \mathbf{E}(q_j)}$, where the attention score $a_{i, j}$ captures the similarity between $p_i$ and each question words $q_j$. Specifically, $a_{i, j}$ is computed by the dot products between nonlinear mappings of word embeddings: \begin{equation*} a_{i, j} = \frac{\exp\left(\alpha(\mathbf{E}(p_i)) \cdot \alpha(\mathbf{E}(q_{j}))\right)}{\sum_{j'}{\exp\left(\alpha(\mathbf{E}(p_i)) \cdot \alpha(\mathbf{E}(q_{j'}))\right)}}, \end{equation*} and $\alpha(\cdot)$ is a single dense layer with ReLU nonlinearity. Compared to the \emph{exact match} features, these features add soft alignments between similar but non-identical words (e.g., \textit{car} and \textit{vehicle}). \end{itemize} \paragraph{\textbf{Question encoding}} The question encoding is simpler, as we only apply another recurrent neural network on top of the word embeddings of $q_i$ and combine the resulting hidden units into one single vector: $\{\mathbf{q}_1, \ldots, \mathbf{q}_l\} \rightarrow \mathbf{q}$. We compute $\mathbf{q} = \sum_j{b_j \mathbf{q}_j}$ where $b_j$ encodes the importance of each question word: \begin{equation*} b_j = \frac{\exp(\mathbf{w} \cdot \mathbf{q}_j)}{\sum_{j'}{\exp(\mathbf{w} \cdot \mathbf{q}_{j'})}}, \end{equation*} and $\mathbf{w}$ is a weight vector to learn. \paragraph{\textbf{Prediction}} At the paragraph level, the goal is to predict the span of tokens that is most likely the correct answer. We take the the paragraph vectors $\{\mathbf{p}_1, \ldots, \mathbf{p}_m\}$ and the question vector $\mathbf{q}$ as input, and simply train two classifiers independently for predicting the two ends of the span. Concretely, we use a bilinear term to capture the similarity between $\mathbf{p}_i$ and $\mathbf{q}$ and compute the probabilities of each token being start and end as: \begin{eqnarray*} P_{start}(i) & \propto & \exp \left(\mathbf{p}_i \mathbf{W}_{s} \mathbf{q}\right) \\ P_{end}(i) & \propto & \exp \left(\mathbf{p}_i \mathbf{W}_{e} \mathbf{q}\right) \end{eqnarray*} During prediction, we choose the best span from token $i$ to token $i'$ such that $i \leq i' \leq i + 15$ and $P_{start}(i) \times P_{end}(i')$ is maximized. To make scores compatible across paragraphs in one or several retrieved documents, we use the unnormalized exponential and take argmax over all considered paragraph spans for our final prediction. 
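The span selection rule above can be sketched as follows, assuming the unnormalised bilinear start and end scores for each paragraph have already been produced by the reader; this is a minimal illustration rather than the released implementation.
\begin{verbatim}
import numpy as np

def best_span(score_start, score_end, max_len=15):
    """Pick (i, i') with i <= i' <= i + max_len maximising
    exp(score_start[i]) * exp(score_end[i'])."""
    s, e = np.exp(score_start), np.exp(score_end)
    best, best_score = (0, 0), -np.inf
    for i in range(len(s)):
        j = i + int(np.argmax(e[i:i + max_len + 1]))
        if s[i] * e[j] > best_score:
            best, best_score = (i, j), s[i] * e[j]
    return best, best_score

# Unnormalised scores are comparable across paragraphs, so the final answer is
# the highest-scoring span over all retrieved paragraphs.
paragraphs = [(np.random.randn(20), np.random.randn(20)) for _ in range(3)]
spans = [best_span(ss, se) for ss, se in paragraphs]
k = int(np.argmax([sc for _, sc in spans]))
print(k, spans[k][0])
\end{verbatim}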
\section{Data} \label{sec:data} Our work relies on three types of data: (1) Wikipedia that serves as our knowledge source for finding answers, (2) the SQuAD dataset which is our main resource to train \usp and (3) three more QA datasets (CuratedTREC, WebQuestions and WikiMovies) that in addition to SQuAD, are used to test the open-domain QA abilities of our full system, and to evaluate the ability of our model to learn from multitask learning and distant supervision. Statistics of the datasets are given in Table~\ref{tab:data-stats}. \subsection{Wikipedia (Knowledge Source)} We use the 2016-12-21 dump\footnote{\url{https://dumps.wikimedia.org/enwiki/latest}} of English Wikipedia for all of our full-scale experiments as the knowledge source used to answer questions. For each page, only the plain text is extracted and all structured data sections such as lists and figures are stripped.\footnote{We use the WikiExtractor script: \url{https://github.com/attardi/wikiextractor}.} After discarding internal disambiguation, list, index, and outline pages, we retain 5,075,182 articles consisting of 9,008,962 unique uncased token types. \subsection{SQuAD} The Stanford Question Answering Dataset (SQuAD) \cite{rajpurkar2016squad} is a dataset for machine comprehension based on Wikipedia. The dataset contains 87k examples for training and 10k for development, with a large hidden test set which can only be accessed by the SQuAD creators. Each example is composed of a paragraph extracted from a Wikipedia article and an associated human-generated question. The answer is always a span from this paragraph and a model is given credit if its predicted answer matches it. Two evaluation metrics are used: exact string match (EM) and F1 score, which measures the weighted average of precision and recall at the token level. In the following, we use SQuAD for training and evaluating our Document Reader for the standard machine comprehension task given the relevant paragraph as defined in \cite{rajpurkar2016squad}. For the task of evaluating open-domain question answering over Wikipedia, we use the SQuAD development set QA pairs only, and we ask systems to uncover the correct answer spans \emph{without} having access to the associated paragraphs. That is, a model is required to answer a question given the whole of Wikipedia as a resource; it is {\em not} given the relevant paragraph as in the standard SQuAD setting. \subsection{Open-domain QA Evaluation Resources}\label{sec:othersources} SQuAD is one of the largest general purpose QA datasets currently available. SQuAD questions have been collected via a process involving showing a paragraph to each human annotator and asking them to write a question. As a result, their distribution is quite specific. We hence propose to train and evaluate our system on other datasets developed for open-domain QA that have been constructed in different ways (not necessarily in the context of answering from Wikipedia). \paragraph{\lcurq} This dataset is based on the benchmarks from the TREC QA tasks that have been curated by \citet{baudivs2015modeling}. We use the large version, which contains a total of 2,180 questions extracted from the datasets from TREC 1999, 2000, 2001 and 2002.\footnote{This dataset is available at \url{https://github.com/brmson/dataset-factoid-curated}.} \paragraph{\wq} Introduced in \cite{berant2013semantic}, this dataset is built to answer questions from the Freebase KB. 
It was created by crawling questions through the Google Suggest API, and then obtaining answers using Amazon Mechanical Turk. We convert each answer to text by using entity names so that the dataset does not reference Freebase IDs and is purely made of plain text question-answer pairs. \paragraph{\wikim} This dataset, introduced in \cite{miller2016key}, contains 96k question-answer pairs in the domain of movies. Originally created from the OMDb and MovieLens databases, the examples are built such that they can also be answered by using a subset of Wikipedia as the knowledge source (the title and the first section of articles from the movie domain). \subsection{Distantly Supervised Data} \label{sec:ds} All the QA datasets presented above contain training portions, but CuratedTREC, WebQuestions and WikiMovies only contain question-answer pairs, and not an associated document or paragraph as in SQuAD, and hence cannot be used for training \usp directly. Following previous work on distant supervision (DS) for relation extraction \cite{mintz2009distant}, we use a procedure to automatically associate paragraphs to such training examples, and then add these examples to our training set. We use the following process for each question-answer pair to build our training set. First, we run \usr on the question to retrieve the top 5 Wikipedia articles. All paragraphs from those articles without an exact match of the known answer are directly discarded. All paragraphs shorter than 25 or longer than 1500 characters are also filtered out. If any named entities are detected in the question, we remove any paragraph that does not contain them at all. For every remaining paragraph in each retrieved page, we score all positions that match an answer using unigram and bigram overlap between the question and a 20 token window, keeping up to the top 5 paragraphs with the highest overlaps. If there is no paragraph with non-zero overlap, the example is discarded; otherwise we add each found pair to our DS training dataset. Some examples are shown in Table~\ref{tab:ex} and data statistics are given in Table~\ref{tab:data-stats}. Note that we can also generate additional DS data for SQuAD by trying to find mentions of the answers not just in the paragraph provided, but also from other pages or the same page that the given paragraph was in. We observe that around half of the DS examples come from pages outside of the articles used in SQuAD. \section{Experiments} \label{sec:exp} This section first presents evaluations of our Document Retriever and Document Reader modules separately, and then describes tests of their combination, \us, for open-domain QA on the full Wikipedia. \subsection{Finding Relevant Articles} We first examine the performance of our \usr module on all the QA datasets. Table~\ref{tab:ir-res} compares the performance of the two approaches described in Section~\ref{sec:irmodel} with that of the Wikipedia Search Engine\footnote{We use the Wikipedia Search API \url{https://www.mediawiki.org/wiki/API:Search}.} for the task of finding articles that contain the answer given a question. Specifically, we compute the ratio of questions for which the text span of any of their associated answers appear in at least one the top 5 relevant pages returned by each system. Results on all datasets indicate that our simple approach outperforms Wikipedia Search, especially with bigram hashing. 
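As a rough illustration of the hashed-bigram scoring, the sketch below ranks a toy document collection against a question (this is an assumed reconstruction, not the system's code: Python's built-in \texttt{hash} stands in for the unsigned murmur3 hash, and the TF-IDF weighting is simplified):

\begin{verbatim}
import math
from collections import Counter

NUM_BINS = 2 ** 24   # feature hashing; the real system uses unsigned murmur3,
                     # Python's hash() (randomised per process) stands in here

def hashed_features(text):
    """Unigram + bigram counts, hashed into a fixed number of bins."""
    toks = text.lower().split()
    grams = toks + [" ".join(pair) for pair in zip(toks, toks[1:])]
    return Counter(hash(g) % NUM_BINS for g in grams)

def tfidf_vectors(docs):
    counts = [hashed_features(d) for d in docs]
    df = Counter(b for c in counts for b in c)
    idf = {b: math.log(len(docs) / df[b]) for b in df}
    return [{b: tf * idf[b] for b, tf in c.items()} for c in counts], idf

def top_articles(question, docs, k=5):
    doc_vecs, idf = tfidf_vectors(docs)
    q = {b: tf * idf.get(b, 0.0)
         for b, tf in hashed_features(question).items()}
    scores = [sum(q.get(b, 0.0) * w for b, w in dv.items())
              for dv in doc_vecs]
    return sorted(range(len(docs)), key=lambda i: -scores[i])[:k]
\end{verbatim}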
We also compare doing retrieval with Okapi BM25 or by using cosine distance in the word embeddings space (by encoding questions and articles as bag-of-embeddings), both of which we find performed worse. \subsection{Reader Evaluation on SQuAD} Next we evaluate our \usp component on the standard SQuAD evaluation \cite{rajpurkar2016squad}. \paragraph{Implementation details} We use $3$-layer bidirectional LSTMs with $h = 128$ hidden units for both paragraph and question encoding. We apply the Stanford CoreNLP toolkit \cite{manning2014stanford} for tokenization and also generating lemma, part-of-speech, and named entity tags. Lastly, all the training examples are sorted by the length of paragraph and divided into mini-batches of $32$ examples each. We use \emph{Adamax} for optimization as described in \cite{kingma2014adam}. Dropout with $p = 0.3$ is applied to word embeddings and all the hidden units of LSTMs. \paragraph{Result and analysis} Table \ref{tab:squad-res} presents our evaluation results on both development and test sets. SQuAD has been a very competitive machine comprehension benchmark since its creation and we only list the best-performing systems in the table. Our system (single model) can achieve {\finalem}\% exact match and {\finalf}\% F1 scores on the test set, which surpasses all the published results and can match the top performance on the SQuAD leaderboard at the time of writing. Additionally, we think that our model is conceptually simpler than most of the existing systems. We conducted an ablation analysis on the feature vector of paragraph tokens. As shown in Table \ref{tab:feature-ablation} all the features contribute to the performance of our final system. Without the aligned question embedding feature (only word embedding and a few manual features), our system is still able to achieve F1 over $77\%$. More interestingly, if we remove both $f_{aligned}$ and $f_{exact\_match}$, the performance drops dramatically, so we conclude that both features play a similar but complementary role in the feature representation related to the paraphrased nature of a question vs. the context around an answer. \subsection{Full Wikipedia Question Answering} Finally, we assess the performance of our full system \us for answering open-domain questions using the four datasets introduced in Section~\ref{sec:data}. We compare three versions of \us which evaluate the impact of using distant supervision and multitask learning across the training sources provided to \usp (\usr remains the same for each case): \begin{itemize} \item SQuAD: A single \usp model is trained on the SQuAD training set only and used on all evaluation sets. \vspace{-1mm} \item Fine-tune (DS): A \usp model is pre-trained on SQuAD and then fine-tuned for each dataset independently using its distant supervision (DS) training set. \vspace{-1mm} \item Multitask (DS): A single \usp model is jointly trained on the SQuAD training set and {\em all} the DS sources. \end{itemize} For the full Wikipedia setting we use a streamlined model that does not use the CoreNLP parsed $f_{token}$ features or lemmas for $f_{exact\_match}$. We find that while these help for more exact paragraph reading in SQuAD, they don't improve results in the full setting. Additionally, WebQuestions and WikiMovies provide a list of candidate answers (e.g., 1.6 million Freebase entity strings for WebQuestions) and we restrict the answer span must be in this list during prediction. \paragraph{Results} Table~\ref{tab:full-pip-res} presents the results. 
Despite the difficulty of the task compared to machine comprehension (where you are given the right paragraph) and unconstrained QA (using redundant resources), \us still provides reasonable performance across all four datasets. We are interested in a single, full system that can answer any question using Wikipedia. The single model trained only on SQuAD is outperformed on all four of the datasets by the multitask model that uses distant supervision. However performance when training on SQuAD alone is not far behind, indicating that task transfer is occurring. The majority of the improvement from SQuAD to Multitask (DS) however is likely not from task transfer as fine-tuning on each dataset alone using DS also gives improvements, showing that is is the introduction of extra data in the same domain that helps. Nevertheless, the best single model that we can find is our overall goal, and that is the Multitask (DS) system. We compare to an unconstrained QA system using redundant resources (not just Wikipedia), YodaQA \cite{baudivs2015yodaqa}, giving results which were previously reported on \lcurq and \wq. Despite the increased difficulty of our task, it is reassuring that our performance is not too far behind on \lcurq (31.3 vs. 25.4). The gap is slightly bigger on \wq, likely because this dataset was created from the specific structure of Freebase which YodaQA uses directly. \us's performance on SQuAD compared to its Document Reader component on machine comprehension in Table \ref{tab:squad-res} shows a large drop (from 69.5 to 27.1) as we now are given Wikipedia to read, not a single paragraph. Given the correct document (but not the paragraph) we can achieve 49.4, indicating many false positives come from highly topical sentences. This is despite the fact that the Document Retriever works relatively well (77.8\% of the time retrieving the answer, see Table \ref{tab:ir-res}). It is worth noting that a large part of the drop comes from the nature of the SQuAD questions. They were written with a specific paragraph in mind, thus their language can be ambiguous when the context is removed. Additional resources other than SQuAD, specifically designed for MRS, might be needed to go further. \section{Conclusion} \label{sec:conc} We studied the task of machine reading at scale, by using Wikipedia as the unique knowledge source for open-domain QA. Our results indicate that MRS is a key challenging task for researchers to focus on. Machine comprehension systems alone cannot solve the overall task. Our method integrates search, distant supervision, and multitask learning to provide an effective complete system. Evaluating the individual components as well as the full system across multiple benchmarks showed the efficacy of our approach. Future work should aim to improve over our \us system. Two obvious angles of attack are: (i) incorporate the fact that Document Reader aggregates over multiple paragraphs and documents directly in the training, as it currently trains on paragraphs independently; and (ii) perform end-to-end training across the Document Retriever and Document Reader pipeline, rather than independent systems. \subsection*{Acknowledgments} The authors thank Pranav Rajpurkar for testing Document Reader on the test set of SQuAD. \bibliographystyle{acl_natbib} \end{document}
Leveraging Cognitive Features for Sentiment Analysis
1701.05581
Table 2: Results for different feature combinations. (P, R, F) → Precision, Recall, F-score. Feature labels: Uni → Unigram features, Sn → Sentiment features, Sr → Sarcasm features, and Gz → Gaze features along with features related to reading difficulty.
[ "[BOLD] Classifier", "[BOLD] Näive Bayes [BOLD] Dataset 1", "[BOLD] Näive Bayes [BOLD] Dataset 1", "[BOLD] Näive Bayes [BOLD] Dataset 1", "[BOLD] SVM [BOLD] Dataset 1", "[BOLD] SVM [BOLD] Dataset 1", "[BOLD] SVM [BOLD] Dataset 1", "[BOLD] Multi-layer NN [BOLD] Dataset 1", "[BOLD] Multi-layer NN [BOLD] Dataset 1", "[BOLD] Multi-layer NN [BOLD] Dataset 1" ]
[ [ "[EMPTY]", "[BOLD] P", "[BOLD] R", "[BOLD] F", "[BOLD] P", "[BOLD] R", "[BOLD] F", "[BOLD] P", "[BOLD] R", "[BOLD] F" ], [ "Uni", "58.5", "57.3", "57.9", "67.8", "68.5", "68.14", "65.4", "65.3", "65.34" ], [ "Sn", "58.7", "57.4", "58.0", "69.6", "70.2", "69.8", "67.5", "67.4", "67.5" ], [ "Sn + Sr", "63.0", "59.4", "61.14", "72.8", "73.2", "72.6", "69.0", "69.2", "69.1" ], [ "Gz", "61.8", "58.4", "60.05", "54.3", "52.6", "53.4", "59.1", "60.8", "60" ], [ "Sn+Gz", "60.2", "58.8", "59.2", "69.5", "70.1", "69.6", "70.3", "70.5", "70.4" ], [ "[BOLD] Sn+ Sr+Gz", "[BOLD] 63.4", "[BOLD] 59.6", "[BOLD] 61.4", "[BOLD] 73.3", "[BOLD] 73.6", "[BOLD] 73.5", "[BOLD] 70.5", "[BOLD] 70.7", "[BOLD] 70.6" ], [ "[EMPTY]", "[BOLD] Dataset 2", "[BOLD] Dataset 2", "[BOLD] Dataset 2", "[BOLD] Dataset 2", "[BOLD] Dataset 2", "[BOLD] Dataset 2", "[BOLD] Dataset 2", "[BOLD] Dataset 2", "[BOLD] Dataset 2" ], [ "Uni", "[BOLD] 51.2", "[BOLD] 50.3", "[BOLD] 50.74", "57.8", "57.9", "57.8", "53.8", "53.9", "53.8" ], [ "Sn", "51.1", "50.3", "50.7", "62.5", "62.5", "62.5", "58.0", "58.1", "58.0" ], [ "Sn+Sr", "50.7", "50.1", "50.39", "70.3", "70.3", "70.3", "66.8", "66.8", "66.8" ], [ "Gz", "49.9", "50.9", "50.39", "48.9", "48.9", "48.9", "53.6", "54.0", "53.3" ], [ "Sn+Gz", "51", "50.3", "50.6", "62.4", "62.3", "62.3", "59.7", "59.8", "59.8" ], [ "[BOLD] Sn+ Sr+Gz", "50.2", "49.7", "50", "[BOLD] 71.9", "[BOLD] 71.8", "[BOLD] 71.8", "[BOLD] 69.1", "[BOLD] 69.2", "[BOLD] 69.1" ] ]
We observe the maximum accuracy with the complete feature set comprising Sentiment, Sarcasm and Thwarting, and Cognitive features derived from gaze data. For this combination, SVM outperforms the other classifiers. The novelty of our feature design lies in (a) first augmenting sarcasm- and thwarting-based features (Sr) with sentiment features (Sn), which raises the accuracy by 3.1% for Dataset 1 and 7.8% for Dataset 2, and (b) further augmenting gaze features with Sn+Sr, which increases the accuracy by another 0.6% and 1.5% for Datasets 1 and 2 respectively, amounting to an overall improvement of 3.7% and 9.3% respectively. It may be noted that the addition of gaze features seems to bring only meager improvements in classification accuracy, but the improvements are consistent across datasets and across several classifiers. Still, we speculate that aggregating various eye-tracking parameters to extract the cognitive features may have caused loss of information, thereby limiting the improvements. For example, the graph-based features are computed for each participant and eventually averaged to obtain the graph features for a sentence, thereby not leveraging the power of individual eye-movement patterns. We intend to address this issue in future work.
\documentclass[11pt]{article} \aclfinalcopy % Uncomment this line for the final submission \urlstyle{same} \newcommand\BibTeX{B{\sc ib}\TeX} \title{Leveraging Cognitive Features for Sentiment Analysis} \author{ \textbf{Abhijit Mishra}\textsuperscript{$\dagger$}, \textbf{Diptesh Kanojia}\textsuperscript{$\dagger$,$\clubsuit$}, \textbf{Seema Nagar} \textsuperscript{$\star$}, \textbf{Kuntal Dey}\textsuperscript{$\star$}, \\\textbf{Pushpak Bhattacharyya}\textsuperscript{$\dagger$}\\ \textsuperscript{$\dagger$}Indian Institute of Technology Bombay, India\\ \textsuperscript{$\clubsuit$}IITB-Monash Research Academy, India\\ \textsuperscript{$\star$}IBM Research, India\\ \textsuperscript{$\dagger$}\{abhijitmishra, diptesh, pb\}@cse.iitb.ac.in\\ \textsuperscript{$\star$}\{senagar3, kuntadey\}@in.ibm.com } \date{} \begin{document} \maketitle \begin{abstract} Sentiments expressed in user-generated short text and sentences are nuanced by subtleties at lexical, syntactic, semantic and pragmatic levels. To address this, we propose to augment traditional features used for sentiment analysis and sarcasm detection, with cognitive features derived from the eye-movement patterns of readers. Statistical classification using our enhanced feature set improves the performance (F-score) of polarity detection by a maximum of $3.7\%$ and $9.3\%$ on two datasets, over the systems that use only traditional features. We perform feature significance analysis, and experiment on a held-out dataset, showing that cognitive features indeed empower sentiment analyzers to handle complex constructs. \end{abstract} \section{Introduction} This paper addresses the task of Sentiment Analysis (SA) - automatic detection of the sentiment polarity as positive versus negative - of user-generated short texts and sentences. Several sentiment analyzers exist in literature today \cite{liu2012survey}. Recent works, such as \newcite{kouloumpis2011twitter}, \newcite{agarwal2011sentiment} and \newcite{barbosa2010robust}, attempt to conduct such analyses on user-generated content. Sentiment analysis remains a hard problem, due to the challenges it poses at the various levels, as summarized below. \subsection{Lexical Challenges} Sentiment analyzers face the following three challenges at the lexical level: (1) \textbf{Data Sparsity}, \emph{i.e.}, handling the presence of unseen words/phrases. (e.g., \emph{The movie is messy, uncouth, incomprehensible, vicious and absurd}) (2) \textbf{Lexical Ambiguity}, \textit{e.g.,} finding appropriate senses of a word given the context (e.g., \emph{His face fell when he was dropped from the team} vs \emph{The boy fell from the bicycle}, where the verb ``fell'' has to be disambiguated) (3) \textbf{Domain Dependency}, tackling words that change polarity across domains. (\textit{e.g.,} the word \emph{unpredictable} being positive in case of \emph{unpredictable movie} in movie domain and negative in case of \emph{unpredictable steering} in car domain). Several methods have been proposed to address the different lexical level difficulties by - (a) using WordNet synsets and word cluster information to tackle lexical ambiguity and data sparsity \cite{akkaya2009subjectivity,balamurali2011harnessing,go2009twitter,maas2011learning,popat2013haves,saif2012alleviating} and (b) mining domain dependent words \cite{sharma2013detecting,wiebe2006word}. 
\subsection{Syntactic Challenges} Difficulty at the syntax level arises when the given text follows a complex phrasal structure and, \emph{phrase attachments} are expected to be resolved before performing SA. For instance, the sentence \emph{A somewhat crudely constructed but gripping, questing look at a person so racked with self-loathing, he becomes an enemy to his own race.} requires processing at the syntactic level, before analyzing the sentiment. Approaches leveraging syntactic properties of text include generating dependency based rules for SA \cite{poria2014sentic} and leveraging local dependency \cite{li2010sentiment}. \subsection{Semantic and Pragmatic Challenges} This corresponds to the difficulties arising in the higher layers of NLP, \emph{i.e.,} semantic and pragmatic layers. Challenges in these layers include handling: (a) Sentiment expressed implicitly (\textit{e.g.,} \emph{Guy gets girl, guy loses girl, audience falls asleep.)} (b) Presence of sarcasm and other forms of irony (\textit{e.g.,} \emph{This is the kind of movie you go because the theater has air-conditioning.}) and (c) Thwarted expectations (\textit{e.g.,} \emph{The acting is fine. Action sequences are top-notch. Still, I consider it as a below average movie due to its poor storyline.}). Such challenges are extremely hard to tackle with traditional NLP tools, as these need both linguistic and pragmatic knowledge. Most attempts towards handling \emph{thwarting} \cite{ramteke2013detecting} and \emph{sarcasm and irony} \cite{carvalho2009clues,riloff2013sarcasm,liebrecht2013perfect,maynard2014cares,barbieri2014modelling,joshi2015harnessing}, rely on distant supervision based techniques (\textit{e.g.,} leveraging hashtags) and/or stylistic/pragmatic features (emoticons, laughter expressions such as ``lol'' {\it etc}). Addressing difficulties for linguistically well-formed texts, in absence of explicit cues (like emoticons), proves to be difficult using textual/stylistic features alone. \subsection{Introducing Cognitive Features} We empower our systems by augmenting cognitive features along with traditional linguistic features used for general sentiment analysis, thwarting and sarcasm detection. Cognitive features are derived from the eye-movement patterns of human annotators recorded while they annotate short-text with sentiment labels. Our hypothesis is that cognitive processes in the brain are related to eye-movement activities \cite{parasuraman2006neuroergonomics}. Hence, considering readers' eye-movement patterns while they read sentiment bearing texts may help tackle linguistic nuances better. We perform statistical classification using various classifiers and different feature combinations. With our augmented feature-set, we observe a significant improvement of accuracy across all classifiers for two different datasets. Experiments on a carefully curated held-out dataset indicate a significant improvement in sentiment polarity detection over the state of the art, specifically text with complex constructs like irony and sarcasm. Through feature significance analysis, we show that cognitive features indeed empower sentiment analyzers to handle complex constructs like irony and sarcasm. Our approach is the first of its kind to the best of our knowledge. We share various resources and data related to this work at \url{http://www.cfilt.iitb.ac.in/cognitive-nlp} The rest of the paper is organized as follows. 
Section \ref{sec:relatedwork} presents a summary of past work done in traditional SA and SA from a psycholinguistic point of view. Section \ref{sec:datasets} describes the available datasets we have taken for our analysis. Section \ref{sec:features} presents our features that comprise both traditional textual features, used for sentiment analysis and cognitive features derived from annotators' eye-movement patterns. In section \ref{sec:predictiveframework}, we discuss the results for various sentiment classification techniques under different combinations of textual and cognitive features, showing the effectiveness of cognitive features. In section \ref{sec:feasibility}, we discuss on the feasibility of our approach before concluding the paper in section \ref{sec:conclusion}. \section{Related Work} \label{sec:relatedwork} Sentiment classification has been a long standing NLP problem with both supervised \cite{pang2002thumbs,benamara2007sentiment,martineau2009delta} and unsupervised \cite{mei2007topic,lin2009joint} machine learning based approaches existing for the task. Supervised approaches are popular because of their superior classification accuracy \cite{mullen2004sentiment,pang2008opinion} and in such approaches, feature engineering plays an important role. Apart from the commonly used bag-of-words features based on unigrams, bigrams etc. \cite{dave2003mining,ng2006examining}, syntactic properties \cite{martineau2009delta,nakagawa2010dependency}, semantic properties \cite{balamurali2011harnessing} and effect of negators. \newcite{ikeda2008learning} are also used as features for the task of sentiment classification. The fact that sentiment expression may be complex to be handled by traditional features is evident from a study of comparative sentences by \newcite{ganapathibhotla2008mining}. This, however has not been addressed by feature based approaches. Eye-tracking technology has been used recently for sentiment analysis and annotation related research (apart from the huge amount of work in psycholinguistics that we find hard to enlist here due to space limitations). \newcite{joshi2014measuring} develop a method to measure the sentiment annotation complexity using cognitive evidence from eye-tracking. \newcite{mishra2014cognitive} study sentiment detection, and subjectivity extraction through anticipation and homing, with the use of eye tracking. Regarding other NLP tasks, \newcite{joshi2013more} propose a studied the cognitive aspects if Word Sense Disambiguation (WSD) through eye-tracking. Earlier, \newcite{mishra2013automatically} measure translation annotation difficulty of a given sentence based on gaze input of translators used to label training data. \newcite{klerke2016improving} present a novel multi-task learning approach for sentence compression using labelled data, while, \newcite{barrett-sogaard:2015:CogACLL} discriminate between grammatical functions using gaze features. The recent advancements in the literature discussed above, motivate us to explore gaze-based cognition for sentiment analysis. We acknowledge that some of the well performing sentiment analyzers use Deep Learning techniques (like Convolutional Neural Network based approach by \newcite{maas2011learning} and Recursive Neural Network based approach by \newcite{dos2014deep}). In these, the features are automatically learned from the input text. Since our approach is feature based, we do not consider these approaches for our current experimentation. 
Taking inputs from gaze data and using them in a deep learning setting sounds intriguing, though, it is beyond the scope of this work. \section{Eye-tracking and Sentiment Analysis Datasets} \label{sec:datasets} We use two publicly available datasets for our experiments. Dataset $1$ has been released by \newcite{sarcasmunderstandability} which they use for the task of \emph{sarcasm understandability} prediction. Dataset $2$ has been used by \newcite{joshi2014measuring} for the task of sentiment annotation complexity prediction. These datasets contain many instances with higher level nuances like presence of implicit sentiment, sarcasm and thwarting. We describe the datasets below. \subsection{Dataset 1} It contains $994$ text snippets with $383$ positive and $611$ negative examples. Out of this, $350$ are sarcastic or have other forms of irony. The snippets are a collection of reviews, normalized-tweets and quotes. Each snippet is annotated by seven participants with binary positive/negative polarity labels. Their eye-movement patterns are recorded with a high quality \emph{SR-Research Eyelink-$1000$ eye-tracker} (sampling rate $500$Hz). The annotation accuracy varies from $70\%-90\%$ with a Fleiss kappa inter-rater agreement of $0.62$. \subsection{Dataset $2$} This dataset consists of $1059$ snippets comprising movie reviews and normalized tweets. Each snippet is annotated by five participants with positive, negative and objective labels. Eye-tracking is done using a low quality Tobii T$120$ eye-tracker (sampling rate $120$Hz). The annotation accuracy varies from $75\%-85\%$ with a Fleiss kappa inter-rater agreement of $0.68$. We rule out the objective ones and consider $843$ snippets out of which $443$ are positive and $400$ are negative. \subsection{Performance of Existing SA Systems Considering Dataset -$1$ and $2$ as Test Data} \label{sec:perf} It is essential to check whether our selected datasets really pose challenges to existing sentiment analyzers or not. For this, we implement two statistical classifiers and a rule based classifier to check the test accuracy of Dataset $1$ and Dataset $2$. The statistical classifiers are based on Support Vector Machine (SVM) and N{\"a}ive Bayes (NB) implemented using Weka~\cite{hall2009weka} and LibSVM~\cite{libsvm2011} APIs. These are on trained on 10662 snippets comprising movie reviews and tweets, randomly collected from standard datasets released by \newcite{pang2004sentimental} and Sentiment 140 (\url{http://www.sentiment140.com/}). The feature-set comprises traditional features for SA reported in a number of papers. They are discussed in section ~\ref{sec:features} under the category of \emph{Sentiment Features}. The \textit{in-house} rule based (RB) classifier decides the sentiment labels based on the counts of positive and negative words present in the snippet, computed using MPQA lexicon \cite{wilson2005recognizing}. It also considers negators as explained by \newcite{Jia:2009:ENS:1645953.1646241} and intensifiers as explained by \newcite{dragut2014role}. Table~\ref{tab:traditionalClassifiers} presents the accuracy of the three systems. The F-scores are not very high for all the systems (especially for dataset 1 that contains more sarcastic/ironic texts), possibly indicating that the snippets in our dataset pose challenges for existing sentiment analyzers. Hence, the selected datasets are ideal for our current experimentation that involves cognitive features. 
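For reference, the counting logic of the RB baseline can be sketched in a few lines of Python (a hedged illustration only: the tiny lexicon and the negator/intensifier lists below stand in for MPQA and our in-house resources, and the exact rules of the implemented classifier may differ):

\begin{verbatim}
POSITIVE = {"good", "great", "gripping", "fine"}          # stand-in for MPQA positives
NEGATIVE = {"bad", "poor", "messy", "absurd", "vicious"}  # stand-in for MPQA negatives
NEGATORS = {"not", "no", "never"}
INTENSIFIERS = {"very", "extremely", "so"}

def rule_based_polarity(sentence):
    """Count polar words, flipping polarity after a negator and boosting
    the weight after an intensifier; label by the higher score."""
    toks = sentence.lower().split()
    pos = neg = 0.0
    for i, tok in enumerate(toks):
        if tok not in POSITIVE and tok not in NEGATIVE:
            continue
        weight = 2.0 if i > 0 and toks[i - 1] in INTENSIFIERS else 1.0
        flipped = i > 0 and toks[i - 1] in NEGATORS
        is_pos = (tok in POSITIVE) != flipped
        if is_pos:
            pos += weight
        else:
            neg += weight
    return "positive" if pos >= neg else "negative"

print(rule_based_polarity("the storyline is not good and the acting is poor"))
\end{verbatim}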
\section{Enhanced feature set for SA} \label{sec:features} Our feature-set into four categories \emph{viz.} (1) Sentiment features (2) Sarcasm, Irony and Thwarting related Features (3) Cognitive features from eye-movement (4) Textual features related to reading difficulty. We describe our feature-set below. \subsection{Sentiment Features} We consider a series of textual features that have been extensively used in sentiment literature \cite{liu2012survey}. The features are described below. Each feature is represented by a unique abbreviated form, which are used in the subsequent discussions. \begin{enumerate} \item \textbf{Presence of Unigrams (NGRAM_PCA) \textit{i.e.}} Presence of unigrams appearing in each sentence that also appear in the vocabulary obtained from the training corpus. To avoid overfitting (since our training data size is less), we reduce the dimension to 500 using Principal Component Analysis. \item \textbf{Subjective words (Positive\_words,\\Negative\_words) \textit{i.e.}} Presence of positive and negative words computed against MPQA lexicon \cite{wilson2005recognizing}, a popular lexicon used for sentiment analysis. \item \textbf{Subjective scores (PosScore, NegScore) \textit{i.e.}} Scores of positive subjectivity and negative subjectivity using SentiWordNet \cite{esuli2006sentiwordnet}. \item \textbf{Sentiment flip count (FLIP) \textit{i.e.}} Number of times words polarity changes in the text. Word polarity is determined using MPQA lexicon. \item \textbf{Part of Speech ratios (VERB, NOUN, ADJ, ADV) \textit{i.e.}} Ratios (proportions) of verbs, nouns, adjectives and adverbs in the text. This is computed using NLTK\footnote{\url{http://www.nltk.org/}}. \item \textbf{Count of Named Entities (NE) \textit{i.e.}} Number of named entity mentions in the text. This is computed using NLTK. \item \textbf {Discourse connectors (DC) \textit{i.e.}} Number of discourse connectors in the text computed using an in-house list of discourse connectors (like \emph{however}, \emph{although} etc.) \end{enumerate} \subsection{Sarcasm, Irony and Thwarting related Features} To handle complex texts containing constructs irony, sarcasm and thwarted expectations as explained earlier, we consider the following features. The features are taken from \newcite{riloff2013sarcasm}, \newcite{ramteke2013detecting} and \newcite{joshi2015harnessing}. \begin{enumerate} \item \textbf{Implicit incongruity (IMPLICIT\_PCA) \textit{i.e.}} Presence of positive phrases followed by negative situational phrase (computed using bootstrapping technique suggested by \newcite{riloff2013sarcasm}). We consider the top 500 principal components of these phrases to reduce dimension, in order to avoid overfitting. \item \textbf{Punctuation marks (PUNC) \textit{i.e.}} Count of punctuation marks in the text. \item \textbf{Largest pos/neg subsequence (LAR) \textit{i.e.}} Length of the largest series of words with polarities unchanged. Word polarity is determined using MPQA lexicon. \item \textbf{Lexical polarity (LP) \textit{i.e.}} Sentence polarity found by supervised logistic regression using the dataset used by \newcite{joshi2015harnessing}. \end{enumerate} \subsection{Cognitive features from eye-movement} Eye-movement patterns are characterized by two basic attributes: $(1)$ Fixations, corresponding to a longer stay of gaze on a visual object (like characters, words \textit{etc.} in text) $(2)$ Saccades, corresponding to the transition of eyes between two fixations. 
Moreover, a saccade is called a \emph{Regressive Saccade} or simply, \emph{Regression} if it represents a phenomenon of going back to a pre-visited segment. A portion of a text is said to be \emph{skipped} if it does not have any fixation. Figure \ref{fig:screenshot} shows eye-movement behavior during annotation of the given sentence in dataset-$1$. The circles represent fixation and the line connecting the circles represent saccades. Our cognition driven features are derived from these basic eye-movement attributes. We divide our features in two sets as explained ahead. \subsection{Basic gaze features} Readers' eye-movement behavior, characterized by fixations, forward saccades, skips and regressions, can be directly quantified by simple statistical aggregation (\textit{i.e.,} computing features for individual participants and then averaging). Since these behaviors intuitively relate to the cognitive process of the readers \cite{rayner1994eye}, we consider simple statistical properties of these factors as features to our model. Some of these features have been reported by \newcite{sarcasmunderstandability} for modeling sarcasm understandability of readers. However, as far as we know, these features are being introduced in NLP tasks like sentiment analysis for the first time. \begin{enumerate} \item \textbf{Average First-Fixation Duration per word (FDUR) \textit{i.e.}} Sum of \emph{first-fixation duration} divided by word count. First fixations are fixations occurring during the first pass reading. Intuitively, an increased first fixation duration is associated to more time spent on the words, which accounts for lexical complexity. This is motivated by \newcite{rayner1986lexical}. \item \textbf{Average Fixation Count (FC) \textit{i.e.}} Sum of fixation counts divided by word count. If the reader reads fast, the first fixation duration may not be high even if the lexical complexity is more. But the number of fixations may increase on the text. So, fixation count may help capture lexical complexity in such cases. \item \textbf{Average Saccade Length (SL) \textit{i.e.}} Sum of saccade lengths (measured by number of words) divided by word count. Intuitively, lengthy saccades represent the text being structurally/syntactically complex. This is also supported by \newcite{von2011scanpath}. \item \textbf{Regression Count (REG) \textit{i.e.}} Total number of gaze regressions. Regressions correspond to both lexical and syntactic re-analysis \cite{malsburg2015determinants}. Intuitively, regression count should be useful in capturing both syntactic and semantic difficulties. \item \textbf{Skip count (SKIP) \textit{i.e.}} Number of words skipped divided by total word count. Intuitively, higher skip count should correspond lesser semantic processing requirement (assuming that skipping is not done intentionally). \item \textbf{Count of regressions from second half to first half of the sentence (RSF) \textit{i.e.}} Number of regressions from second half of the sentence to the first half of the sentence (given the sentence is divided into two equal half of words). Constructs like sarcasm, irony often have phrases that are incongruous (\textit{e.g.} \emph{"The book is so great that it can be used as a paperweight"- the incongruous phrases are "book is so great" and "used as a paperweight".}. Intuitively, when a reader encounters such incongruous phrases, the second phrases often cause a surprisal resulting in a long regression to the first part of the text. Hence, this feature is considered. 
\item \textbf{Largest Regression Position (LREG) \textit{i.e.}} Ratio of the absolute position of the word from which a regression with the largest amplitude (in terms of number of characters) is observed, to the total word count of sentence. This is chosen under the assumption that regression with the maximum amplitude may occur from the portion of the text which causes maximum surprisal (in order to get more information about the portion causing maximum surprisal). The relative starting position of such portion, captured by LREG, may help distinguish between sentences with different linguistic subtleties. \end{enumerate} \subsection{Complex gaze features} We propose a graph structure constructed from the gaze data to derive more complex gaze features. We term the graph as \emph{gaze-saliency graphs}. \emph{A gaze-saliency graph for a sentence $S$ for a reader $R$, represented as $G= (V,E)$, is a graph with vertices ($V$) and edges ($E$) where each vertex $v \in V$ corresponds to a word in $S$ (may not be unique) and there exists an edge $e \in E$ between vertices $v_1$ and $v_2$ if R performs at least one saccade between the words corresponding to $v1$ and $v2$}. Figure \ref{fig:saliency} shows an example of such a graph. \begin{enumerate} \item \textbf{Edge density of the saliency gaze graph (ED) \textit{i.e.}} Ratio of number of edges in the gaze saliency graph and total number of possible links ($(|V|\times|V|-1|)/2$) in the saliency graph. As, \emph{Edge Density} of a saliency graph increases with the number of distinct saccades, it is supposed to increase if the text is semantically more difficult. \item \textbf{Fixation Duration at Left/Source as Edge Weight (F1H, F1S) \textit{i.e.}} Largest weighted degree (F1H) and second largest weighted degree (F1S) of the saliency graph considering the fixation duration on the word of node $i$ of edge $E_{ij}$ as edge weight. \item \textbf{Fixation Duration at Right/Target as Edge Weight (F2H, F2S) \textit{i.e.}} Largest weighted degree (F2H) and second largest weighted degree (F2S) of the saliency graph considering the fixation duration of the word of node $i$ of edge $E_{ij}$ as edge weight. \item \textbf{Forward Saccade Count as Edge Weight (FSH, FSS) \textit{i.e.}} Largest weighted degree (FSH) and second largest weighted degree (FSS) of the saliency graph considering the number of forward saccades between nodes $i$ and $j$ of an edge $E_{ij}$ as edge weight.. \item \textbf{Forward Saccade Distance as Edge Weight (FSDH, FSDS) \textit{i.e.}} Largest weighted degree (FSDH) and second largest weighted degree (FSDS) of the saliency graph considering the total distance (word count) of forward saccades between nodes $i$ and $j$ of an edge $E_{ij}$ as edge weight. \item \textbf{Regressive Saccade Count as Edge Weight (RSH, RSS) \textit{i.e.}} Largest weighted degree (RSH) and second largest weighted degree (RSS) of the saliency graph considering the number of regressive saccades between nodes $i$ and $j$ of an edge $E_{ij}$ as edge weight. \item \textbf{Regressive Saccade Distance as Edge Weight (RSDH, RSDS) \textit{i.e.}} Largest weighted degree (RSDH) and second largest weighted degree (RSDS) of the saliency graph considering the number of regressive saccades between nodes $i$ and $j$ of an edge $E_{ij}$ as edge weight. \end{enumerate} The "highest and second highest degree" based gaze features derived from saliency graphs are motivated by our qualitative observations from the gaze data. 
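Concretely, such graph features can be computed as in the following Python sketch (our illustrative reconstruction; the scanpath format, the plain-dictionary graph, and the choice of regressive-saccade counts as the example edge weight are assumptions, not the original implementation):

\begin{verbatim}
from collections import defaultdict

def saliency_graph(scanpath):
    """scanpath: list of (word_index, fixation_duration_ms) in reading order.
    Nodes are fixated word positions; an edge (i, j) exists if at least one
    saccade was made between words i and j.  Edge weights here count
    regressive saccades (one of the weighting schemes described above)."""
    edges = defaultdict(int)
    nodes = {w for w, _ in scanpath}
    for (w1, _), (w2, _) in zip(scanpath, scanpath[1:]):
        if w1 == w2:
            continue
        key = tuple(sorted((w1, w2)))
        edges[key] += 1 if w2 < w1 else 0   # forward-only edges keep weight 0
    return nodes, dict(edges)

def edge_density(nodes, edges):
    n = len(nodes)
    possible = n * (n - 1) / 2
    return len(edges) / possible if possible else 0.0

def top_two_weighted_degrees(nodes, edges):
    deg = defaultdict(float)
    for (a, b), w in edges.items():
        deg[a] += w
        deg[b] += w
    top = sorted(deg.values(), reverse=True) + [0.0, 0.0]
    return top[0], top[1]   # e.g. RSH and RSS for regression-count weights
\end{verbatim}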
Intuitively, the highest weighted degree of a graph is expected to be higher if some phrases have complex semantic relationships with others. \subsection{Features Related to Reading Difficulty} Eye-movement during reading text with sentiment related nuances (like sarcasm) can be similar to text with other forms of difficulties. To address the effect of sentence length, word length and syllable count that affect reading behavior, we consider the following features. \begin{enumerate} \item \textbf{Readability Ease (RED) \textit{i.e.}} Flesch Readability Ease score of the text \cite{kincaid1975derivation}. Higher the score, easier is the text to comprehend. \item \textbf {Sentence Length (LEN) \textit{i.e.}} Number of words in the sentence. \end{enumerate} We now explain our experimental setup and results. \section{Experiments and results} \input{featureCombinationsTable.tex} \label{sec:predictiveframework} We test the effectiveness of the enhanced feature-set by implementing three classifiers \textit{viz.,} SVM (with linear kernel), NB and Multi-layered Neural Network. These systems are implemented using the Weka~\cite{hall2009weka} and LibSVM~\cite{libsvm2011} APIs. Several classifier hyperparameters are kept to the default values given in Weka. We separately perform a 10-fold cross validation on both Dataset 1 and 2 using different sets of feature combinations. The average F-scores for the class-frequency based random classifier are $33\%$ and $46.93\%$ for dataset 1 and dataset 2 respectively. The classification accuracy is reported in Table~\ref{tab:featureCombinations}. We observe the maximum accuracy with the complete feature-set comprising Sentiment, Sarcasm and Thwarting, and Cognitive features derived from gaze data. For this combination, SVM outperforms the other classifiers. The novelty of our feature design lies in (a) First augmenting sarcasm and thwarting based features (\emph{Sr}) with sentiment features (\emph{Sn}), which shoots up the accuracy by $3.1\%$ for Dataset1 and $7.8\%$ for Dataset2 (b) Augmenting gaze features with \emph{Sn+Sr}, which further increases the accuracy by $0.6\%$ and $1.5\%$ for Dataset 1 and 2 respectively, amounting to an overall improvement of $3.7\%$ and $9.3\%$ respectively. It may be noted that the addition of gaze features may seem to bring meager improvements in the classification accuracy but the improvements are consistent across datasets and several classifiers. Still, we speculate that aggregating various eye-tracking parameters to extract the cognitive features may have caused loss of information, there by limiting the improvements. For example, the graph based features are computed for each participant and eventually averaged to get the graph features for a sentence, thereby not leveraging the power of individual eye-movement patterns. We intend to address this issue in future. Since the best ($Sn+Sr+Gz$) and the second best feature ($Sn+Sr$) combinations are close in terms of accuracy (difference of $0.6\%$ for dataset $\textit{1}$ and $1.5\%$ for dataset $\textit{2}$), we perform a statistical significance test using McNemar test ($\alpha = 0.05$). The difference in the F-scores turns out to be strongly significant with $p=0.0060$ (The odds ratio is $0.489$, with a $95\%$ confidence interval). However, the difference in the F-scores is not statistically significant ($p=0.21$) for dataset $\textit{2}$ for the best and second best feature combinations. 
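For completeness, the McNemar statistic used in these significance checks can be computed as below (a generic sketch with hypothetical disagreement counts, not our evaluation script; the continuity-corrected formula and SciPy's chi-squared survival function are standard choices we assume here):

\begin{verbatim}
from scipy.stats import chi2

def mcnemar_p(b, c):
    """b: items classifier A gets right and B gets wrong; c: the reverse.
    Continuity-corrected McNemar statistic, one degree of freedom."""
    if b + c == 0:
        return 1.0
    stat = (abs(b - c) - 1) ** 2 / (b + c)
    return chi2.sf(stat, df=1)

# hypothetical disagreement counts between two feature combinations
print(mcnemar_p(b=48, c=27))   # p < 0.05 would indicate a significant difference
\end{verbatim}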
\subsection{Importance of cognitive features} We perform a \emph{chi-squared test} based feature significance analysis, shown in Table \ref{tab:sig}. For dataset 1, $10$ out of the top $20$ ranked features are gaze-based features and for dataset 2, $7$ out of top $20$ features are gaze-based, as shown in bold letters. Moreover, if we consider gaze features alone for feature ranking using chi-squared test, features \textit{FC}, \textit{SL}, \textit{FSDH}, \textit{FSDS}, \textit{RSDH} and \textit{RSDS} turn out to be insignificant. To study whether the cognitive features actually help in classifying complex output as hypothesized earlier, we repeat the experiment on a held-out dataset, randomly derived from Dataset 1. It has $294$ text snippets out of which $131$ contain complex constructs like irony/sarcasm and rest of the snippets are relatively simpler. We choose SVM, our best performing classifier, with similar configuration as explained in section \ref{sec:predictiveframework}. \input{exampleTable.tex} As seen in Table \ref{tab:heldout}, the relative improvement of F-score, when gaze features are included, is $6.1\%$ for complex texts and is $2.1\%$ for simple texts (all the values are statistically significant with $p<0.05$ for McNemar test, except $Sn$ and $Sn+Sr$ for Non-irony case.). This demonstrates the efficacy of the gaze based features. Table~\ref{tab:example} shows a few example cases (obtained from test folds) showing the effectiveness of our enhanced feature set. \section{Feasibility of our approach} \label{sec:feasibility} Since our method requires gaze data from human readers to be available, the methods practicability becomes questionable. We present our views on this below. \subsection{Availability of Mobile Eye-trackers} Availability of inexpensive embedded eye-trackers on hand-held devices has come close to reality now. This opens avenues to get eye-tracking data from inexpensive mobile devices from a huge population of online readers non-intrusively, and derive cognitive features to be used in predictive frameworks like ours. For instance, \emph{Cogisen: (http://www.sencogi.com)} has a patent (ID: EP2833308-A1) on ``eye-tracking using inexpensive mobile web-cams". \newcite{wood2014eyetab} have introduced \textit{EyeTab}, a model-based approach for binocular gaze estimation that runs entirely on tablets. \subsection{Applicability Scenario} We believe, mobile eye-tracking modules could be a part of mobile applications built for e-commerce, online learning, gaming \emph{etc.} where automatic analysis of online reviews calls for better solutions to detect and handle linguistic nuances in sentiment analysis setting. To give an example, let's say a book gets different reviews on Amazon. Our system could watch how readers read the review using mobile eye-trackers, and thereby, decide the polarity of opinion, especially when sentiment is not expressed explicitly (\textit{e.g.,} using strong polar words) in the text. Such an application can horizontally scale across the web, helping to improve automatic classification of online reviews. \subsection{Getting Users' Consent for Eye-tracking} Eye-tracking technology has already been utilized by leading mobile technology developers (like Samsung) to facilitate richer user experiences through services like \emph{Smart-scroll} (where a user's eye movement determines whether a page has to be scrolled or not) and \emph{Smart-lock} (where user's gaze position decided whether to lock the screen or not). 
The growing interest of users in using such services takes us to a promising situation where getting users' consent to record eye-movement patterns will not be difficult, though it is not yet the current state of affairs. \section{Conclusion} \label{sec:conclusion} We combined traditional sentiment features with (a) different textual features used for sarcasm and thwarting detection, and (b) cognitive features derived from readers' eye movement behavior. The combined feature set improves the overall accuracy over SA based on the traditional feature set by a margin of $3.7\%$ and $9.3\%$ respectively for Datasets $1$ and $2$. It is significantly effective for text with complex constructs, leading to an improvement of $6.1\%$ on our held-out data. In future work, we propose to explore (a) devising deeper gaze-based features and (b) \emph{multi-view} classification using independent learning from linguistic and cognitive data. We also plan to explore deeper graph and gaze features, and models to learn complex gaze feature representations. Our general approach may be useful in other problems like emotion analysis, text summarization and question answering, where textual clues alone do not prove to be sufficient. \section*{Acknowledgments} We thank the members of CFILT Lab, especially Jaya Jha and Meghna Singh, and the students of IIT Bombay for their help and support. \bibliographystyle{conll2016} \end{document}
Sequential Matching Network: A New Architecture for Multi-turn Response Selection in Retrieval-Based Chatbots
1612.01627
Table 4: Evaluation results of model ablation.
[ "[EMPTY]", "[BOLD] Ubuntu Corpus", "[BOLD] Ubuntu Corpus", "[BOLD] Ubuntu Corpus", "[BOLD] Ubuntu Corpus", "[BOLD] Douban Conversation Corpus", "[BOLD] Douban Conversation Corpus", "[BOLD] Douban Conversation Corpus", "[BOLD] Douban Conversation Corpus", "[BOLD] Douban Conversation Corpus", "[BOLD] Douban Conversation Corpus" ]
[ [ "[EMPTY]", "R2@1", "R10@1", "R10@2", "R10@5", "MAP", "MRR", "P@1", "R10@1", "R10@2", "R10@5" ], [ "Replace [ITALIC] M", "0.905", "0.661", "0.799", "0.950", "0.503", "0.541", "0.343", "0.201", "0.364", "0.729" ], [ "Replace [ITALIC] A", "0.918", "0.716", "0.832", "0.954", "0.522", "0.565", "0.376", "0.220", "0.385", "0.727" ], [ "Only [ITALIC] M1", "0.919", "0.704", "0.832", "0.955", "0.518", "0.562", "0.370", "0.228", "0.371", "0.737" ], [ "Only [ITALIC] M2", "0.921", "0.715", "0.836", "0.956", "0.521", "0.565", "0.382", "0.232", "0.380", "0.734" ], [ "SMN [ITALIC] last", "0.923", "0.723", "0.842", "0.956", "0.526", "0.571", "0.393", "0.236", "0.387", "0.729" ] ]
First, replacing the multi-channel “2D” matching with a neural tensor network (NTN) Socher et al. (denoted as ReplaceM) makes the performance drop dramatically. This is because the NTN matches a pair using only an utterance vector and a response vector, and therefore loses important information in the pair. Together with the visualization, we can conclude that “2D” matching plays a key role in the “matching first” strategy, as it captures the important matching information in each pair with minimal loss. Second, the performance drops slightly when the GRU for matching accumulation is replaced with a multi-layer perceptron (denoted as ReplaceA). This indicates that utterance relationships are useful. Finally, when we leave only one channel in matching, we find that M2 is slightly more powerful than M1, and we achieve the best results with both channels together (except on R10@5 on the Douban Corpus).
\documentclass[11pt,a4paper]{article} \usepackage[hyperref]{acl2017} % http://ctan.org/pkg/pifont % http://ctan.org/pkg/amssymb \newcommand{\thickhline}{\noalign{\hrule height 1pt}} \def\aclpaperid{37} \newcommand{\cmark}{\ding{51}}% \newcommand{\xmark}{\ding{55}}% \aclfinalcopy \newcommand\BibTeX{B{\sc ib}\TeX} \title{Sequential Matching Network: A New Architecture for Multi-turn Response Selection in Retrieval-Based Chatbots} \author{ Yu Wu$^\dag$, Wei Wu$^\ddag$, Chen Xing$^\diamondsuit$, Zhoujun Li$^\dag$\thanks{~~~Corresponding Author}~, Ming Zhou$^\ddag$~~~~\\ $^\dag$State Key Lab of Software Development Environment, Beihang University, Beijing, China\\ $^ \diamondsuit$College of Computer and Control Engineering, Nankai University, Tianjin, China\\ $^\ddag$~~~~Microsoft Research, Beijing, China\\ \{wuyu,lizj\}@buaa.edu.cn \{wuwei,v-chxing,mingzhou\}@microsoft.com } \date{} \begin{document} \maketitle \begin{abstract} We study response selection for multi-turn conversation in retrieval-based chatbots. Existing work either concatenates utterances in context or matches a response with a highly abstract context vector finally, which may lose relationships among utterances or important contextual information. We propose a sequential matching network (SMN) to address both problems. SMN first matches a response with each utterance in the context on multiple levels of granularity, and distills important matching information from each pair as a vector with convolution and pooling operations. The vectors are then accumulated in a chronological order through a recurrent neural network (RNN) which models relationships among utterances. The final matching score is calculated with the hidden states of the RNN. An empirical study on two public data sets shows that SMN can significantly outperform state-of-the-art methods for response selection in multi-turn conversation. \end{abstract} \section{Introduction} Conversational agents include task-oriented dialog systems and non-task-oriented chatbots. Dialog systems focus on helping people complete specific tasks in vertical domains \cite{young2010hidden}, while chatbots aim to naturally and meaningfully converse with humans on open domain topics \cite{ritter2011data}. Existing work on building chatbots includes generation -based methods and retrieval-based methods. Retrieval based chatbots enjoy the advantage of informative and fluent responses, because they select a proper response for the current conversation from a repository with response selection algorithms. While most existing work on retrieval-based chatbots studies response selection for single-turn conversation \cite{wang2013dataset} which only considers the last input message, we consider the problem in a multi-turn scenario. In a chatbot, multi-turn response selection takes a message and utterances in its previous turns as input and selects a response that is natural and relevant to the whole context. The key to response selection lies in input-response matching. Different from single-turn conversation, multi-turn conversation requires matching between a response and a conversation context %\footnote{In this paper, both ``session'' and ``context'' refer to an input message and utterances in its previous turns.} in which one needs to consider not only the matching between the response and the input message but also matching between responses and utterances in previous turns. 
The challenges of the task include (1) how to identify important information (words, phrases, and sentences) in context, which is crucial to selecting a proper response and leveraging relevant information in matching; and (2) how to model relationships among the utterances in the context. %How much important information an utterance contains also indicates its importance to response selection. Table \ref{example1} illustrates the challenges with an example. First, ``hold a drum class'' and ``drum'' in context are very important. %to response selection for the session. Without them, one may find responses relevant to the message (i.e., the fifth utterance of the context) but nonsense in the context (e.g., ``what lessons do you want?''). %On the other hand, although ``Shanghai'' and ``Lujiazui'' are keywords in their utterances, they are useless and even noise to response selection. It is crucial yet non-trivial to extract the important information from the context and leverage them in matching while circumvent the noise. Second, the message highly depends on the second utterance in the context, and the order of the utterances matters in response selection: exchanging the third utterance and the fifth utterance may lead to different responses. %Third, because Context 1 contains much important information, it is more important than others in response selection. Existing work, however, either ignores relationships among utterances when concatenating them together \cite{lowe2015ubuntu}, or loses important information in context in the process of converting the whole context to a vector without enough supervision from responses (e.g., by a hierarchical RNN \cite{zhou2016multi}). We propose a sequential matching network (SMN), a new context based matching model that can tackle both challenges in an end-to-end way. The reason that existing models lose important information in the context is that they first represent the whole context as a vector and then match the context vector with a response vector. Thus, responses in these models connect with the context until the final step in matching. To avoid information loss, SMN matches a response with each utterance in the context at the beginning and encodes important information in each pair into a matching vector. The matching vectors are then accumulated in the utterances' temporal order to model their relationships. The final matching degree is computed with the accumulation of the matching vectors. Specifically, for each utterance-response pair, the model constructs a word-word similarity matrix and a sequence-sequence similarity matrix by the word embeddings and the hidden states of a recurrent neural network with gated recurrent units (GRU) \cite{chung2014empirical} respectively. The two matrices capture important matching information in the pair on a word level and a segment (word subsequence) level respectively, and the information is distilled and fused as a matching vector through an alternation of convolution and pooling operations on the matrices. By this means, important information from multiple levels of granularity in context is recognized under sufficient supervision from the response and carried into matching with minimal loss. The matching vectors are then uploaded to another GRU to form a matching score for the context and the response. The GRU accumulates the pair matching in its hidden states in the chronological order of the utterances in context. 
It models relationships and dependencies among the utterances in a matching fashion and has the utterance order supervise the accumulation of pair matching. %The gate mechanism of the GRU helps select important pairs and filter out noise. The matching degree of the context and the response is computed by a logit model with the hidden states of the GRU. SMN extends the powerful ``2D'' matching paradigm in text pair matching for single-turn conversation to context based matching for multi-turn conversation, and enjoys the advantage of both important information in utterance-response pairs and relationships among utterances being sufficiently preserved and leveraged in matching. We test our model on the Ubuntu dialogue corpus \cite{lowe2015ubuntu} which is a large-scale, publicly available English data set for research in multi-turn conversation. The results show that our model can significantly outperform state-of-the-art methods, and the improvement over the best baseline model on R$_{10}$@1 is over $6$\%. In addition to the Ubuntu corpus, we create a human-labeled Chinese data set, namely the Douban Conversation Corpus, and test our model on it. In contrast to the Ubuntu corpus in which data is collected from a specific domain and negative candidates are randomly sampled, conversations in this data come from the open domain, and response candidates in this data set are collected from a retrieval engine and labeled by three human judges. On this data, our model improves the best baseline model by $3$\% on R$_{10}$@1 and $4$\% on P@1. As far as we know, the Douban Conversation Corpus is the first human-labeled data set for multi-turn response selection and could be a good complement to the Ubuntu corpus. We have released the Douban Conversation Corpus and our source code at \url{https://github.com/MarkWuNLP/MultiTurnResponseSelection} %\url{https://github.com/MarkWuNLP/MultiTurnResponseSelection}. Our contributions in this paper are threefold: (1) the proposal of a new context based matching model for multi-turn response selection in retrieval-based chatbots; (2) the publication of a large human-labeled data set to research communities; (3) empirical verification of the effectiveness of the model on public data sets. \section{Related Work} Recently, building a chatbot with data-driven approaches \cite{ritter2011data,ji2014information} has drawn significant attention. Existing work along this line includes retrieval-based methods \cite{hu2014convolutional,ji2014information,wang2015syntax,DBLP:conf/sigir/YanSW16,DBLP:journals/corr/WuWLZ16,zhou2016multi,wu2016ranking} and generation-based methods \cite{DBLP:conf/acl/ShangLL15,serban2015building,vinyals2015neural,li2015diversity,li2016persona,xing2016topic,serban2016multiresolution}. Our work is a retrieval-based method, in which we study context-based response selection. Early studies of retrieval-based chatbots focus on response selection for single-turn conversation \cite{wang2013dataset,ji2014information,wang2015syntax,DBLP:journals/corr/WuWLZ16}. Recently, researchers have begun to pay attention to multi-turn conversation. For example, Lowe et al. \shortcite{lowe2015ubuntu} match a response with the literal concatenation of context utterances. Yan et al. \shortcite{DBLP:conf/sigir/YanSW16} concatenate context utterances with the input message as reformulated queries and perform matching with a deep neural network architecture. Zhou et al.
\shortcite{zhou2016multi} improve multi-turn response selection with a multi-view model including an utterance view and a word view. % The stark difference between our model and the existing models is that our model matches a response with each utterance at the very first and matching information instead of sentences is accumulated in a temporal manner through a GRU. Our model is different in that it matches a response with each utterance at first and accumulates matching information instead of sentences by a GRU, thus useful information for matching can be sufficiently retained. \section{Sequential Matching Network} \subsection{Problem Formalization}\label{probform} Suppose that we have a data set $\mathcal {D} = \{(y_i,s_i,r_i)\}_{i=1}^N$, where $s_i=\{u_{i,1}, \ldots, u_{i,n_i}\}$ represents a conversation context with $\{u_{i,k}\}_{k=1}^{n_i}$ as utterances. $r_i$ is a response candidate and $y_i\in \{0,1\}$ denotes a label. $y_i=1$ means $r_i$ is a proper response for $s_i$, otherwise $y_i=0$. Our goal is to learn a matching model $g(\cdot,\cdot)$ with $\mathcal{D}$. For any context-response pair $(s,r)$, $g(s,r)$ measures the matching degree between $s$ and $r$. \subsection{Model Overview} We propose a sequential matching network (SMN) to model $g(\cdot,\cdot)$. Figure \ref{fig:arch} gives the architecture. SMN first decomposes context-response matching into several utterance-response pair matching and then all pairs matching are accumulated as a context based matching through a recurrent neural network. SMN consists of three layers. The first layer matches a response candidate with each utterance in the context on a word level and a segment level, and important matching information from the two levels is distilled by convolution, pooling and encoded in a matching vector. %An utterance-response pair is transformed to a word-word similarity matrix and a sequence-sequence similarity matrix, The matching vectors are then fed into the second layer where they are accumulated in the hidden states of a recurrent neural network with GRU following the chronological order of the utterances in the context. The third layer calculates the final matching score with the hidden states of the second layer. SMN enjoys several advantages over existing models. First, a response candidate can match each utterance in the context at the very beginning, thus matching information in every utterance-response pair can be sufficiently extracted and carried to the final matching score with minimal loss. Second, information extraction from each utterance is conducted on different levels of granularity and under sufficient supervision from the response, thus semantic structures that are useful for response selection in each utterance can be well identified and extracted. Third, matching and utterance relationships are coupled rather than separately modeled, thus utterance relationships (e.g., order), as a kind of knowledge, can supervise the formation of the matching score. By taking utterance relationships into account, SMN extends the ``2D'' matching that has proven effective in text pair matching for single-turn response selection to sequential ``2D'' matching for context based matching in response selection for multi-turn conversation. In the following sections, we will describe details of the three layers. 
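A minimal sketch of this three-layer flow, assuming toy stand-ins for the pair matching and the accumulation (both are specified precisely in the following subsections), is:
\begin{verbatim}
# Skeleton of the three-layer flow described above (not the authors' code).
# match_pair and accumulate are trivial stand-ins for the CNN-based pair
# matching and the GRU accumulation detailed in the next subsections.
import numpy as np

def match_pair(utterance_vecs, response_vecs):
    """Layer 1 stand-in: distill one matching vector from an (utterance, response) pair."""
    sim = utterance_vecs @ response_vecs.T           # word-word similarity matrix
    return np.array([sim.max(), sim.mean()])         # placeholder for conv+pool features

def accumulate(matching_vectors):
    """Layer 2 stand-in: fold matching vectors in chronological order."""
    state = np.zeros_like(matching_vectors[0])
    for v in matching_vectors:                       # a GRU plays this role in SMN
        state = np.tanh(0.5 * state + 0.5 * v)
    return state

def g(context, response, w, b):
    """Layer 3: score the (context, response) pair from the accumulated state."""
    vs = [match_pair(u, response) for u in context]  # match response with every utterance first
    h = accumulate(vs)
    return 1.0 / (1.0 + np.exp(-(w @ h + b)))        # probability that the response is proper

rng = np.random.default_rng(0)
context = [rng.standard_normal((5, 8)) for _ in range(3)]   # 3 utterances, 5 words, dim 8
response = rng.standard_normal((6, 8))
print(g(context, response, w=rng.standard_normal(2), b=0.0))
\end{verbatim}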
\subsection{Utterance-Response Matching}\label{multi-channel} Given an utterance $u$ in a context $s$ and a response candidate $r$, the model looks up an embedding table and represents $u$ and $r$ as $\mathbf{U} = \left[e_{u,1},\ldots,e_{u,n_u}\right]$ and $\mathbf{R} = \left[e_{r,1},\ldots,e_{r,n_r}\right]$ respectively, where $e_{u,i}, e_{r,i} \in \mathbb{R} ^ d$ are the embeddings of the $i$-th word of $u$ and $r$ respectively. $\mathbf{U}$ $ \in \mathbb{R}^{d \times n_u}$ and $\mathbf{R}$ $ \in \mathbb{R}^{d \times n_r}$ are then used to construct a word-word similarity matrix $\mathbf{M}_1$ $ \in \mathbb{R}^{n_u \times n_r}$ and a sequence-sequence similarity matrix $\mathbf{M}_2$ $ \in \mathbb{R}^{n_u \times n_r}$ which are two input channels of a convolutional neural network (CNN). The CNN distills important matching information from the matrices and encodes the information into a matching vector $v$. Specifically, $\forall i,j$, the $(i,j)$-th element of $\mathbf{M}_1$ is defined by \begin{equation}\label{M1Element}\small e_{1,i,j} = e_{u,i}^{\top} \cdot e_{r,j}. \end{equation} $\mathbf{M}_1$ models the matching between $u$ and $r$ on a word level. To construct $\mathbf{M}_2$, we first employ a GRU to transform $\mathbf{U}$ and $\mathbf{R}$ to hidden vectors. Suppose that $\mathbf{H}_u = \left[h_{u,1},\ldots, h_{u,n_u}\right]$ are the hidden vectors of $\mathbf{U}$, then $\forall i$, $h_{u,i}\in \mathbb{R}^m$ is defined by \small \begin{eqnarray}\label{gru} \small && z_i = \sigma(\mathbf{W_z} e_{u,i} + \mathbf{U_z} {h}_{u,i-1}) \nonumber \\ && r_i = \sigma(\mathbf{W_r} e_{u,i} + \mathbf{U_r} {h}_{u,i-1}) \nonumber \\ && \widetilde{h}_{u,i} = tanh(\mathbf{W_h} e_{u,i} + \mathbf{U_h} (r_i \odot {h}_{u,i-1}))\nonumber \\ && h_{u,i} = z_i \odot \widetilde{h}_{u,i} + (1-z_i) \odot {h}_{u,i-1}, \end{eqnarray} \normalsize where $h_{u,0}=0$, $z_i$ and $r_i$ are an update gate and a reset gate respectively, $\sigma(\cdot)$ is a sigmoid function, and $\mathbf{W_z}$, $\mathbf{W_h}$, $\mathbf{W_r}$, $\mathbf{U_z}$, $\mathbf{U_r}$,$ \mathbf{U_h}$ are parameters. Similarly, we have $\mathbf{H}_r = \left[h_{r,1},\ldots, h_{r,n_r}\right]$ as the hidden vectors of $\mathbf{R}$. Then, $\forall i,j$, the $(i,j)$-th element of $\mathbf{M}_2$ is defined by \begin{equation}\label{M2Element} \small e_{2,i,j} = h_{u,i}^{\top} \mathbf{A} h_{r,j}, \end{equation} where $\mathbf{A} \in \mathbb{R}^{m \times m}$ is a linear transformation. $\forall i$, GRU models the sequential relationship and the dependency among words up to position $i$ and encodes the text segment until the $i$-th word to a hidden vector. Therefore, $\mathbf{M}_2$ models the matching between $u$ and $r$ on a segment level. $\mathbf{M}_1$ and $\mathbf{M}_2$ are then processed by a CNN to form $v$. $\forall f=1,2$, CNN regards $\mathbf{M}_f$ as an input channel, and alternates convolution and max-pooling operations. Suppose that $z^{(l,f)} = \left[ z^{(l,f)}_{i,j} \right]_{I^{(l,f)} \times J^{(l,f)}}$ denotes the output of feature maps of type-$f$ on layer-$l$, where $z^{(0,f)}= \mathbf{M}_f$, $\forall f = 1,2$. 
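A NumPy sketch of these definitions for a single utterance-response pair is given below, covering the GRU of Equation (\ref{gru}), the word-level matrix of Equation (\ref{M1Element}), and the segment-level matrix of Equation (\ref{M2Element}). The dimensions, the random initialisation, and the single convolution-plus-pooling pass at the end are illustrative assumptions rather than the model's actual configuration.
\begin{verbatim}
# Minimal NumPy sketch of the first layer for one (utterance, response) pair:
# the GRU above, the word-word matrix M1, and the segment-segment matrix M2.
# Sizes, initialisation, and the final conv+pool pass are assumptions.
import numpy as np

rng = np.random.default_rng(0)
d, m = 8, 6                                   # word-embedding and hidden sizes (assumed)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

def gru_states(E, Wz, Uz, Wr, Ur, Wh, Uh):
    """Run the GRU over word embeddings E (columns are words)."""
    h = np.zeros(m)
    states = []
    for e in E.T:
        z = sigmoid(Wz @ e + Uz @ h)          # update gate
        r = sigmoid(Wr @ e + Ur @ h)          # reset gate
        h_tilde = np.tanh(Wh @ e + Uh @ (r * h))
        h = z * h_tilde + (1 - z) * h
        states.append(h)
    return np.stack(states, axis=1)           # m x n_words

params = [rng.standard_normal((m, d)) if i % 2 == 0 else rng.standard_normal((m, m))
          for i in range(6)]                  # Wz, Uz, Wr, Ur, Wh, Uh
U = rng.standard_normal((d, 5))               # utterance: 5 words
R = rng.standard_normal((d, 7))               # response candidate: 7 words
A = rng.standard_normal((m, m))               # bilinear transformation for M2

M1 = U.T @ R                                  # word-word similarity matrix
Hu, Hr = gru_states(U, *params), gru_states(R, *params)
M2 = Hu.T @ A @ Hr                            # sequence-sequence similarity matrix

def conv_pool(M, kernel):                     # one valid 2D convolution + global max pool
    kh, kw = kernel.shape
    out = np.array([[np.maximum((M[i:i + kh, j:j + kw] * kernel).sum(), 0.0)  # ReLU
                     for j in range(M.shape[1] - kw + 1)]
                    for i in range(M.shape[0] - kh + 1)])
    return out.max()

kernel = rng.standard_normal((3, 3))
v = np.array([conv_pool(M1, kernel), conv_pool(M2, kernel)])   # toy matching vector
print(M1.shape, M2.shape, v)
\end{verbatim}
In the full model, several feature maps and an alternation of convolution and max pooling are applied to both channels before a linear transformation produces $v$, as specified in Equations (\ref{Conv}) and (\ref{Pool}) below.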
On the convolution layer, we employ a 2D convolution operation with a window size ${r_w^{(l,f)} \times r_h^{(l,f)}}$, and define $z_{i,j}^{(l,f)}$ as \begin{equation}\label{Conv}\small z_{i,j}^{(l,f)} = \sigma (\sum_{f'=0}^{F_{l-1}} \sum_{s=0}^{r_w^{(l,f)}} \sum_{t=0}^{r_h^{(l,f)}} \mathbf{W}_{s,t} ^ {(l,f)} \cdot z_{i+s,j+t} ^{(l-1,f')} + \mathbf{b}^{l,k} ), \end{equation} where $\sigma(\cdot)$ is a ReLU, $\mathbf{W} ^ {(l,f)} \in \mathbb{R}^{r_w^{(l,f)} \times r_h^{(l,f)}} $ and $\mathbf{b}^{l,k}$ are parameters, and $F_{l-1}$ is the number of feature maps on the $(l-1)$-th layer. A max pooling operation follows a convolution operation and can be formulated as \begin{equation}\label{Pool}\small z_{i,j}^{(l,f)} = \max_{p_w^{(l,f)} > s \geq 0} \max_{p_h^{(l,f)} > t \geq 0} z_{i+s,j+t} , \end{equation} where $p_w^{(l,f)}$ and $p_h^{(l,f)}$ are the width and the height of the 2D pooling respectively. The output of the final feature maps are concatenated and mapped to a low dimensional space with a linear transformation as the matching vector $v \in \mathbb{R}^q$. According to Equation (\ref{M1Element}), (\ref{M2Element}), (\ref{Conv}), and (\ref{Pool}), we can see that by learning word embedding and parameters of GRU from training data, words or segments in an utterance that are useful for recognizing the appropriateness of a response may have high similarity with some words or segments in the response and result in high value areas in the similarity matrices. These areas will be transformed and selected by convolution and pooling operations and carry important information in the utterance to the matching vector. This is how our model identifies important information in context and leverage it in matching under the supervision of the response. We consider multiple channels because we want to capture important matching information on multiple levels of granularity of text. \subsection{Matching Accumulation}\label{match_acum} Suppose that $\left[v_1,\ldots,v_n\right]$ is the output of the first layer (corresponding to $n$ pairs), at the second layer, a GRU takes $\left[v_1,\ldots,v_n\right]$ as an input and encodes the matching sequence into its hidden states $H_m = \left[h'_1,\ldots, h'_n\right] \in \mathbb{R}^{q \times n}$ with a detailed parameterization similar to Equation (\ref{gru}). This layer has two functions: (1) it models the dependency and the temporal relationship of utterances in the context; (2) it leverages the temporal relationship to supervise the accumulation of the pair matching as a context based matching. Moreover, from Equation (\ref{gru}), we can see that the reset gate (i.e., $r_i$) and the update gate (i.e., $z_i$) control how much information from the previous hidden state and the current input flows to the current hidden state, thus important matching vectors (corresponding to important utterances) can be accumulated while noise in the vectors can be filtered out. \subsection{Matching Prediction and Learning} With $\left[h'_1,\ldots, h'_n\right]$, we define $g(s,r)$ as \begin{equation}\small g(s,r) = softmax (\mathbf{W_2} L[h'_1,\ldots, h'_n] + \mathbf{b_2}), \end{equation} where $\mathbf{W_2}$ and $\mathbf{b_2}$ are parameters. We consider three parameterizations for $L[h'_1,\ldots, h'_n]$: (1) only the last hidden state is used. Then $L[h'_1,\ldots, h'_n]=h'_n$. (2) the hidden states are linearly combined. Then, $L[h'_1,\ldots, h'_n]=\sum_{i=1}^{n} w_i h'_i$, where $w_i \in \mathbb{R}$. 
(3) we follow \cite{yang2016hierarchical} and employ an attention mechanism to combine the hidden states. Then, $L[h'_1,\ldots, h'_n]$ is defined as \begin{small} \begin{eqnarray} && t_i = tanh(\mathbf{W_{1,1}} h_{u_i,n_u} + \mathbf{W_{1,2}} h'_i + \mathbf{b_1}),\nonumber \\ && \alpha_i = \frac{exp(t_i^{\top} t_s)}{\sum_i(exp(t_i^{\top} t_s))},\nonumber \\ && L[h'_1,\ldots, h'_n]= \sum_{i=1}^{n}{\alpha_i h'_i}, \end{eqnarray} \end{small}where $\mathbf{W_{1,1}} \in \mathbb{R}^{q \times m}, \mathbf{W_{1,2}} \in \mathbb{R}^{q \times q}$ and $\mathbf{b_1} \in \mathbb{R}^q$ are parameters. $h'_i$ and $h_{u_i,n_u}$ are the $i$-th matching vector and the final hidden state of the $i$-th utterance respectively. %We believe both matching vector and utterance benefit to recognize important utterances for matching. $t_s \in \mathbb{R}^q$ is a virtual context vector which is randomly initialized and jointly learned in training. Both (2) and (3) aim to learn weights for $\{h'_1,\ldots,h'_n\}$ from training data and highlight the effect of important matching vectors in the final matching. The difference is that weights in (2) are static, because the weights are totally determined by the positions of utterances, while weights in (3) are dynamically computed by the matching vectors and utterance vectors. We denote our model with the three parameterizations of $L[h'_1,\ldots, h'_n]$ as SMN$_{last}$, SMN$_{static}$, and SMN$_{dynamic}$, and empirically compare them in experiments. We learn $g(\cdot, \cdot)$ by minimizing cross entropy with $\mathcal{D}$. Let $\Theta$ denote the parameters of SMN, then the objective function $\mathcal{L}(\mathcal{D},\Theta)$ of learning can be formulated as \begin{equation}\label{obj}\small - \sum_{i=1}^{N} \left[y_i log(g(s_i,r_i)) + (1-y_i)log(1-g(s_i,r_i))\right]. \end{equation} \section{Response Candidate Retrieval} \label{candidate_retrieval} In practice, a retrieval-based chatbot, to apply the matching approach to the response selection, one needs to retrieve a number of response candidates from an index beforehand. While candidate retrieval is not the focus of the paper, it is an important step in a real system. In this work, we exploit a heuristic method to obtain response candidates from the index. Given a message $u_n$ with $\{u_1,\ldots,u_{n-1}\}$ utterances in its previous turns, we extract the top $5$ keywords from $\{u_1,\ldots,u_{n-1}\}$ based on their tf-idf scores\footnote{Tf is word frequency in the context, while idf is calculated using the entire index.} and expand $u_n$ with the keywords. Then we send the expanded message to the index and retrieve response candidates using the inline retrieval algorithm of the index. Finally, we use $g(s,r)$ to re-rank the candidates and return the top one as a response to the context. \section{Experiments} We tested our model on a publicly available English data set and a Chinese data set published with this paper. \subsection{Ubuntu Corpus} The English data set is the Ubuntu Corpus \cite{lowe2015ubuntu} which contains multi-turn dialogues collected from chat logs of the Ubuntu Forum. The data set consists of $1$ million context-response pairs for training, $0.5$ million pairs for validation, and $0.5$ million pairs for testing. Positive responses are true responses from humans, and negative ones are randomly sampled. The ratio of the positive and the negative is 1:1 in training, and 1:9 in validation and testing. We used the copy shared by Xu et al. 
\shortcite{xu2016incorporating} \footnote{\url{https://www.dropbox.com/s/2fdn26rj6h9bpvl/ubuntu data.zip?dl=0}} in which numbers, URLs, and paths are replaced by special placeholders. We followed \cite{lowe2015ubuntu} and employed recall at position $k$ in $n$ candidates ($R_n@k$) as evaluation metrics. \subsection{Douban Conversation Corpus} The Ubuntu Corpus is a domain-specific data set, and response candidates are obtained from negative sampling without human judgment. To further verify the efficacy of our model, we created a new data set with open domain conversations, called the Douban Conversation Corpus. Response candidates in the test set of the Douban Conversation Corpus are collected following the procedure of a retrieval-based chatbot and are labeled by human judges. It simulates the real scenario of a retrieval-based chatbot. We publish it to the research community to facilitate research on multi-turn response selection. Specifically, we crawled $1.1$ million dyadic dialogues (conversation between two persons) longer than $2$ turns from Douban group\footnote{\url{https://www.douban.com/group}} which is a popular social networking service in China. We randomly sampled $0.5$ million dialogues for creating a training set, $25$ thousand dialogues for creating a validation set, and $1,000$ dialogues for creating a test set, and made sure that there is no overlap between the three sets. For each dialogue in training and validation, we took the last turn as a positive response and the previous turns as a context, and randomly sampled another response from the $1.1$ million data as a negative response. There are $1$ million context-response pairs in the training set and $50$ thousand pairs in the validation set. To create the test set, we first crawled $15$ million post-reply pairs from Sina Weibo\footnote{\url{http://weibo.com/}} which is the largest microblogging service in China and indexed the pairs with Lucene\footnote{\url{https://lucenenet.apache.org/}}. We took the last turn of each Douban dyadic dialogue in the test set as a message, retrieved $10$ response candidates from the index following the method in Section \ref{candidate_retrieval}, and finally formed a test set with $10,000$ context-response pairs. We recruited three labelers to judge if a candidate is a proper response to the context. A proper response means the response can naturally reply to the message given the whole context. Each pair received three labels and the majority of the labels were taken as the final decision. Table \ref{dataset} gives the statistics of the three sets. Note that the Fleiss' kappa \cite{fleiss1971measuring} of the labeling is $0.41$, which indicates that the three labelers reached a relatively high agreement. Besides $R_n@k$s, we also followed the convention of information retrieval and employed mean average precision (MAP) \cite{baeza1999modern}, mean reciprocal rank (MRR) \cite{voorhees1999trec}, and precision at position 1 (P@1) as evaluation metrics. We did not calculate R$_2$@1 because in the Douban corpus one context could have more than one correct response, and we would have to randomly sample one for R$_2$@1, which may bring bias to evaluation. %We use different evaluation metrics on our Chinese data set, as we believe precision (MAP and P@1) and recall (MRR) are both important in practice. When using the labeled set, we removed conversations with all negative responses or all positive responses, as they cannot differentiate between models.
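A reference implementation of these metrics for a single context, written against a list of (score, label) pairs, is sketched below; the function names and tie handling are illustrative choices, not the evaluation scripts used in the experiments.
\begin{verbatim}
# Per-context ranking metrics: R_n@k, P@1, average precision (MAP is its mean
# over contexts), and reciprocal rank (MRR is its mean over contexts).
def rank_labels(scored):                     # labels sorted by model score, best first
    return [y for _, y in sorted(scored, key=lambda t: t[0], reverse=True)]

def recall_at_k(scored, k):                  # R_n@k: share of positives ranked in top k
    labels = rank_labels(scored)
    total = sum(labels)
    return sum(labels[:k]) / total if total else 0.0

def precision_at_1(scored):                  # P@1
    return float(rank_labels(scored)[0] == 1)

def average_precision(scored):               # per-context AP
    labels = rank_labels(scored)
    hits, ap = 0, 0.0
    for i, y in enumerate(labels, start=1):
        if y == 1:
            hits += 1
            ap += hits / i
    return ap / hits if hits else 0.0

def reciprocal_rank(scored):                 # per-context RR
    for i, y in enumerate(rank_labels(scored), start=1):
        if y == 1:
            return 1.0 / i
    return 0.0

# Toy usage: 10 candidates, two of them labeled proper.
scored = [(0.9, 1), (0.8, 0), (0.7, 0), (0.6, 1)] + [(0.1 * i, 0) for i in range(6)]
print(recall_at_k(scored, 1), precision_at_1(scored),
      average_precision(scored), reciprocal_rank(scored))
\end{verbatim}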
There are $6,670$ context-response pairs left in the test set. \subsection{Baseline} We considered the following baselines: \textbf{Basic models}: models in \cite{lowe2015ubuntu} and \cite{kadlec2015improved} including TF-IDF, RNN, CNN, LSTM and BiLSTM. \textbf{Multi-view}: the model proposed by Zhou et al. \shortcite{zhou2016multi} that utilizes a hierarchical recurrent neural network to model utterance relationships. %Utterance view also can be regarded as a variant of Hierarchical Recurrent Neural Network model \cite{serban2016hierarchical}. \textbf{Deep learning to respond (DL2R)}: the model proposed by Yan et al. \shortcite{DBLP:conf/sigir/YanSW16} that reformulates the message with other utterances in the context. \textbf{Advanced single-turn matching models}: since BiLSTM does not represent the state-of-the-art matching model, we concatenated the utterances in a context and matched the long text with a response candidate using more powerful models including MV-LSTM \cite{wan2016match} (2D matching), Match-LSTM \cite{wang2015learning}, Attentive-LSTM \cite{tan2015lstm} (two attention based models), and Multi-Channel which is described in Section \ref{multi-channel}. Multi-Channel is a simple version of our model without considering utterance relationships. We also appended the top 5 tf-idf words in context to the input message, and computed the score between the expanded message and a response with Multi-Channel, denoted as Multi-Channel$_{exp}$. \subsection{Parameter Tuning} For baseline models, if their results are available in existing literature (e.g., those on the Ubuntu corpus), we just copied the numbers, otherwise we implemented the models following the settings in the literatures. All models were implemented using Theano \cite{2016arXiv160502688short}. Word embeddings were initialized by the results of word2vec \cite{mikolov2013distributed} %\footnote{\url{https://code.google.com/archive/p/word2vec/}} which ran on the training data, and the dimensionality of word vectors is $200$. For Multi-Channel and layer one of our model, we set the dimensionality of the hidden states of GRU as $200$. We tuned the window size of convolution and pooling in $\{(2,2),(3,3)(4,4)\}$ and chose $(3,3)$ finally. The number of feature maps is $8$. In layer two, we set the dimensionality of matching vectors and the hidden states of GRU as $50$. The parameters were updated by stochastic gradient descent with Adam algorithm \cite{kingma2014adam} on a single Tesla K80 GPU. The initial learning rate is $0.001$, and the parameters of Adam, $\beta_1$ and $\beta_2$ are $0.9$ and $0.999$ respectively. We employed early-stopping as a regularization strategy. Models were trained in mini-batches with a batch size of $200$, and the maximum utterance length is $50$. We set the maximum context length (i.e., number of utterances) as $10$, because the performance of models does not improve on contexts longer than 10 (details are shown in the Section \ref{analysis}). We padded zeros if the number of utterances in a context is less than $10$, otherwise we kept the last $10$ utterances. \vspace{-2mm} \subsection{Evaluation Results} Table \ref{exp:response} shows the evaluation results on the two data sets. Our models outperform baselines greatly in terms of all metrics on both data sets, with the improvements being statistically significant (t-test with $p$-value $\leq 0.01$, except $R_{10}@5$ on Douban Corpus). Even the state-of-the-art single-turn matching models perform much worse than our models. 
The results demonstrate that one cannot neglect utterance relationships and simply perform multi-turn response selection by concatenating utterances together. Our models achieve significant improvements over Multi-View, which justified our ``matching first'' strategy. DL2R is worse than our models, indicating that utterance reformulation with heuristic rules is not a good method for utilizing context information. $R_n@k$s are low on the Douban Corpus as there are multiple correct candidates for a context (e.g., if there are $3$ correct responses, then the maximum $R_{10}@1$ is $0.33$). SMN$_{dynamic}$ is only slightly better than SMN$_{static}$ and SMN$_{last}$. The reason might be that the GRU can select useful signals from the matching sequence and accumulate them in the final state with its gate mechanism, thus the efficacy of an attention mechanism is not obvious for the task at hand. \vspace{-2mm} \subsection{Further Analysis}\label{analysis} \textbf{Visualization}: we visualize the similarity matrices and the gates of GRU in layer two using an example from the Ubuntu corpus to further clarify how our model identifies important information in the context and how it selects important matching vectors with the gate mechanism of GRU as described in Section \ref{multi-channel} and Section \ref{match_acum}. The example is \emph{ $\{$$u_1$: how can unzip many rar ( $\_number\_$ for example ) files at once; $u_2$: sure you can do that in bash; $u_3$: okay how? $u_4$: are the files all in the same directory? $u_5$: yes they all are; $r$: then the command glebihan should extract them all from/to that directory$\}$}. It is from the test set and our model successfully ranked the correct response to the top position. Due to space limitation, we only visualized $\mathbf{M}_1$, $\mathbf{M}_2$ and the update gate (i.e. $z$) in Figure \ref{fig:compareall}. %Other pieces of our model are shown in the supplementary material. We can see that in $u_1$ important words including ``unzip'', ``rar'', ``files'' are recognized and carried to matching by ``command'', ``extract'', and ``directory'' in $r$, while $u_3$ is almost useless and thus little information is extracted from it. $u_1$ is crucial to response selection and nearly all information from $u_1$ and $r$ flows to the hidden state of GRU, while other utterances are less informative and the corresponding gates are almost ``closed'' to keep the information from $u_1$ and $r$ until the final state. \textbf{Model ablation}: we investigate the effect of different parts of SMN by removing them one by one from SMN$_{last}$, shown in Table \ref{exp:discuss}. First, replacing the multi-channel ``2D'' matching with a neural tensor network (NTN) \cite{socher2013reasoning} (denoted as Replace$_M$) makes the performance drop dramatically. This is because NTN only matches a pair by an utterance vector and a response vector and loses important information in the pair. Together with the visualization, we can conclude that ``2D'' matching plays a key role in the ``matching first'' strategy as it captures the important matching information in each pair with minimal loss. Second, the performance drops slightly when replacing the GRU for matching accumulation with a multi-layer perceptron (denoted as Replace$_A$). This indicates that utterance relationships are useful. 
Finally, we left only one channel in matching and found that $\mathbf{M}_2$ is a little more powerful than $\mathbf{M}_1$ and we achieve the best results with both of them (except on $R_{10}@5$ on the Douban Corpus). \textbf{Performance across context length}: we study how our model (SMN$_{last}$) performs across the length of contexts. Figure \ref{length} shows the comparison on MAP in different length intervals on the Douban corpus. Our model consistently performs better than the baselines, and when contexts become longer, the gap becomes larger. The results demonstrate that our model can well capture the dependencies, especially long dependencies, among utterances in contexts.% We give the comparisons on other metrics in our supplementary material. \textbf{Maximum context length}: we investigate the influence of maximum context length for SMN. Figure \ref{fig:max_smn_length} shows the performance of SMN on Ubuntu Corpus and Douban Corpus with respect to maximum context length. From Figure \ref{fig:max_smn_length}, we find that performance improves significantly when the maximum context length is lower than 5, and becomes stable after the context length reaches 10. This indicates that context information is important for multi-turn response selection, and we can set the maximum context length as 10 to balance effectiveness and efficiency. \textbf{Error analysis}: although SMN outperforms baseline methods on the two data sets, there are still several problems that cannot be handled perfectly. (1) Logical consistency. SMN models the context and response on the semantic level, but pays little attention to logical consistency. This leads to several DSATs in the Douban Corpus. For example, given a context \emph{\{a: Does anyone know Newton jogging shoes? b: 100 RMB on Taobao. a: I know that. I do not want to buy it because that is a fake which is made in Qingdao ,b: Is it the only reason you do not want to buy it? \}}, SMN gives a large score to the response \emph{\{ It is not a fake. I just worry about the date of manufacture\}}. The response is inconsistent with the context on logic, as it claims that the jogging shoes are not fake. In the future, we shall explore the logic consistency problem in retrieval-based chatbots. (2) No correct candidates after retrieval. In the experiment, we prepared 1000 contexts for testing, but only 667 contexts have correct candidates after candidate response retrieval. This indicates that there is still room for candidate retrieval components to improve, and only expanding the input message with several keywords in context may not be a perfect approach for candidate retrieval. In the future, we will consider advanced methods for retrieving candidates. \section{Conclusion and Future Work} We present a new context based model for multi-turn response selection in retrieval-based chatbots. %The architecture matches utterance-response pairs by a multi-channel ``2D'' matching component at first and utilizes utterance relationships to synthesize the pair matching as a session based matching. Experiment results on open data sets show that the model can significantly outperform the state-of-the-art methods. Besides, we publish the first human-labeled multi-turn response selection data set to research communities. In the future, we shall study how to model logical consistency of responses and improve candidate retrieval. \section{Acknowledgment} We appreciate valuable comments provided by anonymous reviewers and our discussions with Zhao Yan. 
This work was supported by the National Natural Science Foundation of China (Grant Nos. 61672081, U1636211, 61370126), Beijing Advanced Innovation Center for Imaging Technology (No.BAICIT-2016001), National High Technology Research and Development Program of China (No.2015AA016004), and the Fund of the State Key Laboratory of Software Development Environment (No.SKLSDE-2015ZX-16). \bibliographystyle{acl_natbib} \end{document}
Sequential Matching Network: A New Architecture for Multi-turn Response Selection in Retrieval-Based Chatbots
1612.01627
Table 3: Evaluation results on the two data sets. Numbers in bold mean that the improvement is statistically significant compared with the best baseline.
[ "[EMPTY]", "[BOLD] Ubuntu Corpus", "[BOLD] Ubuntu Corpus", "[BOLD] Ubuntu Corpus", "[BOLD] Ubuntu Corpus", "[BOLD] Douban Conversation Corpus", "[BOLD] Douban Conversation Corpus", "[BOLD] Douban Conversation Corpus", "[BOLD] Douban Conversation Corpus", "[BOLD] Douban Conversation Corpus", "[BOLD] Douban Conversation Corpus" ]
[ [ "[EMPTY]", "R2@1", "R10@1", "R10@2", "R10@5", "MAP", "MRR", "P@1", "R10@1", "R10@2", "R10@5" ], [ "TF-IDF", "0.659", "0.410", "0.545", "0.708", "0.331", "0.359", "0.180", "0.096", "0.172", "0.405" ], [ "RNN", "0.768", "0.403", "0.547", "0.819", "0.390", "0.422", "0.208", "0.118", "0.223", "0.589" ], [ "CNN", "0.848", "0.549", "0.684", "0.896", "0.417", "0.440", "0.226", "0.121", "0.252", "0.647" ], [ "LSTM", "0.901", "0.638", "0.784", "0.949", "0.485", "0.527", "0.320", "0.187", "0.343", "0.720" ], [ "BiLSTM", "0.895", "0.630", "0.780", "0.944", "0.479", "0.514", "0.313", "0.184", "0.330", "0.716" ], [ "Multi-View", "0.908", "0.662", "0.801", "0.951", "0.505", "0.543", "0.342", "0.202", "0.350", "0.729" ], [ "DL2R", "0.899", "0.626", "0.783", "0.944", "0.488", "0.527", "0.330", "0.193", "0.342", "0.705" ], [ "MV-LSTM", "0.906", "0.653", "0.804", "0.946", "0.498", "0.538", "0.348", "0.202", "0.351", "0.710" ], [ "Match-LSTM", "0.904", "0.653", "0.799", "0.944", "0.500", "0.537", "0.345", "0.202", "0.348", "0.720" ], [ "Attentive-LSTM", "0.903", "0.633", "0.789", "0.943", "0.495", "0.523", "0.331", "0.192", "0.328", "0.718" ], [ "Multi-Channel", "0.904", "0.656", "0.809", "0.942", "0.506", "0.543", "0.349", "0.203", "0.351", "0.709" ], [ "Multi-Channel [ITALIC] exp", "0.714", "0.368", "0.497", "0.745", "0.476", "0.515", "0.317", "0.179", "0.335", "0.691" ], [ "SMN [ITALIC] last", "[BOLD] 0.923", "[BOLD] 0.723", "[BOLD] 0.842", "[BOLD] 0.956", "[BOLD] 0.526", "[BOLD] 0.571", "[BOLD] 0.393", "[BOLD] 0.236", "[BOLD] 0.387", "0.729" ], [ "SMN [ITALIC] static", "[BOLD] 0.927", "[BOLD] 0.725", "[BOLD] 0.838", "[BOLD] 0.962", "[BOLD] 0.523", "[BOLD] 0.572", "[BOLD] 0.387", "[BOLD] 0.228", "[BOLD] 0.387", "0.734" ], [ "SMN [ITALIC] dynamic", "[BOLD] 0.926", "[BOLD] 0.726", "[BOLD] 0.847", "[BOLD] 0.961", "[BOLD] 0.529", "[BOLD] 0.569", "[BOLD] 0.397", "[BOLD] 0.233", "[BOLD] 0.396", "0.724" ] ]
Our models outperform baselines greatly in terms of all metrics on both data sets, with the improvements being statistically significant (t-test with p-value ≤0.01, except R10@5 on Douban Corpus). Even the state-of-the-art single-turn matching models perform much worse than our models. The results demonstrate that one cannot neglect utterance relationships and simply perform multi-turn response selection by concatenating utterances together. Our models achieve significant improvements over Multi-View, which justified our “matching first” strategy. DL2R is worse than our models, indicating that utterance reformulation with heuristic rules is not a good method for utilizing context information. Rn@ks are low on the Douban Corpus as there are multiple correct candidates for a context (e.g., if there are 3 correct responses, then the maximum R10@1 is 0.33). SMNdynamic is only slightly better than SMNstatic and SMNlast. The reason might be that the GRU can select useful signals from the matching sequence and accumulate them in the final state with its gate mechanism, thus the efficacy of an attention mechanism is not obvious for the task at hand.
\documentclass[11pt,a4paper]{article} \usepackage[hyperref]{acl2017} % http://ctan.org/pkg/pifont % http://ctan.org/pkg/amssymb \newcommand{\thickhline}{\noalign{\hrule height 1pt}} \def\aclpaperid{37} \newcommand{\cmark}{\ding{51}}% \newcommand{\xmark}{\ding{55}}% \aclfinalcopy \newcommand\BibTeX{B{\sc ib}\TeX} \title{Sequential Matching Network: A New Architecture for Multi-turn Response Selection in Retrieval-Based Chatbots} \author{ Yu Wu$^\dag$, Wei Wu$^\ddag$, Chen Xing$^\diamondsuit$, Zhoujun Li$^\dag$\thanks{~~~Corresponding Author}~, Ming Zhou$^\ddag$~~~~\\ $^\dag$State Key Lab of Software Development Environment, Beihang University, Beijing, China\\ $^ \diamondsuit$College of Computer and Control Engineering, Nankai University, Tianjin, China\\ $^\ddag$~~~~Microsoft Research, Beijing, China\\ \{wuyu,lizj\}@buaa.edu.cn \{wuwei,v-chxing,mingzhou\}@microsoft.com } \date{} \begin{document} \maketitle \begin{abstract} We study response selection for multi-turn conversation in retrieval-based chatbots. Existing work either concatenates utterances in context or matches a response with a highly abstract context vector finally, which may lose relationships among utterances or important contextual information. We propose a sequential matching network (SMN) to address both problems. SMN first matches a response with each utterance in the context on multiple levels of granularity, and distills important matching information from each pair as a vector with convolution and pooling operations. The vectors are then accumulated in a chronological order through a recurrent neural network (RNN) which models relationships among utterances. The final matching score is calculated with the hidden states of the RNN. An empirical study on two public data sets shows that SMN can significantly outperform state-of-the-art methods for response selection in multi-turn conversation. \end{abstract} \section{Introduction} Conversational agents include task-oriented dialog systems and non-task-oriented chatbots. Dialog systems focus on helping people complete specific tasks in vertical domains \cite{young2010hidden}, while chatbots aim to naturally and meaningfully converse with humans on open domain topics \cite{ritter2011data}. Existing work on building chatbots includes generation -based methods and retrieval-based methods. Retrieval based chatbots enjoy the advantage of informative and fluent responses, because they select a proper response for the current conversation from a repository with response selection algorithms. While most existing work on retrieval-based chatbots studies response selection for single-turn conversation \cite{wang2013dataset} which only considers the last input message, we consider the problem in a multi-turn scenario. In a chatbot, multi-turn response selection takes a message and utterances in its previous turns as input and selects a response that is natural and relevant to the whole context. The key to response selection lies in input-response matching. Different from single-turn conversation, multi-turn conversation requires matching between a response and a conversation context %\footnote{In this paper, both ``session'' and ``context'' refer to an input message and utterances in its previous turns.} in which one needs to consider not only the matching between the response and the input message but also matching between responses and utterances in previous turns. 
The challenges of the task include (1) how to identify important information (words, phrases, and sentences) in context, which is crucial to selecting a proper response and leveraging relevant information in matching; and (2) how to model relationships among the utterances in the context. %How much important information an utterance contains also indicates its importance to response selection. Table \ref{example1} illustrates the challenges with an example. First, ``hold a drum class'' and ``drum'' in context are very important. %to response selection for the session. Without them, one may find responses relevant to the message (i.e., the fifth utterance of the context) but nonsense in the context (e.g., ``what lessons do you want?''). %On the other hand, although ``Shanghai'' and ``Lujiazui'' are keywords in their utterances, they are useless and even noise to response selection. It is crucial yet non-trivial to extract the important information from the context and leverage them in matching while circumvent the noise. Second, the message highly depends on the second utterance in the context, and the order of the utterances matters in response selection: exchanging the third utterance and the fifth utterance may lead to different responses. %Third, because Context 1 contains much important information, it is more important than others in response selection. Existing work, however, either ignores relationships among utterances when concatenating them together \cite{lowe2015ubuntu}, or loses important information in context in the process of converting the whole context to a vector without enough supervision from responses (e.g., by a hierarchical RNN \cite{zhou2016multi}). We propose a sequential matching network (SMN), a new context based matching model that can tackle both challenges in an end-to-end way. The reason that existing models lose important information in the context is that they first represent the whole context as a vector and then match the context vector with a response vector. Thus, responses in these models connect with the context until the final step in matching. To avoid information loss, SMN matches a response with each utterance in the context at the beginning and encodes important information in each pair into a matching vector. The matching vectors are then accumulated in the utterances' temporal order to model their relationships. The final matching degree is computed with the accumulation of the matching vectors. Specifically, for each utterance-response pair, the model constructs a word-word similarity matrix and a sequence-sequence similarity matrix by the word embeddings and the hidden states of a recurrent neural network with gated recurrent units (GRU) \cite{chung2014empirical} respectively. The two matrices capture important matching information in the pair on a word level and a segment (word subsequence) level respectively, and the information is distilled and fused as a matching vector through an alternation of convolution and pooling operations on the matrices. By this means, important information from multiple levels of granularity in context is recognized under sufficient supervision from the response and carried into matching with minimal loss. The matching vectors are then uploaded to another GRU to form a matching score for the context and the response. The GRU accumulates the pair matching in its hidden states in the chronological order of the utterances in context. 
It models relationships and dependencies among the utterances in a matching fashion and has the utterance order supervise the accumulation of pair matching. %The gate mechanism of the GRU helps select important pairs and filter out noise. The matching degree of the context and the response is computed by a logit model with the hidden states of the GRU. SMN extends the powerful ``2D'' matching paradigm in text pair matching for single-turn conversation to context based matching for multi-turn conversation, and enjoys the advantage of both important information in utterance-response pairs and relationships among utterances being sufficiently preserved and leveraged in matching. We test our model on the Ubuntu dialogue corpus \cite{lowe2015ubuntu} which is a large scale publicly available English data set for research in multi-turn conversation. The results show that our model can significantly outperform state-of-the-art methods, and improvement to the best baseline model on R$_{10}$@1 is over $6$\%. In addition to the Ubuntu corpus, we create a human-labeled Chinese data set, namely the Douban Conversation Corpus, and test our model on it. In contrast to the Ubuntu corpus in which data is collected from a specific domain and negative candidates are randomly sampled, conversations in this data come from the open domain, and response candidates in this data set are collected from a retrieval engine and labeled by three human judges. On this data, our model improves the best baseline model by $3$\% on R$_{10}$@1 and $4$\% on P@1. As far as we know, Douban Conversation Corpus is the first human-labeled data set for multi-turn response selection and could be a good complement to the Ubuntu corpus. We have released Douban Conversation Corups and our source code at \url{https://github.com/MarkWuNLP/MultiTurnResponseSelection} %\url{https://github.com/MarkWuNLP/MultiTurnResponseSelection}. Our contributions in this paper are three-folds: (1) the proposal of a new context based matching model for multi-turn response selection in retrieval-based chatbots; (2) the publication of a large human-labeled data set to research communities; (3) empirical verification of the effectiveness of the model on public data sets. \section{Related Work} Recently, building a chatbot with data driven approaches \cite{ritter2011data,ji2014information} has drawn significant attention. Existing work along this line includes retrieval-based methods \cite{hu2014convolutional,ji2014information,wang2015syntax,DBLP:conf/sigir/YanSW16,DBLP:journals/corr/WuWLZ16,zhou2016multi,wu2016ranking} and generation-based methods \cite{DBLP:conf/acl/ShangLL15,serban2015building,vinyals2015neural,li2015diversity,li2016persona,xing2016topic,serban2016multiresolution}. Our work is a retrieval-based method, in which we study context-based response selection. Early studies of retrieval-based chatbots focus on response selection for single-turn conversation \cite{wang2013dataset,ji2014information,wang2015syntax,DBLP:journals/corr/WuWLZ16}. Recently, researchers have begun to pay attention to multi-turn conversation. For example, Lowe et al. \shortcite{lowe2015ubuntu} match a response with the literal concatenation of context utterances. Yan et al. \shortcite{DBLP:conf/sigir/YanSW16} concatenate context utterances with the input message as reformulated queries and perform matching with a deep neural network architecture. Zhou et al. 
\shortcite{zhou2016multi} improve multi-turn response selection with a multi-view model including an utterance view and a word view. % The stark difference between our model and the existing models is that our model matches a response with each utterance at the very first and matching information instead of sentences is accumulated in a temporal manner through a GRU. Our model is different in that it matches a response with each utterance at first and accumulates matching information instead of sentences by a GRU, thus useful information for matching can be sufficiently retained. \section{Sequential Matching Network} \subsection{Problem Formalization}\label{probform} Suppose that we have a data set $\mathcal {D} = \{(y_i,s_i,r_i)\}_{i=1}^N$, where $s_i=\{u_{i,1}, \ldots, u_{i,n_i}\}$ represents a conversation context with $\{u_{i,k}\}_{k=1}^{n_i}$ as utterances. $r_i$ is a response candidate and $y_i\in \{0,1\}$ denotes a label. $y_i=1$ means $r_i$ is a proper response for $s_i$, otherwise $y_i=0$. Our goal is to learn a matching model $g(\cdot,\cdot)$ with $\mathcal{D}$. For any context-response pair $(s,r)$, $g(s,r)$ measures the matching degree between $s$ and $r$. \subsection{Model Overview} We propose a sequential matching network (SMN) to model $g(\cdot,\cdot)$. Figure \ref{fig:arch} gives the architecture. SMN first decomposes context-response matching into several utterance-response pair matching and then all pairs matching are accumulated as a context based matching through a recurrent neural network. SMN consists of three layers. The first layer matches a response candidate with each utterance in the context on a word level and a segment level, and important matching information from the two levels is distilled by convolution, pooling and encoded in a matching vector. %An utterance-response pair is transformed to a word-word similarity matrix and a sequence-sequence similarity matrix, The matching vectors are then fed into the second layer where they are accumulated in the hidden states of a recurrent neural network with GRU following the chronological order of the utterances in the context. The third layer calculates the final matching score with the hidden states of the second layer. SMN enjoys several advantages over existing models. First, a response candidate can match each utterance in the context at the very beginning, thus matching information in every utterance-response pair can be sufficiently extracted and carried to the final matching score with minimal loss. Second, information extraction from each utterance is conducted on different levels of granularity and under sufficient supervision from the response, thus semantic structures that are useful for response selection in each utterance can be well identified and extracted. Third, matching and utterance relationships are coupled rather than separately modeled, thus utterance relationships (e.g., order), as a kind of knowledge, can supervise the formation of the matching score. By taking utterance relationships into account, SMN extends the ``2D'' matching that has proven effective in text pair matching for single-turn response selection to sequential ``2D'' matching for context based matching in response selection for multi-turn conversation. In the following sections, we will describe details of the three layers. 
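As a preview of the second and third layers (the first layer is detailed in the next subsection), the following sketch accumulates per-pair matching vectors with a GRU and scores the context-response pair from the final hidden state, in the spirit of the SMN$_{last}$ variant; the shapes, the gate parameterization, and the logistic output are illustrative assumptions.
\begin{verbatim}
# Sketch of layers two and three: a GRU folds the matching vectors in
# utterance order, and the last hidden state is mapped to a matching
# probability. All sizes and parameters here are illustrative.
import numpy as np

rng = np.random.default_rng(1)
q = 4                                             # matching-vector size (assumed)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

Wz, Uz = rng.standard_normal((q, q)), rng.standard_normal((q, q))
Wr, Ur = rng.standard_normal((q, q)), rng.standard_normal((q, q))
Wh, Uh = rng.standard_normal((q, q)), rng.standard_normal((q, q))
w2, b2 = rng.standard_normal(q), 0.0

def accumulate(matching_vectors):
    """Second layer: GRU over [v_1, ..., v_n] in chronological order."""
    h = np.zeros(q)
    hidden = []
    for v in matching_vectors:
        z = sigmoid(Wz @ v + Uz @ h)              # update gate keeps/updates pair evidence
        r = sigmoid(Wr @ v + Ur @ h)              # reset gate filters noisy pairs
        h = z * np.tanh(Wh @ v + Uh @ (r * h)) + (1 - z) * h
        hidden.append(h)
    return hidden

def g_last(matching_vectors):
    """Third layer, SMN_last style: score from the final hidden state only."""
    h_n = accumulate(matching_vectors)[-1]
    return sigmoid(w2 @ h_n + b2)

vs = [rng.standard_normal(q) for _ in range(5)]   # matching vectors from layer one
print(g_last(vs))
\end{verbatim}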
\subsection{Utterance-Response Matching}\label{multi-channel} Given an utterance $u$ in a context $s$ and a response candidate $r$, the model looks up an embedding table and represents $u$ and $r$ as $\mathbf{U} = \left[e_{u,1},\ldots,e_{u,n_u}\right]$ and $\mathbf{R} = \left[e_{r,1},\ldots,e_{r,n_r}\right]$ respectively, where $e_{u,i}, e_{r,i} \in \mathbb{R} ^ d$ are the embeddings of the $i$-th word of $u$ and $r$ respectively. $\mathbf{U}$ $ \in \mathbb{R}^{d \times n_u}$ and $\mathbf{R}$ $ \in \mathbb{R}^{d \times n_r}$ are then used to construct a word-word similarity matrix $\mathbf{M}_1$ $ \in \mathbb{R}^{n_u \times n_r}$ and a sequence-sequence similarity matrix $\mathbf{M}_2$ $ \in \mathbb{R}^{n_u \times n_r}$ which are two input channels of a convolutional neural network (CNN). The CNN distills important matching information from the matrices and encodes the information into a matching vector $v$. Specifically, $\forall i,j$, the $(i,j)$-th element of $\mathbf{M}_1$ is defined by \begin{equation}\label{M1Element}\small e_{1,i,j} = e_{u,i}^{\top} \cdot e_{r,j}. \end{equation} $\mathbf{M}_1$ models the matching between $u$ and $r$ on a word level. To construct $\mathbf{M}_2$, we first employ a GRU to transform $\mathbf{U}$ and $\mathbf{R}$ to hidden vectors. Suppose that $\mathbf{H}_u = \left[h_{u,1},\ldots, h_{u,n_u}\right]$ are the hidden vectors of $\mathbf{U}$, then $\forall i$, $h_{u,i}\in \mathbb{R}^m$ is defined by \small \begin{eqnarray}\label{gru} \small && z_i = \sigma(\mathbf{W_z} e_{u,i} + \mathbf{U_z} {h}_{u,i-1}) \nonumber \\ && r_i = \sigma(\mathbf{W_r} e_{u,i} + \mathbf{U_r} {h}_{u,i-1}) \nonumber \\ && \widetilde{h}_{u,i} = tanh(\mathbf{W_h} e_{u,i} + \mathbf{U_h} (r_i \odot {h}_{u,i-1}))\nonumber \\ && h_{u,i} = z_i \odot \widetilde{h}_{u,i} + (1-z_i) \odot {h}_{u,i-1}, \end{eqnarray} \normalsize where $h_{u,0}=0$, $z_i$ and $r_i$ are an update gate and a reset gate respectively, $\sigma(\cdot)$ is a sigmoid function, and $\mathbf{W_z}$, $\mathbf{W_h}$, $\mathbf{W_r}$, $\mathbf{U_z}$, $\mathbf{U_r}$,$ \mathbf{U_h}$ are parameters. Similarly, we have $\mathbf{H}_r = \left[h_{r,1},\ldots, h_{r,n_r}\right]$ as the hidden vectors of $\mathbf{R}$. Then, $\forall i,j$, the $(i,j)$-th element of $\mathbf{M}_2$ is defined by \begin{equation}\label{M2Element} \small e_{2,i,j} = h_{u,i}^{\top} \mathbf{A} h_{r,j}, \end{equation} where $\mathbf{A} \in \mathbb{R}^{m \times m}$ is a linear transformation. $\forall i$, GRU models the sequential relationship and the dependency among words up to position $i$ and encodes the text segment until the $i$-th word to a hidden vector. Therefore, $\mathbf{M}_2$ models the matching between $u$ and $r$ on a segment level. $\mathbf{M}_1$ and $\mathbf{M}_2$ are then processed by a CNN to form $v$. $\forall f=1,2$, CNN regards $\mathbf{M}_f$ as an input channel, and alternates convolution and max-pooling operations. Suppose that $z^{(l,f)} = \left[ z^{(l,f)}_{i,j} \right]_{I^{(l,f)} \times J^{(l,f)}}$ denotes the output of feature maps of type-$f$ on layer-$l$, where $z^{(0,f)}= \mathbf{M}_f$, $\forall f = 1,2$. 
On the convolution layer, we employ a 2D convolution operation with a window size ${r_w^{(l,f)} \times r_h^{(l,f)}}$, and define $z_{i,j}^{(l,f)}$ as \begin{equation}\label{Conv}\small z_{i,j}^{(l,f)} = \sigma (\sum_{f'=0}^{F_{l-1}} \sum_{s=0}^{r_w^{(l,f)}} \sum_{t=0}^{r_h^{(l,f)}} \mathbf{W}_{s,t} ^ {(l,f)} \cdot z_{i+s,j+t} ^{(l-1,f')} + \mathbf{b}^{l,k} ), \end{equation} where $\sigma(\cdot)$ is a ReLU, $\mathbf{W} ^ {(l,f)} \in \mathbb{R}^{r_w^{(l,f)} \times r_h^{(l,f)}} $ and $\mathbf{b}^{l,k}$ are parameters, and $F_{l-1}$ is the number of feature maps on the $(l-1)$-th layer. A max pooling operation follows a convolution operation and can be formulated as \begin{equation}\label{Pool}\small z_{i,j}^{(l,f)} = \max_{p_w^{(l,f)} > s \geq 0} \max_{p_h^{(l,f)} > t \geq 0} z_{i+s,j+t} , \end{equation} where $p_w^{(l,f)}$ and $p_h^{(l,f)}$ are the width and the height of the 2D pooling respectively. The output of the final feature maps are concatenated and mapped to a low dimensional space with a linear transformation as the matching vector $v \in \mathbb{R}^q$. According to Equation (\ref{M1Element}), (\ref{M2Element}), (\ref{Conv}), and (\ref{Pool}), we can see that by learning word embedding and parameters of GRU from training data, words or segments in an utterance that are useful for recognizing the appropriateness of a response may have high similarity with some words or segments in the response and result in high value areas in the similarity matrices. These areas will be transformed and selected by convolution and pooling operations and carry important information in the utterance to the matching vector. This is how our model identifies important information in context and leverage it in matching under the supervision of the response. We consider multiple channels because we want to capture important matching information on multiple levels of granularity of text. \subsection{Matching Accumulation}\label{match_acum} Suppose that $\left[v_1,\ldots,v_n\right]$ is the output of the first layer (corresponding to $n$ pairs), at the second layer, a GRU takes $\left[v_1,\ldots,v_n\right]$ as an input and encodes the matching sequence into its hidden states $H_m = \left[h'_1,\ldots, h'_n\right] \in \mathbb{R}^{q \times n}$ with a detailed parameterization similar to Equation (\ref{gru}). This layer has two functions: (1) it models the dependency and the temporal relationship of utterances in the context; (2) it leverages the temporal relationship to supervise the accumulation of the pair matching as a context based matching. Moreover, from Equation (\ref{gru}), we can see that the reset gate (i.e., $r_i$) and the update gate (i.e., $z_i$) control how much information from the previous hidden state and the current input flows to the current hidden state, thus important matching vectors (corresponding to important utterances) can be accumulated while noise in the vectors can be filtered out. \subsection{Matching Prediction and Learning} With $\left[h'_1,\ldots, h'_n\right]$, we define $g(s,r)$ as \begin{equation}\small g(s,r) = softmax (\mathbf{W_2} L[h'_1,\ldots, h'_n] + \mathbf{b_2}), \end{equation} where $\mathbf{W_2}$ and $\mathbf{b_2}$ are parameters. We consider three parameterizations for $L[h'_1,\ldots, h'_n]$: (1) only the last hidden state is used. Then $L[h'_1,\ldots, h'_n]=h'_n$. (2) the hidden states are linearly combined. Then, $L[h'_1,\ldots, h'_n]=\sum_{i=1}^{n} w_i h'_i$, where $w_i \in \mathbb{R}$. 
(3) we follow \cite{yang2016hierarchical} and employ an attention mechanism to combine the hidden states. Then, $L[h'_1,\ldots, h'_n]$ is defined as \begin{small} \begin{eqnarray} && t_i = tanh(\mathbf{W_{1,1}} h_{u_i,n_u} + \mathbf{W_{1,2}} h'_i + \mathbf{b_1}),\nonumber \\ && \alpha_i = \frac{exp(t_i^{\top} t_s)}{\sum_i(exp(t_i^{\top} t_s))},\nonumber \\ && L[h'_1,\ldots, h'_n]= \sum_{i=1}^{n}{\alpha_i h'_i}, \end{eqnarray} \end{small}where $\mathbf{W_{1,1}} \in \mathbb{R}^{q \times m}, \mathbf{W_{1,2}} \in \mathbb{R}^{q \times q}$ and $\mathbf{b_1} \in \mathbb{R}^q$ are parameters. $h'_i$ and $h_{u_i,n_u}$ are the $i$-th matching vector and the final hidden state of the $i$-th utterance respectively. %We believe both matching vector and utterance benefit to recognize important utterances for matching. $t_s \in \mathbb{R}^q$ is a virtual context vector which is randomly initialized and jointly learned in training. Both (2) and (3) aim to learn weights for $\{h'_1,\ldots,h'_n\}$ from training data and highlight the effect of important matching vectors in the final matching. The difference is that weights in (2) are static, because the weights are totally determined by the positions of utterances, while weights in (3) are dynamically computed by the matching vectors and utterance vectors. We denote our model with the three parameterizations of $L[h'_1,\ldots, h'_n]$ as SMN$_{last}$, SMN$_{static}$, and SMN$_{dynamic}$, and empirically compare them in experiments. We learn $g(\cdot, \cdot)$ by minimizing cross entropy with $\mathcal{D}$. Let $\Theta$ denote the parameters of SMN, then the objective function $\mathcal{L}(\mathcal{D},\Theta)$ of learning can be formulated as \begin{equation}\label{obj}\small - \sum_{i=1}^{N} \left[y_i log(g(s_i,r_i)) + (1-y_i)log(1-g(s_i,r_i))\right]. \end{equation} \section{Response Candidate Retrieval} \label{candidate_retrieval} In practice, a retrieval-based chatbot, to apply the matching approach to the response selection, one needs to retrieve a number of response candidates from an index beforehand. While candidate retrieval is not the focus of the paper, it is an important step in a real system. In this work, we exploit a heuristic method to obtain response candidates from the index. Given a message $u_n$ with $\{u_1,\ldots,u_{n-1}\}$ utterances in its previous turns, we extract the top $5$ keywords from $\{u_1,\ldots,u_{n-1}\}$ based on their tf-idf scores\footnote{Tf is word frequency in the context, while idf is calculated using the entire index.} and expand $u_n$ with the keywords. Then we send the expanded message to the index and retrieve response candidates using the inline retrieval algorithm of the index. Finally, we use $g(s,r)$ to re-rank the candidates and return the top one as a response to the context. \section{Experiments} We tested our model on a publicly available English data set and a Chinese data set published with this paper. \subsection{Ubuntu Corpus} The English data set is the Ubuntu Corpus \cite{lowe2015ubuntu} which contains multi-turn dialogues collected from chat logs of the Ubuntu Forum. The data set consists of $1$ million context-response pairs for training, $0.5$ million pairs for validation, and $0.5$ million pairs for testing. Positive responses are true responses from humans, and negative ones are randomly sampled. The ratio of the positive and the negative is 1:1 in training, and 1:9 in validation and testing. We used the copy shared by Xu et al. 
\shortcite{xu2016incorporating} \footnote{\url{https://www.dropbox.com/s/2fdn26rj6h9bpvl/ubuntu data.zip?dl=0}} in which numbers, URLs, and paths are replaced by special placeholders. We followed \cite{lowe2015ubuntu} and employed recall at position $k$ in $n$ candidates ($R_n@k$) as evaluation metrics. \subsection{Douban Conversation Corpus} The Ubuntu Corpus is a domain-specific data set, and its response candidates are obtained by negative sampling without human judgment. To further verify the efficacy of our model, we created a new data set with open-domain conversations, called the Douban Conversation Corpus. Response candidates in the test set of the Douban Conversation Corpus are collected following the procedure of a retrieval-based chatbot and are labeled by human judges, which simulates the real scenario of a retrieval-based chatbot. We release it to the research community to facilitate research on multi-turn response selection. Specifically, we crawled $1.1$ million dyadic dialogues (conversations between two persons) longer than $2$ turns from Douban group\footnote{\url{https://www.douban.com/group}}, which is a popular social networking service in China. We randomly sampled $0.5$ million dialogues for creating a training set, $25$ thousand dialogues for creating a validation set, and $1,000$ dialogues for creating a test set, and made sure that there is no overlap among the three sets. For each dialogue in training and validation, we took the last turn as a positive response for the previous turns (taken as the context) and randomly sampled another response from the $1.1$ million dialogues as a negative response. There are $1$ million context-response pairs in the training set and $50$ thousand pairs in the validation set. To create the test set, we first crawled $15$ million post-reply pairs from Sina Weibo\footnote{\url{http://weibo.com/}}, which is the largest microblogging service in China, and indexed the pairs with Lucene\footnote{\url{https://lucenenet.apache.org/}}. We took the last turn of each Douban dyadic dialogue in the test set as a message, retrieved $10$ response candidates from the index following the method in Section \ref{candidate_retrieval}, and finally formed a test set with $10,000$ context-response pairs. We recruited three labelers to judge whether a candidate is a proper response to the context. A proper response means the response can naturally reply to the message given the whole context. Each pair received three labels and the majority label was taken as the final decision. Table \ref{dataset} gives the statistics of the three sets. Note that the Fleiss' kappa \cite{fleiss1971measuring} of the labeling is $0.41$, which indicates that the three labelers reached relatively high agreement. Besides $R_n@k$s, we also followed the convention of information retrieval and employed mean average precision (MAP) \cite{baeza1999modern}, mean reciprocal rank (MRR) \cite{voorhees1999trec}, and precision at position 1 (P@1) as evaluation metrics. We did not calculate R$_2$@1 because in the Douban corpus one context could have more than one correct response, and we would have to randomly sample one for R$_2$@1, which may bias the evaluation. When using the labeled set, we removed conversations with all negative responses or all positive responses, as such cases cannot discriminate between models.
There are $6,670$ context-response pairs left in the test set. \subsection{Baselines} We considered the following baselines: \textbf{Basic models}: models in \cite{lowe2015ubuntu} and \cite{kadlec2015improved} including TF-IDF, RNN, CNN, LSTM and BiLSTM. \textbf{Multi-view}: the model proposed by Zhou et al. \shortcite{zhou2016multi} that utilizes a hierarchical recurrent neural network to model utterance relationships. \textbf{Deep learning to respond (DL2R)}: the model proposed by Yan et al. \shortcite{DBLP:conf/sigir/YanSW16} that reformulates the message with other utterances in the context. \textbf{Advanced single-turn matching models}: since BiLSTM does not represent the state-of-the-art matching model, we concatenated the utterances in a context and matched the long text with a response candidate using more powerful models, including MV-LSTM \cite{wan2016match} (2D matching), Match-LSTM \cite{wang2015learning}, Attentive-LSTM \cite{tan2015lstm} (two attention-based models), and Multi-Channel, which is described in Section \ref{multi-channel}. Multi-Channel is a simple version of our model that does not consider utterance relationships. We also appended the top 5 tf-idf words in the context to the input message, and computed the score between the expanded message and a response with Multi-Channel, denoted as Multi-Channel$_{exp}$. \subsection{Parameter Tuning} For baseline models, if their results are available in the existing literature (e.g., those on the Ubuntu corpus), we copied the numbers; otherwise, we implemented the models following the settings in the literature. All models were implemented using Theano \cite{2016arXiv160502688short}. Word embeddings were initialized with word2vec \cite{mikolov2013distributed} vectors trained on the training data, and the dimensionality of the word vectors is $200$. For Multi-Channel and layer one of our model, we set the dimensionality of the hidden states of the GRU to $200$. We tuned the window size of convolution and pooling in $\{(2,2),(3,3),(4,4)\}$ and finally chose $(3,3)$. The number of feature maps is $8$. In layer two, we set the dimensionality of the matching vectors and the hidden states of the GRU to $50$. The parameters were updated by stochastic gradient descent with the Adam algorithm \cite{kingma2014adam} on a single Tesla K80 GPU. The initial learning rate is $0.001$, and the Adam parameters $\beta_1$ and $\beta_2$ are $0.9$ and $0.999$, respectively. We employed early stopping as a regularization strategy. Models were trained in mini-batches with a batch size of $200$, and the maximum utterance length is $50$. We set the maximum context length (i.e., the number of utterances) to $10$, because the performance of the models does not improve on contexts longer than 10 (details are shown in Section \ref{analysis}). We padded with zeros if the number of utterances in a context is less than $10$; otherwise, we kept the last $10$ utterances. \vspace{-2mm} \subsection{Evaluation Results} Table \ref{exp:response} shows the evaluation results on the two data sets. Our models outperform the baselines by large margins on all metrics on both data sets, with the improvements being statistically significant (t-test with $p$-value $\leq 0.01$, except $R_{10}@5$ on the Douban Corpus). Even the state-of-the-art single-turn matching models perform much worse than our models.
The results demonstrate that one cannot neglect utterance relationships and simply perform multi-turn response selection by concatenating utterances together. Our models achieve significant improvements over Multi-View, which justifies our ``matching first'' strategy. DL2R is worse than our models, indicating that utterance reformulation with heuristic rules is not a good method for utilizing context information. $R_n@k$s are low on the Douban Corpus because there are multiple correct candidates for a context (e.g., if there are $3$ correct responses, then the maximum $R_{10}@1$ is $0.33$). SMN$_{dynamic}$ is only slightly better than SMN$_{static}$ and SMN$_{last}$. The reason might be that the GRU can select useful signals from the matching sequence and accumulate them in the final state with its gate mechanism, so the efficacy of an attention mechanism is not obvious for the task at hand. \vspace{-2mm} \subsection{Further Analysis}\label{analysis} \textbf{Visualization}: we visualize the similarity matrices and the gates of the GRU in layer two using an example from the Ubuntu corpus to further clarify how our model identifies important information in the context and how it selects important matching vectors with the gate mechanism of the GRU, as described in Section \ref{multi-channel} and Section \ref{match_acum}. The example is \emph{ $\{$$u_1$: how can unzip many rar ( $\_number\_$ for example ) files at once; $u_2$: sure you can do that in bash; $u_3$: okay how? $u_4$: are the files all in the same directory? $u_5$: yes they all are; $r$: then the command glebihan should extract them all from/to that directory$\}$}. It is from the test set, and our model successfully ranked the correct response at the top position. Due to space limitations, we only visualize $\mathbf{M}_1$, $\mathbf{M}_2$ and the update gate (i.e. $z$) in Figure \ref{fig:compareall}. We can see that in $u_1$ important words including ``unzip'', ``rar'', and ``files'' are recognized and carried into matching by ``command'', ``extract'', and ``directory'' in $r$, while $u_3$ is almost useless and thus little information is extracted from it. $u_1$ is crucial to response selection, and nearly all the information from $u_1$ and $r$ flows into the hidden state of the GRU, while the other utterances are less informative and their corresponding gates are almost ``closed'' to keep the information from $u_1$ and $r$ until the final state. \textbf{Model ablation}: we investigate the effect of different parts of SMN by removing them one by one from SMN$_{last}$, as shown in Table \ref{exp:discuss}. First, replacing the multi-channel ``2D'' matching with a neural tensor network (NTN) \cite{socher2013reasoning} (denoted as Replace$_M$) makes the performance drop dramatically. This is because NTN only matches a pair using an utterance vector and a response vector, and thus loses important matching information in the pair. Together with the visualization, we can conclude that ``2D'' matching plays a key role in the ``matching first'' strategy, as it captures the important matching information in each pair with minimal loss. Second, the performance drops slightly when replacing the GRU for matching accumulation with a multi-layer perceptron (denoted as Replace$_A$). This indicates that utterance relationships are useful.
Finally, we kept only one channel in matching and found that $\mathbf{M}_2$ is slightly more powerful than $\mathbf{M}_1$, and that we achieve the best results with both of them (except on $R_{10}@5$ on the Douban Corpus). \textbf{Performance across context length}: we study how our model (SMN$_{last}$) performs across different context lengths. Figure \ref{length} compares MAP in different length intervals on the Douban corpus. Our model consistently performs better than the baselines, and as contexts become longer, the gap becomes larger. The results demonstrate that our model can capture the dependencies, especially long-range dependencies, among utterances in contexts well. \textbf{Maximum context length}: we investigate the influence of the maximum context length for SMN. Figure \ref{fig:max_smn_length} shows the performance of SMN on the Ubuntu Corpus and the Douban Corpus with respect to the maximum context length. From Figure \ref{fig:max_smn_length}, we find that performance improves significantly as the maximum context length grows up to 5, and becomes stable after the context length reaches 10. This indicates that context information is important for multi-turn response selection, and that we can set the maximum context length to 10 to balance effectiveness and efficiency. \textbf{Error analysis}: although SMN outperforms the baseline methods on the two data sets, there are still several problems that cannot be handled perfectly. (1) Logical consistency. SMN models the context and response on the semantic level, but pays little attention to logical consistency. This leads to several bad cases (DSATs) in the Douban Corpus. For example, given a context \emph{\{a: Does anyone know Newton jogging shoes? b: 100 RMB on Taobao. a: I know that. I do not want to buy it because that is a fake which is made in Qingdao, b: Is it the only reason you do not want to buy it? \}}, SMN gives a large score to the response \emph{\{ It is not a fake. I just worry about the date of manufacture\}}. The response is logically inconsistent with the context, as it claims that the jogging shoes are not fake. In the future, we shall explore the logical consistency problem in retrieval-based chatbots. (2) No correct candidates after retrieval. In the experiment, we prepared 1000 contexts for testing, but only 667 contexts have correct candidates after candidate response retrieval. This indicates that there is still room for the candidate retrieval component to improve, and that merely expanding the input message with several keywords from the context may not be an ideal approach for candidate retrieval. In the future, we will consider more advanced methods for retrieving candidates. \section{Conclusion and Future Work} We present a new context-based model for multi-turn response selection in retrieval-based chatbots. Experimental results on public data sets show that the model can significantly outperform state-of-the-art methods. In addition, we release the first human-labeled multi-turn response selection data set to the research community. In the future, we shall study how to model the logical consistency of responses and improve candidate retrieval. \section{Acknowledgment} We appreciate the valuable comments provided by the anonymous reviewers and our discussions with Zhao Yan.
This work was supported by the National Natural Science Foundation of China (Grant Nos. 61672081, U1636211, 61370126), the Beijing Advanced Innovation Center for Imaging Technology (No. BAICIT-2016001), the National High Technology Research and Development Program of China (No. 2015AA016004), and the Fund of the State Key Laboratory of Software Development Environment (No. SKLSDE-2015ZX-16). \bibliographystyle{acl_natbib} \end{document}
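To make the matching pipeline of Sections \ref{multi-channel} and \ref{match_acum} easier to follow, the sketch below shows how a matching vector is obtained from the similarity matrices of one utterance--response pair and how a GRU accumulates these vectors into a final matching score (the SMN$_{last}$ variant). This is a minimal NumPy illustration rather than the Theano implementation used in the experiments: it assumes $\mathbf{M}_1$ and $\mathbf{M}_2$ have already been computed, and all function names, shapes, and parameter dictionaries are illustrative.
\begin{verbatim}
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def conv2d_valid(maps, filters, bias):
    # maps: (C, H, W) stacked similarity matrices (e.g. M1 and M2);
    # filters: (F, C, kh, kw); bias: (F,).  "Valid" 2D convolution + ReLU.
    C, H, W = maps.shape
    F, _, kh, kw = filters.shape
    out = np.zeros((F, H - kh + 1, W - kw + 1))
    for f in range(F):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[f, i, j] = np.sum(filters[f] * maps[:, i:i+kh, j:j+kw]) + bias[f]
    return relu(out)

def maxpool2d(maps, ph, pw):
    # Non-overlapping 2D max pooling over each feature map.
    F, H, W = maps.shape
    H2, W2 = H // ph, W // pw
    blocks = maps[:, :H2 * ph, :W2 * pw].reshape(F, H2, ph, W2, pw)
    return blocks.max(axis=(2, 4))

def matching_vector(maps, filters, bias, W_lin, b_lin, ph=3, pw=3):
    # Convolution + pooling + flattening + linear map -> matching vector v.
    pooled = maxpool2d(conv2d_valid(maps, filters, bias), ph, pw)
    return W_lin @ pooled.reshape(-1) + b_lin

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(v, h, P):
    # One standard GRU step; P holds the weight matrices and biases.
    z = sigmoid(P["Wz"] @ v + P["Uz"] @ h + P["bz"])            # update gate
    r = sigmoid(P["Wr"] @ v + P["Ur"] @ h + P["br"])            # reset gate
    h_tilde = np.tanh(P["Wh"] @ v + P["Uh"] @ (r * h) + P["bh"])
    return (1.0 - z) * h + z * h_tilde

def smn_last_score(matching_vectors, P, W2, b2):
    # Accumulate the n matching vectors with the GRU and score the
    # context-response pair from the last hidden state (SMN_last).
    h = np.zeros(P["Uz"].shape[0])
    for v in matching_vectors:
        h = gru_step(v, h, P)
    logits = W2 @ h + b2
    e = np.exp(logits - logits.max())
    return e / e.sum()
\end{verbatim}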
Improved Neural Relation Detection for Knowledge Base Question Answering
1704.06194
Table 4: Entity re-ranking on SimpleQuestions (test set).
[ "[BOLD] Top K", "[BOLD] FreeBase API", "[BOLD] (Golub & He, 2016)" ]
[ [ "1", "40.9", "52.9" ], [ "10", "64.3", "74.0" ], [ "20", "69.7", "77.8" ], [ "50", "75.7", "82.0" ], [ "[BOLD] Top K", "Yin et al. [BOLD] ( 2016 [BOLD] )", "[BOLD] Our Re-Ranker" ], [ "1", "72.7", "[BOLD] 79.0" ], [ "10", "86.9", "[BOLD] 89.5" ], [ "20", "88.4", "[BOLD] 90.9" ], [ "50", "90.2", "[BOLD] 92.5" ] ]
w/o entity re-ranking). Our re-ranker results in a large improvement, especially when the beam sizes are smaller than 10. This indicates another important use of our proposed improved relation detection model: re-ranking for entity linking.
\documentclass[11pt,letterpaper]{article} \usepackage[nohyperref]{acl2017} \usepackage[linesnumbered,ruled]{algorithm2e} \aclfinalcopy % Uncomment this line for the final submission \def\aclpaperid{117} % Enter the acl Paper ID here \newcommand\BibTeX{B{\sc ib}\TeX} \newcommand{\bq}{\mathbf{q}} \newcommand{\br}{\mathbf{r}} \newcommand{\bh}{\mathbf{h}} \newcommand{\bH}{\mathbf{H}} \newcommand{\bx}{\mathbf{x}} \newcommand{\bb}{\bm{\beta}} \newcommand{\bB}{\mathbf{B}} \newcommand{\bg}{\bm{\gamma}} \newcommand{\bG}{\mathbf{\Gamma}} \newcommand{\removed}[1]{} \title{Improved Neural Relation Detection for Knowledge Base Question Answering} \author{Mo Yu$^{\dagger}$\quad Wenpeng Yin$^{\star}$\quad Kazi Saidul Hasan$^{\ddagger}$\quad Cicero dos Santos$^{\dagger}$\\ {\bf Bing Xiang}$^{\ddagger}$\quad {\bf Bowen Zhou}$^{\dagger}$\\ {\tt $^{\dagger}$AI Foundations, IBM Research, USA}\\ {\tt $^{\star}$Center for Information and Language Processing, LMU Munich}\\ {\tt $^{\ddagger}$IBM Watson, USA}\\ {\tt \small \{yum,kshasan,cicerons,bingxia,zhou\}@us.ibm.com, wenpeng@cis.lmu.de} } \date{} \begin{document} \maketitle \begin{abstract} Relation detection is a core component of many NLP applications including Knowledge Base Question Answering (KBQA). In this paper, we propose a hierarchical recurrent neural network enhanced by residual learning which detects KB relations given an input question. Our method uses deep residual bidirectional LSTMs to compare questions and relation names via different levels of abstraction. Additionally, we propose a simple KBQA system that integrates entity linking and our proposed relation detector to make the two components enhance each other. Our experimental results show that our approach not only achieves outstanding relation detection performance, but more importantly, it helps our KBQA system achieve state-of-the-art accuracy for both single-relation (SimpleQuestions) and multi-relation (WebQSP) QA benchmarks. \end{abstract} \section{Introduction} \label{sec:intro} Knowledge Base Question Answering (KBQA) systems answer questions by obtaining information from KB tuples \cite{berant2013semantic,yao2014freebase,bordes2015large,bast2015more,yih2015semantic,xu2016enhancing}. For an input question, these systems typically generate a KB query, which can be executed to retrieve the answers from a KB. Figure \ref{fig:example} illustrates the process used to parse two sample questions in a KBQA system: (a) a single-relation question, which can be answered with a single $<$\emph{head-entity, relation, tail-entity}$>$ KB tuple \cite{fader2013paraphrase,yih2014semantic,bordes2015large}; and (b) a more complex case, where some constraints need to be handled for multiple entities in the question. The KBQA system in the figure performs two key tasks: (1) \emph{entity linking}, which links $n$-grams in questions to KB entities, and (2) \emph{relation detection}, which identifies the KB relation(s) a question refers to. The main focus of this work is to improve the \emph{relation detection} subtask and further explore how it can contribute to the KBQA system. Although general relation detection\footnote{In the information extraction field such tasks are usually called \emph{relation extraction} or \emph{relation classification}.} methods are well studied in the NLP community, such studies usually do not take the end task of KBQA into consideration. As a result, there is a significant gap between general relation detection studies and KB-specific relation detection. 
First, in most general relation detection tasks, the number of target relations is limited, normally smaller than 100. In contrast, in KBQA even a small KB, like Freebase2M \cite{bordes2015large}, contains more than 6,000 relation types. Second, relation detection for KBQA often becomes a zero-shot learning task, since some test instances may have unseen relations in the training data. For example, the SimpleQuestions \cite{bordes2015large} data set has 14\% of the gold test relations not observed in gold training tuples. Third, as shown in Figure \ref{fig:example}(b), for some KBQA tasks like WebQuestions \cite{berant2013semantic}, we need to predict a chain of relations instead of a single relation. This increases the number of target relation types and the sizes of candidate relation pools, further increasing the difficulty of KB relation detection. For these reasons, KB relation detection is significantly more challenging than general relation detection tasks. This paper improves KB relation detection to cope with the problems mentioned above. First, in order to deal with unseen relations, we propose to break the relation names into word sequences for question-relation matching. Second, noticing that original relation names can sometimes help to match longer question contexts, we propose to build both relation-level and word-level relation representations. Third, we use deep bidirectional LSTMs (\emph{BiLSTM}s) to learn different levels of question representations in order to match the different levels of relation information. Finally, we propose a residual learning method for sequence matching, which makes the model training easier and results in more abstract (deeper) question representations, thus improving hierarchical matching. In order to assess how the proposed \emph{improved relation detection} could benefit the KBQA end task, we also propose a simple KBQA implementation composed of \emph{two-step relation detection}. Given an input question and a set of candidate entities retrieved by an entity linker based on the question, our proposed relation detection model plays a key role in the KBQA process: (1) Re-ranking the entity candidates according to whether they connect to high-confidence relations detected from the \emph{raw question text} by the relation detection model. This step is important for dealing with the ambiguities normally present in entity linking results. (2) Finding the core relation (chain) for each \emph{topic entity}\footnote{Following \newcite{yih2015semantic}, here \emph{topic entity} refers to the root of the (directed) query tree; and \emph{core-chain} is the directed path of relation from root to the answer node.} selected from a much smaller candidate entity set after re-ranking. The above steps are followed by an optional constraint detection step, used when the question cannot be answered by a single relation (e.g., multiple entities in the question). Finally, the highest-scoring query from the above steps is used to query the KB for answers. Our main contributions include: (i) an improved relation detection model based on hierarchical matching between questions and relations with residual learning; (ii) we demonstrate that the improved relation detector enables our simple KBQA system to achieve state-of-the-art results on both single-relation and multi-relation KBQA tasks. \section{Related Work} \label{sec:relatedwork} \paragraph{Relation Extraction} Relation extraction (RE) is an important sub-field of information extraction.
General research in this field usually works on a (small) pre-defined relation set, where given a text paragraph and two target entities, the goal is to determine whether the text indicates any type of relation between the entities. As a result, RE is usually formulated as a \textbf{classification task}. Traditional RE methods rely on a large number of hand-crafted features \cite{zhou_exploring_2005,rink-harabagiu:2010:SemEval,sun_semi-supervised_2011}. Recent research benefits greatly from the advancement of deep learning: from word embeddings \cite{nguyen_employing_2014,gormley-yu-dredze:2015:EMNLP} to deep networks like CNNs and LSTMs \cite{zeng-EtAl:2014:Coling,santos2015classifying,vu-EtAl:2016:N16-1} and attention models \cite{zhou-EtAl:2016:P16-2,wang-EtAl:2016:P16-12}. The above research assumes there is a fixed (closed) set of relation types, thus no zero-shot learning capability is required. The number of relations is usually not large: the widely used ACE2005 has 11/32 coarse/fine-grained relations; SemEval2010 Task8 has 19 relations; TAC-KBP2015 has 74 relations although it considers open-domain Wikipedia relations. All are much smaller than the thousands of relations in KBQA. As a result, little work in this field focuses on dealing with a large number of relations or unseen relations. \newcite{yu-EtAl:2016:N16-12} proposed to use relation embeddings in a low-rank tensor method. However, their relation embeddings are still trained in a supervised way and the number of relations is not large in their experiments. \paragraph{Relation Detection in KBQA Systems} Relation detection for KBQA also started with feature-rich approaches \cite{yao2014information,bast2015more} and has moved towards deep networks \cite{yih2015semantic,xu2016enhancing,dai-li-xu:2016:P16-1} and attention models \cite{yin2016simple,golub2016character}. Much of the above relation detection research can naturally support large relation vocabularies and open relation sets (especially for QA with OpenIE KBs like ParaLex \cite{fader2013paraphrase}), in order to fit the goal of open-domain question answering. Different KBQA data sets place different levels of requirements on this open-domain capacity. For example, most of the gold test relations in WebQuestions can be observed during training, thus some prior work on this task adopted the closed-domain assumption as in general RE research. For data sets like SimpleQuestions and ParaLex, however, the capacity to support large relation sets and unseen relations becomes more necessary. To this end, there are two main solutions: (1) use pre-trained relation embeddings (e.g. from TransE \cite{bordes2013translating}), like \cite{dai-li-xu:2016:P16-1}; (2) factorize the relation names into sequences and formulate relation detection as a \textbf{sequence matching and ranking} task. Such factorization works because relation names usually comprise meaningful word sequences. For example, \newcite{yin2016simple} split relations into word sequences for single-relation detection. \newcite{liang2016neural} also achieve good performance on WebQSP with word-level relation representations in an end-to-end neural programmer model. \newcite{yih2015semantic} use character tri-grams as inputs on both the question and relation sides. \newcite{golub2016character} propose a generative framework for single-relation KBQA which predicts the relation with a character-level sequence-to-sequence model.
Another difference between relation detection in KBQA and general RE is that general RE research assumes that the two argument entities are both available. Thus it usually benefits from features \cite{nguyen_employing_2014,gormley-yu-dredze:2015:EMNLP} or attention mechanisms \cite{wang-EtAl:2016:P16-12} based on the entity information (e.g. entity types or entity embeddings). For relation detection in KBQA, such information is mostly missing because: (1) one question usually contains a single argument (the topic entity) and (2) one KB entity could have multiple types (type vocabulary size larger than 1,500). This makes KB entity typing itself a difficult problem, so no previous work used entity information in the relation detection model.\footnote{Such entity information has been used in KBQA systems as features for the final answer re-rankers.} \section{Background: Different Granularity in KB Relations} Previous research \cite{yih2015semantic,yin2016simple} formulates KB relation detection as a sequence matching problem. However, while the questions are natural word sequences, how to represent relations as sequences remains a challenging problem. Here we give an overview of two types of relation sequence representations commonly used in previous work. \vspace{0.4em} \noindent \textbf{(1) Relation Name as a Single Token} (\emph{relation-level}). In this case, each relation name is treated as a unique token. The problem with this approach is that it suffers from low relation coverage due to the limited amount of training data, and thus cannot generalize well to the large number of open-domain relations. For example, in Figure \ref{fig:example}, when treating relation names as single tokens, it will be difficult to match the questions to the relation names ``\emph{episodes\_written}'' and ``\emph{starring\_roles}'' if these names do not appear in training data -- their relation embeddings $\bh^r$s will be random vectors and thus not comparable to the question embeddings $\bh^q$s. \vspace{0.4em} \noindent \textbf{(2) Relation as Word Sequence} (\emph{word-level}). In this case, the relation is treated as a sequence of words from the tokenized relation name. It has better generalization, but suffers from the lack of global information from the original relation names. For example, in Figure \ref{fig:example}(b), when doing only word-level matching, it is difficult to rank the target relation ``\emph{starring\_roles}'' higher than the incorrect relation ``\emph{plays\_produced}''. This is because the incorrect relation contains the word ``\emph{plays}'', which is more similar to the question (which contains the word ``\emph{play}'') in the embedding space. On the other hand, if the target relation co-occurs with questions related to ``\emph{tv appearance}'' in training, by treating the whole relation as a token (i.e. a relation id), we could better learn the correspondence between this token and phrases like ``\emph{tv show}'' and ``\emph{play on}''. The two types of relation representation contain different levels of abstraction. As shown in Table \ref{tab:re_example}, the word-level representation focuses more on local information (words and short phrases), while the relation-level representation focuses more on global information (long phrases and skip-grams) but suffers from data sparsity.
Since both levels of granularity have their own pros and cons, we propose a hierarchical matching approach for KB relation detection: for a candidate relation, our approach matches the input question to both word-level and relation-level representations to get the final ranking score. Section \ref{sec:re_method} gives the details of our proposed approach. \section{Improved KB Relation Detection} \label{sec:re_method} This section describes our hierarchical sequence matching with residual learning approach for relation detection. In order to match the question to different aspects of a relation (with different abstraction levels), we address the following three problems in learning question/relation representations. \subsection{Relation Representations from Different Granularity} We provide our model with both types of relation representation: word-level and relation-level. Therefore, the input relation becomes $\br=\{r^{word}_1,\cdots,r^{word}_{M_1}\} \cup \{r^{rel}_1,\cdots,r^{rel}_{M_2}\}$, where the first $M_1$ tokens are words (e.g. \emph{\{episode, written\}}), and the last $M_2$ tokens are relation names, e.g., \emph{\{episode\_written\}} or \emph{\{starring\_roles, series\}} (when the target is a chain like in Figure \ref{fig:example}(b)). We transform each token above to its word embedding and then use two BiLSTMs (with shared parameters) to get their hidden representations $[\bB^{word}_{1:M_1}:\bB^{rel}_{1:M_2}]$ (each row vector $\bb_i$ is the concatenation of the forward/backward representations at $i$). We initialize the relation sequence LSTMs with the final state representations of the word sequence, as a back-off for unseen relations. We apply \emph{one} max-pooling on these two sets of vectors and get the final relation representation $\bh^r$. \subsection{Different Abstractions of Questions Representations} From Table \ref{tab:re_example}, we can see that different parts of a relation could match different contexts of the question text. Usually relation names could match longer phrases in the question and relation words could match short phrases. Yet different words might match phrases of different lengths. As a result, we hope the question representations could also comprise vectors that summarize various lengths of phrase information (different levels of abstraction), in order to match relation representations of different granularity. We deal with this problem by applying deep BiLSTMs on questions. The first layer of the BiLSTM works on the word embeddings of the question words $\bq=\{q_1,\cdots,q_N\}$ and gets hidden representations $\bG^{(1)}_{1:N}=[\bg^{(1)}_1;\cdots;\bg^{(1)}_N]$. The second-layer BiLSTM works on $\bG^{(1)}_{1:N}$ to get the second set of hidden representations $\bG^{(2)}_{1:N}$. Since the second BiLSTM starts with the hidden vectors from the first layer, intuitively it could learn more general and abstract information compared to the first layer. Note that the first (second) layer of question representations does not necessarily correspond to the word (relation)-level relation representations; instead, either layer of question representations could potentially match either level of relation representations. This raises the difficulty of matching between different levels of relation/question representations; the following section gives our proposal to deal with this problem.
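To make the two encoders above concrete, the following is a minimal PyTorch-style sketch of the relation representation (a shared BiLSTM over word-level and relation-level tokens with a single max-pooling) and of the two layers of question representations. It is not the original implementation; the class name, dimensions, batching, and the omission of padding/masking are simplifying assumptions.
\begin{verbatim}
import torch
import torch.nn as nn

class RelationQuestionEncoders(nn.Module):
    def __init__(self, n_words, n_rel_names, emb_dim=300, hidden=200):
        super().__init__()
        self.word_emb = nn.Embedding(n_words, emb_dim)     # pre-trained in the paper
        self.rel_emb = nn.Embedding(n_rel_names, emb_dim)  # randomly initialised
        # One BiLSTM shared by the word-level and relation-level token sequences.
        self.rel_lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        # Two stacked BiLSTMs over the question words (two abstraction levels).
        self.q_lstm1 = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.q_lstm2 = nn.LSTM(2 * hidden, hidden, batch_first=True, bidirectional=True)

    def encode_relation(self, rel_word_ids, rel_name_ids):
        # Encode the M1 word tokens, then the M2 relation-name tokens whose LSTM
        # is initialised with the word sequence's final states (back-off for
        # unseen relations); a single max-pooling over both gives h^r.
        w_out, state = self.rel_lstm(self.word_emb(rel_word_ids))
        r_out, _ = self.rel_lstm(self.rel_emb(rel_name_ids), state)
        return torch.cat([w_out, r_out], dim=1).max(dim=1).values   # (batch, 2*hidden)

    def encode_question(self, q_word_ids):
        g1, _ = self.q_lstm1(self.word_emb(q_word_ids))    # G^(1): (batch, N, 2*hidden)
        g2, _ = self.q_lstm2(g1)                           # G^(2): same shape
        return g1, g2
\end{verbatim}
Either level of question representation ($\bG^{(1)}$ or $\bG^{(2)}$) can then be compared with $\bh^r$, which is exactly the matching problem addressed in the next subsection.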
\subsection{Hierarchical Matching between Relation and Question} \label{ssec:hier_matching} Now we have question contexts of different lengths encoded in $\bG^{(1)}_{1:N}$ and $\bG^{(2)}_{1:N}$. Unlike the standard usage of deep BiLSTMs that employs the representations in the final layer for prediction, here we expect that the two layers of question representations can be complementary to each other and that both should be compared to the relation representation space (\emph{Hierarchical Matching}). This is important for our task since each relation token can correspond to phrases of different lengths, mainly because of syntactic variations. For example, in Table \ref{tab:re_example}, the relation word \emph{written} could be matched to either the same single word in the question or a much longer phrase \emph{be the writer of}. We could perform the above hierarchical matching by computing the similarity between each layer of $\bG$ and $\bh^r$ separately and taking the (weighted) sum of the two scores. However, this does not give significant improvement (see Table \ref{tab:rel}). Our analysis in Section \ref{ssec:exp_re} shows that this naive method suffers from training difficulty, evidenced by the fact that the converged training loss of this model is much higher than that of a single-layer baseline model. This is mainly because (1) deep BiLSTMs do not guarantee that the two levels of question hidden representations are comparable, so the training usually falls into local optima where one layer has good matching scores and the other always has a weight close to 0; and (2) the training of deeper architectures itself is more difficult. To overcome the above difficulties, we adopt the idea from Residual Networks \cite{he2016deep} for hierarchical matching by adding shortcut connections between the two BiLSTM layers. We propose two ways of such \emph{Hierarchical Residual Matching}: (1) Connecting each $\bg^{(1)}_i$ and $\bg^{(2)}_i$, resulting in $\bg^{'}_i=\bg^{(1)}_i + \bg^{(2)}_i$ for each position $i$. Then the final question representation $\bh^q$ becomes a max-pooling over all $\bg^{'}_i$s, $1 \leq i \leq N$. (2) Applying max-pooling on $\bG^{(1)}_{1:N}$ and $\bG^{(2)}_{1:N}$ to get $\bh^{(1)}_{max}$ and $\bh^{(2)}_{max}$, respectively, then setting $\bh^q=\bh^{(1)}_{max}+\bh^{(2)}_{max}$. Finally we compute the matching score of $\br$ given $\bq$ as $s_{rel}(\br;\bq)=cos(\bh^r, \bh^q)$. Intuitively, the proposed method should benefit from hierarchical training since the second layer is fitting the residuals from the first layer of matching, so the two layers of representations are more likely to be complementary to each other. This also ensures the vector spaces of the two layers are comparable and makes the second-layer training easier. During training we adopt a ranking loss to maximize the margin between the gold relation $\br^+$ and other relations $\br^-$ in the candidate pool $R$. { \abovedisplayskip=5pt \belowdisplayskip=5pt \begin{align} l_{\mathrm{rel}} = \max \{0, \gamma - s_{\mathrm{rel}}(\br^+; \bq) + s_{\mathrm{rel}}(\br^-; \bq)\} \nonumber \end{align} } where $\gamma$ is a constant parameter. Fig \ref{fig:re_model} summarizes the above \emph{Hierarchical Residual BiLSTM (\textbf{HR-BiLSTM})} model. \paragraph{Remark:} Another way of performing hierarchical matching consists in relying on an \textbf{attention mechanism}, e.g. \cite{parikh-EtAl:2016:EMNLP2016}, to find the correspondence between different levels of representations. This performs below the HR-BiLSTM (see Table \ref{tab:rel}).
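The residual matching and the ranking loss above can be summarised in a few lines; the sketch below (continuing the PyTorch-style illustration, with an assumed margin value) shows variant (2) of the hierarchical residual matching and the hinge loss over a gold/negative relation pair.
\begin{verbatim}
import torch
import torch.nn.functional as F

def hr_match_score(g1, g2, h_r):
    # Variant (2): max-pool each question layer, add the pooled vectors
    # (shortcut connection), and compare with the relation vector by cosine.
    # Variant (1) would instead use: h_q = (g1 + g2).max(dim=1).values
    h_q = g1.max(dim=1).values + g2.max(dim=1).values
    return F.cosine_similarity(h_q, h_r, dim=-1)

def ranking_loss(score_pos, score_neg, gamma=0.1):
    # Hinge ranking loss between the gold relation r+ and a sampled negative r-;
    # gamma = 0.1 is an assumed value for the constant margin parameter.
    return torch.clamp(gamma - score_pos + score_neg, min=0.0).mean()
\end{verbatim}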
\section{KBQA Enhanced by Relation Detection} \label{sec:kbqa_method} This section describes our KBQA pipeline system. We make minimal efforts beyond the training of the relation detection model, making the whole system easy to build. Following previous work \cite{yih2015semantic,xu2016enhancing}, our KBQA system takes an existing entity linker to produce the top-$K$ linked entities, $EL_K(q)$, for a question $q$ (``\emph{initial entity linking}''). % Then we generate the KB queries for $q$ following the four steps illustrated in Algorithm \ref{algo:pipeline}. \begin{algorithm}[htbp] \small { \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \Input{Question $q$, Knowledge Base $KB$, the initial top-$K$ entity candidates $EL_K(q)$ } \Output{Top query tuple $(\hat{e},\hat{r}, \{(c, r_c)\})$} \DontPrintSemicolon \BlankLine \textbf{Entity Re-Ranking} (\emph{first-step relation detection}): Use the \emph{raw question text} as input for a relation detector to score all relations in the KB that are associated to the entities in $EL_K(q)$; use the relation scores to re-rank $EL_K(q)$ and generate a shorter list $EL'_{K'}(q)$ containing the top-$K'$ entity candidates (Section \ref{ssec:ent_reranking})\; \textbf{Relation Detection}: Detect relation(s) using the \emph{reformatted question text} in which the topic entity is replaced by a special token \emph{$<$e$>$} (Section \ref{ssec:rel})\; \textbf{Query Generation}: Combine the scores from step 1 and 2, and select the top pair $(\hat{e},\hat{r})$ (Section \ref{ssec:query_gen})\; \textbf{Constraint Detection} (optional): Compute similarity between $q$ and any neighbor entity $c$ of the entities along $\hat{r}$ (connecting by a relation $r_c$) , add the high scoring $c$ and $r_c$ to the query (Section \ref{ssec:constraint}). \caption{\label{algo:pipeline}{\footnotesize{KBQA with two-step relation detection}}}} \end{algorithm} Compared to previous approaches, the main difference is that we have an additional \emph{entity re-ranking} step after the \emph{initial entity linking}. We have this step because we have observed that entity linking sometimes becomes a bottleneck in KBQA systems. For example, on SimpleQuestions the best reported linker could only get 72.7\% top-1 accuracy on identifying topic entities. This is usually due to the ambiguities of entity names, e.g. in Fig \ref{fig:example}(a), there are \emph{TV writer} and \emph{baseball player} ``\emph{Mike Kelley}'', which is impossible to distinguish with only entity name matching. Having observed that different entity candidates usually connect to different relations, here we propose to help entity disambiguation in the \emph{initial entity linking} with relations detected in questions. \removed{ Previous efforts on KBQA usually generate the KB queries from a question $q$ step-by-step as follows: (1) Entity linking, in which the top-$K$ entity candidates for a question $q$ ($EL_K(q)$) are selected. (2) Relation detection, where a topic entity $e$ is selected, and a relation (or chain of relations) is detected from its corresponding relation set $R_e=\{r(e,\cdot) \in KB\}$. (3) Constraint detection, which tries to apply the rest entities in $EL_K(q) \setminus \{e\}$ as constraints to further filter the answers. As the starting step, the accuracy and coverage of this top-$K$ list is critical. However, we have observed that entity linking sometimes becomes bottleneck of KBQA systems. 
While on WebQSP the best reported linker could get 87.98\% top-1 accuracy on identifying topic entities, on SimpleQuestions this number becomes 72.7\%. Our error analysis shows that such errors are usually due to the ambiguities of entity names. For example in Fig \ref{fig:example}(a), there are \emph{TV writer} and \emph{baseball player} ``\emph{Mike Kelley}'', which is impossible to distinguish with only text matching. To overcome the above difficulty, previous work usually deals with such problem by generating large beams and then relies on hand-crafted features to re-rank the final generated KB queries, e.g., \newcite{golub2016character} used $K=50$ for SimpleQuestions, which slows down the speed of the model. Here we propose an alternative solution to this problem: having observed that different entity candidates usually connect to different relations, we propose to use relations detected in questions to help entity disambiguation in the \emph{initial entity linking}. Concretely, we add an additional component between the steps (1) and (2) above, which is also a relation detection model on question text but is used to re-rank the entity candidates. We call this \emph{relation detection on entity set}, since it is detecting relation for a set of entity candidates instead of for single specific entities. } Sections \ref{ssec:ent_reranking} and \ref{ssec:rel} elaborate how our relation detection help to re-rank entities in the initial entity linking, and then those re-ranked entities enable more accurate relation detection. The KBQA end task, as a result, benefits from this process. \removed { \begin{algorithm*}[htbp] { \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \Input{Question $q$, Knowledge Base $KB$, the initial top-$K$ entity candidates $EL_K(q)$ } \Output{Top query tuple $(\hat{e},\hat{r}, \{(c, r_c)\})$} \textbf{Entity Re-Ranking}: Use the \emph{raw question text} as input for a relation detector to score all relations in the KB that are associated to the entities in $EL_K(q)$; use the relation scores to re-rank $EL_K(q)$ and generate a shorter list $EL'_{K'}(q)$ containing only the top-$K'$ entity candidates (Section \ref{ssec:ent_reranking})\; \textbf{Relation Detection}: Perform relation detection using the \emph{reformatted question text} in which the topic entity is replaced by an especial token \emph{$<$e$>$} (Sec. \ref{ssec:rel})\; \textbf{Query Generation}: Combine the scores from step 1 and 2, and select the top pair $(\hat{e},\hat{r})$ (Sec. \ref{ssec:rel})\; \textbf{Constraint Detection} (optional): Compute similarity between any $n$-gram in $q$ and any neighbor node $c$ (connected by relation $r_c$) of each entity in the above query, add the high scoring $c$ and $r_c$ to the query (Sec. \ref{ssec:constraint}). \label{algo:pipeline} \caption{\scriptsize{KBQA with two-step relation detection}}} \end{algorithm*} } \subsection{Entity Re-Ranking} \label{ssec:ent_reranking} In this step, we use the \emph{raw question text} as input for a relation detector to score all relations in the KB with connections to at least one of the entity candidates in $EL_K(q)$. We call this step \emph{relation detection on entity set} since it does not work on a single topic entity as the usual settings. We use the HR-BiLSTM as described in Sec. \ref{sec:re_method}. For each question $q$, after generating a score $s_{rel}(r;q)$ for each relation using HR-BiLSTM, we use the top $l$ best scoring relations ($R^{l}_q$) to re-rank the original entity candidates. 
Concretely, for each entity $e$ and its associated relations $R_e$, given the original entity linker score $s_{linker}$ and the score of the most confident relation $r\in R_q^{l} \cap R_e$, we combine these two scores with a weighted sum to re-rank the entities: \abovedisplayskip=3pt \belowdisplayskip=3pt \begin{align} s_{\mathrm{rerank}}(e;q) =& \alpha \cdot s_{\mathrm{linker}}(e;q) \nonumber \\ + & (1-\alpha) \cdot\max_{r \in R_q^{l} \cap R_e} s_{\mathrm{rel}}(r;q).\nonumber \end{align} Finally, we select the top $K' < K$ entities according to the score $s_{rerank}$ to form the re-ranked list $EL_{K'}^{'}(q)$. We use the same example in Fig \ref{fig:example}(a) to illustrate the idea. Given the input question in the example, a relation detector is very likely to assign high scores to relations such as ``\emph{episodes\_written}'', ``\emph{author\_of}'' and ``\emph{profession}''. Then, according to the connections of entity candidates in the KB, we find that the TV writer ``\emph{Mike Kelley}'' will be scored higher than the baseball player ``\emph{Mike Kelley}'', because the former has the relations ``\emph{episodes\_written}'' and ``\emph{profession}''. This method can be viewed as exploiting entity-relation collocation for entity linking. \subsection{Relation Detection} \label{ssec:rel} In this step, for each candidate entity $e \in EL_{K'}^{'}(q)$, we use the question text as the input to a relation detector to score all the relations $r \in R_e$ that are associated with the entity $e$ in the KB.\footnote{{Note that the number of entities and the number of relation candidates will be much smaller than those in the previous step.}} Because we have a single topic entity input in this step, we do the following question reformatting: we replace the candidate $e$'s entity mention in $q$ with a token ``\emph{$<$e$>$}''. This helps the model better distinguish the relative position of each word compared to the entity. We use the HR-BiLSTM model to predict the score of each relation $r \in R_e$: $s_{rel} (r;e,q)$. \subsection{Query Generation} \label{ssec:query_gen} Finally, the system outputs the $<$entity, relation (or core-chain)$>$ pair $(\hat{e}, \hat{r})$ according to: {{ \abovedisplayskip=3pt \belowdisplayskip=3pt \begin{align} s(\hat{e}, \hat{r}; q) =& \max_{e \in EL_{K'}^{'}(q), r \in R_e} \left ( \beta \cdot s_{\mathrm{rerank}}(e;q) \right. \nonumber\\ &\left.+ (1-\beta) \cdot s_{\mathrm{rel}} (r;e,q) \right), \nonumber \end{align} }} where $\beta$ is a hyperparameter to be tuned. \subsection{Constraint Detection} \label{ssec:constraint} Similar to \cite{yih2015semantic}, we adopt an additional constraint detection step based on text matching. Our method can be viewed as entity linking on a KB sub-graph. It contains two steps: (1) \textbf{Sub-graph generation}: given the top-scoring query generated by the previous three steps\footnote{{Starting with the top-1 query suffers more from error propagation. However we still achieve state-of-the-art on WebQSP in Sec.\ref{sec:exp}, showing the advantage of our relation detection model. We leave beam search and feature extraction over the beam for final answer re-ranking, as in previous research, to future work.}}, for each node $v$ (the answer node or a CVT node, as in Figure \ref{fig:example}(b)), we collect all the nodes $c$ connected to $v$ by any relation $r_c$, and generate a sub-graph associated with the original query.
(2) \textbf{Entity-linking on sub-graph nodes}: we compute a matching score between each $n$-gram in the input question (without overlapping the topic entity) and the entity name of $c$ (except for the node in the original query) by taking into account the maximum overlapping sequence of characters between them (see Appendix A for details and Appendix B for special rules dealing with date/answer type constraints). If the matching score is larger than a threshold $\theta$ (tuned on the training set), we will add the constraint entity $c$ (and $r_c$) to the query by attaching it to the corresponding node $v$ on the core-chain. \section{Experiments} \label{sec:exp} \vspace{-0.1em} \subsection{Task Introduction \& Settings} We use the SimpleQuestions \cite{bordes2015large} and WebQSP \cite{yih-EtAl:2016:P16-2} datasets. Each question in these datasets is labeled with the gold semantic parse. Hence we can directly evaluate relation detection performance independently, as well as evaluate on the KBQA end task. \noindent \textbf{SimpleQuestions (SQ): } It is a single-relation KBQA task. The KB we use consists of a Freebase subset with 2M entities (FB2M) \cite{bordes2015large}, in order to compare with previous research. \newcite{yin2016simple} also evaluated their relation extractor on this data set and released their proposed question-relation pairs, so we run our relation detection model on their data set. For the KBQA evaluation, we also start with their entity linking results\footnote{The two resources have been downloaded from \scriptsize{\url{https://github.com/Gorov/SimpleQuestions-EntityLinking}}}. Therefore, our results can be compared with their reported results on both tasks. \noindent \textbf{WebQSP (WQ): } A multi-relation KBQA task. We use the entire Freebase KB for evaluation purposes. Following \newcite{yih-EtAl:2016:P16-2}, we use S-MART \cite{yang-chang:2015:ACL-IJCNLP} entity-linking outputs.\footnote{{\url{https://github.com/scottyih/STAGG}}} In order to evaluate the relation detection models, we create a new relation detection task from the WebQSP data set.\footnote{{The dataset is available at \scriptsize{\url{https://github.com/Gorov/KBQA_RE_data}}.}} For each question and its labeled semantic parse: (1) we first select the topic entity from the parse; and then (2) select all the relations and relation chains (length $\leq$ 2) connected to the topic entity, and set the core-chain labeled in the parse as the positive label and all the others as negative examples. We tune the following hyper-parameters on development sets: (1) the size of hidden states for LSTMs (\{50, 100, 200, 400\})\footnote{{For CNNs we double the size for fair comparison.}}; (2) learning rate (\{0.1, 0.5, 1.0, 2.0\}); (3) whether the shortcut connections are between hidden states or between max-pooling results (see Section \ref{ssec:hier_matching}); and (4) the number of training epochs. For both the relation detection experiments and the second-step relation detection in KBQA, we apply \emph{entity replacement} first (see Section \ref{ssec:rel} and Table \ref{tab:re_example}). All word vectors are initialized with 300-$d$ pretrained word embeddings \cite{mikolov2013distributed}. The embeddings of relation names are randomly initialized, since existing pre-trained relation embeddings (e.g. TransE) usually support only limited sets of relation names. We leave the usage of pre-trained relation embeddings to future work.
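Before turning to the results, the score combinations used in the entity re-ranking and query generation steps (Sections \ref{ssec:ent_reranking} and \ref{ssec:query_gen}) can be summarised with the short sketch below. The weights $\alpha$ and $\beta$ are tuned hyperparameters; the default values and the fallback for entities with no overlapping detected relation are illustrative assumptions.
\begin{verbatim}
def rerank_score(linker_score, rel_scores, alpha=0.6):
    # s_rerank(e;q): combine the entity-linker score with the best
    # relation-detection score among the top-l relations connected to e.
    # Using 0.0 when no detected relation connects to e is an assumption.
    best_rel = max(rel_scores) if rel_scores else 0.0
    return alpha * linker_score + (1.0 - alpha) * best_rel

def query_score(s_rerank, s_rel, beta=0.5):
    # s(e,r;q): combine the re-ranked entity score with the second-step
    # relation score for a candidate <entity, relation> pair.
    return beta * s_rerank + (1.0 - beta) * s_rel

def best_query(candidates, beta=0.5):
    # candidates: iterable of (entity, relation, s_rerank, s_rel) tuples.
    # Returns the top <entity, relation (or core-chain)> pair.
    e, r, _, _ = max(candidates, key=lambda c: query_score(c[2], c[3], beta))
    return e, r
\end{verbatim}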
\vspace{-0.2em} \subsection{Relation Detection Results} \vspace{-0.1em} \label{ssec:exp_re} Table \ref{tab:rel} shows the results on the two relation detection tasks. The AMPCNN result is from \cite{yin2016simple}, which yielded state-of-the-art scores by outperforming several attention-based methods. We re-implemented the BiCNN model from \cite{yih2015semantic}, where both questions and relations are represented with the word hash trick on character tri-grams. The baseline BiLSTM with relation word sequence appears to be the best baseline on WebQSP and is close to the previous best result of AMPCNN on SimpleQuestions. Our proposed HR-BiLSTM outperformed the best baselines on both tasks by margins of 2-3\% (p $<$ 0.001 and 0.01 compared to the best baseline \emph{BiLSTM w/ words} on SQ and WQ respectively). Note that using only relation names instead of words results in a weaker baseline BiLSTM model. The model yields a significant performance drop on SimpleQuestions (91.2\% to 88.9\%). However, the drop is much smaller on WebQSP, which suggests that unseen relations have a much bigger impact on SimpleQuestions. \paragraph{Ablation Test:} The bottom of Table \ref{tab:rel} shows ablation results for the proposed HR-BiLSTM. First, hierarchical matching between questions and both relation names and relation words yields improvement on both datasets, especially for SimpleQuestions (93.3\% vs. 91.2/88.8\%). Second, residual learning helps hierarchical matching compared to the weighted-sum and attention-based baselines (see Section \ref{ssec:hier_matching}). For the attention-based baseline, we tried the model from \cite{parikh-EtAl:2016:EMNLP2016} and its one-way variations, where the one-way model gives better results\footnote{{We also tried to apply the same attention method on deep BiLSTM with residual connections, but it does not lead to better results compared to HR-BiLSTM. We hypothesize that the idea of hierarchical matching with attention mechanism may work better for long sequences, and the new advanced attention mechanisms \cite{wang-jiang:2016:N16-1,wang2017bilateral} might help hierarchical matching. We leave the above directions to future work.}}. Note that residual learning significantly helps on WebQSP (80.65\% to 82.53\%), while it does not help as much on SimpleQuestions. On SimpleQuestions, even removing the deep layers only causes a small drop in performance. WebQSP benefits more from the residual and deeper architecture, possibly because in this dataset it is more important to handle a larger scope of context matching. Finally, on WebQSP, replacing the BiLSTM with a CNN in our hierarchical matching framework results in a large performance drop. Yet on SimpleQuestions the gap is much smaller. We believe this is because the LSTM relation encoder can better learn the composition of chains of relations in WebQSP, as it is better at dealing with longer dependencies. \paragraph{Analysis} Next, we present empirical evidence showing why our HR-BiLSTM model achieves the best scores. We use WebQSP for the analysis. First, we hypothesize that \emph{training of the weighted-sum model usually falls into local optima, since deep BiLSTMs do not guarantee that the two levels of question hidden representations are comparable}. This is evidenced by the fact that during training one layer usually gets a weight close to 0 and is thus ignored. For example, one run gives us weights of -75.39/0.14 for the two layers (we take the exponential for the final weighted sum).
It also gives much lower training accuracy (91.94\%) compared to HR-BiLSTM (95.67\%), suffering from training difficulty. Second, compared to our deep BiLSTM with shortcut connections, we hypothesize that for KB relation detection, \emph{training deep BiLSTMs is more difficult without shortcut connections}. Our experiments suggest that a deeper BiLSTM does not always result in higher training accuracy: in the experiments a two-layer BiLSTM converges to 94.99\%, even lower than the 95.25\% achieved by a single-layer BiLSTM. Since under our setting the two-layer model captures the single-layer model as a special case (so it could potentially fit the training data better), this result suggests that the deep BiLSTM without shortcut connections might suffer more from training difficulty. Finally, we hypothesize that \emph{HR-BiLSTM is more than a combination of two BiLSTMs with residual connections, because it encourages the hierarchical architecture to learn different levels of abstraction}. To verify this, we replace the deep BiLSTM question encoder with two single-layer BiLSTMs (both on words) with shortcut connections between their hidden states. This decreases test accuracy to 76.11\%. It gives similar training accuracy compared to HR-BiLSTM, indicating a more serious over-fitting problem. This shows that the residual and deep structures both contribute to the good performance of HR-BiLSTM. \subsection{KBQA End-Task Results} Table \ref{tab:overall_results} compares our system with two published baselines: (1) STAGG \cite{yih2015semantic}, the state-of-the-art on WebQSP\footnote{{The STAGG score on SQ is from \cite{bao-EtAl:2016:COLING}.}} and (2) AMPCNN \cite{yin2016simple}, the state-of-the-art on SimpleQuestions. Since these two baselines are specially designed/tuned for one particular dataset, they do not generalize well when applied to the other dataset. In order to highlight the effect of different relation detection models on the KBQA end task, we also implemented another baseline that uses our KBQA system but replaces HR-BiLSTM with our implementation of AMPCNN (for SimpleQuestions) or the char-3-gram BiCNN (for WebQSP) relation detector (second block in Table \ref{tab:overall_results}). Compared to the \emph{baseline relation detector} (3rd row of results), our method, which includes an improved relation detector (HR-BiLSTM), improves the KBQA end task by 2-3\% (4th row). Note that, in contrast to previous KBQA systems, our system does not use joint inference or a feature-based re-ranking step; nevertheless it still achieves better or comparable results to the state-of-the-art. The third block of the table details two ablation tests for the proposed components of our KBQA system: (1) Removing the entity re-ranking step significantly decreases the scores. Since the re-ranking step relies on the relation detection models, this shows that our HR-BiLSTM model contributes to the good performance in multiple ways. Appendix C gives the detailed performance of the re-ranking step. (2) In contrast to the conclusion in \cite{yih2015semantic}, constraint detection is crucial for our system\footnote{Note that another reason is that we are evaluating on accuracy here. When evaluating on F1 the gap will be smaller.}. This is probably because our joint performance on topic entity and core-chain detection is more accurate (77.5\% top-1 accuracy), leaving a huge potential (77.5\% vs.
58.0\%) for the constraint detection module to improve. Finally, like STAGG, which uses multiple relation detectors (see \newcite{yih2015semantic} for the three models used), we also try to use the top-3 relation detectors from Section \ref{ssec:exp_re}. As shown in the last row of Table \ref{tab:overall_results}, this gives a significant performance boost, resulting in a new state-of-the-art result on SimpleQuestions and a result comparable to the state-of-the-art on WebQSP. \section{Conclusion} KB relation detection is a key step in KBQA and is significantly different from general relation extraction tasks. We propose a novel KB relation detection model, HR-BiLSTM, that performs hierarchical matching between questions and KB relations. Our model outperforms the previous methods on KB relation detection tasks and allows our KBQA system to achieve state-of-the-art results. For future work, we will investigate the integration of our HR-BiLSTM into end-to-end systems. For example, our model could be integrated into the decoder in \cite{liang2016neural} to provide better sequence prediction. We will also investigate new emerging datasets like GraphQuestions \cite{su-EtAl:2016:EMNLP2016} and ComplexQuestions \cite{bao-EtAl:2016:COLING} to cover more characteristics of general QA. \bibliographystyle{acl_natbib} \clearpage \newpage \input{acl2017_appendix} \end{document} \section*{Appendix A: Detailed Feature List for \emph{SimpleLinker}} Given an input question $q$ and an entity name $e$ in the KB, we denote the lengths of the question and the entity name as $\vert q \vert$ and $\vert e \vert$. For a mention $m$ of the entity $e$ which is an $n$-gram in $q$, we compute the longest consecutive common sub-sequence between $m$ and $e$, and denote its length as $\vert m \cap e \vert$. All the lengths above are measured by the number of characters. The features we used in the \emph{SimpleLinker} include: \begin{enumerate} \item The proportions of the length of the overlap between the entity mention and the entity name (in characters) in the entity name, $\frac{\vert m \cap e \vert}{\vert e \vert}$, and in the question, $\frac{\vert m \cap e \vert}{\vert q \vert}$; \item The relative position of the entity mention in the question. We denote the beginning position of $m$ in $q$ as $p_m$ (in characters), then we have the feature $\frac{p_m}{\vert q\vert }$. \end{enumerate} The final score for the question having a mention linking to $e$ is \begin{align} s_{linker}(e;q) = \max_m \frac{\vert m \cap e \vert}{\vert q \vert} + \frac{\vert m \cap e \vert}{\vert e \vert} + \frac{p_m}{\vert q\vert } \nonumber \end{align} \section*{Appendix B: Mathematics for the Relation Detection Models} \paragraph{BiCNN}: given an input pair of question $q$ and relation $r$, we use two CNNs (with shared parameters in the experiments) to get the hidden states for each position of $q$ and $r$: $h_{1:N}^q$ and $h_{1:M}^r$, where $N$ and $M$ are the lengths of $q$ and $r$, respectively. By applying max-pooling on $h_{1:N}^q$ and $h_{1:M}^r$ we get the question representation $h_{max}^q$, where $h_{max}^q[i] = \max_{1 \leq j \leq N} h_j^q[i]$. Similarly, we get the relation representation $h_{max}^r$. Then the score of $r$ given $q$ is defined as $s_{rel}(r;q)=\cos(h_{max}^q, h_{max}^r)$. \paragraph{APCNN}: given the same representations $h_{1:N}^q$ and $h_{1:M}^r$ as above, we compute the alignment score $a_{ij} = (h_{i}^r)^T \mathbf{U} h_{j}^q$. In the experiments, we use the identity matrix $\mathbf{U}=\mathbf{I}$. 
Based on the alignment score, we can compute the attention score for each $h_i^r$ as $w^r_i=\frac{e^{\max_j a_{ij}}}{\sum_{k=1:M} e^{\max_j a_{kj}}}$, and the attention score for each $h_j^q$ as $w^q_j=\frac{e^{\max_i a_{ij}}}{\sum_{k=1:N} e^{\max_i a_{ik}}}$. Then the score of relation $r$ given $q$ is \begin{align} s_{rel}(r;q)=\cos(\sum_{1\leq j \leq N} w^q_j h_{j}^q, \sum_{1 \leq i \leq M} w^r_i h_{i}^r) \nonumber \end{align} \paragraph{Entity Replacement}: when we use entity replacement, the representation vector $h_{max}^q$ depends on the topic entity $e$. Therefore we denote the similarity score as $s_{rel}(r;e, q)$. \section*{Appendix A: Detailed Score Computation for Constraint Detection} Given an input question $q$ and an entity name $e$ in the KB, we denote the lengths of the question and the entity name as $\vert q \vert$ and $\vert e \vert$. For a mention $m$ of the entity $e$ which is an $n$-gram in $q$, we compute the longest consecutive common sub-sequence between $m$ and $e$, and denote its length as $\vert m \cap e \vert$. All the lengths above are measured by the number of characters. Based on the above numbers we compute the proportions of the length of the overlap between the entity mention and the entity name (in characters) in the entity name, $\frac{\vert m \cap e \vert}{\vert e \vert}$, and in the question, $\frac{\vert m \cap e \vert}{\vert q \vert}$. The final score for the question having a mention linking to $e$ is \begin{align} s_{linker}(e;q) = \max_m \frac{\vert m \cap e \vert}{\vert q \vert} + \frac{\vert m \cap e \vert}{\vert e \vert} \nonumber \end{align} \section*{Appendix B: Special Rules for Constraint Detection} \begin{enumerate} \item Special threshold for date constraints. The time stamps in the KB usually follow the year-month-day format, while the time expressions in WebQSP questions are usually years. This makes the overlap between the date entities in questions and the KB entity names smaller (the length of the overlap is usually 4). To deal with this, we only check whether the dates in questions could match the years in the KB, and thus use a special threshold of $\theta=1$ for date constraints. \item Filtering the constraints for answer nodes. Sometimes the answer node could connect to a huge number of other nodes, e.g. when the question is asking for a country and we have an answer candidate \emph{the U.S.}. From our observations on the WebQSP dataset, we found that most of the time the gold constraints on answers are their entity types (e.g., whether the question is asking for a country or a city). Based on this observation, in the constraint detection step, for the answer nodes we only keep the tuples with \emph{type} relations (i.e. the relation name contains the word ``\emph{type}''), such as \emph{common.topic.notable\_types, education.educational\_institution.school\_type} etc. \end{enumerate} \section*{Appendix C: Effects of Entity Re-Ranking on SimpleQuestions} Removing the entity re-ranking step results in a significant performance drop (see Table \ref{tab:overall_results}, the row of \emph{w/o entity re-ranking}). Table \ref{tab:final_linking} evaluates our re-ranker as a separate task. Our re-ranker results in a large improvement, especially when the beam sizes are smaller than 10. This indicates another important use of our improved relation detection model: re-ranking entity linking results. 
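For concreteness, the character-overlap score defined in the two linker appendices above can be sketched in a few lines of Python. This is a minimal re-implementation under our reading of the text, not the released code: the enumeration of candidate $n$-grams and the cap on their length are illustrative assumptions, and the relative-position feature of the \emph{SimpleLinker} variant is only noted in a comment.
\begin{verbatim}
def longest_common_substring_len(m, e):
    """Length (in characters) of the longest consecutive common
    sub-sequence |m \cap e| between a mention m and an entity name e."""
    best, prev = 0, [0] * (len(e) + 1)
    for i in range(1, len(m) + 1):
        cur = [0] * (len(e) + 1)
        for j in range(1, len(e) + 1):
            if m[i - 1] == e[j - 1]:
                cur[j] = prev[j - 1] + 1
                best = max(best, cur[j])
        prev = cur
    return best

def linker_score(question, entity_name, max_ngram_words=6):
    """max over n-gram mentions m of  |m\cap e|/|q| + |m\cap e|/|e|.
    The SimpleLinker variant additionally adds the relative position
    p_m / |q| of the mention inside the question.  Text normalisation
    (lower-casing etc.) is left out of this sketch."""
    tokens = question.split()
    best = 0.0
    for i in range(len(tokens)):
        for j in range(i + 1, min(len(tokens), i + max_ngram_words) + 1):
            m = " ".join(tokens[i:j])
            overlap = longest_common_substring_len(m, entity_name)
            score = overlap / len(question) + overlap / len(entity_name)
            best = max(best, score)
    return best

print(round(linker_score("who wrote the episodes of greys anatomy",
                         "greys anatomy"), 3))
\end{verbatim}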
\documentclass[11pt,letterpaper]{article} \usepackage[nohyperref]{acl2017} \usepackage[linesnumbered,ruled]{algorithm2e} \aclfinalcopy % Uncomment this line for the final submission \def\aclpaperid{117} % Enter the acl Paper ID here \newcommand\BibTeX{B{\sc ib}\TeX} \newcommand{\bq}{\mathbf{q}} \newcommand{\br}{\mathbf{r}} \newcommand{\bh}{\mathbf{h}} \newcommand{\bH}{\mathbf{H}} \newcommand{\bx}{\mathbf{x}} \newcommand{\bb}{\bm{\beta}} \newcommand{\bB}{\mathbf{B}} \newcommand{\bg}{\bm{\gamma}} \newcommand{\bG}{\mathbf{\Gamma}} \newcommand{\removed}[1]{} \title{Improved Neural Relation Detection for Knowledge Base Question Answering} \author{Mo Yu$^{\dagger}$\quad Wenpeng Yin$^{\star}$\quad Kazi Saidul Hasan$^{\ddagger}$\quad Cicero dos Santos$^{\dagger}$\\ {\bf Bing Xiang}$^{\ddagger}$\quad {\bf Bowen Zhou}$^{\dagger}$\\ {\tt $^{\dagger}$AI Foundations, IBM Research, USA}\\ {\tt $^{\star}$Center for Information and Language Processing, LMU Munich}\\ {\tt $^{\ddagger}$IBM Watson, USA}\\ {\tt \small \{yum,kshasan,cicerons,bingxia,zhou\}@us.ibm.com, wenpeng@cis.lmu.de} } \date{} \begin{document} \maketitle \begin{abstract} Relation detection is a core component of many NLP applications including Knowledge Base Question Answering (KBQA). In this paper, we propose a hierarchical recurrent neural network enhanced by residual learning which detects KB relations given an input question. Our method uses deep residual bidirectional LSTMs to compare questions and relation names via different levels of abstraction. Additionally, we propose a simple KBQA system that integrates entity linking and our proposed relation detector to make the two components enhance each other. Our experimental results show that our approach not only achieves outstanding relation detection performance, but more importantly, it helps our KBQA system achieve state-of-the-art accuracy for both single-relation (SimpleQuestions) and multi-relation (WebQSP) QA benchmarks. \end{abstract} \section{Introduction} \label{sec:intro} Knowledge Base Question Answering (KBQA) systems answer questions by obtaining information from KB tuples \cite{berant2013semantic,yao2014freebase,bordes2015large,bast2015more,yih2015semantic,xu2016enhancing}. For an input question, these systems typically generate a KB query, which can be executed to retrieve the answers from a KB. Figure \ref{fig:example} illustrates the process used to parse two sample questions in a KBQA system: (a) a single-relation question, which can be answered with a single $<$\emph{head-entity, relation, tail-entity}$>$ KB tuple \cite{fader2013paraphrase,yih2014semantic,bordes2015large}; and (b) a more complex case, where some constraints need to be handled for multiple entities in the question. The KBQA system in the figure performs two key tasks: (1) \emph{entity linking}, which links $n$-grams in questions to KB entities, and (2) \emph{relation detection}, which identifies the KB relation(s) a question refers to. The main focus of this work is to improve the \emph{relation detection} subtask and further explore how it can contribute to the KBQA system. Although general relation detection\footnote{In the information extraction field such tasks are usually called \emph{relation extraction} or \emph{relation classification}.} methods are well studied in the NLP community, such studies usually do not take the end task of KBQA into consideration. As a result, there is a significant gap between general relation detection studies and KB-specific relation detection. 
First, in most general relation detection tasks, the number of target relations is limited, normally smaller than 100. In contrast, in KBQA even a small KB, like Freebase2M \cite{bordes2015large}, contains more than 6,000 relation types. Second, relation detection for KBQA often becomes a zero-shot learning task, since some test instances may have unseen relations in the training data. For example, the SimpleQuestions \cite{bordes2015large} data set has 14\% of the golden test relations not observed in golden training tuples. Third, as shown in Figure \ref{fig:example}(b), for some KBQA tasks like WebQuestions \cite{berant2013semantic}, we need to predict a chain of relations instead of a single relation. This increases the number of target relation types and the sizes of candidate relation pools, further increasing the difficulty of KB relation detection. Owing to these reasons, KB relation detection is significantly more challenging than general relation detection tasks. This paper improves KB relation detection to cope with the problems mentioned above. First, in order to deal with the unseen relations, we propose to break the relation names into word sequences for question-relation matching. Second, noticing that original relation names can sometimes help to match longer question contexts, we propose to build both relation-level and word-level relation representations. Third, we use deep bidirectional LSTMs (\emph{BiLSTM}s) to learn different levels of question representations in order to match the different levels of relation information. Finally, we propose a residual learning method for sequence matching, which makes the model training easier and results in more abstract (deeper) question representations, thus improving hierarchical matching. In order to assess how the proposed \emph{improved relation detection} could benefit the KBQA end task, we also propose a simple KBQA implementation composed of \emph{two-step relation detection}. Given an input question and a set of candidate entities retrieved by an entity linker based on the question, our proposed relation detection model plays a key role in the KBQA process: (1) Re-ranking the entity candidates according to whether they connect to high-confidence relations detected from the \emph{raw question text} by the relation detection model. This step is important for dealing with the ambiguities normally present in entity linking results. (2) Finding the core relation (chain) for each \emph{topic entity}\footnote{Following \newcite{yih2015semantic}, here \emph{topic entity} refers to the root of the (directed) query tree; and \emph{core-chain} is the directed path of relations from the root to the answer node.} selected from a much smaller candidate entity set after re-ranking. The above steps are followed by an optional constraint detection step when the question cannot be answered by a single relation (e.g., when there are multiple entities in the question). Finally, the highest-scoring query from the above steps is used to query the KB for answers. Our main contributions include: (i) an improved relation detection model based on hierarchical matching between questions and relations with residual learning; and (ii) a demonstration that the improved relation detector enables our simple KBQA system to achieve state-of-the-art results on both single-relation and multi-relation KBQA tasks. \section{Related Work} \label{sec:relatedwork} \paragraph{Relation Extraction} Relation extraction (RE) is an important sub-field of information extraction. 
General research in this field usually works on a (small) pre-defined relation set, where given a text paragraph and two target entities, the goal is to determine whether the text indicates any type of relation between the entities. As a result, RE is usually formulated as a \textbf{classification task}. Traditional RE methods rely on a large amount of hand-crafted features \cite{zhou_exploring_2005,rink-harabagiu:2010:SemEval,sun_semi-supervised_2011}. Recent research benefits greatly from advances in deep learning: from word embeddings \cite{nguyen_employing_2014,gormley-yu-dredze:2015:EMNLP} to deep networks like CNNs and LSTMs \cite{zeng-EtAl:2014:Coling,santos2015classifying,vu-EtAl:2016:N16-1} and attention models \cite{zhou-EtAl:2016:P16-2,wang-EtAl:2016:P16-12}. The above research assumes there is a fixed (closed) set of relation types, thus no zero-shot learning capability is required. The number of relations is usually not large: the widely used ACE2005 has 11/32 coarse/fine-grained relations; SemEval2010 Task8 has 19 relations; TAC-KBP2015 has 74 relations, although it considers open-domain Wikipedia relations. All of these are far fewer than the thousands of relations in KBQA. As a result, little work in this field focuses on dealing with large numbers of relations or unseen relations. \newcite{yu-EtAl:2016:N16-12} proposed to use relation embeddings in a low-rank tensor method. However, their relation embeddings are still trained in a supervised way, and the number of relations in their experiments is not large. \paragraph{Relation Detection in KBQA Systems} Relation detection for KBQA has also moved from feature-rich approaches \cite{yao2014information,bast2015more} towards deep networks \cite{yih2015semantic,xu2016enhancing,dai-li-xu:2016:P16-1} and attention models \cite{yin2016simple,golub2016character}. Much of the above relation detection research can naturally support large relation vocabularies and open relation sets (especially for QA with an OpenIE KB like ParaLex \cite{fader2013paraphrase}), in order to fit the goal of open-domain question answering. Different KBQA data sets place different requirements on this open-domain capacity. For example, most of the gold test relations in WebQuestions can be observed during training, thus some prior work on this task adopted a closed-domain assumption as in general RE research. For data sets like SimpleQuestions and ParaLex, however, the capacity to support large relation sets and unseen relations becomes more necessary. To this end, there are two main solutions: (1) use pre-trained relation embeddings (e.g. from TransE \cite{bordes2013translating}), like \cite{dai-li-xu:2016:P16-1}; (2) factorize the relation names into sequences and formulate relation detection as a \textbf{sequence matching and ranking} task. Such factorization works because relation names usually comprise meaningful word sequences. For example, \newcite{yin2016simple} split relations into word sequences for single-relation detection. \newcite{liang2016neural} also achieve good performance on WebQSP with word-level relation representations in an end-to-end neural programmer model. \newcite{yih2015semantic} use character tri-grams as inputs on both the question and relation sides. \newcite{golub2016character} propose a generative framework for single-relation KBQA which predicts the relation with a character-level sequence-to-sequence model. 
Another difference between relation detection in KBQA and general RE is that general RE research assumes that the two argument entities are both available. Thus it usually benefits from features \cite{nguyen_employing_2014,gormley-yu-dredze:2015:EMNLP} or attention mechanisms \cite{wang-EtAl:2016:P16-12} based on the entity information (e.g. entity types or entity embeddings). For relation detection in KBQA, such information is mostly missing because: (1) a question usually contains a single argument (the topic entity) and (2) a KB entity can have multiple types (with a type vocabulary larger than 1,500). This makes KB entity typing itself a difficult problem, so no previous work used entity information in the relation detection model.\footnote{Such entity information has been used in KBQA systems as features for the final answer re-rankers.} \section{Background: Different Granularity in KB Relations} Previous research \cite{yih2015semantic,yin2016simple} formulates KB relation detection as a sequence matching problem. However, while the questions are natural word sequences, how to represent relations as sequences remains a challenging problem. Here we give an overview of two types of relation sequence representations commonly used in previous work. \vspace{0.4em} \noindent \textbf{(1) Relation Name as a Single Token} (\emph{relation-level}). In this case, each relation name is treated as a unique token. The problem with this approach is that it suffers from low relation coverage due to the limited amount of training data, and thus cannot generalize well to a large number of open-domain relations. For example, in Figure \ref{fig:example}, when treating relation names as single tokens, it is difficult to match the questions to the relation names ``\emph{episodes\_written}'' and ``\emph{starring\_roles}'' if these names do not appear in the training data -- their relation embeddings $\bh^r$ will be random vectors and thus not comparable to the question embeddings $\bh^q$. \vspace{0.4em} \noindent \textbf{(2) Relation as Word Sequence} (\emph{word-level}). In this case, the relation is treated as a sequence of words from the tokenized relation name. It has better generalization, but suffers from the lack of global information from the original relation names. For example in Figure \ref{fig:example}(b), when doing only word-level matching, it is difficult to rank the target relation ``\emph{starring\_roles}'' higher than the incorrect relation ``\emph{plays\_produced}''. This is because the incorrect relation contains the word ``\emph{plays}'', which is more similar to the question (which contains the word ``\emph{play}'') in the embedding space. On the other hand, if the target relation co-occurs with questions related to ``\emph{tv appearance}'' in training, by treating the whole relation as a token (i.e. a relation id), we could better learn the correspondence between this token and phrases like ``\emph{tv show}'' and ``\emph{play on}''. The two types of relation representation contain different levels of abstraction. As shown in Table \ref{tab:re_example}, the word-level representation focuses more on local information (words and short phrases), while the relation-level representation focuses more on global information (long phrases and skip-grams) but suffers from data sparsity. 
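To make the two views concrete, the short sketch below builds the word-level and relation-level token sequences for a (possibly chained) relation. Splitting relation names on underscores is our illustrative assumption for this example, not a prescription from the original text.
\begin{verbatim}
def relation_views(relation_chain):
    """Return (word_level_tokens, relation_level_tokens) for a relation
    or a chain of relations such as ["starring_roles", "series"]."""
    word_level = []
    for rel in relation_chain:
        # e.g. "episodes_written" -> ["episodes", "written"]
        word_level.extend(rel.split("_"))
    relation_level = list(relation_chain)   # each full name is one token
    return word_level, relation_level

print(relation_views(["starring_roles", "series"]))
# (['starring', 'roles', 'series'], ['starring_roles', 'series'])
\end{verbatim}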
Since both these levels of granularity have their own pros and cons, we propose a hierarchical matching approach for KB relation detection: for a candidate relation, our approach matches the input question to both word-level and relation-level representations to get the final ranking score. Section \ref{sec:re_method} gives the details of our proposed approach. \section{Improved KB Relation Detection} \label{sec:re_method} This section describes our hierarchical sequence matching with residual learning approach for relation detection. In order to match the question to different aspects of a relation (with different abstraction levels), we address the following three problems in learning question/relation representations. \subsection{Relation Representations from Different Granularity} We provide our model with both types of relation representation: word-level and relation-level. Therefore, the input relation becomes $\br=\{r^{word}_1,\cdots,r^{word}_{M_1}\} \cup \{r^{rel}_1,\cdots,r^{rel}_{M_2}\}$, where the first $M_1$ tokens are words (e.g. \emph{\{episode, written\}}), and the last $M_2$ tokens are relation names, e.g., \emph{\{episode\_written\}} or \emph{\{starring\_roles, series\}} (when the target is a chain like in Figure \ref{fig:example}(b)). We transform each token above into its word embedding and then use two BiLSTMs (with shared parameters) to get their hidden representations $[\bB^{word}_{1:M_1}:\bB^{rel}_{1:M_2}]$ (each row vector $\bb_i$ is the concatenation of the forward/backward representations at position $i$). We initialize the relation sequence LSTMs with the final state representations of the word sequence, as a back-off for unseen relations. We apply \emph{one} max-pooling over these two sets of vectors to get the final relation representation $\bh^r$. \subsection{Different Abstractions of Question Representations} From Table \ref{tab:re_example}, we can see that different parts of a relation could match different contexts of the question text. Usually relation names could match longer phrases in the question and relation words could match short phrases, yet different words might match phrases of different lengths. As a result, we hope the question representations can also comprise vectors that summarize various lengths of phrase information (different levels of abstraction), in order to match relation representations of different granularity. We deal with this problem by applying deep BiLSTMs on questions. The first BiLSTM layer works on the word embeddings of the question words $\bq=\{q_1,\cdots,q_N\}$ and gets hidden representations $\bG^{(1)}_{1:N}=[\bg^{(1)}_1;\cdots;\bg^{(1)}_N]$. The second BiLSTM layer works on $\bG^{(1)}_{1:N}$ to get the second set of hidden representations $\bG^{(2)}_{1:N}$. Since the second BiLSTM starts with the hidden vectors from the first layer, intuitively it can learn more general and abstract information compared to the first layer. Note that the first (second) layer of question representations does not necessarily correspond to the word-level (relation-level) relation representations; instead, either layer of question representations could potentially match either level of relation representations. This raises the difficulty of matching between different levels of relation/question representations; the following section presents our proposal for dealing with this problem. 
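Before moving on, the relation and question encoders just described can be summarised in a minimal PyTorch sketch. It assumes a single shared embedding table (the paper initialises relation-name embeddings separately), ignores padding, masking and batching details, and is meant only to illustrate the shared relation BiLSTM with its word-to-relation-name state initialisation and the two-layer question BiLSTM; it is not the authors' implementation.
\begin{verbatim}
import torch
import torch.nn as nn

class RelationEncoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, hidden=200):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        # one BiLSTM shared by the word-level and relation-level sequences
        self.bilstm = nn.LSTM(emb_dim, hidden,
                              bidirectional=True, batch_first=True)

    def forward(self, word_ids, rel_ids):
        # encode the relation words first ...
        word_h, state = self.bilstm(self.emb(word_ids))
        # ... and initialise the relation-name pass with their final states
        # (the back-off for unseen relation names described above)
        rel_h, _ = self.bilstm(self.emb(rel_ids), state)
        # a single max-pooling over both sets of hidden vectors gives h^r
        h_r, _ = torch.cat([word_h, rel_h], dim=1).max(dim=1)
        return h_r                                  # [batch, 2*hidden]

class QuestionEncoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, hidden=200):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.layer1 = nn.LSTM(emb_dim, hidden,
                              bidirectional=True, batch_first=True)
        self.layer2 = nn.LSTM(2 * hidden, hidden,
                              bidirectional=True, batch_first=True)

    def forward(self, q_ids):
        g1, _ = self.layer1(self.emb(q_ids))   # Gamma^(1): first abstraction
        g2, _ = self.layer2(g1)                # Gamma^(2): deeper abstraction
        return g1, g2
\end{verbatim}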
\subsection{Hierarchical Matching between Relation and Question} \label{ssec:hier_matching} Now we have question contexts of different lengths encoded in $\bG^{(1)}_{1:N}$ and $\bG^{(2)}_{1:N}$. Unlike the standard usage of deep BiLSTMs that employs the representations in the final layer for prediction, here we expect that the two layers of question representations can be complementary to each other and both should be compared to the relation representation space (\emph{Hierarchical Matching}). This is important for our task since each relation token can correspond to phrases of different lengths, mainly because of syntactic variations. For example in Table \ref{tab:re_example}, the relation word \emph{written} could be matched to either the same single word in the question or a much longer phrase \emph{be the writer of}. We could perform the above hierarchical matching by computing the similarity between each layer of $\bG$ and $\bh^r$ separately and taking the (weighted) sum of the two scores. However, this does not give a significant improvement (see Table \ref{tab:rel}). Our analysis in Section \ref{ssec:exp_re} shows that this naive method suffers from training difficulty, as evidenced by the fact that the converged training loss of this model is much higher than that of a single-layer baseline model. This is mainly because (1) deep BiLSTMs do not guarantee that the two levels of question hidden representations are comparable, so the training usually falls into local optima where one layer has good matching scores and the other always has a weight close to 0; and (2) the training of deeper architectures is itself more difficult. To overcome the above difficulties, we adopt the idea from Residual Networks \cite{he2016deep} for hierarchical matching by adding shortcut connections between the two BiLSTM layers. We propose two variants of such \emph{Hierarchical Residual Matching}: (1) Connecting each $\bg^{(1)}_i$ and $\bg^{(2)}_i$, resulting in a $\bg^{'}_i=\bg^{(1)}_i + \bg^{(2)}_i$ for each position $i$. Then the final question representation $\bh^q$ becomes a max-pooling over all $\bg^{'}_i$s, $1 \leq i \leq N$. (2) Applying max-pooling on $\bG^{(1)}_{1:N}$ and $\bG^{(2)}_{1:N}$ to get $\bh^{(1)}_{max}$ and $\bh^{(2)}_{max}$, respectively, then setting $\bh^q=\bh^{(1)}_{max}+\bh^{(2)}_{max}$. Finally we compute the matching score of $\br$ given $\bq$ as $s_{rel}(\br;\bq)=\cos(\bh^r, \bh^q)$. Intuitively, the proposed method should benefit from hierarchical training since the second layer is fitting the residual of the first layer's matching, so the two layers of representations are more likely to be complementary to each other. This also ensures that the vector spaces of the two layers are comparable and makes the second-layer training easier. During training we adopt a ranking loss to maximize the margin between the gold relation $\br^+$ and other relations $\br^-$ in the candidate pool $R$. { \abovedisplayskip=5pt \belowdisplayskip=5pt \begin{align} l_{\mathrm{rel}} = \max \{0, \gamma - s_{\mathrm{rel}}(\br^+; \bq) + s_{\mathrm{rel}}(\br^-; \bq)\} \nonumber \end{align} } where $\gamma$ is a constant parameter. Figure \ref{fig:re_model} summarizes the above \emph{Hierarchical Residual BiLSTM (\textbf{HR-BiLSTM})} model. \paragraph{Remark:} Another way to perform hierarchical matching is to rely on an \textbf{attention mechanism}, e.g. \cite{parikh-EtAl:2016:EMNLP2016}, to find the correspondence between different levels of representations. This performs worse than HR-BiLSTM (see Table \ref{tab:rel}). 
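The matching score and ranking loss just defined can be written compactly as follows, using variant (2) of the shortcut connection (max-pool each layer, then add). Negative sampling, batching, and the value of $\gamma$ are illustrative assumptions rather than the authors' exact training setup.
\begin{verbatim}
import torch
import torch.nn.functional as F

def hr_match_score(g1, g2, h_r):
    """Hierarchical residual matching, variant (2):
    max-pool each question layer, add the pooled vectors (the shortcut
    connection), and compare to the relation vector h^r by cosine.
    Variant (1) would instead max-pool over (g1 + g2)."""
    h1 = g1.max(dim=1).values          # [batch, 2*hidden]
    h2 = g2.max(dim=1).values
    h_q = h1 + h2                      # residual combination h^q
    return F.cosine_similarity(h_q, h_r, dim=-1)

def ranking_loss(score_pos, score_neg, gamma=0.1):
    """l_rel = max(0, gamma - s_rel(r+; q) + s_rel(r-; q)),
    averaged over a batch of (positive, negative) relation pairs.
    gamma is a hyperparameter; 0.1 here is only a placeholder."""
    return torch.clamp(gamma - score_pos + score_neg, min=0.0).mean()
\end{verbatim}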
\section{KBQA Enhanced by Relation Detection} \label{sec:kbqa_method} This section describes our KBQA pipeline system. We make minimal efforts beyond the training of the relation detection model, making the whole system easy to build. Following previous work \cite{yih2015semantic,xu2016enhancing}, our KBQA system takes an existing entity linker to produce the top-$K$ linked entities, $EL_K(q)$, for a question $q$ (``\emph{initial entity linking}''). % Then we generate the KB queries for $q$ following the four steps illustrated in Algorithm \ref{algo:pipeline}. \begin{algorithm}[htbp] \small { \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \Input{Question $q$, Knowledge Base $KB$, the initial top-$K$ entity candidates $EL_K(q)$ } \Output{Top query tuple $(\hat{e},\hat{r}, \{(c, r_c)\})$} \DontPrintSemicolon \BlankLine \textbf{Entity Re-Ranking} (\emph{first-step relation detection}): Use the \emph{raw question text} as input for a relation detector to score all relations in the KB that are associated to the entities in $EL_K(q)$; use the relation scores to re-rank $EL_K(q)$ and generate a shorter list $EL'_{K'}(q)$ containing the top-$K'$ entity candidates (Section \ref{ssec:ent_reranking})\; \textbf{Relation Detection}: Detect relation(s) using the \emph{reformatted question text} in which the topic entity is replaced by a special token \emph{$<$e$>$} (Section \ref{ssec:rel})\; \textbf{Query Generation}: Combine the scores from step 1 and 2, and select the top pair $(\hat{e},\hat{r})$ (Section \ref{ssec:query_gen})\; \textbf{Constraint Detection} (optional): Compute similarity between $q$ and any neighbor entity $c$ of the entities along $\hat{r}$ (connecting by a relation $r_c$) , add the high scoring $c$ and $r_c$ to the query (Section \ref{ssec:constraint}). \caption{\label{algo:pipeline}{\footnotesize{KBQA with two-step relation detection}}}} \end{algorithm} Compared to previous approaches, the main difference is that we have an additional \emph{entity re-ranking} step after the \emph{initial entity linking}. We have this step because we have observed that entity linking sometimes becomes a bottleneck in KBQA systems. For example, on SimpleQuestions the best reported linker could only get 72.7\% top-1 accuracy on identifying topic entities. This is usually due to the ambiguities of entity names, e.g. in Fig \ref{fig:example}(a), there are \emph{TV writer} and \emph{baseball player} ``\emph{Mike Kelley}'', which is impossible to distinguish with only entity name matching. Having observed that different entity candidates usually connect to different relations, here we propose to help entity disambiguation in the \emph{initial entity linking} with relations detected in questions. \removed{ Previous efforts on KBQA usually generate the KB queries from a question $q$ step-by-step as follows: (1) Entity linking, in which the top-$K$ entity candidates for a question $q$ ($EL_K(q)$) are selected. (2) Relation detection, where a topic entity $e$ is selected, and a relation (or chain of relations) is detected from its corresponding relation set $R_e=\{r(e,\cdot) \in KB\}$. (3) Constraint detection, which tries to apply the rest entities in $EL_K(q) \setminus \{e\}$ as constraints to further filter the answers. As the starting step, the accuracy and coverage of this top-$K$ list is critical. However, we have observed that entity linking sometimes becomes bottleneck of KBQA systems. 
While on WebQSP the best reported linker could get 87.98\% top-1 accuracy on identifying topic entities, on SimpleQuestions this number becomes 72.7\%. Our error analysis shows that such errors are usually due to the ambiguities of entity names. For example in Fig \ref{fig:example}(a), there are \emph{TV writer} and \emph{baseball player} ``\emph{Mike Kelley}'', which is impossible to distinguish with only text matching. To overcome the above difficulty, previous work usually deals with such problem by generating large beams and then relies on hand-crafted features to re-rank the final generated KB queries, e.g., \newcite{golub2016character} used $K=50$ for SimpleQuestions, which slows down the speed of the model. Here we propose an alternative solution to this problem: having observed that different entity candidates usually connect to different relations, we propose to use relations detected in questions to help entity disambiguation in the \emph{initial entity linking}. Concretely, we add an additional component between the steps (1) and (2) above, which is also a relation detection model on question text but is used to re-rank the entity candidates. We call this \emph{relation detection on entity set}, since it is detecting relation for a set of entity candidates instead of for single specific entities. } Sections \ref{ssec:ent_reranking} and \ref{ssec:rel} elaborate how our relation detection help to re-rank entities in the initial entity linking, and then those re-ranked entities enable more accurate relation detection. The KBQA end task, as a result, benefits from this process. \removed { \begin{algorithm*}[htbp] { \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \Input{Question $q$, Knowledge Base $KB$, the initial top-$K$ entity candidates $EL_K(q)$ } \Output{Top query tuple $(\hat{e},\hat{r}, \{(c, r_c)\})$} \textbf{Entity Re-Ranking}: Use the \emph{raw question text} as input for a relation detector to score all relations in the KB that are associated to the entities in $EL_K(q)$; use the relation scores to re-rank $EL_K(q)$ and generate a shorter list $EL'_{K'}(q)$ containing only the top-$K'$ entity candidates (Section \ref{ssec:ent_reranking})\; \textbf{Relation Detection}: Perform relation detection using the \emph{reformatted question text} in which the topic entity is replaced by an especial token \emph{$<$e$>$} (Sec. \ref{ssec:rel})\; \textbf{Query Generation}: Combine the scores from step 1 and 2, and select the top pair $(\hat{e},\hat{r})$ (Sec. \ref{ssec:rel})\; \textbf{Constraint Detection} (optional): Compute similarity between any $n$-gram in $q$ and any neighbor node $c$ (connected by relation $r_c$) of each entity in the above query, add the high scoring $c$ and $r_c$ to the query (Sec. \ref{ssec:constraint}). \label{algo:pipeline} \caption{\scriptsize{KBQA with two-step relation detection}}} \end{algorithm*} } \subsection{Entity Re-Ranking} \label{ssec:ent_reranking} In this step, we use the \emph{raw question text} as input for a relation detector to score all relations in the KB with connections to at least one of the entity candidates in $EL_K(q)$. We call this step \emph{relation detection on entity set} since it does not work on a single topic entity as the usual settings. We use the HR-BiLSTM as described in Sec. \ref{sec:re_method}. For each question $q$, after generating a score $s_{rel}(r;q)$ for each relation using HR-BiLSTM, we use the top $l$ best scoring relations ($R^{l}_q$) to re-rank the original entity candidates. 
Concretely, for each entity $e$ and its associated relations $R_e$, given the original entity linker score $s_{linker}$ and the score of the most confident relation $r\in R_q^{l} \cap R_e$, we combine these two scores to re-rank the entities: \abovedisplayskip=3pt \belowdisplayskip=3pt \begin{align} s_{\mathrm{rerank}}(e;q) =& \alpha \cdot s_{\mathrm{linker}}(e;q) \nonumber \\ + & (1-\alpha) \cdot\max_{r \in R_q^{l} \cap R_e} s_{\mathrm{rel}}(r;q).\nonumber \end{align} Finally, we select the top $K'$ $<$ $K$ entities according to the score $s_{rerank}$ to form the re-ranked list $EL_{K'}^{'}(q)$. We use the same example in Fig \ref{fig:example}(a) to illustrate the idea. Given the input question in the example, a relation detector is very likely to assign high scores to relations such as ``\emph{episodes\_written}'', ``\emph{author\_of}'' and ``\emph{profession}''. Then, according to the connections of entity candidates in the KB, we find that the TV writer ``\emph{Mike Kelley}'' will be scored higher than the baseball player ``\emph{Mike Kelley}'', because the former has the relations ``\emph{episodes\_written}'' and ``\emph{profession}''. This method can be viewed as exploiting entity-relation collocation for entity linking. \subsection{Relation Detection} \label{ssec:rel} In this step, for each candidate entity $e \in EL_{K'}^{'}(q)$, we use the question text as the input to a relation detector to score all the relations $r \in R_e$ that are associated with the entity $e$ in the KB.\footnote{{Note that the number of entities and the number of relation candidates will be much smaller than those in the previous step.}} Because we have a single topic entity input in this step, we do the following question reformatting: we replace the candidate $e$'s entity mention in $q$ with a token ``\emph{$<$e$>$}''. This helps the model better distinguish the relative position of each word with respect to the entity. We use the HR-BiLSTM model to predict the score of each relation $r \in R_e$: $s_{rel} (r;e,q)$. \subsection{Query Generation} \label{ssec:query_gen} Finally, the system outputs the $<$entity, relation (or core-chain)$>$ pair $(\hat{e}, \hat{r})$ according to: {{ \abovedisplayskip=3pt \belowdisplayskip=3pt \begin{align} s(\hat{e}, \hat{r}; q) =& \max_{e \in EL_{K'}^{'}(q), r \in R_e} \left ( \beta \cdot s_{\mathrm{rerank}}(e;q) \right. \nonumber\\ &\left.+ (1-\beta) \cdot s_{\mathrm{rel}} (r;e,q) \right), \nonumber \end{align} }} where $\beta$ is a hyperparameter to be tuned. %possibly because the $s_{rel}$ scores are closer to each other. \subsection{Constraint Detection} \label{ssec:constraint} Similar to \cite{yih2015semantic}, we adopt an additional constraint detection step based on text matching. Our method can be viewed as entity-linking on a KB sub-graph. It contains two steps: (1) \textbf{Sub-graph generation}: given the top-scoring query generated by the previous three steps\footnote{{Starting with the top-1 query suffers more from error propagation. However we still achieve state-of-the-art on WebQSP in Sec.\ref{sec:exp}, showing the advantage of our relation detection model. We leave beam search and feature extraction over the beam for final answer re-ranking, as in previous research, to future work.}}, for each node $v$ (the answer node or a CVT node like in Figure \ref{fig:example}(b)), we collect all the nodes $c$ connected to $v$ by any relation $r_c$, and generate a sub-graph associated with the original query. 
(2) \textbf{Entity-linking on sub-graph nodes}: we compute a matching score between each $n$-gram in the input question (that does not overlap the topic entity) and the entity name of $c$ (except for the node in the original query) by taking into account the maximum overlapping sequence of characters between them (see Appendix A for details and B for special rules dealing with date/answer type constraints). If the matching score is larger than a threshold $\theta$ (tuned on the training set), we add the constraint entity $c$ (and $r_c$) to the query by attaching it to the corresponding node $v$ on the core-chain. \section{Experiments} \label{sec:exp} \vspace{-0.1em} \subsection{Task Introduction \& Settings} We use the SimpleQuestions \cite{bordes2015large} and WebQSP \cite{yih-EtAl:2016:P16-2} datasets. Each question in these datasets is labeled with the gold semantic parse. Hence we can directly evaluate relation detection performance independently as well as evaluate on the KBQA end task. \noindent \textbf{SimpleQuestions (SQ): } It is a single-relation KBQA task. The KB we use consists of a Freebase subset with 2M entities (FB2M) \cite{bordes2015large}, in order to compare with previous research. \newcite{yin2016simple} also evaluated their relation extractor on this data set and released their proposed question-relation pairs, so we run our relation detection model on their data set. For the KBQA evaluation, we also start with their entity linking results\footnote{The two resources have been downloaded from \scriptsize{\url{https://github.com/Gorov/SimpleQuestions-EntityLinking}}}. Therefore, our results can be compared with their reported results on both tasks. \noindent \textbf{WebQSP (WQ): } A multi-relation KBQA task. We use the entire Freebase KB for evaluation purposes. Following \newcite{yih-EtAl:2016:P16-2}, we use S-MART \cite{yang-chang:2015:ACL-IJCNLP} entity-linking outputs.\footnote{{\url{https://github.com/scottyih/STAGG}}} In order to evaluate the relation detection models, we create a new relation detection task from the WebQSP data set.\footnote{{The dataset is available at \scriptsize{\url{https://github.com/Gorov/KBQA_RE_data}}.}} For each question and its labeled semantic parse: (1) we first select the topic entity from the parse; and then (2) select all the relations and relation chains (length $\leq$ 2) connected to the topic entity, and set the core-chain labeled in the parse as the positive example and all the others as negative examples. We tune the following hyper-parameters on development sets: (1) the size of hidden states for LSTMs (\{50, 100, 200, 400\})\footnote{{For CNNs we double the size for fair comparison.}}; (2) learning rate (\{0.1, 0.5, 1.0, 2.0\}); (3) whether the shortcut connections are between hidden states or between max-pooling results (see Section \ref{ssec:hier_matching}); and (4) the number of training epochs. For both the relation detection experiments and the second-step relation detection in KBQA, we apply \emph{entity replacement} first (see Section \ref{ssec:rel} and Table \ref{tab:re_example}). All word vectors are initialized with 300-$d$ pretrained word embeddings \cite{mikolov2013distributed}. The embeddings of relation names are randomly initialized, since existing pre-trained relation embeddings (e.g. TransE) usually support limited sets of relation names. We leave the usage of pre-trained relation embeddings to future work. 
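As a recap of how the individual scores of Sections \ref{ssec:ent_reranking}-\ref{ssec:constraint} are combined in our pipeline, the sketch below strings together the re-ranking score, the query-generation score, and the constraint threshold. The default values of $\alpha$, $\beta$ and $\theta$, and the fall-back score of 0 when an entity shares no relation with the top-$l$ list, are our illustrative assumptions, not values reported by the paper.
\begin{verbatim}
def rerank_score(s_linker, rel_scores, top_l_relations, entity_relations,
                 alpha=0.5):
    """s_rerank(e;q) = alpha * s_linker(e;q)
                       + (1 - alpha) * max s_rel(r;q),
    where the max runs over relations shared by R_q^l and R_e."""
    shared = [rel_scores[r] for r in top_l_relations if r in entity_relations]
    best_rel = max(shared) if shared else 0.0   # fall-back: assumption
    return alpha * s_linker + (1 - alpha) * best_rel

def query_score(s_rerank, s_rel_given_entity, beta=0.5):
    """s(e, r; q) = beta * s_rerank(e;q) + (1 - beta) * s_rel(r; e, q)."""
    return beta * s_rerank + (1 - beta) * s_rel_given_entity

def keep_constraint(match_score, theta=0.5, is_date=False):
    """Attach a constraint (c, r_c) only if its text-matching score
    clears the tuned threshold (theta = 1 for date constraints)."""
    return match_score > (1.0 if is_date else theta)
\end{verbatim}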
Improved Neural Relation Detection for Knowledge Base Question Answering
1704.06194
Table 2: Accuracy on the SimpleQuestions and WebQSP relation detection tasks (test sets). The top shows performance of baselines. On the bottom we give the results of our proposed model together with the ablation tests.
[ "[BOLD] Model", "[BOLD] Relation Input Views", "[BOLD] Accuracy [BOLD] SimpleQuestions", "[BOLD] Accuracy [BOLD] WebQSP" ]
[ [ "AMPCNN Yin et al. ( 2016 )", "words", "91.3", "-" ], [ "BiCNN Yih et al. ( 2015 )", "char-3-gram", "90.0", "77.74" ], [ "BiLSTM w/ words", "words", "91.2", "79.32" ], [ "BiLSTM w/ relation names", "rel_names", "88.9", "78.96" ], [ "Hier-Res-BiLSTM (HR-BiLSTM)", "words + rel_names", "[BOLD] 93.3", "[BOLD] 82.53" ], [ "w/o rel_name", "words", "91.3", "81.69" ], [ "w/o rel_words", "rel_names", "88.8", "79.68" ], [ "w/o residual learning (weighted sum on two layers)", "words + rel_names", "92.5", "80.65" ], [ "replacing residual with attention Parikh et al. ( 2016 )", "words + rel_names", "92.6", "81.38" ], [ "single-layer BiLSTM question encoder", "words + rel_names", "92.8", "78.41" ], [ "replacing BiLSTM with CNN (HR-CNN)", "words + rel_names", "92.9", "79.08" ] ]
We could perform the above hierarchical matching by computing the similarity between each layer of Γ and hr separately and doing the (weighted) sum between the two scores. This is mainly because (1) Deep BiLSTMs do not guarantee that the two-levels of question hidden representations are comparable, the training usually falls to local optima where one layer has good matching scores and the other always has weight close to 0. (2) The training of deeper architectures itself is more difficult.
\documentclass[11pt,letterpaper]{article} \usepackage[nohyperref]{acl2017} \usepackage[linesnumbered,ruled]{algorithm2e} \aclfinalcopy % Uncomment this line for the final submission \def\aclpaperid{117} % Enter the acl Paper ID here \newcommand\BibTeX{B{\sc ib}\TeX} \newcommand{\bq}{\mathbf{q}} \newcommand{\br}{\mathbf{r}} \newcommand{\bh}{\mathbf{h}} \newcommand{\bH}{\mathbf{H}} \newcommand{\bx}{\mathbf{x}} \newcommand{\bb}{\bm{\beta}} \newcommand{\bB}{\mathbf{B}} \newcommand{\bg}{\bm{\gamma}} \newcommand{\bG}{\mathbf{\Gamma}} \newcommand{\removed}[1]{} \title{Improved Neural Relation Detection for Knowledge Base Question Answering} \author{Mo Yu$^{\dagger}$\quad Wenpeng Yin$^{\star}$\quad Kazi Saidul Hasan$^{\ddagger}$\quad Cicero dos Santos$^{\dagger}$\\ {\bf Bing Xiang}$^{\ddagger}$\quad {\bf Bowen Zhou}$^{\dagger}$\\ {\tt $^{\dagger}$AI Foundations, IBM Research, USA}\\ {\tt $^{\star}$Center for Information and Language Processing, LMU Munich}\\ {\tt $^{\ddagger}$IBM Watson, USA}\\ {\tt \small \{yum,kshasan,cicerons,bingxia,zhou\}@us.ibm.com, wenpeng@cis.lmu.de} } \date{} \begin{document} \maketitle \begin{abstract} Relation detection is a core component of many NLP applications including Knowledge Base Question Answering (KBQA). In this paper, we propose a hierarchical recurrent neural network enhanced by residual learning which detects KB relations given an input question. Our method uses deep residual bidirectional LSTMs to compare questions and relation names via different levels of abstraction. Additionally, we propose a simple KBQA system that integrates entity linking and our proposed relation detector to make the two components enhance each other. Our experimental results show that our approach not only achieves outstanding relation detection performance, but more importantly, it helps our KBQA system achieve state-of-the-art accuracy for both single-relation (SimpleQuestions) and multi-relation (WebQSP) QA benchmarks. \end{abstract} \section{Introduction} \label{sec:intro} Knowledge Base Question Answering (KBQA) systems answer questions by obtaining information from KB tuples \cite{berant2013semantic,yao2014freebase,bordes2015large,bast2015more,yih2015semantic,xu2016enhancing}. For an input question, these systems typically generate a KB query, which can be executed to retrieve the answers from a KB. Figure \ref{fig:example} illustrates the process used to parse two sample questions in a KBQA system: (a) a single-relation question, which can be answered with a single $<$\emph{head-entity, relation, tail-entity}$>$ KB tuple \cite{fader2013paraphrase,yih2014semantic,bordes2015large}; and (b) a more complex case, where some constraints need to be handled for multiple entities in the question. The KBQA system in the figure performs two key tasks: (1) \emph{entity linking}, which links $n$-grams in questions to KB entities, and (2) \emph{relation detection}, which identifies the KB relation(s) a question refers to. The main focus of this work is to improve the \emph{relation detection} subtask and further explore how it can contribute to the KBQA system. Although general relation detection\footnote{In the information extraction field such tasks are usually called \emph{relation extraction} or \emph{relation classification}.} methods are well studied in the NLP community, such studies usually do not take the end task of KBQA into consideration. As a result, there is a significant gap between general relation detection studies and KB-specific relation detection. 
First, in most general relation detection tasks, the number of target relations is limited, normally smaller than 100. In contrast, in KBQA even a small KB, like Freebase2M \cite{bordes2015large}, contains more than 6,000 relation types. Second, relation detection for KBQA often becomes a zero-shot learning task, since some test instances may have unseen relations in the training data. For example, the SimpleQuestions \cite{bordes2015large} data set has 14\% of the golden test relations not observed in golden training tuples. Third, as shown in Figure \ref{fig:example}(b), for some KBQA tasks like WebQuestions \cite{berant2013semantic}, we need to predict a chain of relations instead of a single relation. This increases the number of target relation types and the sizes of candidate relation pools, further increasing the difficulty of KB relation detection. Owing to these reasons, KB relation detection is significantly more challenging compared to general relation detection tasks. This paper improves KB relation detection to cope with the problems mentioned above. First, in order to deal with the unseen relations, we propose to break the relation names into word sequences for question-relation matching. Second, noticing that original relation names can sometimes help to match longer question contexts, we propose to build both relation-level and word-level relation representations. Third, we use deep bidirectional LSTMs (\emph{BiLSTM}s) to learn different levels of question representations in order to match the different levels of relation information. Finally, we propose a residual learning method for sequence matching, which makes the model training easier and results in more abstract (deeper) question representations, thus improves hierarchical matching. In order to assess how the proposed \emph{improved relation detection} could benefit the KBQA end task, we also propose a simple KBQA implementation composed of \emph{two-step relation detection}. Given an input question and a set of candidate entities retrieved by an entity linker based on the question, our proposed relation detection model plays a key role in the KBQA process: (1) Re-ranking the entity candidates according to whether they connect to high confident relations detected from the \emph{raw question text} by the relation detection model. This step is important to deal with the ambiguities normally present in entity linking results. (2) Finding the core relation (chains) for each \emph{topic entity}\footnote{Following \newcite{yih2015semantic}, here \emph{topic entity} refers to the root of the (directed) query tree; and \emph{core-chain} is the directed path of relation from root to the answer node.} selection from a much smaller candidate entity set after re-ranking. The above steps are followed by an optional constraint detection step, when the question cannot be answered by single relations (e.g., multiple entities in the question). Finally the highest scored query from the above steps is used to query the KB for answers. Our main contributions include: (i) An improved relation detection model by hierarchical matching between questions and relations with residual learning; (ii) We demonstrate that the improved relation detector enables our simple KBQA system to achieve state-of-the-art results on both single-relation and multi-relation KBQA tasks. \section{Related Work} \label{sec:relatedwork} \paragraph{Relation Extraction} Relation extraction (RE) is an important sub-field of information extraction. 
General research in this field usually works on a (small) pre-defined relation set, where, given a text paragraph and two target entities, the goal is to determine whether the text indicates any type of relation between the entities. As a result, RE is usually formulated as a \textbf{classification task}. Traditional RE methods rely on large amounts of hand-crafted features \cite{zhou_exploring_2005,rink-harabagiu:2010:SemEval,sun_semi-supervised_2011}. Recent research benefits greatly from advances in deep learning: from word embeddings \cite{nguyen_employing_2014,gormley-yu-dredze:2015:EMNLP} to deep networks like CNNs and LSTMs \cite{zeng-EtAl:2014:Coling,santos2015classifying,vu-EtAl:2016:N16-1} and attention models \cite{zhou-EtAl:2016:P16-2,wang-EtAl:2016:P16-12}. The above research assumes there is a fixed (closed) set of relation types, thus no zero-shot learning capability is required. The number of relations is usually not large: the widely used ACE2005 has 11/32 coarse/fine-grained relations; SemEval2010 Task8 has 19 relations; TAC-KBP2015 has 74 relations although it considers open-domain Wikipedia relations. All of these are far fewer than the thousands of relations in KBQA. As a result, little work in this field focuses on dealing with a large number of relations or with unseen relations. \newcite{yu-EtAl:2016:N16-12} proposed to use relation embeddings in a low-rank tensor method. However, their relation embeddings are still trained in a supervised way, and the number of relations in their experiments is not large. \paragraph{Relation Detection in KBQA Systems} Relation detection for KBQA has also moved from feature-rich approaches \cite{yao2014information,bast2015more} towards deep networks \cite{yih2015semantic,xu2016enhancing,dai-li-xu:2016:P16-1} and attention models \cite{yin2016simple,golub2016character}. Much of the above relation detection research can naturally support large relation vocabularies and open relation sets (especially for QA with an OpenIE KB like ParaLex \cite{fader2013paraphrase}), in order to fit the goal of open-domain question answering. Different KBQA data sets place different demands on this open-domain capacity. For example, most of the gold test relations in WebQuestions can be observed during training, thus some prior work on this task adopted the closed-domain assumption used in general RE research. For data sets like SimpleQuestions and ParaLex, however, the capacity to support large relation sets and unseen relations becomes more necessary. To this end, there are two main solutions: (1) use pre-trained relation embeddings (e.g. from TransE \cite{bordes2013translating}), like \cite{dai-li-xu:2016:P16-1}; (2) factorize the relation names into sequences and formulate relation detection as a \textbf{sequence matching and ranking} task. Such factorization works because relation names usually comprise meaningful word sequences. For example, \newcite{yin2016simple} split relations into word sequences for single-relation detection. \newcite{liang2016neural} also achieve good performance on WebQSP with word-level relation representations in an end-to-end neural programmer model. \newcite{yih2015semantic} use character tri-grams as inputs on both the question and relation sides. \newcite{golub2016character} propose a generative framework for single-relation KBQA which predicts relations with a character-level sequence-to-sequence model. 
Another difference between relation detection in KBQA and general RE is that general RE research assumes that the two argument entities are both available. Thus it usually benefits from features \cite{nguyen_employing_2014,gormley-yu-dredze:2015:EMNLP} or attention mechanisms \cite{wang-EtAl:2016:P16-12} based on entity information (e.g. entity types or entity embeddings). For relation detection in KBQA, such information is mostly missing because: (1) a question usually contains a single argument (the topic entity) and (2) one KB entity could have multiple types (with a type vocabulary size larger than 1,500). This makes KB entity typing itself a difficult problem, so no previous work has used entity information in the relation detection model.\footnote{Such entity information has been used in KBQA systems as features for the final answer re-rankers.} \section{Background: Different Granularity in KB Relations} Previous research \cite{yih2015semantic,yin2016simple} formulates KB relation detection as a sequence matching problem. However, while the questions are natural word sequences, how to represent relations as sequences remains a challenging problem. Here we give an overview of two types of relation sequence representations commonly used in previous work. \vspace{0.4em} \noindent \textbf{(1) Relation Name as a Single Token} (\emph{relation-level}). In this case, each relation name is treated as a unique token. The problem with this approach is that it suffers from low relation coverage due to the limited amount of training data, and thus cannot generalize well to the large number of open-domain relations. For example, in Figure \ref{fig:example}, when treating relation names as single tokens, it will be difficult to match the questions to the relation names ``\emph{episodes\_written}'' and ``\emph{starring\_roles}'' if these names do not appear in the training data -- their relation embeddings $\bh^r$ will be random vectors and thus not comparable to the question embeddings $\bh^q$. \vspace{0.4em} \noindent \textbf{(2) Relation as Word Sequence} (\emph{word-level}). In this case, the relation is treated as a sequence of words from the tokenized relation name. It has better generalization, but suffers from a lack of the global information carried by the original relation names. For example, in Figure \ref{fig:example}(b), when doing only word-level matching, it is difficult to rank the target relation ``\emph{starring\_roles}'' higher than the incorrect relation ``\emph{plays\_produced}''. This is because the incorrect relation contains the word ``\emph{plays}'', which is more similar to the question (containing the word ``\emph{play}'') in the embedding space. On the other hand, if the target relation co-occurs with questions related to ``\emph{tv appearance}'' in training, then by treating the whole relation as a token (i.e. a relation id), we could better learn the correspondence between this token and phrases like ``\emph{tv show}'' and ``\emph{play on}''. The two types of relation representation contain different levels of abstraction. As shown in Table \ref{tab:re_example}, the word-level representation focuses more on local information (words and short phrases), while the relation-level representation focuses more on global information (long phrases and skip-grams) but suffers from data sparsity. 
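For concreteness, the small Python sketch below shows how a candidate relation (or relation chain) can be turned into both a word-level token sequence and a relation-level token sequence. This is an illustration only; the dotted Freebase-style names and the splitting rule (take the final dotted segment and split on underscores) are assumptions based on the examples above.
\begin{verbatim}
# Minimal sketch: building word-level and relation-level token sequences
# for a candidate relation chain (Freebase-style names are an assumption).

def relation_tokens(relation_chain):
    """Return (word_tokens, relation_level_tokens) for a list of names."""
    word_tokens = []
    for rel in relation_chain:
        # keep the final segment of the dotted name, then split on "_"
        word_tokens.extend(rel.split(".")[-1].split("_"))
    relation_level_tokens = list(relation_chain)  # each full name = one token
    return word_tokens, relation_level_tokens


if __name__ == "__main__":
    chain = ["tv.tv_actor.starring_roles", "tv.regular_tv_appearance.series"]
    words, rels = relation_tokens(chain)
    print(words)  # ['starring', 'roles', 'series']
    print(rels)   # ['tv.tv_actor.starring_roles',
                  #  'tv.regular_tv_appearance.series']
\end{verbatim}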
Since both these levels of granularity have their own pros and cons, we propose a hierarchical matching approach for KB relation detection: for a candidate relation, our approach matches the input question to both word-level and relation-level representations to get the final ranking score. Section \ref{sec:re_method} gives the details of our proposed approach. \section{Improved KB Relation Detection} \label{sec:re_method} This section describes our hierarchical sequence matching with residual learning approach for relation detection. In order to match the question to different aspects of a relation (with different abstraction levels), we deal with three problems as follows on learning question/relation representations. \subsection{Relation Representations from Different Granularity} We provide our model with both types of relation representation: word-level and relation-level. Therefore, the input relation becomes $\br=\{r^{word}_1,\cdots,r^{word}_{M_1}\} \cup \{r^{rel}_1,\cdots,r^{rel}_{M_2}\}$, where the first $M_1$ tokens are words (e.g. \emph{\{episode, written\}}), and the last $M_2$ tokens are relation names, e.g., \emph{\{episode\_written\}} or \emph{\{starring\_roles, series\}} (when the target is a chain like in Figure \ref{fig:example}(b)). We transform each token above to its word embedding then use two BiLSTMs (with shared parameters) to get their hidden representations $[\bB^{word}_{1:M_1}:\bB^{rel}_{1:M_2}]$ (each row vector $\bb_i$ is the concatenation between forward/backward representations at $i$). We initialize the relation sequence LSTMs with the final state representations of the word sequence, as a back-off for unseen relations. We apply \emph{one} max-pooling on these two sets of vectors and get the final relation representation $\bh^r$. \subsection{Different Abstractions of Questions Representations} From Table \ref{tab:re_example}, we can see that different parts of a relation could match different contexts of question texts. Usually relation names could match longer phrases in the question and relation words could match short phrases. Yet different words might match phrases of different lengths. As a result, we hope the question representations could also comprise vectors that summarize various lengths of phrase information (different levels of abstraction), in order to match relation representations of different granularity. We deal with this problem by applying deep BiLSTMs on questions. The first-layer of BiLSTM works on the word embeddings of question words $\bq=\{q_1,\cdots,q_N\}$ and gets hidden representations $\bG^{(1)}_{1:N}=[\bg^{(1)}_1;\cdots;\bg^{(1)}_N]$. The second-layer BiLSTM works on $\bG^{(1)}_{1:N}$ to get the second set of hidden representations $\bG^{(2)}_{1:N}$. Since the second BiLSTM starts with the hidden vectors from the first layer, intuitively it could learn more general and abstract information compared to the first layer. Note that the first(second)-layer of question representations does not necessarily correspond to the word(relation)-level relation representations, instead either layer of question representations could potentially match to either level of relation representations. This raises the difficulty of matching between different levels of relation/question representations; the following section gives our proposal to deal with such problem. 
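To make the two encoders concrete, the following PyTorch sketch illustrates one possible implementation of the shared relation BiLSTM and the two stacked question BiLSTM layers described above. It is an illustrative sketch rather than the authors' released code; the class name, the dimensions, and the shared embedding table for question words, relation words, and relation names are assumptions.
\begin{verbatim}
# Sketch of the relation and question encoders (illustrative assumptions:
# shared embedding table, hidden size, and class/variable names).
import torch
import torch.nn as nn


class RelationQuestionEncoders(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, hidden=200):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        # One BiLSTM with shared parameters encodes both the word-level and
        # the relation-level token sequences of a relation.
        self.rel_lstm = nn.LSTM(emb_dim, hidden, batch_first=True,
                                bidirectional=True)
        # Two stacked BiLSTM layers over the question, kept as separate
        # modules so that both levels of hidden states stay accessible.
        self.q_lstm1 = nn.LSTM(emb_dim, hidden, batch_first=True,
                               bidirectional=True)
        self.q_lstm2 = nn.LSTM(2 * hidden, hidden, batch_first=True,
                               bidirectional=True)

    def encode_relation(self, word_ids, rel_ids):
        # word_ids: (B, M1) word tokens; rel_ids: (B, M2) relation-name tokens
        w_out, (h_n, c_n) = self.rel_lstm(self.emb(word_ids))
        # Initialise the relation-name pass with the word-sequence final
        # state, as a back-off for unseen relation names.
        r_out, _ = self.rel_lstm(self.emb(rel_ids), (h_n, c_n))
        # One max-pooling over both sets of hidden vectors -> h^r
        return torch.cat([w_out, r_out], dim=1).max(dim=1).values

    def encode_question(self, q_ids):
        g1, _ = self.q_lstm1(self.emb(q_ids))  # first level of abstraction
        g2, _ = self.q_lstm2(g1)               # second, more abstract level
        return g1, g2
\end{verbatim}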
\subsection{Hierarchical Matching between Relation and Question} \label{ssec:hier_matching} Now we have question contexts of different lengths encoded in $\bG^{(1)}_{1:N}$ and $\bG^{(2)}_{1:N}$. Unlike the standard usage of deep BiLSTMs, which employs the representations in the final layer for prediction, here we expect that the two layers of question representations are complementary to each other and that both should be compared to the relation representation space (\emph{Hierarchical Matching}). This is important for our task since each relation token can correspond to phrases of different lengths, mainly because of syntactic variations. For example, in Table \ref{tab:re_example}, the relation word \emph{written} could be matched to either the same single word in the question or a much longer phrase \emph{be the writer of}. We could perform the above hierarchical matching by computing the similarity between each layer of $\bG$ and $\bh^r$ separately and taking the (weighted) sum of the two scores. However, this does not give a significant improvement (see Table \ref{tab:rel}). Our analysis in Section \ref{ssec:exp_re} shows that this naive method suffers from training difficulty, evidenced by the fact that the converged training loss of this model is much higher than that of a single-layer baseline model. This is mainly because (1) deep BiLSTMs do not guarantee that the two levels of question hidden representations are comparable, so training usually falls into local optima where one layer has good matching scores and the other always has a weight close to 0; and (2) the training of deeper architectures is itself more difficult. To overcome the above difficulties, we adopt the idea from Residual Networks \cite{he2016deep} for hierarchical matching by adding shortcut connections between the two BiLSTM layers. We propose two ways of such \emph{Hierarchical Residual Matching}: (1) Connecting each $\bg^{(1)}_i$ and $\bg^{(2)}_i$, resulting in $\bg^{'}_i=\bg^{(1)}_i + \bg^{(2)}_i$ for each position $i$. Then the final question representation $\bh^q$ becomes a max-pooling over all $\bg^{'}_i$s, $1$$\leq$i$\leq$$N$. (2) Applying max-pooling on $\bG^{(1)}_{1:N}$ and $\bG^{(2)}_{1:N}$ to get $\bh^{(1)}_{max}$ and $\bh^{(2)}_{max}$, respectively, then setting $\bh^q=\bh^{(1)}_{max}+\bh^{(2)}_{max}$. Finally, we compute the matching score of $\br$ given $\bq$ as $s_{rel}(\br;\bq)=cos(\bh^r, \bh^q)$. Intuitively, the proposed method should benefit from hierarchical training since the second layer fits the residual of the first layer's matching, so the two layers of representations are more likely to be complementary to each other. This also ensures that the vector spaces of the two layers are comparable and makes the second-layer training easier. During training we adopt a ranking loss to maximize the margin between the gold relation $\br^+$ and other relations $\br^-$ in the candidate pool $R$. { \abovedisplayskip=5pt \belowdisplayskip=5pt \begin{align} l_{\mathrm{rel}} = \max \{0, \gamma - s_{\mathrm{rel}}(\br^+; \bq) + s_{\mathrm{rel}}(\br^-; \bq)\} \nonumber \end{align} } where $\gamma$ is a constant parameter. Fig \ref{fig:re_model} summarizes the above \emph{Hierarchical Residual BiLSTM (\textbf{HR-BiLSTM})} model. \paragraph{Remark:} Another way to perform hierarchical matching is to rely on an \textbf{attention mechanism}, e.g. \cite{parikh-EtAl:2016:EMNLP2016}, to find the correspondence between different levels of representations. This performs below the HR-BiLSTM (see Table \ref{tab:rel}). 
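The two residual matching variants and the ranking loss can be sketched as follows. This is an illustrative PyTorch fragment, not the authors' implementation; the function and variable names are assumptions, and in practice the loss is computed over a pool of sampled negative relations.
\begin{verbatim}
# Sketch of hierarchical residual matching and the hinge ranking loss.
import torch
import torch.nn.functional as F


def residual_question_vec(g1, g2, variant="pool"):
    # g1, g2: (B, N, D) hidden states from the two question BiLSTM layers
    if variant == "hidden":           # shortcut on hidden states, then pool
        return (g1 + g2).max(dim=1).values
    # shortcut on the max-pooled summaries of each layer
    return g1.max(dim=1).values + g2.max(dim=1).values


def rel_score(h_r, h_q):
    # cosine similarity s_rel(r; q) between relation and question vectors
    return F.cosine_similarity(h_r, h_q, dim=-1)


def ranking_loss(h_q, h_r_pos, h_r_neg, gamma=0.1):
    # hinge loss: max(0, gamma - s(r+; q) + s(r-; q)); gamma is a constant
    pos = rel_score(h_r_pos, h_q)
    neg = rel_score(h_r_neg, h_q)
    return torch.clamp(gamma - pos + neg, min=0.0).mean()
\end{verbatim}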
\section{KBQA Enhanced by Relation Detection} \label{sec:kbqa_method} This section describes our KBQA pipeline system. We make minimal efforts beyond the training of the relation detection model, making the whole system easy to build. Following previous work \cite{yih2015semantic,xu2016enhancing}, our KBQA system takes an existing entity linker to produce the top-$K$ linked entities, $EL_K(q)$, for a question $q$ (``\emph{initial entity linking}''). % Then we generate the KB queries for $q$ following the four steps illustrated in Algorithm \ref{algo:pipeline}. \begin{algorithm}[htbp] \small { \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \Input{Question $q$, Knowledge Base $KB$, the initial top-$K$ entity candidates $EL_K(q)$ } \Output{Top query tuple $(\hat{e},\hat{r}, \{(c, r_c)\})$} \DontPrintSemicolon \BlankLine \textbf{Entity Re-Ranking} (\emph{first-step relation detection}): Use the \emph{raw question text} as input for a relation detector to score all relations in the KB that are associated to the entities in $EL_K(q)$; use the relation scores to re-rank $EL_K(q)$ and generate a shorter list $EL'_{K'}(q)$ containing the top-$K'$ entity candidates (Section \ref{ssec:ent_reranking})\; \textbf{Relation Detection}: Detect relation(s) using the \emph{reformatted question text} in which the topic entity is replaced by a special token \emph{$<$e$>$} (Section \ref{ssec:rel})\; \textbf{Query Generation}: Combine the scores from step 1 and 2, and select the top pair $(\hat{e},\hat{r})$ (Section \ref{ssec:query_gen})\; \textbf{Constraint Detection} (optional): Compute similarity between $q$ and any neighbor entity $c$ of the entities along $\hat{r}$ (connecting by a relation $r_c$) , add the high scoring $c$ and $r_c$ to the query (Section \ref{ssec:constraint}). \caption{\label{algo:pipeline}{\footnotesize{KBQA with two-step relation detection}}}} \end{algorithm} Compared to previous approaches, the main difference is that we have an additional \emph{entity re-ranking} step after the \emph{initial entity linking}. We have this step because we have observed that entity linking sometimes becomes a bottleneck in KBQA systems. For example, on SimpleQuestions the best reported linker could only get 72.7\% top-1 accuracy on identifying topic entities. This is usually due to the ambiguities of entity names, e.g. in Fig \ref{fig:example}(a), there are \emph{TV writer} and \emph{baseball player} ``\emph{Mike Kelley}'', which is impossible to distinguish with only entity name matching. Having observed that different entity candidates usually connect to different relations, here we propose to help entity disambiguation in the \emph{initial entity linking} with relations detected in questions. \removed{ Previous efforts on KBQA usually generate the KB queries from a question $q$ step-by-step as follows: (1) Entity linking, in which the top-$K$ entity candidates for a question $q$ ($EL_K(q)$) are selected. (2) Relation detection, where a topic entity $e$ is selected, and a relation (or chain of relations) is detected from its corresponding relation set $R_e=\{r(e,\cdot) \in KB\}$. (3) Constraint detection, which tries to apply the rest entities in $EL_K(q) \setminus \{e\}$ as constraints to further filter the answers. As the starting step, the accuracy and coverage of this top-$K$ list is critical. However, we have observed that entity linking sometimes becomes bottleneck of KBQA systems. 
While on WebQSP the best reported linker could get 87.98\% top-1 accuracy on identifying topic entities, on SimpleQuestions this number becomes 72.7\%. Our error analysis shows that such errors are usually due to the ambiguities of entity names. For example in Fig \ref{fig:example}(a), there are \emph{TV writer} and \emph{baseball player} ``\emph{Mike Kelley}'', which is impossible to distinguish with only text matching. To overcome the above difficulty, previous work usually deals with such problem by generating large beams and then relies on hand-crafted features to re-rank the final generated KB queries, e.g., \newcite{golub2016character} used $K=50$ for SimpleQuestions, which slows down the speed of the model. Here we propose an alternative solution to this problem: having observed that different entity candidates usually connect to different relations, we propose to use relations detected in questions to help entity disambiguation in the \emph{initial entity linking}. Concretely, we add an additional component between the steps (1) and (2) above, which is also a relation detection model on question text but is used to re-rank the entity candidates. We call this \emph{relation detection on entity set}, since it is detecting relation for a set of entity candidates instead of for single specific entities. } Sections \ref{ssec:ent_reranking} and \ref{ssec:rel} elaborate how our relation detection help to re-rank entities in the initial entity linking, and then those re-ranked entities enable more accurate relation detection. The KBQA end task, as a result, benefits from this process. \removed { \begin{algorithm*}[htbp] { \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \Input{Question $q$, Knowledge Base $KB$, the initial top-$K$ entity candidates $EL_K(q)$ } \Output{Top query tuple $(\hat{e},\hat{r}, \{(c, r_c)\})$} \textbf{Entity Re-Ranking}: Use the \emph{raw question text} as input for a relation detector to score all relations in the KB that are associated to the entities in $EL_K(q)$; use the relation scores to re-rank $EL_K(q)$ and generate a shorter list $EL'_{K'}(q)$ containing only the top-$K'$ entity candidates (Section \ref{ssec:ent_reranking})\; \textbf{Relation Detection}: Perform relation detection using the \emph{reformatted question text} in which the topic entity is replaced by an especial token \emph{$<$e$>$} (Sec. \ref{ssec:rel})\; \textbf{Query Generation}: Combine the scores from step 1 and 2, and select the top pair $(\hat{e},\hat{r})$ (Sec. \ref{ssec:rel})\; \textbf{Constraint Detection} (optional): Compute similarity between any $n$-gram in $q$ and any neighbor node $c$ (connected by relation $r_c$) of each entity in the above query, add the high scoring $c$ and $r_c$ to the query (Sec. \ref{ssec:constraint}). \label{algo:pipeline} \caption{\scriptsize{KBQA with two-step relation detection}}} \end{algorithm*} } \subsection{Entity Re-Ranking} \label{ssec:ent_reranking} In this step, we use the \emph{raw question text} as input for a relation detector to score all relations in the KB with connections to at least one of the entity candidates in $EL_K(q)$. We call this step \emph{relation detection on entity set} since it does not work on a single topic entity as the usual settings. We use the HR-BiLSTM as described in Sec. \ref{sec:re_method}. For each question $q$, after generating a score $s_{rel}(r;q)$ for each relation using HR-BiLSTM, we use the top $l$ best scoring relations ($R^{l}_q$) to re-rank the original entity candidates. 
Concretely, for each entity $e$ and its associated relations $R_e$, given the original entity linker score $s_{linker}$ and the score of the most confident relation $r\in R_q^{l} \cap R_e$, we take a weighted sum of these two scores to re-rank the entities: \abovedisplayskip=3pt \belowdisplayskip=3pt \begin{align} s_{\mathrm{rerank}}(e;q) =& \alpha \cdot s_{\mathrm{linker}}(e;q) \nonumber \\ + & (1-\alpha) \cdot\max_{r \in R_q^{l} \cap R_e} s_{\mathrm{rel}}(r;q).\nonumber \end{align} Finally, we select the top $K' < K$ entities according to the score $s_{rerank}$ to form the re-ranked list $EL_{K'}^{'}(q)$. We use the same example in Fig \ref{fig:example}(a) to illustrate the idea. Given the input question in the example, a relation detector is very likely to assign high scores to relations such as ``\emph{episodes\_written}'', ``\emph{author\_of}'' and ``\emph{profession}''. Then, according to the connections of the entity candidates in the KB, we find that the TV writer ``\emph{Mike Kelley}'' will be scored higher than the baseball player ``\emph{Mike Kelley}'', because the former has the relations ``\emph{episodes\_written}'' and ``\emph{profession}''. This method can be viewed as exploiting entity-relation collocation for entity linking. \subsection{Relation Detection} \label{ssec:rel} In this step, for each candidate entity $e \in EL'_{K'}(q)$, we use the question text as the input to a relation detector to score all the relations $r \in R_e$ that are associated to the entity $e$ in the KB.\footnote{{Note that the number of entities and the number of relation candidates will be much smaller than those in the previous step.}} Because we have a single topic entity as input in this step, we perform the following question reformatting: we replace the candidate $e$'s entity mention in $q$ with a token ``\emph{$<$e$>$}''. This helps the model better distinguish the relative position of each word with respect to the entity. We use the HR-BiLSTM model to predict the score of each relation $r \in R_e$: $s_{rel} (r;e,q)$. \subsection{Query Generation} \label{ssec:query_gen} Finally, the system outputs the $<$entity, relation (or core-chain)$>$ pair $(\hat{e}, \hat{r})$ according to: {{ \abovedisplayskip=3pt \belowdisplayskip=3pt \begin{align} s(\hat{e}, \hat{r}; q) =& \max_{e \in EL_{K'}^{'}(q), r \in R_e} \left ( \beta \cdot s_{\mathrm{rerank}}(e;q) \right. \nonumber\\ &\left.+ (1-\beta) \cdot s_{\mathrm{rel}} (r;e,q) \right), \nonumber \end{align} }} where $\beta$ is a hyperparameter to be tuned. \subsection{Constraint Detection} \label{ssec:constraint} Similar to \cite{yih2015semantic}, we adopt an additional constraint detection step based on text matching. Our method can be viewed as entity linking on a KB sub-graph. It contains two steps: (1) \textbf{Sub-graph generation}: given the top-scored query generated by the previous three steps\footnote{{Starting with only the top-1 query suffers more from error propagation. However, we still achieve state-of-the-art results on WebQSP in Sec.\ref{sec:exp}, showing the advantage of our relation detection model. We leave beam search and feature extraction over beams for final answer re-ranking, as in previous research, to future work.}}, for each node $v$ (the answer node or a CVT node as in Figure \ref{fig:example}(b)), we collect all the nodes $c$ connecting to $v$ (via any relation $r_c$), and generate a sub-graph associated with the original query. 
(2) \textbf{Entity-linking on sub-graph nodes}: we compute a matching score between each $n$-gram in the input question (without overlapping the topic entity) and entity name of $c$ (except for the node in the original query) by taking into account the maximum overlapping sequence of characters between them (see Appendix A for details and B for special rules dealing with date/answer type constraints). If the matching score is larger than a threshold $\theta$ (tuned on training set), we will add the constraint entity $c$ (and $r_c$) to the query by attaching it to the corresponding node $v$ on the core-chain. \section{Experiments} \label{sec:exp} \vspace{-0.1em} \subsection{Task Introduction \& Settings} We use the SimpleQuestions \cite{bordes2015large} and WebQSP \cite{yih-EtAl:2016:P16-2} datasets. Each question in these datasets is labeled with the gold semantic parse. Hence we can directly evaluate relation detection performance independently as well as evaluate on the KBQA end task. \noindent \textbf{SimpleQuestions (SQ): } It is a single-relation KBQA task. The KB we use consists of a Freebase subset with 2M entities (FB2M) \cite{bordes2015large}, in order to compare with previous research. \newcite{yin2016simple} also evaluated their relation extractor on this data set and released their proposed question-relation pairs, so we run our relation detection model on their data set. For the KBQA evaluation, we also start with their entity linking results\footnote{The two resources have been downloaded from \scriptsize{\url{https://github.com/Gorov/SimpleQuestions-EntityLinking}}}. Therefore, our results can be compared with their reported results on both tasks. \noindent \textbf{WebQSP (WQ): } A multi-relation KBQA task. We use the entire Freebase KB for evaluation purposes. Following \newcite{yih-EtAl:2016:P16-2}, we use S-MART \cite{yang-chang:2015:ACL-IJCNLP} entity-linking outputs.\footnote{{\url{https://github.com/scottyih/STAGG}}} In order to evaluate the relation detection models, we create a new relation detection task from the WebQSP data set.\footnote{{The dataset is available at \scriptsize{\url{https://github.com/Gorov/KBQA_RE_data}}.}} For each question and its labeled semantic parse: (1) we first select the topic entity from the parse; and then (2) select all the relations and relation chains (length $\leq$ 2) connected to the topic entity, and set the core-chain labeled in the parse as the positive label and all the others as the negative examples. We tune the following hyper-parameters on development sets: (1) the size of hidden states for LSTMs (\{50, 100, 200, 400\})\footnote{{For CNNs we double the size for fair comparison.}}; (2) learning rate (\{0.1, 0.5, 1.0, 2.0\}); (3) whether the shortcut connections are between hidden states or between max-pooling results (see Section \ref{ssec:hier_matching}); and (4) the number of training epochs. For both the relation detection experiments and the second-step relation detection in KBQA, we have \emph{entity replacement} first (see Section \ref{ssec:rel} and Figure \ref{tab:re_example}). All word vectors are initialized with 300-$d$ pretrained word embeddings \cite{mikolov2013distributed}. The embeddings of relation names are randomly initialized, since existing pre-trained relation embeddings (e.g. TransE) usually support limited sets of relation names. We leave the usage of pre-trained relation embeddings to future work. 
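Before turning to the results, the scoring steps of Sections \ref{ssec:ent_reranking}--\ref{ssec:query_gen} can be summarised by the following Python sketch. It assumes the per-relation scores $s_{rel}$ and the linker scores $s_{linker}$ are already computed; the data structures, the fallback score of 0 when an entity shares no top relation, and the function names are illustrative assumptions rather than the exact implementation.
\begin{verbatim}
# Sketch of entity re-ranking (Sec. 5.1) and query generation (Sec. 5.3).
# s_linker: dict entity -> linker score; s_rel_raw: dict relation -> score
# on the raw question; relations_of: entity -> set of its KB relations;
# s_rel_reformatted: (entity, relation) -> score on the reformatted question.
# alpha, beta, l, k_prime are tuned hyper-parameters.

def rerank_entities(cands, s_linker, s_rel_raw, relations_of,
                    alpha, l, k_prime):
    top_l_rels = set(sorted(s_rel_raw, key=s_rel_raw.get, reverse=True)[:l])
    s_rerank = {}
    for e in cands:
        shared = top_l_rels & relations_of(e)
        # 0.0 as a fallback when no top relation is attached to e (assumption)
        best_rel = max((s_rel_raw[r] for r in shared), default=0.0)
        s_rerank[e] = alpha * s_linker[e] + (1 - alpha) * best_rel
    reranked = sorted(cands, key=s_rerank.get, reverse=True)[:k_prime]
    return reranked, s_rerank


def generate_query(reranked, s_rerank, s_rel_reformatted, relations_of, beta):
    # pick the top <entity, relation (or core-chain)> pair
    best, best_score = None, float("-inf")
    for e in reranked:
        for r in relations_of(e):
            s = beta * s_rerank[e] + (1 - beta) * s_rel_reformatted(e, r)
            if s > best_score:
                best, best_score = (e, r), s
    return best, best_score
\end{verbatim}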
\vspace{-0.2em} \subsection{Relation Detection Results} \vspace{-0.1em} \label{ssec:exp_re} Table \ref{tab:rel} shows the results on the two relation detection tasks. The AMPCNN result is from \cite{yin2016simple}, which yielded state-of-the-art scores by outperforming several attention-based methods. We re-implemented the BiCNN model from \cite{yih2015semantic}, where both questions and relations are represented with the word hash trick on character tri-grams. The BiLSTM baseline with relation word sequences is the best baseline on WebQSP and is close to the previous best result of AMPCNN on SimpleQuestions. Our proposed HR-BiLSTM outperformed the best baselines on both tasks by margins of 2-3\% (p $<$ 0.001 and 0.01 compared to the best baseline \emph{BiLSTM w/ words} on SQ and WQ respectively). Note that using only relation names instead of words results in a weaker baseline BiLSTM model. The model yields a significant performance drop on SimpleQuestions (91.2\% to 88.9\%). However, the drop is much smaller on WebQSP, suggesting that unseen relations have a much bigger impact on SimpleQuestions. \paragraph{Ablation Test:} The bottom of Table \ref{tab:rel} shows ablation results of the proposed HR-BiLSTM. First, hierarchical matching between questions and both relation names and relation words yields improvement on both datasets, especially for SimpleQuestions (93.3\% vs. 91.2/88.8\%). Second, residual learning helps hierarchical matching compared to the weighted-sum and attention-based baselines (see Section \ref{ssec:hier_matching}). For the attention-based baseline, we tried the model from \cite{parikh-EtAl:2016:EMNLP2016} and its one-way variants, of which the one-way model gives better results\footnote{{We also tried to apply the same attention method on the deep BiLSTM with residual connections, but it does not lead to better results compared to HR-BiLSTM. We hypothesize that the idea of hierarchical matching with an attention mechanism may work better for long sequences, and that new advanced attention mechanisms \cite{wang-jiang:2016:N16-1,wang2017bilateral} might help hierarchical matching. We leave the above directions to future work.}}. Note that residual learning helps significantly on WebQSP (80.65\% to 82.53\%), while it does not help as much on SimpleQuestions. On SimpleQuestions, even removing the deep layers only causes a small drop in performance. WebQSP benefits more from the residual and deeper architecture, possibly because handling a larger scope of context matching is more important for this dataset. Finally, on WebQSP, replacing the BiLSTM with a CNN in our hierarchical matching framework results in a large performance drop. Yet on SimpleQuestions the gap is much smaller. We believe this is because the LSTM relation encoder can better learn the composition of chains of relations in WebQSP, as it is better at dealing with longer dependencies. \paragraph{Analysis} Next, we present empirical evidence showing why our HR-BiLSTM model achieves the best scores. We use WebQSP for this analysis. First, we have the hypothesis that \emph{training of the weighted-sum model usually falls into local optima, since deep BiLSTMs do not guarantee that the two levels of question hidden representations are comparable}. This is evidenced by the fact that during training one layer usually gets a weight close to 0 and is thus ignored. For example, one run gives us weights of -75.39/0.14 for the two layers (we take the exponential for the final weighted sum). 
It also gives much lower training accuracy (91.94\%) than HR-BiLSTM (95.67\%), indicating that it suffers from training difficulty. Second, compared to our deep BiLSTM with shortcut connections, we have the hypothesis that for KB relation detection, \emph{training deep BiLSTMs is more difficult without shortcut connections}. Our experiments suggest that a deeper BiLSTM does not always result in higher training accuracy. In the experiments, a two-layer BiLSTM converges to 94.99\%, even lower than the 95.25\% achieved by a single-layer BiLSTM. Since under our setting the two-layer model subsumes the single-layer model as a special case (so it could potentially fit the training data better), this result suggests that the deep BiLSTM without shortcut connections might suffer more from training difficulty. Finally, we hypothesize that \emph{HR-BiLSTM is more than a combination of two BiLSTMs with residual connections, because it encourages the hierarchical architecture to learn different levels of abstraction}. To verify this, we replace the deep BiLSTM question encoder with two single-layer BiLSTMs (both on words) with shortcut connections between their hidden states. This decreases test accuracy to 76.11\%. It gives training accuracy similar to that of HR-BiLSTM, indicating a more serious over-fitting problem. This shows that the residual and deep structures both contribute to the good performance of HR-BiLSTM. \subsection{KBQA End-Task Results} Table \ref{tab:overall_results} compares our system with two published baselines: (1) STAGG \cite{yih2015semantic}, the state-of-the-art on WebQSP\footnote{{The STAGG score on SQ is from \cite{bao-EtAl:2016:COLING}.}} and (2) AMPCNN \cite{yin2016simple}, the state-of-the-art on SimpleQuestions. Since these two baselines are specially designed/tuned for one particular dataset, they do not generalize well when applied to the other dataset. In order to highlight the effect of different relation detection models on the KBQA end task, we also implemented another baseline that uses our KBQA system but replaces HR-BiLSTM with our implementation of AMPCNN (for SimpleQuestions) or the char-3-gram BiCNN (for WebQSP) relation detectors (second block in Table \ref{tab:overall_results}). Compared to the \emph{baseline relation detector} (3rd row of results), our method, which includes an improved relation detector (HR-BiLSTM), improves the KBQA end task by 2-3\% (4th row). Note that, in contrast to previous KBQA systems, our system does not use a joint-inference or feature-based re-ranking step; nevertheless, it still achieves results better than or comparable to the state-of-the-art. The third block of the table details two ablation tests for the proposed components of our KBQA system: (1) Removing the entity re-ranking step significantly decreases the scores. Since the re-ranking step relies on the relation detection models, this shows that our HR-BiLSTM model contributes to the good performance in multiple ways. Appendix C gives the detailed performance of the re-ranking step. (2) In contrast to the conclusion in \cite{yih2015semantic}, constraint detection is crucial for our system\footnote{Note that another reason is that we are evaluating on accuracy here. When evaluating on F1 the gap will be smaller.}. This is probably because our joint performance on topic entity and core-chain detection is more accurate (77.5\% top-1 accuracy), leaving a huge potential (77.5\% vs. 
58.0\%) for the constraint detection module to improve. Finally, like STAGG, which uses multiple relation detectors (see \newcite{yih2015semantic} for the three models used), we also try to use the top-3 relation detectors from Section \ref{ssec:exp_re}. As shown in the last row of Table \ref{tab:overall_results}, this gives a significant performance boost, resulting in a new state-of-the-art result on SimpleQuestions and a result comparable to the state-of-the-art on WebQSP. \section{Conclusion} KB relation detection is a key step in KBQA and is significantly different from general relation extraction tasks. We propose a novel KB relation detection model, HR-BiLSTM, that performs hierarchical matching between questions and KB relations. Our model outperforms the previous methods on KB relation detection tasks and allows our KBQA system to achieve state-of-the-art results. For future work, we will investigate the integration of our HR-BiLSTM into end-to-end systems. For example, our model could be integrated into the decoder in \cite{liang2016neural} to provide better sequence prediction. We will also investigate new emerging datasets like GraphQuestions \cite{su-EtAl:2016:EMNLP2016} and ComplexQuestions \cite{bao-EtAl:2016:COLING} to handle more characteristics of general QA. \bibliographystyle{acl_natbib} \clearpage \newpage \input{acl2017_appendix} \end{document} \section*{Appendix A: Detailed Feature List for \emph{SimpleLinker}} Given an input question $q$ and an entity name $e$ in the KB, we denote the lengths of the question and the entity name as $\vert q \vert$ and $\vert n_e \vert$. For a mention $m$ of the entity $e$ which is an $n$-gram in $q$, we compute the longest consecutive common sub-sequence between $m$ and $e$, and denote its length as $\vert m \cap e \vert$. All the lengths above are measured in characters. The features we used in the \emph{SimpleLinker} include: \begin{enumerate} \item The proportions of the length of the overlap between the entity mention and the entity name (in characters) relative to the entity name, $\frac{\vert m \cap e \vert}{\vert e \vert}$, and to the question, $\frac{\vert m \cap e \vert}{\vert q \vert}$; \item The relative position of the entity mention in the question. We denote the beginning position of $m$ in $q$ as $p_m$ (in characters), and then we have the feature $\frac{p_m}{\vert q\vert }$. \end{enumerate} The final score for linking a mention in the question to $e$ is \begin{align} s_{linker}(e;q) = \max_m \frac{\vert m \cap e \vert}{\vert q \vert} + \frac{\vert m \cap e \vert}{\vert e \vert} + \frac{p_m}{\vert q\vert } \nonumber \end{align} \section*{Appendix B: Mathematics for the Relation Detection Models} \paragraph{BiCNN}: given a question $q$ and a relation $r$ as the input pair, we use two CNNs (with shared parameters in the experiments) to get the hidden states for each position of $q$ and $r$: $h_{1:N}^q$ and $h_{1:M}^r$, where $N$ and $M$ are the lengths of $q$ and $r$, respectively. By applying max-pooling on $h_{1:N}^q$ and $h_{1:M}^r$ we get the question representation $h_{max}^q$ where $h_{max}^q[i] = \max_{1 \leq j \leq N} h_j^q[i]$. Similarly, we get the relation representation $h_{max}^r$. Then the score of $r$ given $q$ is defined as $s_{rel}(r;q)=cos(h_{max}^q, h_{max}^r)$. \paragraph{APCNN}: given the same representations $h_{1:N}^q$ and $h_{1:M}^r$ as above, we compute the alignment score $a_{ij} = (h_{i}^r)^T \mathbf{U} h_{j}^q$. In the experiments, we use the identity matrix $\mathbf{U}=\mathbf{I}$. 
Based on the alignment score, we can compute the attention score for each $h_i^r$ as $w^r_i=\frac{e^{\max_j a_{ij}}}{\sum_{k=1:M} e^{\max_j a_{kj}}}$, and the attention score for each $h_j^q$ as $w^q_j=\frac{e^{\max_i a_{ij}}}{\sum_{k=1:N} e^{\max_i a_{ik}}}$. Then the score of relation $r$ given $q$ is \begin{align} s_{rel}(r;q)=cos(\sum_{1\leq j \leq N} w^q_j h_{j}^q, \sum_{1 \leq i \leq M} w^r_i h_{i}^r) \nonumber \end{align} \paragraph{Entity Replacement}: when we use entity replacement, the representation vector $h_{max}^q$ depends on the topic entity $e$. Therefore we denote the similarity score as $s_{rel}(r;e, q)$. \section*{Appendix A: Detailed Score Computation for Constraint Detection} Given an input question $q$ and an entity name $e$ in the KB, we denote the lengths of the question and the entity name as $\vert q \vert$ and $\vert n_e \vert$. For a mention $m$ of the entity $e$ which is an $n$-gram in $q$, we compute the longest consecutive common sub-sequence between $m$ and $e$, and denote its length as $\vert m \cap e \vert$. All the lengths above are measured in characters. Based on the above numbers, we compute the proportions of the overlap between the entity mention and the entity name (in characters) relative to the entity name, $\frac{\vert m \cap e \vert}{\vert e \vert}$, and to the question, $\frac{\vert m \cap e \vert}{\vert q \vert}$. The final score for linking a mention in the question to $e$ is \begin{align} s_{linker}(e;q) = \max_m \frac{\vert m \cap e \vert}{\vert q \vert} + \frac{\vert m \cap e \vert}{\vert e \vert} \nonumber \end{align} \section*{Appendix B: Special Rules for Constraint Detection} \begin{enumerate} \item Special threshold for date constraints. The time stamps in the KB usually follow the year-month-day format, while the times in WebQSP are usually years. This makes the overlap between the date entities in questions and the KB entity names smaller (the length of the overlap is usually 4). To deal with this, we only check whether the dates in questions could match the years in the KB, and thus use a special threshold of $\theta=1$ for date constraints. \item Filtering the constraints for answer nodes. Sometimes the answer node could connect to a huge number of other nodes, e.g. when the question is asking for a country and we have an answer candidate \emph{the U.S.}. From our observations on the WebQSP dataset, we found that most of the time the gold constraints on answers are their entity types (e.g., whether the question is asking for a country or a city). Based on this observation, in the constraint detection step, for the answer nodes we only keep the tuples with \emph{type} relations (i.e. the relation name contains the word ``\emph{type}''), such as \emph{common.topic.notable\_types, education.educational\_institution.school\_type} etc. \end{enumerate} \section*{Appendix C: Effects of Entity Re-Ranking on SimpleQuestions} Removing the entity re-ranking step results in a significant performance drop (see Table \ref{tab:overall_results}, the row of \emph{w/o entity re-ranking}). Table \ref{tab:final_linking} evaluates our re-ranker as a separate task. Our re-ranker results in a large improvement, especially when the beam sizes are smaller than 10. This indicates another important use of our proposed improved relation detection model: entity linking re-ranking. 
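For reference, a minimal Python sketch of the Appendix A linking score is given below. It is illustrative only: the n-gram window size and lower-casing are assumptions, and the dynamic-programming routine is just one way to compute the longest consecutive common sub-sequence in characters.
\begin{verbatim}
# Sketch of s_linker(e; q) from Appendix A (character-level overlaps).

def longest_common_substring(a, b):
    """Length of the longest consecutive common sub-sequence of a and b."""
    best = 0
    dp = [0] * (len(b) + 1)
    for ch_a in a:
        new = [0] * (len(b) + 1)
        for j, ch_b in enumerate(b, start=1):
            if ch_a == ch_b:
                new[j] = dp[j - 1] + 1
                best = max(best, new[j])
        dp = new
    return best


def s_linker(entity_name, question, max_ngram=6):
    """max over question n-grams m of |m ∩ e|/|q| + |m ∩ e|/|e|."""
    tokens = question.split()
    best = 0.0
    for i in range(len(tokens)):
        for j in range(i + 1, min(i + 1 + max_ngram, len(tokens) + 1)):
            m = " ".join(tokens[i:j])
            overlap = longest_common_substring(m.lower(), entity_name.lower())
            score = overlap / len(question) + overlap / len(entity_name)
            best = max(best, score)
    return best
\end{verbatim}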
\documentclass[11pt,letterpaper]{article} \usepackage[nohyperref]{acl2017} \usepackage[linesnumbered,ruled]{algorithm2e} \aclfinalcopy % Uncomment this line for the final submission \def\aclpaperid{117} % Enter the acl Paper ID here \newcommand\BibTeX{B{\sc ib}\TeX} \newcommand{\bq}{\mathbf{q}} \newcommand{\br}{\mathbf{r}} \newcommand{\bh}{\mathbf{h}} \newcommand{\bH}{\mathbf{H}} \newcommand{\bx}{\mathbf{x}} \newcommand{\bb}{\bm{\beta}} \newcommand{\bB}{\mathbf{B}} \newcommand{\bg}{\bm{\gamma}} \newcommand{\bG}{\mathbf{\Gamma}} \newcommand{\removed}[1]{} \title{Improved Neural Relation Detection for Knowledge Base Question Answering} \author{Mo Yu$^{\dagger}$\quad Wenpeng Yin$^{\star}$\quad Kazi Saidul Hasan$^{\ddagger}$\quad Cicero dos Santos$^{\dagger}$\\ {\bf Bing Xiang}$^{\ddagger}$\quad {\bf Bowen Zhou}$^{\dagger}$\\ {\tt $^{\dagger}$AI Foundations, IBM Research, USA}\\ {\tt $^{\star}$Center for Information and Language Processing, LMU Munich}\\ {\tt $^{\ddagger}$IBM Watson, USA}\\ {\tt \small \{yum,kshasan,cicerons,bingxia,zhou\}@us.ibm.com, wenpeng@cis.lmu.de} } \date{} \begin{document} \maketitle \begin{abstract} Relation detection is a core component of many NLP applications including Knowledge Base Question Answering (KBQA). In this paper, we propose a hierarchical recurrent neural network enhanced by residual learning which detects KB relations given an input question. Our method uses deep residual bidirectional LSTMs to compare questions and relation names via different levels of abstraction. Additionally, we propose a simple KBQA system that integrates entity linking and our proposed relation detector to make the two components enhance each other. Our experimental results show that our approach not only achieves outstanding relation detection performance, but more importantly, it helps our KBQA system achieve state-of-the-art accuracy for both single-relation (SimpleQuestions) and multi-relation (WebQSP) QA benchmarks. \end{abstract} \section{Introduction} \label{sec:intro} Knowledge Base Question Answering (KBQA) systems answer questions by obtaining information from KB tuples \cite{berant2013semantic,yao2014freebase,bordes2015large,bast2015more,yih2015semantic,xu2016enhancing}. For an input question, these systems typically generate a KB query, which can be executed to retrieve the answers from a KB. Figure \ref{fig:example} illustrates the process used to parse two sample questions in a KBQA system: (a) a single-relation question, which can be answered with a single $<$\emph{head-entity, relation, tail-entity}$>$ KB tuple \cite{fader2013paraphrase,yih2014semantic,bordes2015large}; and (b) a more complex case, where some constraints need to be handled for multiple entities in the question. The KBQA system in the figure performs two key tasks: (1) \emph{entity linking}, which links $n$-grams in questions to KB entities, and (2) \emph{relation detection}, which identifies the KB relation(s) a question refers to. The main focus of this work is to improve the \emph{relation detection} subtask and further explore how it can contribute to the KBQA system. Although general relation detection\footnote{In the information extraction field such tasks are usually called \emph{relation extraction} or \emph{relation classification}.} methods are well studied in the NLP community, such studies usually do not take the end task of KBQA into consideration. As a result, there is a significant gap between general relation detection studies and KB-specific relation detection. 
First, in most general relation detection tasks, the number of target relations is limited, normally smaller than 100. In contrast, in KBQA even a small KB, like Freebase2M \cite{bordes2015large}, contains more than 6,000 relation types. Second, relation detection for KBQA often becomes a zero-shot learning task, since some test instances may have unseen relations in the training data. For example, the SimpleQuestions \cite{bordes2015large} data set has 14\% of the golden test relations not observed in golden training tuples. Third, as shown in Figure \ref{fig:example}(b), for some KBQA tasks like WebQuestions \cite{berant2013semantic}, we need to predict a chain of relations instead of a single relation. This increases the number of target relation types and the sizes of candidate relation pools, further increasing the difficulty of KB relation detection. Owing to these reasons, KB relation detection is significantly more challenging compared to general relation detection tasks. This paper improves KB relation detection to cope with the problems mentioned above. First, in order to deal with the unseen relations, we propose to break the relation names into word sequences for question-relation matching. Second, noticing that original relation names can sometimes help to match longer question contexts, we propose to build both relation-level and word-level relation representations. Third, we use deep bidirectional LSTMs (\emph{BiLSTM}s) to learn different levels of question representations in order to match the different levels of relation information. Finally, we propose a residual learning method for sequence matching, which makes the model training easier and results in more abstract (deeper) question representations, thus improves hierarchical matching. In order to assess how the proposed \emph{improved relation detection} could benefit the KBQA end task, we also propose a simple KBQA implementation composed of \emph{two-step relation detection}. Given an input question and a set of candidate entities retrieved by an entity linker based on the question, our proposed relation detection model plays a key role in the KBQA process: (1) Re-ranking the entity candidates according to whether they connect to high confident relations detected from the \emph{raw question text} by the relation detection model. This step is important to deal with the ambiguities normally present in entity linking results. (2) Finding the core relation (chains) for each \emph{topic entity}\footnote{Following \newcite{yih2015semantic}, here \emph{topic entity} refers to the root of the (directed) query tree; and \emph{core-chain} is the directed path of relation from root to the answer node.} selection from a much smaller candidate entity set after re-ranking. The above steps are followed by an optional constraint detection step, when the question cannot be answered by single relations (e.g., multiple entities in the question). Finally the highest scored query from the above steps is used to query the KB for answers. Our main contributions include: (i) An improved relation detection model by hierarchical matching between questions and relations with residual learning; (ii) We demonstrate that the improved relation detector enables our simple KBQA system to achieve state-of-the-art results on both single-relation and multi-relation KBQA tasks. \section{Related Work} \label{sec:relatedwork} \paragraph{Relation Extraction} Relation extraction (RE) is an important sub-field of information extraction. 
General research in this field usually works on a (small) pre-defined relation set, where given a text paragraph and two target entities, the goal is to determine whether the text indicates any types of relations between the entities or not. As a result RE is usually formulated as a \textbf{classification task}. Traditional RE methods rely on large amount of hand-crafted features \cite{zhou_exploring_2005,rink-harabagiu:2010:SemEval,sun_semi-supervised_2011}. Recent research benefits a lot from the advancement of deep learning: from word embeddings \cite{nguyen_employing_2014,gormley-yu-dredze:2015:EMNLP} to deep networks like CNNs and LSTMs \cite{zeng-EtAl:2014:Coling,santos2015classifying,vu-EtAl:2016:N16-1} and attention models \cite{zhou-EtAl:2016:P16-2,wang-EtAl:2016:P16-12}. The above research assumes there is a fixed (closed) set of relation types, thus no zero-shot learning capability is required. The number of relations is usually not large: The widely used ACE2005 has 11/32 coarse/fine-grained relations; SemEval2010 Task8 has 19 relations; TAC-KBP2015 has 74 relations although it considers open-domain Wikipedia relations. All are much fewer than thousands of relations in KBQA. As a result, few work in this field focuses on dealing with large number of relations or unseen relations. \newcite{yu-EtAl:2016:N16-12} proposed to use relation embeddings in a low-rank tensor method. However their relation embeddings are still trained in supervised way and the number of relations is not large in the experiments. \paragraph{Relation Detection in KBQA Systems} Relation detection for KBQA also starts with feature-rich approaches \cite{yao2014information,bast2015more} towards usages of deep networks \cite{yih2015semantic,xu2016enhancing,dai-li-xu:2016:P16-1} and attention models \cite{yin2016simple,golub2016character}. Many of the above relation detection research could naturally support large relation vocabulary and open relation sets (especially for QA with OpenIE KB like ParaLex \cite{fader2013paraphrase}), in order to fit the goal of open-domain question answering. Different KBQA data sets have different levels of requirement about the above open-domain capacity. For example, most of the gold test relations in WebQuestions can be observed during training, thus some prior work on this task adopted the close domain assumption like in the general RE research. While for data sets like SimpleQuestions and ParaLex, the capacity to support large relation sets and unseen relations becomes more necessary. To the end, there are two main solutions: (1) use pre-trained relation embeddings (e.g. from TransE \cite{bordes2013translating}), like \cite{dai-li-xu:2016:P16-1}; (2) factorize the relation names to sequences and formulate relation detection as a \textbf{sequence matching and ranking} task. Such factorization works because that the relation names usually comprise meaningful word sequences. For example, \newcite{yin2016simple} split relations to word sequences for single-relation detection. \newcite{liang2016neural} also achieve good performance on WebQSP with word-level relation representation in an end-to-end neural programmer model. \newcite{yih2015semantic} use character tri-grams as inputs on both question and relation sides. \newcite{golub2016character} propose a generative framework for single-relation KBQA which predicts relation with a character-level sequence-to-sequence model. 
Another difference between relation detection in KBQA and general RE is that general RE research assumes that both argument entities are available. Thus it usually benefits from features \cite{nguyen_employing_2014,gormley-yu-dredze:2015:EMNLP} or attention mechanisms \cite{wang-EtAl:2016:P16-12} based on entity information (e.g. entity types or entity embeddings). For relation detection in KBQA, such information is mostly missing because: (1) one question usually contains a single argument (the topic entity) and (2) one KB entity can have multiple types (with a type vocabulary larger than 1,500). This makes KB entity typing itself a difficult problem, so no previous work used entity information in the relation detection model.\footnote{Such entity information has been used in KBQA systems as features for the final answer re-rankers.} \section{Background: Different Granularity in KB Relations} Previous research \cite{yih2015semantic,yin2016simple} formulates KB relation detection as a sequence matching problem. However, while questions are natural word sequences, how to represent relations as sequences remains a challenging problem. Here we give an overview of two types of relation sequence representations commonly used in previous work. \vspace{0.4em} \noindent \textbf{(1) Relation Name as a Single Token} (\emph{relation-level}). In this case, each relation name is treated as a unique token. The problem with this approach is that it suffers from low relation coverage due to the limited amount of training data, and thus cannot generalize well to a large number of open-domain relations. For example, in Figure \ref{fig:example}, when relation names are treated as single tokens, it is difficult to match the questions to the relation names ``\emph{episodes\_written}'' and ``\emph{starring\_roles}'' if these names do not appear in the training data -- their relation embeddings $\bh^r$ will be random vectors and thus not comparable to the question embeddings $\bh^q$. \vspace{0.4em} \noindent \textbf{(2) Relation as Word Sequence} (\emph{word-level}). In this case, the relation is treated as a sequence of words from the tokenized relation name. This generalizes better, but suffers from the lack of global information carried by the original relation names. For example, in Figure \ref{fig:example}(b), with only word-level matching it is difficult to rank the target relation ``\emph{starring\_roles}'' higher than the incorrect relation ``\emph{plays\_produced}'', because the incorrect relation contains the word ``\emph{plays}'', which is more similar to the question (containing the word ``\emph{play}'') in the embedding space. On the other hand, if the target relation co-occurs with questions related to ``\emph{tv appearance}'' during training, then by treating the whole relation as a token (i.e. a relation id), we can better learn the correspondence between this token and phrases like ``\emph{tv show}'' and ``\emph{play on}''. The two types of relation representation thus capture different levels of abstraction. As shown in Table \ref{tab:re_example}, the word-level representation focuses more on local information (words and short phrases), while the relation-level representation focuses more on global information (long phrases and skip-grams) but suffers from data sparsity. 
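To make the two granularities concrete, the short sketch below (our own illustration, not code from the paper; the helper names and the example chain are assumptions) expands a candidate relation chain into the relation-level and word-level token sequences discussed above.
\begin{verbatim}
# Minimal sketch: the two granularities of relation tokens.

def relation_level_tokens(relation_chain):
    """Treat each relation name in the chain as one opaque token."""
    return list(relation_chain)

def word_level_tokens(relation_chain):
    """Split each relation name into its component words."""
    words = []
    for name in relation_chain:
        # e.g. "starring_roles" -> ["starring", "roles"]
        words.extend(name.replace(".", "_").split("_"))
    return words

if __name__ == "__main__":
    chain = ["starring_roles", "series"]      # relation chain as in Figure 1(b)
    print(relation_level_tokens(chain))       # ['starring_roles', 'series']
    print(word_level_tokens(chain))           # ['starring', 'roles', 'series']
\end{verbatim}
Relation-level tokens only help when the exact name was seen in training, whereas word-level tokens let an unseen name share parameters with its component words; the model described next therefore consumes both.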
Since both these levels of granularity have their own pros and cons, we propose a hierarchical matching approach for KB relation detection: for a candidate relation, our approach matches the input question to both word-level and relation-level representations to get the final ranking score. Section \ref{sec:re_method} gives the details of our proposed approach. \section{Improved KB Relation Detection} \label{sec:re_method} This section describes our hierarchical sequence matching with residual learning approach for relation detection. In order to match the question to different aspects of a relation (with different abstraction levels), we deal with three problems as follows on learning question/relation representations. \subsection{Relation Representations from Different Granularity} We provide our model with both types of relation representation: word-level and relation-level. Therefore, the input relation becomes $\br=\{r^{word}_1,\cdots,r^{word}_{M_1}\} \cup \{r^{rel}_1,\cdots,r^{rel}_{M_2}\}$, where the first $M_1$ tokens are words (e.g. \emph{\{episode, written\}}), and the last $M_2$ tokens are relation names, e.g., \emph{\{episode\_written\}} or \emph{\{starring\_roles, series\}} (when the target is a chain like in Figure \ref{fig:example}(b)). We transform each token above to its word embedding then use two BiLSTMs (with shared parameters) to get their hidden representations $[\bB^{word}_{1:M_1}:\bB^{rel}_{1:M_2}]$ (each row vector $\bb_i$ is the concatenation between forward/backward representations at $i$). We initialize the relation sequence LSTMs with the final state representations of the word sequence, as a back-off for unseen relations. We apply \emph{one} max-pooling on these two sets of vectors and get the final relation representation $\bh^r$. \subsection{Different Abstractions of Questions Representations} From Table \ref{tab:re_example}, we can see that different parts of a relation could match different contexts of question texts. Usually relation names could match longer phrases in the question and relation words could match short phrases. Yet different words might match phrases of different lengths. As a result, we hope the question representations could also comprise vectors that summarize various lengths of phrase information (different levels of abstraction), in order to match relation representations of different granularity. We deal with this problem by applying deep BiLSTMs on questions. The first-layer of BiLSTM works on the word embeddings of question words $\bq=\{q_1,\cdots,q_N\}$ and gets hidden representations $\bG^{(1)}_{1:N}=[\bg^{(1)}_1;\cdots;\bg^{(1)}_N]$. The second-layer BiLSTM works on $\bG^{(1)}_{1:N}$ to get the second set of hidden representations $\bG^{(2)}_{1:N}$. Since the second BiLSTM starts with the hidden vectors from the first layer, intuitively it could learn more general and abstract information compared to the first layer. Note that the first(second)-layer of question representations does not necessarily correspond to the word(relation)-level relation representations, instead either layer of question representations could potentially match to either level of relation representations. This raises the difficulty of matching between different levels of relation/question representations; the following section gives our proposal to deal with such problem. 
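The PyTorch sketch below is one plausible reading of the two encoders just described; it is not the authors' released implementation, and the dimensions, class names, and the way the shared BiLSTM is reused are our assumptions (embedding lookups are omitted, so the inputs are already-embedded token sequences).
\begin{verbatim}
# Sketch of the relation encoder and the deep question encoder (assumed sizes).
import torch
import torch.nn as nn

class RelationEncoder(nn.Module):
    def __init__(self, emb_dim=300, hidden=200):
        super().__init__()
        # One BiLSTM whose parameters are shared between the word-level
        # and relation-level token sequences.
        self.shared_bilstm = nn.LSTM(emb_dim, hidden, batch_first=True,
                                     bidirectional=True)

    def forward(self, word_emb, rel_emb):
        # word_emb: (B, M1, emb_dim) word tokens, e.g. {episode, written}
        # rel_emb:  (B, M2, emb_dim) relation tokens, e.g. {episodes_written}
        word_out, final_state = self.shared_bilstm(word_emb)
        # Initialise the relation-level pass with the word-sequence final
        # state, acting as a back-off for unseen relation names.
        rel_out, _ = self.shared_bilstm(rel_emb, final_state)
        # A single max-pool over the concatenated hidden states gives h^r.
        h_r = torch.cat([word_out, rel_out], dim=1).max(dim=1).values
        return h_r                                 # (B, 2 * hidden)

class DeepQuestionEncoder(nn.Module):
    def __init__(self, emb_dim=300, hidden=200):
        super().__init__()
        self.layer1 = nn.LSTM(emb_dim, hidden, batch_first=True,
                              bidirectional=True)
        self.layer2 = nn.LSTM(2 * hidden, hidden, batch_first=True,
                              bidirectional=True)

    def forward(self, q_emb):
        # q_emb: (B, N, emb_dim) question word embeddings
        gamma1, _ = self.layer1(q_emb)             # first-level contexts
        gamma2, _ = self.layer2(gamma1)            # more abstract contexts
        return gamma1, gamma2                      # both (B, N, 2 * hidden)
\end{verbatim}
Reusing one BiLSTM for both token types and pooling once over the concatenated outputs mirrors the parameter sharing and the single max-pool described above, with the word-sequence final state acting as the back-off initialisation for unseen relation names.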
\subsection{Hierarchical Matching between Relation and Question} \label{ssec:hier_matching} We now have question contexts of different lengths encoded in $\bG^{(1)}_{1:N}$ and $\bG^{(2)}_{1:N}$. Unlike the standard usage of deep BiLSTMs, which employs only the representations in the final layer for prediction, here we expect the two layers of question representations to be complementary and both to be compared to the relation representation space (\emph{Hierarchical Matching}). This is important for our task since each relation token can correspond to phrases of different lengths, mainly because of syntactic variation. For example, in Table \ref{tab:re_example}, the relation word \emph{written} could be matched either to the same single word in the question or to the much longer phrase \emph{be the writer of}. We could perform this hierarchical matching by computing the similarity between each layer of $\bG$ and $\bh^r$ separately and taking the (weighted) sum of the two scores. However, this does not give a significant improvement (see Table \ref{tab:rel}). Our analysis in Section \ref{ssec:exp_re} shows that this naive method suffers from training difficulty, evidenced by the fact that its converged training loss is much higher than that of a single-layer baseline model. This is mainly because: (1) deep BiLSTMs do not guarantee that the two levels of question hidden representations are comparable, so training usually falls into local optima where one layer has good matching scores and the other always has a weight close to 0; and (2) the training of deeper architectures is itself more difficult. To overcome these difficulties, we adopt the idea of Residual Networks \cite{he2016deep} for hierarchical matching by adding shortcut connections between the two BiLSTM layers. We propose two variants of such \emph{Hierarchical Residual Matching}: (1) Connecting each $\bg^{(1)}_i$ and $\bg^{(2)}_i$, resulting in $\bg^{'}_i=\bg^{(1)}_i + \bg^{(2)}_i$ for each position $i$; the final question representation $\bh^q$ is then a max-pooling over all $\bg^{'}_i$, $1 \leq i \leq N$. (2) Applying max-pooling on $\bG^{(1)}_{1:N}$ and $\bG^{(2)}_{1:N}$ to get $\bh^{(1)}_{max}$ and $\bh^{(2)}_{max}$, respectively, then setting $\bh^q=\bh^{(1)}_{max}+\bh^{(2)}_{max}$. Finally, we compute the matching score of $\br$ given $\bq$ as $s_{rel}(\br;\bq)=\cos(\bh^r, \bh^q)$. Intuitively, the proposed method should benefit from hierarchical training since the second layer fits the residuals from the first layer of matching, so the two layers of representations are more likely to be complementary; this also ensures that the vector spaces of the two layers are comparable and makes the second-layer training easier. During training we adopt a ranking loss to maximize the margin between the gold relation $\br^+$ and other relations $\br^-$ in the candidate pool $R$: { \abovedisplayskip=5pt \belowdisplayskip=5pt \begin{align} l_{\mathrm{rel}} = \max \{0, \gamma - s_{\mathrm{rel}}(\br^+; \bq) + s_{\mathrm{rel}}(\br^-; \bq)\} \nonumber \end{align} } where $\gamma$ is a constant margin parameter. Figure \ref{fig:re_model} summarizes the resulting \emph{Hierarchical Residual BiLSTM (\textbf{HR-BiLSTM})} model. \paragraph{Remark:} Another way to perform hierarchical matching is to rely on an \textbf{attention mechanism}, e.g. \cite{parikh-EtAl:2016:EMNLP2016}, to find the correspondence between the different levels of representations. This performs worse than the HR-BiLSTM (see Table \ref{tab:rel}). 
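For completeness, here is a small standalone sketch of the two residual-matching variants and the ranking loss, assuming question-layer outputs such as $\bG^{(1)}_{1:N}$, $\bG^{(2)}_{1:N}$ and a relation vector $\bh^r$ produced by encoders like the ones sketched earlier; the margin value and the function names are placeholders of ours.
\begin{verbatim}
# Sketch of hierarchical residual matching and the hinge ranking loss.
import torch
import torch.nn.functional as F

def question_repr(gamma1, gamma2, connect="hidden"):
    if connect == "hidden":
        # Variant (1): position-wise shortcut, then max-pool over positions.
        return (gamma1 + gamma2).max(dim=1).values
    # Variant (2): max-pool each layer, then add the pooled vectors.
    return gamma1.max(dim=1).values + gamma2.max(dim=1).values

def match_score(h_r, h_q):
    return F.cosine_similarity(h_r, h_q, dim=-1)        # s_rel(r; q)

def ranking_loss(s_pos, s_neg, margin=0.5):
    # max{0, gamma - s_rel(r+; q) + s_rel(r-; q)}
    return torch.clamp(margin - s_pos + s_neg, min=0).mean()

# Example usage with random tensors (batch 4, question length 12, dim 400).
gamma1, gamma2 = torch.randn(4, 12, 400), torch.randn(4, 12, 400)
h_pos, h_neg = torch.randn(4, 400), torch.randn(4, 400)
h_q = question_repr(gamma1, gamma2)
loss = ranking_loss(match_score(h_pos, h_q), match_score(h_neg, h_q))
\end{verbatim}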
\section{KBQA Enhanced by Relation Detection} \label{sec:kbqa_method} This section describes our KBQA pipeline system. We make minimal efforts beyond the training of the relation detection model, making the whole system easy to build. Following previous work \cite{yih2015semantic,xu2016enhancing}, our KBQA system takes an existing entity linker to produce the top-$K$ linked entities, $EL_K(q)$, for a question $q$ (``\emph{initial entity linking}''). % Then we generate the KB queries for $q$ following the four steps illustrated in Algorithm \ref{algo:pipeline}. \begin{algorithm}[htbp] \small { \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \Input{Question $q$, Knowledge Base $KB$, the initial top-$K$ entity candidates $EL_K(q)$ } \Output{Top query tuple $(\hat{e},\hat{r}, \{(c, r_c)\})$} \DontPrintSemicolon \BlankLine \textbf{Entity Re-Ranking} (\emph{first-step relation detection}): Use the \emph{raw question text} as input for a relation detector to score all relations in the KB that are associated to the entities in $EL_K(q)$; use the relation scores to re-rank $EL_K(q)$ and generate a shorter list $EL'_{K'}(q)$ containing the top-$K'$ entity candidates (Section \ref{ssec:ent_reranking})\; \textbf{Relation Detection}: Detect relation(s) using the \emph{reformatted question text} in which the topic entity is replaced by a special token \emph{$<$e$>$} (Section \ref{ssec:rel})\; \textbf{Query Generation}: Combine the scores from step 1 and 2, and select the top pair $(\hat{e},\hat{r})$ (Section \ref{ssec:query_gen})\; \textbf{Constraint Detection} (optional): Compute similarity between $q$ and any neighbor entity $c$ of the entities along $\hat{r}$ (connecting by a relation $r_c$) , add the high scoring $c$ and $r_c$ to the query (Section \ref{ssec:constraint}). \caption{\label{algo:pipeline}{\footnotesize{KBQA with two-step relation detection}}}} \end{algorithm} Compared to previous approaches, the main difference is that we have an additional \emph{entity re-ranking} step after the \emph{initial entity linking}. We have this step because we have observed that entity linking sometimes becomes a bottleneck in KBQA systems. For example, on SimpleQuestions the best reported linker could only get 72.7\% top-1 accuracy on identifying topic entities. This is usually due to the ambiguities of entity names, e.g. in Fig \ref{fig:example}(a), there are \emph{TV writer} and \emph{baseball player} ``\emph{Mike Kelley}'', which is impossible to distinguish with only entity name matching. Having observed that different entity candidates usually connect to different relations, here we propose to help entity disambiguation in the \emph{initial entity linking} with relations detected in questions. \removed{ Previous efforts on KBQA usually generate the KB queries from a question $q$ step-by-step as follows: (1) Entity linking, in which the top-$K$ entity candidates for a question $q$ ($EL_K(q)$) are selected. (2) Relation detection, where a topic entity $e$ is selected, and a relation (or chain of relations) is detected from its corresponding relation set $R_e=\{r(e,\cdot) \in KB\}$. (3) Constraint detection, which tries to apply the rest entities in $EL_K(q) \setminus \{e\}$ as constraints to further filter the answers. As the starting step, the accuracy and coverage of this top-$K$ list is critical. However, we have observed that entity linking sometimes becomes bottleneck of KBQA systems. 
While on WebQSP the best reported linker could get 87.98\% top-1 accuracy on identifying topic entities, on SimpleQuestions this number becomes 72.7\%. Our error analysis shows that such errors are usually due to the ambiguities of entity names. For example in Fig \ref{fig:example}(a), there are \emph{TV writer} and \emph{baseball player} ``\emph{Mike Kelley}'', which is impossible to distinguish with only text matching. To overcome the above difficulty, previous work usually deals with such problem by generating large beams and then relies on hand-crafted features to re-rank the final generated KB queries, e.g., \newcite{golub2016character} used $K=50$ for SimpleQuestions, which slows down the speed of the model. Here we propose an alternative solution to this problem: having observed that different entity candidates usually connect to different relations, we propose to use relations detected in questions to help entity disambiguation in the \emph{initial entity linking}. Concretely, we add an additional component between the steps (1) and (2) above, which is also a relation detection model on question text but is used to re-rank the entity candidates. We call this \emph{relation detection on entity set}, since it is detecting relation for a set of entity candidates instead of for single specific entities. } Sections \ref{ssec:ent_reranking} and \ref{ssec:rel} elaborate how our relation detection help to re-rank entities in the initial entity linking, and then those re-ranked entities enable more accurate relation detection. The KBQA end task, as a result, benefits from this process. \removed { \begin{algorithm*}[htbp] { \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \Input{Question $q$, Knowledge Base $KB$, the initial top-$K$ entity candidates $EL_K(q)$ } \Output{Top query tuple $(\hat{e},\hat{r}, \{(c, r_c)\})$} \textbf{Entity Re-Ranking}: Use the \emph{raw question text} as input for a relation detector to score all relations in the KB that are associated to the entities in $EL_K(q)$; use the relation scores to re-rank $EL_K(q)$ and generate a shorter list $EL'_{K'}(q)$ containing only the top-$K'$ entity candidates (Section \ref{ssec:ent_reranking})\; \textbf{Relation Detection}: Perform relation detection using the \emph{reformatted question text} in which the topic entity is replaced by an especial token \emph{$<$e$>$} (Sec. \ref{ssec:rel})\; \textbf{Query Generation}: Combine the scores from step 1 and 2, and select the top pair $(\hat{e},\hat{r})$ (Sec. \ref{ssec:rel})\; \textbf{Constraint Detection} (optional): Compute similarity between any $n$-gram in $q$ and any neighbor node $c$ (connected by relation $r_c$) of each entity in the above query, add the high scoring $c$ and $r_c$ to the query (Sec. \ref{ssec:constraint}). \label{algo:pipeline} \caption{\scriptsize{KBQA with two-step relation detection}}} \end{algorithm*} } \subsection{Entity Re-Ranking} \label{ssec:ent_reranking} In this step, we use the \emph{raw question text} as input for a relation detector to score all relations in the KB with connections to at least one of the entity candidates in $EL_K(q)$. We call this step \emph{relation detection on entity set} since it does not work on a single topic entity as the usual settings. We use the HR-BiLSTM as described in Sec. \ref{sec:re_method}. For each question $q$, after generating a score $s_{rel}(r;q)$ for each relation using HR-BiLSTM, we use the top $l$ best scoring relations ($R^{l}_q$) to re-rank the original entity candidates. 
Concretely, for each entity $e$ and its associated relations $R_e$, given the original entity linker score $s_{linker}$, and the score of the most confident relation $r\in R_q^{l} \cap R_e$, we sum these two scores to re-rank the entities: \abovedisplayskip=3pt \belowdisplayskip=3pt \begin{align} s_{\mathrm{rerank}}(e;q) =& \alpha \cdot s_{\mathrm{linker}}(e;q) \nonumber \\ + & (1-\alpha) \cdot\max_{r \in R_q^{l} \cap R_e} s_{\mathrm{rel}}(r;q).\nonumber \end{align} Finally, we select top $K'$ $<$ $K$ entities according to score $s_{rerank}$ to form the re-ranked list $EL_{K'}^{'}(q)$. We use the same example in Fig \ref{fig:example}(a) to illustrate the idea. Given the input question in the example, a relation detector is very likely to assign high scores to relations such as ``\emph{episodes\_written}'', ``\emph{author\_of}'' and ``\emph{profession}''. Then, according to the connections of entity candidates in KB, we find that the TV writer ``\emph{Mike Kelley}'' will be scored higher than the baseball player ``\emph{Mike Kelley}'', because the former has the relations ``\emph{episodes\_written}'' and ``\emph{profession}''. This method can be viewed as exploiting entity-relation collocation for entity linking. \subsection{Relation Detection} \label{ssec:rel} In this step, for each candidate entity $e \in EL_K'(q)$, we use the question text as the input to a relation detector to score all the relations $r \in R_e$ that are associated to the entity $e$ in the KB.\footnote{{Note that the number of entities and the number of relation candidates will be much smaller than those in the previous step.}} Because we have a single topic entity input in this step, we do the following question reformatting: we replace the the candidate $e$'s entity mention in $q$ with a token ``\emph{$<$e$>$}''. This helps the model better distinguish the relative position of each word compared to the entity. We use the HR-BiLSTM model to predict the score of each relation $r \in R_e$: $s_{rel} (r;e,q)$. \subsection{Query Generation} \label{ssec:query_gen} Finally, the system outputs the $<$entity, relation (or core-chain)$>$ pair $(\hat{e}, \hat{r})$ according to: {{ \abovedisplayskip=3pt \belowdisplayskip=3pt \begin{align} s(\hat{e}, \hat{r}; q) =& \max_{e \in EL_{K'}^{'}(q), r \in R_e} \left ( \beta \cdot s_{\mathrm{rerank}}(e;q) \right. \nonumber\\ &\left.+ (1-\beta) \cdot s_{\mathrm{rel}} (r;e,q) \right), \nonumber \end{align} }} where $\beta$ is a hyperparameter to be tuned. %possibly because the $s_{rel}$ scores are closer to each other. \subsection{Constraint Detection} \label{ssec:constraint} Similar to \cite{yih2015semantic}, we adopt an additional constraint detection step based on text matching. Our method can be viewed as entity-linking on a KB sub-graph. It contains two steps: (1) \textbf{Sub-graph generation}: given the top scored query generated by the previous 3 steps\footnote{{Starting with the top-1 query suffers more from error propagation. However we still achieve state-of-the-art on WebQSP in Sec.\ref{sec:exp}, showing the advantage of our relation detection model. We leave in future work beam-search and feature extraction on beam for final answer re-ranking like in previous research.}}, for each node $v$ (answer node or the CVT node like in Figure \ref{fig:example}(b)), we collect all the nodes $c$ connecting to $v$ (with relation $r_c$) with any relation, and generate a sub-graph associated to the original query. 
(2) \textbf{Entity-linking on sub-graph nodes}: we compute a matching score between each $n$-gram in the input question (without overlapping the topic entity) and entity name of $c$ (except for the node in the original query) by taking into account the maximum overlapping sequence of characters between them (see Appendix A for details and B for special rules dealing with date/answer type constraints). If the matching score is larger than a threshold $\theta$ (tuned on training set), we will add the constraint entity $c$ (and $r_c$) to the query by attaching it to the corresponding node $v$ on the core-chain. \section{Experiments} \label{sec:exp} \vspace{-0.1em} \subsection{Task Introduction \& Settings} We use the SimpleQuestions \cite{bordes2015large} and WebQSP \cite{yih-EtAl:2016:P16-2} datasets. Each question in these datasets is labeled with the gold semantic parse. Hence we can directly evaluate relation detection performance independently as well as evaluate on the KBQA end task. \noindent \textbf{SimpleQuestions (SQ): } It is a single-relation KBQA task. The KB we use consists of a Freebase subset with 2M entities (FB2M) \cite{bordes2015large}, in order to compare with previous research. \newcite{yin2016simple} also evaluated their relation extractor on this data set and released their proposed question-relation pairs, so we run our relation detection model on their data set. For the KBQA evaluation, we also start with their entity linking results\footnote{The two resources have been downloaded from \scriptsize{\url{https://github.com/Gorov/SimpleQuestions-EntityLinking}}}. Therefore, our results can be compared with their reported results on both tasks. \noindent \textbf{WebQSP (WQ): } A multi-relation KBQA task. We use the entire Freebase KB for evaluation purposes. Following \newcite{yih-EtAl:2016:P16-2}, we use S-MART \cite{yang-chang:2015:ACL-IJCNLP} entity-linking outputs.\footnote{{\url{https://github.com/scottyih/STAGG}}} In order to evaluate the relation detection models, we create a new relation detection task from the WebQSP data set.\footnote{{The dataset is available at \scriptsize{\url{https://github.com/Gorov/KBQA_RE_data}}.}} For each question and its labeled semantic parse: (1) we first select the topic entity from the parse; and then (2) select all the relations and relation chains (length $\leq$ 2) connected to the topic entity, and set the core-chain labeled in the parse as the positive label and all the others as the negative examples. We tune the following hyper-parameters on development sets: (1) the size of hidden states for LSTMs (\{50, 100, 200, 400\})\footnote{{For CNNs we double the size for fair comparison.}}; (2) learning rate (\{0.1, 0.5, 1.0, 2.0\}); (3) whether the shortcut connections are between hidden states or between max-pooling results (see Section \ref{ssec:hier_matching}); and (4) the number of training epochs. For both the relation detection experiments and the second-step relation detection in KBQA, we have \emph{entity replacement} first (see Section \ref{ssec:rel} and Figure \ref{tab:re_example}). All word vectors are initialized with 300-$d$ pretrained word embeddings \cite{mikolov2013distributed}. The embeddings of relation names are randomly initialized, since existing pre-trained relation embeddings (e.g. TransE) usually support limited sets of relation names. We leave the usage of pre-trained relation embeddings to future work. 
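Before turning to the results, the sketch below ties together the scoring steps used to construct the final query: the entity re-ranking score $s_{\mathrm{rerank}}$, the query-generation score, and a constraint-matching score based on the longest consecutive common character sub-sequence (cf. Appendix A). It restates the formulas above for illustration only; the data structures, the default values of $\alpha$, $\beta$, and $l$, and all function names are our own assumptions rather than the actual system.
\begin{verbatim}
# Sketch of the query-construction scoring (assumed data structures).

def rerank_score(linker_score, rel_scores, cand_relations, alpha=0.6, top_l=5):
    """s_rerank(e;q) = alpha * s_linker(e;q) + (1-alpha) * max over the top-l
    detected relations that entity e actually connects to."""
    top_relations = sorted(rel_scores, key=rel_scores.get, reverse=True)[:top_l]
    overlap = [rel_scores[r] for r in top_relations if r in cand_relations]
    best_rel = max(overlap) if overlap else 0.0
    return alpha * linker_score + (1.0 - alpha) * best_rel

def query_score(s_rerank, s_rel_given_entity, beta=0.5):
    """s(e, r; q) = beta * s_rerank(e;q) + (1-beta) * s_rel(r; e, q)."""
    return beta * s_rerank + (1.0 - beta) * s_rel_given_entity

def longest_common_substring(a, b):
    """Length of the longest consecutive common character sub-sequence."""
    best, prev = 0, [0] * (len(b) + 1)
    for ch_a in a:
        cur = [0] * (len(b) + 1)
        for j, ch_b in enumerate(b, 1):
            if ch_a == ch_b:
                cur[j] = prev[j - 1] + 1
                best = max(best, cur[j])
        prev = cur
    return best

def constraint_score(question, entity_name):
    """Overlap score between question n-grams and a candidate constraint
    entity name (position feature omitted; lengths are in characters)."""
    best, tokens = 0.0, question.split()
    for i in range(len(tokens)):
        for j in range(i + 1, len(tokens) + 1):
            m = " ".join(tokens[i:j])
            ov = longest_common_substring(m, entity_name)
            best = max(best, ov / len(question) + ov / len(entity_name))
    return best
\end{verbatim}
In the full system, a constraint pair $(c, r_c)$ is attached only when its score exceeds the threshold $\theta$ tuned on the training set, and $n$-grams overlapping the topic entity mention are skipped.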
\vspace{-0.2em} \subsection{Relation Detection Results} \vspace{-0.1em} \label{ssec:exp_re} Table \ref{tab:rel} shows the results on the two relation detection tasks. The AMPCNN result is from \cite{yin2016simple}, which yielded state-of-the-art scores by outperforming several attention-based methods. We re-implemented the BiCNN model from \cite{yih2015semantic}, in which both questions and relations are represented with the word hash trick on character tri-grams. The baseline BiLSTM with relation word sequences appears to be the best baseline on WebQSP and is close to the previous best result of AMPCNN on SimpleQuestions. Our proposed HR-BiLSTM outperformed the best baselines on both tasks by margins of 2-3\% (p $<$ 0.001 and 0.01 compared to the best baseline \emph{BiLSTM w/ words} on SQ and WQ, respectively). Note that using only relation names instead of words results in a weaker baseline BiLSTM model: it yields a significant performance drop on SimpleQuestions (91.2\% to 88.9\%). However, the drop is much smaller on WebQSP, which suggests that unseen relations have a much bigger impact on SimpleQuestions. \paragraph{Ablation Test:} The bottom of Table \ref{tab:rel} shows ablation results for the proposed HR-BiLSTM. First, hierarchical matching between questions and both relation names and relation words yields improvements on both datasets, especially for SimpleQuestions (93.3\% vs. 91.2/88.8\%). Second, residual learning helps hierarchical matching compared to the weighted-sum and attention-based baselines (see Section \ref{ssec:hier_matching}). For the attention-based baseline, we tried the model from \cite{parikh-EtAl:2016:EMNLP2016} and its one-way variations, where the one-way model gives better results\footnote{{We also tried to apply the same attention method to the deep BiLSTM with residual connections, but it does not lead to better results compared to HR-BiLSTM. We hypothesize that the idea of hierarchical matching with an attention mechanism may work better for long sequences, and that newer, more advanced attention mechanisms \cite{wang-jiang:2016:N16-1,wang2017bilateral} might help hierarchical matching. We leave these directions to future work.}}. Note that residual learning helps significantly on WebQSP (80.65\% to 82.53\%), while it does not help as much on SimpleQuestions; on SimpleQuestions, even removing the deep layers causes only a small drop in performance. WebQSP benefits more from the residual and deeper architecture, possibly because this dataset requires handling a larger scope of context matching. Finally, on WebQSP, replacing the BiLSTM with a CNN in our hierarchical matching framework results in a large performance drop, while on SimpleQuestions the gap is much smaller. We believe this is because the LSTM relation encoder can better learn the composition of relation chains in WebQSP, as it is better at dealing with longer dependencies. \paragraph{Analysis} Next, we present empirical evidence showing why our HR-BiLSTM model achieves the best scores. We use WebQSP for these analyses. First, we hypothesize that \emph{training of the weighted-sum model usually falls into local optima, since deep BiLSTMs do not guarantee that the two levels of question hidden representations are comparable}. This is evidenced by the fact that during training one layer usually receives a weight close to 0 and is thus ignored. For example, one run gives us weights of -75.39/0.14 for the two layers (we take the exponential for the final weighted sum). 
It also gives a much lower training accuracy (91.94\%) than HR-BiLSTM (95.67\%), reflecting this training difficulty. Second, compared to our deep BiLSTM with shortcut connections, we hypothesize that for KB relation detection, \emph{training deep BiLSTMs is more difficult without shortcut connections}. Our experiments suggest that a deeper BiLSTM does not always result in higher training accuracy: a two-layer BiLSTM converges to 94.99\%, even lower than the 95.25\% achieved by a single-layer BiLSTM. Since under our setting the two-layer model contains the single-layer model as a special case (so it could potentially fit the training data better), this result suggests that the deep BiLSTM without shortcut connections suffers more from training difficulty. Finally, we hypothesize that \emph{HR-BiLSTM is more than a combination of two BiLSTMs with residual connections, because it encourages the hierarchical architecture to learn different levels of abstraction}. To verify this, we replace the deep BiLSTM question encoder with two single-layer BiLSTMs (both over words) with shortcut connections between their hidden states. This decreases test accuracy to 76.11\%, while giving training accuracy similar to HR-BiLSTM, indicating a more serious over-fitting problem. This shows that the residual and deep structures both contribute to the good performance of HR-BiLSTM. \subsection{KBQA End-Task Results} Table \ref{tab:overall_results} compares our system with two published baselines: (1) STAGG \cite{yih2015semantic}, the state-of-the-art on WebQSP\footnote{{The STAGG score on SQ is from \cite{bao-EtAl:2016:COLING}.}}, and (2) AMPCNN \cite{yin2016simple}, the state-of-the-art on SimpleQuestions. Since these two baselines are specially designed/tuned for one particular dataset, they do not generalize well when applied to the other dataset. In order to highlight the effect of different relation detection models on the KBQA end task, we also implemented another baseline that uses our KBQA system but replaces HR-BiLSTM with our implementation of AMPCNN (for SimpleQuestions) or the char-3-gram BiCNN (for WebQSP) relation detector (second block in Table \ref{tab:overall_results}). Compared to the \emph{baseline relation detector} (3rd row of results), our method, which includes the improved relation detector (HR-BiLSTM), improves the KBQA end task by 2-3\% (4th row). Note that, in contrast to previous KBQA systems, our system uses neither joint inference nor a feature-based re-ranking step; nevertheless, it still achieves results better than or comparable to the state-of-the-art.% which shows the importance of our proposed improved relation detector. The third block of the table details two ablation tests of the proposed components in our KBQA system: (1) Removing the entity re-ranking step significantly decreases the scores. Since the re-ranking step relies on the relation detection model, this shows that our HR-BiLSTM model contributes to the good performance in multiple ways. Appendix C gives the detailed performance of the re-ranking step. (2) In contrast to the conclusion in \cite{yih2015semantic}, constraint detection is crucial for our system\footnote{Note that another reason is that we are evaluating on accuracy here. When evaluating on F1 the gap will be smaller.}. This is probably because our joint performance on topic entity and core-chain detection is more accurate (77.5\% top-1 accuracy), leaving a huge potential (77.5\% vs. 
58.0\%) for the constraint detection module to improve. Finally, like STAGG, which uses multiple relation detectors (see \newcite{yih2015semantic} for the three models used), we also try to use the top-3 relation detectors from Section \ref{ssec:exp_re}. As shown on the last row of Table \ref{tab:overall_results}, this gives a significant performance boost, resulting in a new state-of-the-art result on SimpleQuestions and a result comparable to the state-of-the-art on WebQSP. \section{Conclusion} KB relation detection is a key step in KBQA and is significantly different from general relation extraction tasks. We propose a novel KB relation detection model, HR-BiLSTM, that performs hierarchical matching between questions and KB relations. Our model outperforms the previous methods on KB relation detection tasks and allows our KBQA system to achieve state-of-the-arts. For future work, we will investigate the integration of our HR-BiLSTM into end-to-end systems. For example, our model could be integrated into the decoder in \cite{liang2016neural}, to provide better sequence prediction. We will also investigate new emerging datasets like GraphQuestions \cite{su-EtAl:2016:EMNLP2016} and ComplexQuestions \cite{bao-EtAl:2016:COLING} to handle more characteristics of general QA. \bibliographystyle{acl_natbib} \clearpage \newpage \input{acl2017_appendix} \end{document} \section*{Appendix A: Detailed Feature List for \emph{SimpleLinker}} Given an input question $q$ and an entity name $e$ in KB, we denote the lengths of the question and the entity name as $\vert q \vert$ and $\vert n_e \vert$. For a mention $m$ of the entity $e$ which is an $n$-gram in $q$, we compute the longest consecutive common sub-sequence between $m$ and $e$, and denote its length as $\vert m \cap e \vert$. All the lengths above are measured by the number of characters. The features we used in the \emph{SimpleLinker} include: \begin{enumerate} \item The proportions of the length of the overlap between entity mention and entity name (in characters) in the entity name $\frac{\vert m \cap e \vert}{\vert e \vert}$ and in the question $\frac{\vert m \cap e \vert}{\vert q \vert}$; \item The relative position of the entity mention in the question. We denote the beginning position of $m$ in $q$ as $p_m$ (in characters), then we have the feature $\frac{p_m}{\vert q\vert }$. \end{enumerate} The final score for the question has a mention linking to $e$ is \begin{align} s_{linker}(e;q) = \max_m \frac{\vert m \cap e \vert}{\vert q \vert} + \frac{\vert m \cap e \vert}{\vert e \vert} + \frac{p_m}{\vert q\vert } \nonumber \end{align} \section*{Appendix B: Mathematics for the Relation Detection Models} \paragraph{BiCNN}: given a pair of inputs of question $q$ and relation $r$, we use two CNNs (with shared parameters in the experiments) to get the hidden states for each position of $q$ and $r$: $h_{1:N}^q$ and $h_{1:M}^r$, where $N$ and $M$ are the lengths of $q$ and $r$, respectively. By applying max-pooling on $h_{1:N}^q$ and $h_{1:M}^r$ we get the question representation $h_{max}^q$ where $h_{max}^q[i] = max_{1 \leq j \leq N} h_j^q[i]$. Similarly, we get the relation representation $h_{max}^r$. Then the score of $r$ given $q$ is defined as $s_{rel}(r;q)=cos(h_{max}^q, h_{max}^r)$. \paragraph{APCNN}: given the same representations $h_{1:N}^q$ and $h_{1:M}^r$ as above, we compute the alignment score $a_{ij} = (h_{i}^r)^T \mathbf{U} h_{j}^q$. In the experiments, we use identity matrix $\mathbf{U}=\mathbf{I}$. 
Based on the alignment score, we can compute the attention score for each $h_i^r$ as $w^r_i=\frac{e^{max_j a_{ij}}}{\sum_{k=1:M} e^{max_j a_{kj}}}$; and the attention score for each $h_j^q$ as $w^q_j=\frac{e^{max_i a_{ij}}}{\sum_{k=1:N} e^{max_i a_{ik}}}$. Then the score of relation $r$ given $q$ is \begin{align} s_{rel}(r;q)=cos(\sum_{1\leq j \leq N} w^q_j h_{j}^q, \sum_{1 \leq i \leq M} w^r_i h_{i}^r) \nonumber \end{align} \paragraph{Entity Replacement}: when we use entity replacement, the representation vector $h_{max}^q$ depends on the topic entity $e$. Therefore we denote the similarity score as $s_{rel}(r;e, q)$.
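As a worked example of the APCNN scoring above (with $\mathbf{U}=\mathbf{I}$), the following NumPy sketch computes the attention-weighted cosine score on random stand-ins for the CNN hidden states; it is our own illustration rather than the code used in the experiments.
\begin{verbatim}
# Sketch of the APCNN scoring from Appendix B (U = I), on dummy hidden states.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def apcnn_score(h_q, h_r):
    # h_q: (N, d) question hidden states, h_r: (M, d) relation hidden states.
    a = h_r @ h_q.T                   # a_ij = (h_i^r)^T h_j^q, shape (M, N)
    w_r = softmax(a.max(axis=1))      # attention over relation positions i
    w_q = softmax(a.max(axis=0))      # attention over question positions j
    q_vec = w_q @ h_q                 # sum_j w^q_j h^q_j
    r_vec = w_r @ h_r                 # sum_i w^r_i h^r_i
    return float(q_vec @ r_vec /
                 (np.linalg.norm(q_vec) * np.linalg.norm(r_vec)))

print(apcnn_score(np.random.randn(12, 100), np.random.randn(3, 100)))
\end{verbatim}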
Improved Neural Relation Detection for Knowledge Base Question Answering
1704.06194
Table 3: KBQA results on SimpleQuestions (SQ) and WebQSP (WQ) test sets. The numbers in green color are directly comparable to our results since we start with the same entity linking results.
[ "[BOLD] System", "[BOLD] Accuracy [BOLD] SQ", "[BOLD] Accuracy [BOLD] WQ" ]
[ [ "STAGG", "72.8", " [BOLD] 63.9" ], [ "AMPCNN Yin et al. ( 2016 )", " [ITALIC] 76.4", "-" ], [ "Baseline: Our Method w/", "75.1", "60.0" ], [ "baseline relation detector", "75.1", "60.0" ], [ "Our Method", "[BOLD] 77.0", "63.0" ], [ "w/o entity re-ranking", "74.9", "60.6" ], [ "w/o constraints", "-", "58.0" ], [ "Our Method (multi-detectors)", "[BOLD] 78.7", "[BOLD] 63.9" ] ]
(2) AMPCNN Yin et al. Since these two baselines are specially designed/tuned for one particular dataset, they do not generalize well when applied to the other dataset. Removing the entity re-ranking step results in a significant performance drop (see w/o entity re-ranking). Our re-ranker yields a large improvement, especially when the beam sizes are smaller than 10, indicating another important use of our improved relation detection model: entity linking re-ranking.
\section*{Appendix A: Detailed Score Computation for Constraint Detection} Given an input question $q$ and an entity name $e$ in KB, we denote the lengths of the question and the entity name as $\vert q \vert$ and $\vert n_e \vert$. For a mention $m$ of the entity $e$ which is an $n$-gram in $q$, we compute the longest consecutive common sub-sequence between $m$ and $e$, and denote its length as $\vert m \cap e \vert$. All the lengths above are measured by the number of characters. Based on the above numbers we compute the proportions of the length of the overlap between entity mention and entity name (in characters) in the entity name $\frac{\vert m \cap e \vert}{\vert e \vert}$ and in the question $\frac{\vert m \cap e \vert}{\vert q \vert}$; The final score for the question has a mention linking to $e$ is \begin{align} s_{linker}(e;q) = \max_m \frac{\vert m \cap e \vert}{\vert q \vert} + \frac{\vert m \cap e \vert}{\vert e \vert} \nonumber \end{align} \section*{Appendix B: Special Rules for Constraint Detection} \begin{enumerate} \item Special threshold for date constraints. The time stamps in KB usually follow the year-month-day format, while the time in WebQSP are usually years. This makes the overlap between the date entities in questions and the KB entity names smaller (length of overlap is usually 4). To deal with this, we only check whether the dates in questions could match the years in KB, thus have a special threshold of $\theta=1$ for date constraints. \item Filtering the constraints for answer nodes. Sometimes the answer node could connect to huge number of other nodes, e.g. when the question is asking for a country and we have an answer candidate \emph{the U.S.}. From the observation on the WebQSP datasets, we found that for most of the time, the gold constraints on answers are their entity types (e.g., whether the question is asking for a country or a city). Based on this observation, in the constraint detection step, for the answer nodes we only keep the tuples with \emph{type} relations (i.e. the relation name contains the word ``\emph{type}''), such as \emph{common.topic.notable\_types, education.educational\_institution.school\_type} etc. \end{enumerate} \section*{Appendix C: Effects of Entity Re-Ranking on SimpleQuestions} Removing entity re-ranking step results in significant performance drop (see Table \ref{tab:overall_results}, the row of \emph{w/o entity re-ranking}). Table \ref{tab:final_linking} evaluates our re-ranker as an separate task. Our re-ranker results in large improvement, especially when the beam sizes are smaller than 10. This is indicating another important usage of our proposed improved relation detection model on entity linking re-ranking. 
\documentclass[11pt,letterpaper]{article} \usepackage[nohyperref]{acl2017} \usepackage[linesnumbered,ruled]{algorithm2e} \aclfinalcopy % Uncomment this line for the final submission \def\aclpaperid{117} % Enter the acl Paper ID here \newcommand\BibTeX{B{\sc ib}\TeX} \newcommand{\bq}{\mathbf{q}} \newcommand{\br}{\mathbf{r}} \newcommand{\bh}{\mathbf{h}} \newcommand{\bH}{\mathbf{H}} \newcommand{\bx}{\mathbf{x}} \newcommand{\bb}{\bm{\beta}} \newcommand{\bB}{\mathbf{B}} \newcommand{\bg}{\bm{\gamma}} \newcommand{\bG}{\mathbf{\Gamma}} \newcommand{\removed}[1]{} \title{Improved Neural Relation Detection for Knowledge Base Question Answering} \author{Mo Yu$^{\dagger}$\quad Wenpeng Yin$^{\star}$\quad Kazi Saidul Hasan$^{\ddagger}$\quad Cicero dos Santos$^{\dagger}$\\ {\bf Bing Xiang}$^{\ddagger}$\quad {\bf Bowen Zhou}$^{\dagger}$\\ {\tt $^{\dagger}$AI Foundations, IBM Research, USA}\\ {\tt $^{\star}$Center for Information and Language Processing, LMU Munich}\\ {\tt $^{\ddagger}$IBM Watson, USA}\\ {\tt \small \{yum,kshasan,cicerons,bingxia,zhou\}@us.ibm.com, wenpeng@cis.lmu.de} } \date{} \begin{document} \maketitle \begin{abstract} Relation detection is a core component of many NLP applications including Knowledge Base Question Answering (KBQA). In this paper, we propose a hierarchical recurrent neural network enhanced by residual learning which detects KB relations given an input question. Our method uses deep residual bidirectional LSTMs to compare questions and relation names via different levels of abstraction. Additionally, we propose a simple KBQA system that integrates entity linking and our proposed relation detector to make the two components enhance each other. Our experimental results show that our approach not only achieves outstanding relation detection performance, but more importantly, it helps our KBQA system achieve state-of-the-art accuracy for both single-relation (SimpleQuestions) and multi-relation (WebQSP) QA benchmarks. \end{abstract} \section{Introduction} \label{sec:intro} Knowledge Base Question Answering (KBQA) systems answer questions by obtaining information from KB tuples \cite{berant2013semantic,yao2014freebase,bordes2015large,bast2015more,yih2015semantic,xu2016enhancing}. For an input question, these systems typically generate a KB query, which can be executed to retrieve the answers from a KB. Figure \ref{fig:example} illustrates the process used to parse two sample questions in a KBQA system: (a) a single-relation question, which can be answered with a single $<$\emph{head-entity, relation, tail-entity}$>$ KB tuple \cite{fader2013paraphrase,yih2014semantic,bordes2015large}; and (b) a more complex case, where some constraints need to be handled for multiple entities in the question. The KBQA system in the figure performs two key tasks: (1) \emph{entity linking}, which links $n$-grams in questions to KB entities, and (2) \emph{relation detection}, which identifies the KB relation(s) a question refers to. The main focus of this work is to improve the \emph{relation detection} subtask and further explore how it can contribute to the KBQA system. Although general relation detection\footnote{In the information extraction field such tasks are usually called \emph{relation extraction} or \emph{relation classification}.} methods are well studied in the NLP community, such studies usually do not take the end task of KBQA into consideration. As a result, there is a significant gap between general relation detection studies and KB-specific relation detection. 
First, in most general relation detection tasks, the number of target relations is limited, normally smaller than 100. In contrast, in KBQA even a small KB, like Freebase2M \cite{bordes2015large}, contains more than 6,000 relation types. Second, relation detection for KBQA often becomes a zero-shot learning task, since some test instances may have unseen relations in the training data. For example, the SimpleQuestions \cite{bordes2015large} data set has 14\% of the golden test relations not observed in golden training tuples. Third, as shown in Figure \ref{fig:example}(b), for some KBQA tasks like WebQuestions \cite{berant2013semantic}, we need to predict a chain of relations instead of a single relation. This increases the number of target relation types and the sizes of candidate relation pools, further increasing the difficulty of KB relation detection. Owing to these reasons, KB relation detection is significantly more challenging compared to general relation detection tasks. This paper improves KB relation detection to cope with the problems mentioned above. First, in order to deal with the unseen relations, we propose to break the relation names into word sequences for question-relation matching. Second, noticing that original relation names can sometimes help to match longer question contexts, we propose to build both relation-level and word-level relation representations. Third, we use deep bidirectional LSTMs (\emph{BiLSTM}s) to learn different levels of question representations in order to match the different levels of relation information. Finally, we propose a residual learning method for sequence matching, which makes the model training easier and results in more abstract (deeper) question representations, thus improves hierarchical matching. In order to assess how the proposed \emph{improved relation detection} could benefit the KBQA end task, we also propose a simple KBQA implementation composed of \emph{two-step relation detection}. Given an input question and a set of candidate entities retrieved by an entity linker based on the question, our proposed relation detection model plays a key role in the KBQA process: (1) Re-ranking the entity candidates according to whether they connect to high confident relations detected from the \emph{raw question text} by the relation detection model. This step is important to deal with the ambiguities normally present in entity linking results. (2) Finding the core relation (chains) for each \emph{topic entity}\footnote{Following \newcite{yih2015semantic}, here \emph{topic entity} refers to the root of the (directed) query tree; and \emph{core-chain} is the directed path of relation from root to the answer node.} selection from a much smaller candidate entity set after re-ranking. The above steps are followed by an optional constraint detection step, when the question cannot be answered by single relations (e.g., multiple entities in the question). Finally the highest scored query from the above steps is used to query the KB for answers. Our main contributions include: (i) An improved relation detection model by hierarchical matching between questions and relations with residual learning; (ii) We demonstrate that the improved relation detector enables our simple KBQA system to achieve state-of-the-art results on both single-relation and multi-relation KBQA tasks. \section{Related Work} \label{sec:relatedwork} \paragraph{Relation Extraction} Relation extraction (RE) is an important sub-field of information extraction. 
General research in this field usually works on a (small) pre-defined relation set, where given a text paragraph and two target entities, the goal is to determine whether the text indicates any types of relations between the entities or not. As a result RE is usually formulated as a \textbf{classification task}. Traditional RE methods rely on large amount of hand-crafted features \cite{zhou_exploring_2005,rink-harabagiu:2010:SemEval,sun_semi-supervised_2011}. Recent research benefits a lot from the advancement of deep learning: from word embeddings \cite{nguyen_employing_2014,gormley-yu-dredze:2015:EMNLP} to deep networks like CNNs and LSTMs \cite{zeng-EtAl:2014:Coling,santos2015classifying,vu-EtAl:2016:N16-1} and attention models \cite{zhou-EtAl:2016:P16-2,wang-EtAl:2016:P16-12}. The above research assumes there is a fixed (closed) set of relation types, thus no zero-shot learning capability is required. The number of relations is usually not large: The widely used ACE2005 has 11/32 coarse/fine-grained relations; SemEval2010 Task8 has 19 relations; TAC-KBP2015 has 74 relations although it considers open-domain Wikipedia relations. All are much fewer than thousands of relations in KBQA. As a result, few work in this field focuses on dealing with large number of relations or unseen relations. \newcite{yu-EtAl:2016:N16-12} proposed to use relation embeddings in a low-rank tensor method. However their relation embeddings are still trained in supervised way and the number of relations is not large in the experiments. \paragraph{Relation Detection in KBQA Systems} Relation detection for KBQA also starts with feature-rich approaches \cite{yao2014information,bast2015more} towards usages of deep networks \cite{yih2015semantic,xu2016enhancing,dai-li-xu:2016:P16-1} and attention models \cite{yin2016simple,golub2016character}. Many of the above relation detection research could naturally support large relation vocabulary and open relation sets (especially for QA with OpenIE KB like ParaLex \cite{fader2013paraphrase}), in order to fit the goal of open-domain question answering. Different KBQA data sets have different levels of requirement about the above open-domain capacity. For example, most of the gold test relations in WebQuestions can be observed during training, thus some prior work on this task adopted the close domain assumption like in the general RE research. While for data sets like SimpleQuestions and ParaLex, the capacity to support large relation sets and unseen relations becomes more necessary. To the end, there are two main solutions: (1) use pre-trained relation embeddings (e.g. from TransE \cite{bordes2013translating}), like \cite{dai-li-xu:2016:P16-1}; (2) factorize the relation names to sequences and formulate relation detection as a \textbf{sequence matching and ranking} task. Such factorization works because that the relation names usually comprise meaningful word sequences. For example, \newcite{yin2016simple} split relations to word sequences for single-relation detection. \newcite{liang2016neural} also achieve good performance on WebQSP with word-level relation representation in an end-to-end neural programmer model. \newcite{yih2015semantic} use character tri-grams as inputs on both question and relation sides. \newcite{golub2016character} propose a generative framework for single-relation KBQA which predicts relation with a character-level sequence-to-sequence model. 
Another difference between relation detection in KBQA and general RE is that general RE research assumes that the two argument entities are both available. Thus it usually benefits from features \cite{nguyen_employing_2014,gormley-yu-dredze:2015:EMNLP} or attention mechanisms \cite{wang-EtAl:2016:P16-12} based on the entity information (e.g. entity types or entity embeddings). For relation detection in KBQA, such information is mostly missing because: (1) one question usually contains single argument (the topic entity) and (2) one KB entity could have multiple types (type vocabulary size larger than 1,500). This makes KB entity typing itself a difficult problem so no previous used entity information in the relation detection model.\footnote{Such entity information has been used in KBQA systems as features for the final answer re-rankers.} \section{Background: Different Granularity in KB Relations} Previous research \cite{yih2015semantic,yin2016simple} formulates KB relation detection as a sequence matching problem. However, while the questions are natural word sequences, how to represent relations as sequences remains a challenging problem. Here we give an overview of two types of relation sequence representations commonly used in previous work. \vspace{0.4em} \noindent \textbf{(1) Relation Name as a Single Token} (\emph{relation-level}). In this case, each relation name is treated as a unique token. The problem with this approach is that it suffers from the low relation coverage due to limited amount of training data, thus cannot generalize well to large number of open-domain relations. For example, in Figure \ref{fig:example}, when treating relation names as single tokens, it will be difficult to match the questions to relation names ``\emph{episodes\_written}'' and ``\emph{starring\_roles}'' if these names do not appear in training data -- their relation embeddings $\bh^r$s will be random vectors thus are not comparable to question embeddings $\bh^q$s. \vspace{0.4em} \noindent \textbf{(2) Relation as Word Sequence} (\emph{word-level}). In this case, the relation is treated as a sequence of words from the tokenized relation name. It has better generalization, but suffers from the lack of global information from the original relation names. For example in Figure \ref{fig:example}(b), when doing only word-level matching, it is difficult to rank the target relation ``\emph{starring\_roles}'' higher compared to the incorrect relation ``\emph{plays\_produced}''. This is because the incorrect relation contains word ``\emph{plays}'', which is more similar to the question (containing word ``\emph{play}'') in the embedding space. On the other hand, if the target relation co-occurs with questions related to ``\emph{tv appearance}'' in training, by treating the whole relation as a token (i.e. relation id), we could better learn the correspondence between this token and phrases like ``\emph{tv show}'' and ``\emph{play on}''. The two types of relation representation contain different levels of abstraction. As shown in Table \ref{tab:re_example}, the word-level focuses more on local information (words and short phrases), and the relation-level focus more on global information (long phrases and skip-grams) but suffer from data sparsity. 
Since both these levels of granularity have their own pros and cons, we propose a hierarchical matching approach for KB relation detection: for a candidate relation, our approach matches the input question to both word-level and relation-level representations to get the final ranking score. Section \ref{sec:re_method} gives the details of our proposed approach. \section{Improved KB Relation Detection} \label{sec:re_method} This section describes our hierarchical sequence matching with residual learning approach for relation detection. In order to match the question to different aspects of a relation (with different abstraction levels), we deal with three problems as follows on learning question/relation representations. \subsection{Relation Representations from Different Granularity} We provide our model with both types of relation representation: word-level and relation-level. Therefore, the input relation becomes $\br=\{r^{word}_1,\cdots,r^{word}_{M_1}\} \cup \{r^{rel}_1,\cdots,r^{rel}_{M_2}\}$, where the first $M_1$ tokens are words (e.g. \emph{\{episode, written\}}), and the last $M_2$ tokens are relation names, e.g., \emph{\{episode\_written\}} or \emph{\{starring\_roles, series\}} (when the target is a chain like in Figure \ref{fig:example}(b)). We transform each token above to its word embedding then use two BiLSTMs (with shared parameters) to get their hidden representations $[\bB^{word}_{1:M_1}:\bB^{rel}_{1:M_2}]$ (each row vector $\bb_i$ is the concatenation between forward/backward representations at $i$). We initialize the relation sequence LSTMs with the final state representations of the word sequence, as a back-off for unseen relations. We apply \emph{one} max-pooling on these two sets of vectors and get the final relation representation $\bh^r$. \subsection{Different Abstractions of Questions Representations} From Table \ref{tab:re_example}, we can see that different parts of a relation could match different contexts of question texts. Usually relation names could match longer phrases in the question and relation words could match short phrases. Yet different words might match phrases of different lengths. As a result, we hope the question representations could also comprise vectors that summarize various lengths of phrase information (different levels of abstraction), in order to match relation representations of different granularity. We deal with this problem by applying deep BiLSTMs on questions. The first-layer of BiLSTM works on the word embeddings of question words $\bq=\{q_1,\cdots,q_N\}$ and gets hidden representations $\bG^{(1)}_{1:N}=[\bg^{(1)}_1;\cdots;\bg^{(1)}_N]$. The second-layer BiLSTM works on $\bG^{(1)}_{1:N}$ to get the second set of hidden representations $\bG^{(2)}_{1:N}$. Since the second BiLSTM starts with the hidden vectors from the first layer, intuitively it could learn more general and abstract information compared to the first layer. Note that the first(second)-layer of question representations does not necessarily correspond to the word(relation)-level relation representations, instead either layer of question representations could potentially match to either level of relation representations. This raises the difficulty of matching between different levels of relation/question representations; the following section gives our proposal to deal with such problem. 
\subsection{Hierarchical Matching between Relation and Question} \label{ssec:hier_matching} Now we have question contexts of different lengths encoded in $\bG^{(1)}_{1:N}$ and $\bG^{(2)}_{1:N}$. Unlike the standard usage of deep BiLSTMs that employs the representations in the final layer for prediction, here we expect that two layers of question representations can be complementary to each other and both should be compared to the relation representation space (\emph{Hierarchical Matching}). This is important for our task since each relation token can correspond to phrases of different lengths, mainly because of syntactic variations. For example in Table \ref{tab:re_example}, the relation word \emph{written} could be matched to either the same single word in the question or a much longer phrase \emph{be the writer of}. We could perform the above hierarchical matching by computing the similarity between each layer of $\bG$ and $\bh^r$ separately and doing the (weighted) sum between the two scores. However this does not give significant improvement (see Table \ref{tab:rel}). Our analysis in Section \ref{ssec:exp_re} shows that this naive method suffers from the training difficulty, evidenced by that the converged training loss of this model is much higher than that of a single-layer baseline model. This is mainly because (1) Deep BiLSTMs do not guarantee that the two-levels of question hidden representations are comparable, the training usually falls to local optima where one layer has good matching scores and the other always has weight close to 0. (2) The training of deeper architectures itself is more difficult. To overcome the above difficulties, we adopt the idea from Residual Networks \cite{he2016deep} for hierarchical matching by adding shortcut connections between two BiLSTM layers. We proposed two ways of such \emph{Hierarchical Residual Matching}: (1) Connecting each $\bg^{(1)}_i$ and $\bg^{(2)}_i$, resulting in a $\bg^{'}_i=\bg^{(1)}_i + \bg^{(2)}_i$ for each position $i$. Then the final question representation $\bh^q$ becomes a max-pooling over all $\bg^{'}_i$s, $1$$\leq$i$\leq$$N$. (2) Applying max-pooling on $\bG^{(1)}_{1:N}$ and $\bG^{(2)}_{1:N}$ to get $\bh^{(1)}_{max}$ and $\bh^{(2)}_{max}$, respectively, then setting $\bh^q=\bh^{(1)}_{max}+\bh^{(2)}_{max}$. Finally we compute the matching score of $\br$ given $\bq$ as $s_{rel}(\br;\bq)=cos(\bh^r, \bh^q)$. Intuitively, the proposed method should benefit from hierarchical training since the second layer is fitting the residues from the first layer of matching, so the two layers of representations are more likely to be complementary to each other. This also ensures the vector spaces of two layers are comparable and makes the second-layer training easier. During training we adopt a ranking loss to maximizing the margin between the gold relation $\br^+$ and other relations $\br^-$ in the candidate pool $R$. { \abovedisplayskip=5pt \belowdisplayskip=5pt \begin{align} l_{\mathrm{rel}} = \max \{0, \gamma - s_{\mathrm{rel}}(\br^+; \bq) + s_{\mathrm{rel}}(\br^-; \bq)\} \nonumber \end{align} } where $\gamma$ is a constant parameter. Fig \ref{fig:re_model} summarizes the above \emph{Hierarchical Residual BiLSTM (\textbf{HR-BiLSTM})} model. \paragraph{Remark:} Another way of hierarchical matching consists in relying on \textbf{attention mechanism}, e.g. \cite{parikh-EtAl:2016:EMNLP2016}, to find the correspondence between different levels of representations. This performs below the HR-BiLSTM (see Table \ref{tab:rel}). 
\section{KBQA Enhanced by Relation Detection} \label{sec:kbqa_method} This section describes our KBQA pipeline system. We make minimal effort beyond training the relation detection model, which keeps the whole system easy to build. Following previous work \cite{yih2015semantic,xu2016enhancing}, our KBQA system uses an existing entity linker to produce the top-$K$ linked entities, $EL_K(q)$, for a question $q$ (``\emph{initial entity linking}''). Then we generate the KB queries for $q$ following the four steps illustrated in Algorithm \ref{algo:pipeline}. \begin{algorithm}[htbp] \small { \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \Input{Question $q$, Knowledge Base $KB$, the initial top-$K$ entity candidates $EL_K(q)$ } \Output{Top query tuple $(\hat{e},\hat{r}, \{(c, r_c)\})$} \DontPrintSemicolon \BlankLine \textbf{Entity Re-Ranking} (\emph{first-step relation detection}): Use the \emph{raw question text} as input for a relation detector to score all relations in the KB that are associated with the entities in $EL_K(q)$; use the relation scores to re-rank $EL_K(q)$ and generate a shorter list $EL'_{K'}(q)$ containing the top-$K'$ entity candidates (Section \ref{ssec:ent_reranking})\; \textbf{Relation Detection}: Detect relation(s) using the \emph{reformatted question text} in which the topic entity is replaced by a special token \emph{$<$e$>$} (Section \ref{ssec:rel})\; \textbf{Query Generation}: Combine the scores from steps 1 and 2, and select the top pair $(\hat{e},\hat{r})$ (Section \ref{ssec:query_gen})\; \textbf{Constraint Detection} (optional): Compute the similarity between $q$ and any neighbor entity $c$ of the entities along $\hat{r}$ (connected by a relation $r_c$), and add the high-scoring $c$ and $r_c$ to the query (Section \ref{ssec:constraint}). \caption{\label{algo:pipeline}{\footnotesize{KBQA with two-step relation detection}}}} \end{algorithm} Compared to previous approaches, the main difference is that we add an \emph{entity re-ranking} step after the \emph{initial entity linking}. We add this step because we have observed that entity linking sometimes becomes a bottleneck in KBQA systems. For example, on SimpleQuestions the best reported linker could only get 72.7\% top-1 accuracy on identifying topic entities. This is usually due to the ambiguity of entity names; e.g., in Figure \ref{fig:example}(a), there are a \emph{TV writer} and a \emph{baseball player} named ``\emph{Mike Kelley}'', which are impossible to distinguish with entity name matching alone. Having observed that different entity candidates usually connect to different relations, we propose to use the relations detected in the question to help disambiguate entities in the \emph{initial entity linking}.
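To make the control flow of Algorithm \ref{algo:pipeline} concrete, the following Python sketch wires the first three steps together, with constraint detection omitted. It is illustrative only: \texttt{kb} is assumed to map an entity to the set of its candidate relations (or core-chains), \texttt{linker} and \texttt{rel\_scorer} stand in for the initial entity linker and the HR-BiLSTM scorer, and all constants are placeholders for the tuned values of Sections \ref{ssec:ent_reranking} and \ref{ssec:query_gen}.
\begin{verbatim}
def kbqa(question, kb, linker, rel_scorer, K=20, K_prime=3, l=3,
         alpha=0.6, beta=0.5):
    # Initial entity linking: top-K (entity, mention, linker_score) candidates.
    candidates = linker(question, K)

    # Step 1: entity re-ranking via relation detection on the raw question text.
    rels = {r for e, _, _ in candidates for r in kb[e]}
    top_rels = set(sorted(rels, key=lambda r: rel_scorer(question, r),
                          reverse=True)[:l])
    def s_rerank(e, s_linker):
        best = max((rel_scorer(question, r) for r in top_rels & kb[e]),
                   default=float("-inf"))
        return alpha * s_linker + (1 - alpha) * best
    shortlist = sorted(candidates, key=lambda c: s_rerank(c[0], c[2]),
                       reverse=True)[:K_prime]

    # Step 2: relation detection on the reformatted question (<e> replaces the mention).
    # Step 3: query generation; combine both scores and keep the best pair.
    best_pair, best_score = None, float("-inf")
    for e, mention, s_linker in shortlist:
        q_ref = question.replace(mention, "<e>")
        for r in kb[e]:
            score = beta * s_rerank(e, s_linker) + (1 - beta) * rel_scorer(q_ref, r)
            if score > best_score:
                best_pair, best_score = (e, r), score
    return best_pair   # Step 4 (constraint detection) would extend this query
\end{verbatim}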
Sections \ref{ssec:ent_reranking} and \ref{ssec:rel} elaborate on how our relation detection helps to re-rank entities in the initial entity linking, and how those re-ranked entities then enable more accurate relation detection. As a result, the KBQA end task benefits from this process. \subsection{Entity Re-Ranking} \label{ssec:ent_reranking} In this step, we use the \emph{raw question text} as input for a relation detector to score all relations in the KB with connections to at least one of the entity candidates in $EL_K(q)$. We call this step \emph{relation detection on entity set} since it does not operate on a single topic entity as in the usual setting. We use the HR-BiLSTM as described in Section \ref{sec:re_method}. For each question $q$, after generating a score $s_{rel}(r;q)$ for each relation using the HR-BiLSTM, we use the top $l$ best-scoring relations ($R^{l}_q$) to re-rank the original entity candidates.
Concretely, for each entity $e$ and its associated relations $R_e$, given the original entity linker score $s_{linker}$ and the score of the most confident relation $r\in R_q^{l} \cap R_e$, we take a weighted sum of these two scores to re-rank the entities: \abovedisplayskip=3pt \belowdisplayskip=3pt \begin{align} s_{\mathrm{rerank}}(e;q) =& \alpha \cdot s_{\mathrm{linker}}(e;q) \nonumber \\ + & (1-\alpha) \cdot\max_{r \in R_q^{l} \cap R_e} s_{\mathrm{rel}}(r;q).\nonumber \end{align} Finally, we select the top $K' < K$ entities according to $s_{rerank}$ to form the re-ranked list $EL_{K'}^{'}(q)$. We use the example in Figure \ref{fig:example}(a) to illustrate the idea. Given the input question in the example, a relation detector is very likely to assign high scores to relations such as ``\emph{episodes\_written}'', ``\emph{author\_of}'' and ``\emph{profession}''. Then, according to the connections of the entity candidates in the KB, the TV writer ``\emph{Mike Kelley}'' will be scored higher than the baseball player ``\emph{Mike Kelley}'', because the former is connected to the relations ``\emph{episodes\_written}'' and ``\emph{profession}''. This method can be viewed as exploiting entity-relation collocation for entity linking. \subsection{Relation Detection} \label{ssec:rel} In this step, for each candidate entity $e \in EL_{K'}^{'}(q)$, we use the question text as the input to a relation detector to score all the relations $r \in R_e$ that are associated with the entity $e$ in the KB.\footnote{{Note that the number of entities and the number of relation candidates will be much smaller than those in the previous step.}} Because we have a single topic entity as input in this step, we reformat the question as follows: we replace the candidate $e$'s entity mention in $q$ with a special token ``\emph{$<$e$>$}''. This helps the model better capture the relative position of each word with respect to the entity. We use the HR-BiLSTM model to predict the score of each relation $r \in R_e$: $s_{rel} (r;e,q)$. \subsection{Query Generation} \label{ssec:query_gen} Finally, the system outputs the $<$entity, relation (or core-chain)$>$ pair $(\hat{e}, \hat{r})$ according to: {{ \abovedisplayskip=3pt \belowdisplayskip=3pt \begin{align} s(\hat{e}, \hat{r}; q) =& \max_{e \in EL_{K'}^{'}(q), r \in R_e} \left ( \beta \cdot s_{\mathrm{rerank}}(e;q) \right. \nonumber\\ &\left.+ (1-\beta) \cdot s_{\mathrm{rel}} (r;e,q) \right), \nonumber \end{align} }} where $\beta$ is a hyperparameter to be tuned. \subsection{Constraint Detection} \label{ssec:constraint} Similar to \cite{yih2015semantic}, we adopt an additional constraint detection step based on text matching. Our method can be viewed as entity linking on a KB sub-graph. It contains two steps: (1) \textbf{Sub-graph generation}: given the top-scored query generated by the previous three steps\footnote{{Starting with only the top-1 query suffers more from error propagation. However, we still achieve state-of-the-art results on WebQSP in Section \ref{sec:exp}, showing the advantage of our relation detection model. We leave beam search and feature extraction over the beam for final answer re-ranking, as in previous research, to future work.}}, for each node $v$ (the answer node or the CVT node, as in Figure \ref{fig:example}(b)), we collect all the nodes $c$ connected to $v$ by any relation $r_c$, and generate a sub-graph associated with the original query.
(2) \textbf{Entity-linking on sub-graph nodes}: we compute a matching score between each $n$-gram in the input question (excluding those overlapping the topic entity mention) and the entity name of $c$ (excluding the nodes already in the original query), by taking into account the maximum overlapping sequence of characters between them (see Appendix A for details and Appendix B for special rules dealing with date/answer-type constraints). If the matching score is larger than a threshold $\theta$ (tuned on the training set), we add the constraint entity $c$ (and $r_c$) to the query by attaching it to the corresponding node $v$ on the core-chain. \section{Experiments} \label{sec:exp} \vspace{-0.1em} \subsection{Task Introduction \& Settings} We use the SimpleQuestions \cite{bordes2015large} and WebQSP \cite{yih-EtAl:2016:P16-2} datasets. Each question in these datasets is labeled with its gold semantic parse. Hence, we can evaluate relation detection performance independently as well as on the KBQA end task. \noindent \textbf{SimpleQuestions (SQ): } This is a single-relation KBQA task. The KB we use is a Freebase subset with 2M entities (FB2M) \cite{bordes2015large}, in order to compare with previous research. \newcite{yin2016simple} also evaluated their relation extractor on this data set and released their question-relation pairs, so we run our relation detection model on their data set. For the KBQA evaluation, we also start with their entity linking results\footnote{The two resources have been downloaded from \scriptsize{\url{https://github.com/Gorov/SimpleQuestions-EntityLinking}}}. Therefore, our results can be compared with their reported results on both tasks. \noindent \textbf{WebQSP (WQ): } A multi-relation KBQA task. We use the entire Freebase KB for evaluation purposes. Following \newcite{yih-EtAl:2016:P16-2}, we use the S-MART \cite{yang-chang:2015:ACL-IJCNLP} entity-linking outputs.\footnote{{\url{https://github.com/scottyih/STAGG}}} In order to evaluate the relation detection models, we create a new relation detection task from the WebQSP data set.\footnote{{The dataset is available at \scriptsize{\url{https://github.com/Gorov/KBQA_RE_data}}.}} For each question and its labeled semantic parse: (1) we first select the topic entity from the parse; and then (2) we select all relations and relation chains (length $\leq 2$) connected to the topic entity, setting the core-chain labeled in the parse as the positive example and all the others as negative examples (a short sketch of this construction is given at the end of this subsection). We tune the following hyper-parameters on the development sets: (1) the size of the hidden states for the LSTMs (\{50, 100, 200, 400\})\footnote{{For CNNs we double the size for a fair comparison.}}; (2) the learning rate (\{0.1, 0.5, 1.0, 2.0\}); (3) whether the shortcut connections are between hidden states or between max-pooling results (see Section \ref{ssec:hier_matching}); and (4) the number of training epochs. For both the relation detection experiments and the second-step relation detection in KBQA, we first apply \emph{entity replacement} (see Section \ref{ssec:rel} and Table \ref{tab:re_example}). All word vectors are initialized with 300-$d$ pretrained word embeddings \cite{mikolov2013distributed}. The embeddings of relation names are randomly initialized, since existing pre-trained relation embeddings (e.g. TransE) usually support only limited sets of relation names. We leave the use of pre-trained relation embeddings to future work.
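For reference, the candidate construction for the WebQSP relation detection task described above can be sketched as follows (a sketch only; \texttt{parse} and \texttt{kb.outgoing} are hypothetical interfaces to the labeled semantic parse and the Freebase neighborhood lookup):
\begin{verbatim}
def build_wq_example(parse, kb, max_len=2):
    # Candidates: all relation chains of length <= 2 starting at the topic entity;
    # the labeled core-chain is the positive, every other candidate is a negative.
    topic = parse.topic_entity
    gold = tuple(parse.core_chain)
    candidates = set()
    for r1, node in kb.outgoing(topic):
        candidates.add((r1,))
        if max_len >= 2:
            for r2, _ in kb.outgoing(node):
                candidates.add((r1, r2))
    negatives = [c for c in candidates if c != gold]
    return parse.question, gold, negatives
\end{verbatim}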
\vspace{-0.2em} \subsection{Relation Detection Results} \vspace{-0.1em} \label{ssec:exp_re} Table \ref{tab:rel} shows the results on the two relation detection tasks. The AMPCNN result is from \cite{yin2016simple}, which yielded state-of-the-art scores by outperforming several attention-based methods. We re-implemented the BiCNN model from \cite{yih2015semantic}, where both questions and relations are represented with the word hash trick on character tri-grams. The baseline BiLSTM over relation word sequences appears to be the best baseline on WebQSP and is close to the previous best result of AMPCNN on SimpleQuestions. Our proposed HR-BiLSTM outperformed the best baselines on both tasks by margins of 2-3\% (p $<$ 0.001 and 0.01 compared to the best baseline \emph{BiLSTM w/ words} on SQ and WQ, respectively). Note that using only relation names instead of words results in a weaker BiLSTM baseline: the model yields a significant performance drop on SimpleQuestions (91.2\% to 88.9\%). However, the drop is much smaller on WebQSP, which suggests that unseen relations have a much bigger impact on SimpleQuestions. \paragraph{Ablation Test:} The bottom of Table \ref{tab:rel} shows ablation results for the proposed HR-BiLSTM. First, hierarchical matching between questions and both relation names and relation words yields improvements on both datasets, especially for SimpleQuestions (93.3\% vs. 91.2/88.8\%). Second, residual learning helps hierarchical matching compared to the weighted-sum and attention-based baselines (see Section \ref{ssec:hier_matching}). For the attention-based baseline, we tried the model from \cite{parikh-EtAl:2016:EMNLP2016} and its one-way variants, of which the one-way model gives better results\footnote{{We also tried to apply the same attention method on a deep BiLSTM with residual connections, but it does not lead to better results compared to HR-BiLSTM. We hypothesize that the idea of hierarchical matching with an attention mechanism may work better for long sequences, and that new advanced attention mechanisms \cite{wang-jiang:2016:N16-1,wang2017bilateral} might help hierarchical matching. We leave the above directions to future work.}}. Note that residual learning helps significantly on WebQSP (80.65\% to 82.53\%), while it does not help as much on SimpleQuestions. On SimpleQuestions, even removing the deep layers causes only a small drop in performance. WebQSP benefits more from the residual and deeper architecture, possibly because handling a larger scope of context matching is more important for this dataset. Finally, on WebQSP, replacing the BiLSTM with a CNN in our hierarchical matching framework results in a large performance drop, yet on SimpleQuestions the gap is much smaller. We believe this is because the LSTM relation encoder can better learn the composition of chains of relations in WebQSP, as it is better at dealing with longer dependencies. \paragraph{Analysis} Next, we present empirical evidence showing why our HR-BiLSTM model achieves the best scores. We use WebQSP for this analysis. First, we have the hypothesis that \emph{training of the weighted-sum model usually falls into local optima, since deep BiLSTMs do not guarantee that the two levels of question hidden representations are comparable}. This is evidenced by the fact that during training one layer usually gets a weight close to 0 and is thus ignored. For example, one run gives weights of -75.39/0.14 for the two layers (we exponentiate the weights for the final weighted sum).
It also gives much lower training accuracy (91.94\%) than HR-BiLSTM (95.67\%), reflecting this training difficulty. Second, we have the hypothesis that, for KB relation detection, \emph{training deep BiLSTMs is more difficult without shortcut connections}. Our experiments suggest that a deeper BiLSTM does not always result in higher training accuracy: a two-layer BiLSTM converges to 94.99\%, even lower than the 95.25\% achieved by a single-layer BiLSTM. Since under our setting the two-layer model contains the single-layer model as a special case (so it could potentially fit the training data better), this result suggests that the deep BiLSTM without shortcut connections suffers more from training difficulty. Finally, we hypothesize that \emph{HR-BiLSTM is more than a combination of two BiLSTMs with residual connections, because it encourages the hierarchical architecture to learn different levels of abstraction}. To verify this, we replace the deep BiLSTM question encoder with two single-layer BiLSTMs (both on words) with shortcut connections between their hidden states. This decreases test accuracy to 76.11\%, while the training accuracy is similar to that of HR-BiLSTM, indicating a more serious over-fitting problem. This shows that the residual and deep structures both contribute to the good performance of HR-BiLSTM. \subsection{KBQA End-Task Results} Table \ref{tab:overall_results} compares our system with two published baselines: (1) STAGG \cite{yih2015semantic}, the state of the art on WebQSP\footnote{{The STAGG score on SQ is from \cite{bao-EtAl:2016:COLING}.}}, and (2) AMPCNN \cite{yin2016simple}, the state of the art on SimpleQuestions. Since these two baselines are specially designed/tuned for one particular dataset, they do not generalize well when applied to the other dataset. In order to highlight the effect of different relation detection models on the KBQA end task, we also implemented another baseline that uses our KBQA system but replaces HR-BiLSTM with our implementation of the AMPCNN (for SimpleQuestions) or the char-3-gram BiCNN (for WebQSP) relation detector (second block in Table \ref{tab:overall_results}). Compared to the \emph{baseline relation detector} (3rd row of results), our method, which includes the improved relation detector (HR-BiLSTM), improves the KBQA end task by 2-3\% (4th row). Note that, in contrast to previous KBQA systems, our system does not use a joint-inference or feature-based re-ranking step; nevertheless, it still achieves results better than or comparable to the state of the art. The third block of the table details two ablation tests for the proposed components of our KBQA system: (1) Removing the entity re-ranking step significantly decreases the scores. Since the re-ranking step relies on the relation detection models, this shows that our HR-BiLSTM model contributes to the good performance in multiple ways. Appendix C gives the detailed performance of the re-ranking step. (2) In contrast to the conclusion in \cite{yih2015semantic}, constraint detection is crucial for our system\footnote{Note that another reason is that we are evaluating on accuracy here. When evaluating on F1, the gap would be smaller.}. This is probably because our joint performance on topic entity and core-chain detection is more accurate (77.5\% top-1 accuracy), leaving a large potential (77.5\% vs.
58.0\%) for the constraint detection module to improve. Finally, like STAGG, which uses multiple relation detectors (see \newcite{yih2015semantic} for the three models used), we also try using the top-3 relation detectors from Section \ref{ssec:exp_re}. As shown in the last row of Table \ref{tab:overall_results}, this gives a significant performance boost, resulting in a new state-of-the-art result on SimpleQuestions and a result comparable to the state of the art on WebQSP. \section{Conclusion} KB relation detection is a key step in KBQA and is significantly different from general relation extraction tasks. We propose a novel KB relation detection model, HR-BiLSTM, that performs hierarchical matching between questions and KB relations. Our model outperforms previous methods on KB relation detection tasks and allows our KBQA system to achieve state-of-the-art results. For future work, we will investigate the integration of our HR-BiLSTM into end-to-end systems. For example, our model could be integrated into the decoder in \cite{liang2016neural} to provide better sequence prediction. We will also investigate new emerging datasets like GraphQuestions \cite{su-EtAl:2016:EMNLP2016} and ComplexQuestions \cite{bao-EtAl:2016:COLING} to handle more characteristics of general QA. \bibliographystyle{acl_natbib} \clearpage \newpage \input{acl2017_appendix} \end{document} \section*{Appendix A: Detailed Feature List for \emph{SimpleLinker}} Given an input question $q$ and an entity name $e$ in the KB, we denote the lengths of the question and the entity name as $\vert q \vert$ and $\vert e \vert$. For a mention $m$ of the entity $e$ which is an $n$-gram in $q$, we compute the longest consecutive common sub-sequence between $m$ and $e$, and denote its length as $\vert m \cap e \vert$. All the lengths above are measured in characters. The features we use in \emph{SimpleLinker} include: \begin{enumerate} \item The proportions of the length of the overlap between the entity mention and the entity name (in characters) in the entity name, $\frac{\vert m \cap e \vert}{\vert e \vert}$, and in the question, $\frac{\vert m \cap e \vert}{\vert q \vert}$; \item The relative position of the entity mention in the question. We denote the beginning position of $m$ in $q$ as $p_m$ (in characters); the feature is then $\frac{p_m}{\vert q\vert }$. \end{enumerate} The final score for linking a mention in the question to $e$ is \begin{align} s_{linker}(e;q) = \max_m \frac{\vert m \cap e \vert}{\vert q \vert} + \frac{\vert m \cap e \vert}{\vert e \vert} + \frac{p_m}{\vert q\vert } \nonumber \end{align} \section*{Appendix B: Mathematics for the Relation Detection Models} \paragraph{BiCNN}: given an input pair of question $q$ and relation $r$, we use two CNNs (with shared parameters in the experiments) to get the hidden states for each position of $q$ and $r$: $h_{1:N}^q$ and $h_{1:M}^r$, where $N$ and $M$ are the lengths of $q$ and $r$, respectively. By applying max-pooling on $h_{1:N}^q$ and $h_{1:M}^r$ we get the question representation $h_{max}^q$, where $h_{max}^q[i] = \max_{1 \leq j \leq N} h_j^q[i]$. Similarly, we get the relation representation $h_{max}^r$. Then the score of $r$ given $q$ is defined as $s_{rel}(r;q)=cos(h_{max}^q, h_{max}^r)$. \paragraph{APCNN}: given the same representations $h_{1:N}^q$ and $h_{1:M}^r$ as above, we compute the alignment score $a_{ij} = (h_{i}^r)^T \mathbf{U} h_{j}^q$. In the experiments, we use the identity matrix $\mathbf{U}=\mathbf{I}$.
Based on the alignment scores, we compute the attention score for each $h_i^r$ as $w^r_i=\frac{e^{\max_j a_{ij}}}{\sum_{k=1:M} e^{\max_j a_{kj}}}$, and the attention score for each $h_j^q$ as $w^q_j=\frac{e^{\max_i a_{ij}}}{\sum_{k=1:N} e^{\max_i a_{ik}}}$. Then the score of relation $r$ given $q$ is \begin{align} s_{rel}(r;q)=cos(\sum_{1\leq j \leq N} w^q_j h_{j}^q, \sum_{1 \leq i \leq M} w^r_i h_{i}^r) \nonumber \end{align} \paragraph{Entity Replacement}: when we use entity replacement, the representation vector $h_{max}^q$ depends on the topic entity $e$. Therefore, we denote the similarity score as $s_{rel}(r;e, q)$. \section*{Appendix A: Detailed Score Computation for Constraint Detection} Given an input question $q$ and an entity name $e$ in the KB, we denote the lengths of the question and the entity name as $\vert q \vert$ and $\vert e \vert$. For a mention $m$ of the entity $e$ which is an $n$-gram in $q$, we compute the longest consecutive common sub-sequence between $m$ and $e$, and denote its length as $\vert m \cap e \vert$. All the lengths above are measured in characters. Based on the above quantities, we compute the proportions of the length of the overlap between the entity mention and the entity name (in characters) in the entity name, $\frac{\vert m \cap e \vert}{\vert e \vert}$, and in the question, $\frac{\vert m \cap e \vert}{\vert q \vert}$. The final score for linking a mention in the question to $e$ is \begin{align} s_{linker}(e;q) = \max_m \frac{\vert m \cap e \vert}{\vert q \vert} + \frac{\vert m \cap e \vert}{\vert e \vert} \nonumber \end{align} \section*{Appendix B: Special Rules for Constraint Detection} \begin{enumerate} \item Special threshold for date constraints. The time stamps in the KB usually follow the year-month-day format, while the times in WebQSP are usually years. This makes the overlap between the date entities in questions and the KB entity names small (the length of the overlap is usually 4). To deal with this, we only check whether the dates in questions match the years in the KB, and thus use a special threshold of $\theta=1$ for date constraints. \item Filtering the constraints for answer nodes. Sometimes the answer node can connect to a huge number of other nodes, e.g., when the question asks for a country and we have the answer candidate \emph{the U.S.}. From observations on the WebQSP dataset, we found that most of the time the gold constraints on answers are their entity types (e.g., whether the question asks for a country or a city). Based on this observation, in the constraint detection step we only keep, for the answer nodes, the tuples with \emph{type} relations (i.e. the relation name contains the word ``\emph{type}''), such as \emph{common.topic.notable\_types, education.educational\_institution.school\_type}, etc. \end{enumerate} \section*{Appendix C: Effects of Entity Re-Ranking on SimpleQuestions} Removing the entity re-ranking step results in a significant performance drop (see Table \ref{tab:overall_results}, row \emph{w/o entity re-ranking}). Table \ref{tab:final_linking} evaluates our re-ranker as a separate task. Our re-ranker yields a large improvement, especially when the beam sizes are smaller than 10. This indicates another important use of our improved relation detection model: re-ranking for entity linking.
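To make the overlap-based scores in the appendices concrete, the character-overlap computation, the \emph{SimpleLinker} score, and the constraint matching decision can be sketched in Python as follows (illustrative only; the helper names are ours, and the threshold value is a placeholder for the tuned $\theta$):
\begin{verbatim}
from difflib import SequenceMatcher

def overlap(m, e):
    # Length of the longest consecutive common character sub-sequence between m and e.
    return SequenceMatcher(None, m, e).find_longest_match(0, len(m), 0, len(e)).size

def linker_score(question, entity_name, mentions):
    # s_linker(e; q) = max over mentions m of: overlap/|q| + overlap/|e| + p_m/|q|,
    # with all lengths in characters; `mentions` holds (n-gram, start offset p_m) pairs.
    best = float("-inf")
    for m, start in mentions:
        ov = overlap(m, entity_name)
        best = max(best, ov / len(question) + ov / len(entity_name)
                   + start / len(question))
    return best

def constraint_match(question, ngrams, neighbor_name, theta=0.6):
    # Constraint detection keeps neighbor c when the overlap score exceeds theta.
    score = max(overlap(m, neighbor_name) / len(question)
                + overlap(m, neighbor_name) / len(neighbor_name) for m in ngrams)
    return score > theta
\end{verbatim}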
Since both these levels of granularity have their own pros and cons, we propose a hierarchical matching approach for KB relation detection: for a candidate relation, our approach matches the input question to both word-level and relation-level representations to get the final ranking score. Section \ref{sec:re_method} gives the details of our proposed approach. \section{Improved KB Relation Detection} \label{sec:re_method} This section describes our hierarchical sequence matching with residual learning approach for relation detection. In order to match the question to different aspects of a relation (with different abstraction levels), we deal with three problems as follows on learning question/relation representations. \subsection{Relation Representations from Different Granularity} We provide our model with both types of relation representation: word-level and relation-level. Therefore, the input relation becomes $\br=\{r^{word}_1,\cdots,r^{word}_{M_1}\} \cup \{r^{rel}_1,\cdots,r^{rel}_{M_2}\}$, where the first $M_1$ tokens are words (e.g. \emph{\{episode, written\}}), and the last $M_2$ tokens are relation names, e.g., \emph{\{episode\_written\}} or \emph{\{starring\_roles, series\}} (when the target is a chain like in Figure \ref{fig:example}(b)). We transform each token above to its word embedding then use two BiLSTMs (with shared parameters) to get their hidden representations $[\bB^{word}_{1:M_1}:\bB^{rel}_{1:M_2}]$ (each row vector $\bb_i$ is the concatenation between forward/backward representations at $i$). We initialize the relation sequence LSTMs with the final state representations of the word sequence, as a back-off for unseen relations. We apply \emph{one} max-pooling on these two sets of vectors and get the final relation representation $\bh^r$. \subsection{Different Abstractions of Questions Representations} From Table \ref{tab:re_example}, we can see that different parts of a relation could match different contexts of question texts. Usually relation names could match longer phrases in the question and relation words could match short phrases. Yet different words might match phrases of different lengths. As a result, we hope the question representations could also comprise vectors that summarize various lengths of phrase information (different levels of abstraction), in order to match relation representations of different granularity. We deal with this problem by applying deep BiLSTMs on questions. The first-layer of BiLSTM works on the word embeddings of question words $\bq=\{q_1,\cdots,q_N\}$ and gets hidden representations $\bG^{(1)}_{1:N}=[\bg^{(1)}_1;\cdots;\bg^{(1)}_N]$. The second-layer BiLSTM works on $\bG^{(1)}_{1:N}$ to get the second set of hidden representations $\bG^{(2)}_{1:N}$. Since the second BiLSTM starts with the hidden vectors from the first layer, intuitively it could learn more general and abstract information compared to the first layer. Note that the first(second)-layer of question representations does not necessarily correspond to the word(relation)-level relation representations, instead either layer of question representations could potentially match to either level of relation representations. This raises the difficulty of matching between different levels of relation/question representations; the following section gives our proposal to deal with such problem. 
\subsection{Hierarchical Matching between Relation and Question} \label{ssec:hier_matching} Now we have question contexts of different lengths encoded in $\bG^{(1)}_{1:N}$ and $\bG^{(2)}_{1:N}$. Unlike the standard usage of deep BiLSTMs that employs the representations in the final layer for prediction, here we expect that two layers of question representations can be complementary to each other and both should be compared to the relation representation space (\emph{Hierarchical Matching}). This is important for our task since each relation token can correspond to phrases of different lengths, mainly because of syntactic variations. For example in Table \ref{tab:re_example}, the relation word \emph{written} could be matched to either the same single word in the question or a much longer phrase \emph{be the writer of}. We could perform the above hierarchical matching by computing the similarity between each layer of $\bG$ and $\bh^r$ separately and doing the (weighted) sum between the two scores. However this does not give significant improvement (see Table \ref{tab:rel}). Our analysis in Section \ref{ssec:exp_re} shows that this naive method suffers from the training difficulty, evidenced by that the converged training loss of this model is much higher than that of a single-layer baseline model. This is mainly because (1) Deep BiLSTMs do not guarantee that the two-levels of question hidden representations are comparable, the training usually falls to local optima where one layer has good matching scores and the other always has weight close to 0. (2) The training of deeper architectures itself is more difficult. To overcome the above difficulties, we adopt the idea from Residual Networks \cite{he2016deep} for hierarchical matching by adding shortcut connections between two BiLSTM layers. We proposed two ways of such \emph{Hierarchical Residual Matching}: (1) Connecting each $\bg^{(1)}_i$ and $\bg^{(2)}_i$, resulting in a $\bg^{'}_i=\bg^{(1)}_i + \bg^{(2)}_i$ for each position $i$. Then the final question representation $\bh^q$ becomes a max-pooling over all $\bg^{'}_i$s, $1$$\leq$i$\leq$$N$. (2) Applying max-pooling on $\bG^{(1)}_{1:N}$ and $\bG^{(2)}_{1:N}$ to get $\bh^{(1)}_{max}$ and $\bh^{(2)}_{max}$, respectively, then setting $\bh^q=\bh^{(1)}_{max}+\bh^{(2)}_{max}$. Finally we compute the matching score of $\br$ given $\bq$ as $s_{rel}(\br;\bq)=cos(\bh^r, \bh^q)$. Intuitively, the proposed method should benefit from hierarchical training since the second layer is fitting the residues from the first layer of matching, so the two layers of representations are more likely to be complementary to each other. This also ensures the vector spaces of two layers are comparable and makes the second-layer training easier. During training we adopt a ranking loss to maximizing the margin between the gold relation $\br^+$ and other relations $\br^-$ in the candidate pool $R$. { \abovedisplayskip=5pt \belowdisplayskip=5pt \begin{align} l_{\mathrm{rel}} = \max \{0, \gamma - s_{\mathrm{rel}}(\br^+; \bq) + s_{\mathrm{rel}}(\br^-; \bq)\} \nonumber \end{align} } where $\gamma$ is a constant parameter. Fig \ref{fig:re_model} summarizes the above \emph{Hierarchical Residual BiLSTM (\textbf{HR-BiLSTM})} model. \paragraph{Remark:} Another way of hierarchical matching consists in relying on \textbf{attention mechanism}, e.g. \cite{parikh-EtAl:2016:EMNLP2016}, to find the correspondence between different levels of representations. This performs below the HR-BiLSTM (see Table \ref{tab:rel}). 
\section{KBQA Enhanced by Relation Detection} \label{sec:kbqa_method} This section describes our KBQA pipeline system. We make minimal efforts beyond the training of the relation detection model, making the whole system easy to build. Following previous work \cite{yih2015semantic,xu2016enhancing}, our KBQA system takes an existing entity linker to produce the top-$K$ linked entities, $EL_K(q)$, for a question $q$ (``\emph{initial entity linking}''). % Then we generate the KB queries for $q$ following the four steps illustrated in Algorithm \ref{algo:pipeline}. \begin{algorithm}[htbp] \small { \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \Input{Question $q$, Knowledge Base $KB$, the initial top-$K$ entity candidates $EL_K(q)$ } \Output{Top query tuple $(\hat{e},\hat{r}, \{(c, r_c)\})$} \DontPrintSemicolon \BlankLine \textbf{Entity Re-Ranking} (\emph{first-step relation detection}): Use the \emph{raw question text} as input for a relation detector to score all relations in the KB that are associated to the entities in $EL_K(q)$; use the relation scores to re-rank $EL_K(q)$ and generate a shorter list $EL'_{K'}(q)$ containing the top-$K'$ entity candidates (Section \ref{ssec:ent_reranking})\; \textbf{Relation Detection}: Detect relation(s) using the \emph{reformatted question text} in which the topic entity is replaced by a special token \emph{$<$e$>$} (Section \ref{ssec:rel})\; \textbf{Query Generation}: Combine the scores from step 1 and 2, and select the top pair $(\hat{e},\hat{r})$ (Section \ref{ssec:query_gen})\; \textbf{Constraint Detection} (optional): Compute similarity between $q$ and any neighbor entity $c$ of the entities along $\hat{r}$ (connecting by a relation $r_c$) , add the high scoring $c$ and $r_c$ to the query (Section \ref{ssec:constraint}). \caption{\label{algo:pipeline}{\footnotesize{KBQA with two-step relation detection}}}} \end{algorithm} Compared to previous approaches, the main difference is that we have an additional \emph{entity re-ranking} step after the \emph{initial entity linking}. We have this step because we have observed that entity linking sometimes becomes a bottleneck in KBQA systems. For example, on SimpleQuestions the best reported linker could only get 72.7\% top-1 accuracy on identifying topic entities. This is usually due to the ambiguities of entity names, e.g. in Fig \ref{fig:example}(a), there are \emph{TV writer} and \emph{baseball player} ``\emph{Mike Kelley}'', which is impossible to distinguish with only entity name matching. Having observed that different entity candidates usually connect to different relations, here we propose to help entity disambiguation in the \emph{initial entity linking} with relations detected in questions. \removed{ Previous efforts on KBQA usually generate the KB queries from a question $q$ step-by-step as follows: (1) Entity linking, in which the top-$K$ entity candidates for a question $q$ ($EL_K(q)$) are selected. (2) Relation detection, where a topic entity $e$ is selected, and a relation (or chain of relations) is detected from its corresponding relation set $R_e=\{r(e,\cdot) \in KB\}$. (3) Constraint detection, which tries to apply the rest entities in $EL_K(q) \setminus \{e\}$ as constraints to further filter the answers. As the starting step, the accuracy and coverage of this top-$K$ list is critical. However, we have observed that entity linking sometimes becomes bottleneck of KBQA systems. 
While on WebQSP the best reported linker could get 87.98\% top-1 accuracy on identifying topic entities, on SimpleQuestions this number becomes 72.7\%. Our error analysis shows that such errors are usually due to the ambiguities of entity names. For example in Fig \ref{fig:example}(a), there are \emph{TV writer} and \emph{baseball player} ``\emph{Mike Kelley}'', which is impossible to distinguish with only text matching. To overcome the above difficulty, previous work usually deals with such problem by generating large beams and then relies on hand-crafted features to re-rank the final generated KB queries, e.g., \newcite{golub2016character} used $K=50$ for SimpleQuestions, which slows down the speed of the model. Here we propose an alternative solution to this problem: having observed that different entity candidates usually connect to different relations, we propose to use relations detected in questions to help entity disambiguation in the \emph{initial entity linking}. Concretely, we add an additional component between the steps (1) and (2) above, which is also a relation detection model on question text but is used to re-rank the entity candidates. We call this \emph{relation detection on entity set}, since it is detecting relation for a set of entity candidates instead of for single specific entities. } Sections \ref{ssec:ent_reranking} and \ref{ssec:rel} elaborate how our relation detection help to re-rank entities in the initial entity linking, and then those re-ranked entities enable more accurate relation detection. The KBQA end task, as a result, benefits from this process. \removed { \begin{algorithm*}[htbp] { \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \Input{Question $q$, Knowledge Base $KB$, the initial top-$K$ entity candidates $EL_K(q)$ } \Output{Top query tuple $(\hat{e},\hat{r}, \{(c, r_c)\})$} \textbf{Entity Re-Ranking}: Use the \emph{raw question text} as input for a relation detector to score all relations in the KB that are associated to the entities in $EL_K(q)$; use the relation scores to re-rank $EL_K(q)$ and generate a shorter list $EL'_{K'}(q)$ containing only the top-$K'$ entity candidates (Section \ref{ssec:ent_reranking})\; \textbf{Relation Detection}: Perform relation detection using the \emph{reformatted question text} in which the topic entity is replaced by an especial token \emph{$<$e$>$} (Sec. \ref{ssec:rel})\; \textbf{Query Generation}: Combine the scores from step 1 and 2, and select the top pair $(\hat{e},\hat{r})$ (Sec. \ref{ssec:rel})\; \textbf{Constraint Detection} (optional): Compute similarity between any $n$-gram in $q$ and any neighbor node $c$ (connected by relation $r_c$) of each entity in the above query, add the high scoring $c$ and $r_c$ to the query (Sec. \ref{ssec:constraint}). \label{algo:pipeline} \caption{\scriptsize{KBQA with two-step relation detection}}} \end{algorithm*} } \subsection{Entity Re-Ranking} \label{ssec:ent_reranking} In this step, we use the \emph{raw question text} as input for a relation detector to score all relations in the KB with connections to at least one of the entity candidates in $EL_K(q)$. We call this step \emph{relation detection on entity set} since it does not work on a single topic entity as the usual settings. We use the HR-BiLSTM as described in Sec. \ref{sec:re_method}. For each question $q$, after generating a score $s_{rel}(r;q)$ for each relation using HR-BiLSTM, we use the top $l$ best scoring relations ($R^{l}_q$) to re-rank the original entity candidates. 
Concretely, for each entity $e$ and its associated relations $R_e$, given the original entity linker score $s_{linker}$, and the score of the most confident relation $r\in R_q^{l} \cap R_e$, we sum these two scores to re-rank the entities: \abovedisplayskip=3pt \belowdisplayskip=3pt \begin{align} s_{\mathrm{rerank}}(e;q) =& \alpha \cdot s_{\mathrm{linker}}(e;q) \nonumber \\ + & (1-\alpha) \cdot\max_{r \in R_q^{l} \cap R_e} s_{\mathrm{rel}}(r;q).\nonumber \end{align} Finally, we select top $K'$ $<$ $K$ entities according to score $s_{rerank}$ to form the re-ranked list $EL_{K'}^{'}(q)$. We use the same example in Fig \ref{fig:example}(a) to illustrate the idea. Given the input question in the example, a relation detector is very likely to assign high scores to relations such as ``\emph{episodes\_written}'', ``\emph{author\_of}'' and ``\emph{profession}''. Then, according to the connections of entity candidates in KB, we find that the TV writer ``\emph{Mike Kelley}'' will be scored higher than the baseball player ``\emph{Mike Kelley}'', because the former has the relations ``\emph{episodes\_written}'' and ``\emph{profession}''. This method can be viewed as exploiting entity-relation collocation for entity linking. \subsection{Relation Detection} \label{ssec:rel} In this step, for each candidate entity $e \in EL_K'(q)$, we use the question text as the input to a relation detector to score all the relations $r \in R_e$ that are associated to the entity $e$ in the KB.\footnote{{Note that the number of entities and the number of relation candidates will be much smaller than those in the previous step.}} Because we have a single topic entity input in this step, we do the following question reformatting: we replace the the candidate $e$'s entity mention in $q$ with a token ``\emph{$<$e$>$}''. This helps the model better distinguish the relative position of each word compared to the entity. We use the HR-BiLSTM model to predict the score of each relation $r \in R_e$: $s_{rel} (r;e,q)$. \subsection{Query Generation} \label{ssec:query_gen} Finally, the system outputs the $<$entity, relation (or core-chain)$>$ pair $(\hat{e}, \hat{r})$ according to: {{ \abovedisplayskip=3pt \belowdisplayskip=3pt \begin{align} s(\hat{e}, \hat{r}; q) =& \max_{e \in EL_{K'}^{'}(q), r \in R_e} \left ( \beta \cdot s_{\mathrm{rerank}}(e;q) \right. \nonumber\\ &\left.+ (1-\beta) \cdot s_{\mathrm{rel}} (r;e,q) \right), \nonumber \end{align} }} where $\beta$ is a hyperparameter to be tuned. %possibly because the $s_{rel}$ scores are closer to each other. \subsection{Constraint Detection} \label{ssec:constraint} Similar to \cite{yih2015semantic}, we adopt an additional constraint detection step based on text matching. Our method can be viewed as entity-linking on a KB sub-graph. It contains two steps: (1) \textbf{Sub-graph generation}: given the top scored query generated by the previous 3 steps\footnote{{Starting with the top-1 query suffers more from error propagation. However we still achieve state-of-the-art on WebQSP in Sec.\ref{sec:exp}, showing the advantage of our relation detection model. We leave in future work beam-search and feature extraction on beam for final answer re-ranking like in previous research.}}, for each node $v$ (answer node or the CVT node like in Figure \ref{fig:example}(b)), we collect all the nodes $c$ connecting to $v$ (with relation $r_c$) with any relation, and generate a sub-graph associated to the original query. 
(2) \textbf{Entity-linking on sub-graph nodes}: we compute a matching score between each $n$-gram in the input question (without overlapping the topic entity) and entity name of $c$ (except for the node in the original query) by taking into account the maximum overlapping sequence of characters between them (see Appendix A for details and B for special rules dealing with date/answer type constraints). If the matching score is larger than a threshold $\theta$ (tuned on training set), we will add the constraint entity $c$ (and $r_c$) to the query by attaching it to the corresponding node $v$ on the core-chain. \section{Experiments} \label{sec:exp} \vspace{-0.1em} \subsection{Task Introduction \& Settings} We use the SimpleQuestions \cite{bordes2015large} and WebQSP \cite{yih-EtAl:2016:P16-2} datasets. Each question in these datasets is labeled with the gold semantic parse. Hence we can directly evaluate relation detection performance independently as well as evaluate on the KBQA end task. \noindent \textbf{SimpleQuestions (SQ): } It is a single-relation KBQA task. The KB we use consists of a Freebase subset with 2M entities (FB2M) \cite{bordes2015large}, in order to compare with previous research. \newcite{yin2016simple} also evaluated their relation extractor on this data set and released their proposed question-relation pairs, so we run our relation detection model on their data set. For the KBQA evaluation, we also start with their entity linking results\footnote{The two resources have been downloaded from \scriptsize{\url{https://github.com/Gorov/SimpleQuestions-EntityLinking}}}. Therefore, our results can be compared with their reported results on both tasks. \noindent \textbf{WebQSP (WQ): } A multi-relation KBQA task. We use the entire Freebase KB for evaluation purposes. Following \newcite{yih-EtAl:2016:P16-2}, we use S-MART \cite{yang-chang:2015:ACL-IJCNLP} entity-linking outputs.\footnote{{\url{https://github.com/scottyih/STAGG}}} In order to evaluate the relation detection models, we create a new relation detection task from the WebQSP data set.\footnote{{The dataset is available at \scriptsize{\url{https://github.com/Gorov/KBQA_RE_data}}.}} For each question and its labeled semantic parse: (1) we first select the topic entity from the parse; and then (2) select all the relations and relation chains (length $\leq$ 2) connected to the topic entity, and set the core-chain labeled in the parse as the positive label and all the others as the negative examples. We tune the following hyper-parameters on development sets: (1) the size of hidden states for LSTMs (\{50, 100, 200, 400\})\footnote{{For CNNs we double the size for fair comparison.}}; (2) learning rate (\{0.1, 0.5, 1.0, 2.0\}); (3) whether the shortcut connections are between hidden states or between max-pooling results (see Section \ref{ssec:hier_matching}); and (4) the number of training epochs. For both the relation detection experiments and the second-step relation detection in KBQA, we have \emph{entity replacement} first (see Section \ref{ssec:rel} and Figure \ref{tab:re_example}). All word vectors are initialized with 300-$d$ pretrained word embeddings \cite{mikolov2013distributed}. The embeddings of relation names are randomly initialized, since existing pre-trained relation embeddings (e.g. TransE) usually support limited sets of relation names. We leave the usage of pre-trained relation embeddings to future work. 
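Before turning to the results, the following minimal sketch makes the scoring in the re-ranking and query generation steps (Sections \ref{ssec:ent_reranking}--\ref{ssec:query_gen}) concrete, assuming the linker scores, relation scores and KB connections are given as plain dictionaries; the helper names and the default values of $\alpha$, $\beta$, $l$ and $K'$ are illustrative, not the actual implementation.

\begin{verbatim}
def rerank_entities(linker_scores, rel_scores, entity_relations,
                    alpha=0.6, l=3, k_prime=2):
    """linker_scores: {entity e: s_linker(e;q)}
    rel_scores: {relation r: s_rel(r;q)} from HR-BiLSTM on the raw question
    entity_relations: {entity e: set of KB relations connected to e}"""
    top_rels = set(sorted(rel_scores, key=rel_scores.get, reverse=True)[:l])
    reranked = {}
    for e, s_link in linker_scores.items():
        shared = top_rels & entity_relations[e]
        # If no top-l relation connects to e, only the linker score is used
        # (an assumption; the text does not specify this case).
        best_rel = max((rel_scores[r] for r in shared), default=0.0)
        reranked[e] = alpha * s_link + (1 - alpha) * best_rel
    # Keep the K' highest-scoring entity candidates.
    return dict(sorted(reranked.items(), key=lambda kv: -kv[1])[:k_prime])

def generate_query(reranked, rel_scores_per_entity, beta=0.5):
    """rel_scores_per_entity: {entity e: {relation r: s_rel(r;e,q)}} computed
    on the reformatted question with the entity mention replaced by <e>."""
    best_pair, best_score = None, float("-inf")
    for e, s_rerank in reranked.items():
        for r, s_rel in rel_scores_per_entity[e].items():
            score = beta * s_rerank + (1 - beta) * s_rel
            if score > best_score:
                best_pair, best_score = (e, r), score
    return best_pair, best_score
\end{verbatim}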
\vspace{-0.2em} \subsection{Relation Detection Results} \vspace{-0.1em} \label{ssec:exp_re} Table \ref{tab:rel} shows the results on the two relation detection tasks. The AMPCNN result is from \cite{yin2016simple}, which yielded state-of-the-art scores by outperforming several attention-based methods. We re-implemented the BiCNN model from \cite{yih2015semantic}, where both questions and relations are represented with the word hash trick on character tri-grams. The BiLSTM baseline using relation word sequences is the best baseline on WebQSP and is close to the previous best result of AMPCNN on SimpleQuestions. Our proposed HR-BiLSTM outperformed the best baselines on both tasks by margins of 2-3\% (p $<$ 0.001 and 0.01 compared to the best baseline \emph{BiLSTM w/ words} on SQ and WQ respectively). Note that using only relation names instead of words results in a weaker baseline BiLSTM model. The model yields a significant performance drop on SimpleQuestions (91.2\% to 88.9\%). However, the drop is much smaller on WebQSP, which suggests that unseen relations have a much bigger impact on SimpleQuestions. \paragraph{Ablation Test:} The bottom of Table \ref{tab:rel} shows ablation results for the proposed HR-BiLSTM. First, hierarchical matching between questions and both relation names and relation words yields improvement on both datasets, especially for SimpleQuestions (93.3\% vs. 91.2/88.8\%). Second, residual learning helps hierarchical matching compared to the weighted-sum and attention-based baselines (see Section \ref{ssec:hier_matching}). For the attention-based baseline, we tried the model from \cite{parikh-EtAl:2016:EMNLP2016} and its one-way variations, where the one-way model gives better results\footnote{{We also tried to apply the same attention method to a deep BiLSTM with residual connections, but it did not lead to better results than HR-BiLSTM. We hypothesize that the idea of hierarchical matching with an attention mechanism may work better for long sequences, and that more advanced attention mechanisms \cite{wang-jiang:2016:N16-1,wang2017bilateral} might help hierarchical matching. We leave the above directions to future work.}}. Note that residual learning significantly helps on WebQSP (80.65\% to 82.53\%), while it does not help as much on SimpleQuestions. On SimpleQuestions, even removing the deep layers only causes a small drop in performance. WebQSP benefits more from the residual and deeper architecture, possibly because handling a larger scope of context matching is more important for this dataset. Finally, on WebQSP, replacing the BiLSTM with a CNN in our hierarchical matching framework results in a large performance drop, yet on SimpleQuestions the gap is much smaller. We believe this is because the LSTM relation encoder can better learn the composition of chains of relations in WebQSP, as it is better at dealing with longer dependencies. \paragraph{Analysis} Next, we present empirical evidence showing why our HR-BiLSTM model achieves the best scores. We use WebQSP for analysis purposes. First, we hypothesize that \emph{training of the weighted-sum model usually falls into local optima, since deep BiLSTMs do not guarantee that the two levels of question hidden representations are comparable}. This is evidenced by the fact that, during training, one layer usually gets a weight close to 0 and is thus ignored. For example, one run gives weights of -75.39/0.14 for the two layers (we take the exponential for the final weighted sum).
It also gives much lower training accuracy (91.94\%) compared to HR-BiLSTM (95.67\%), suffering from training difficulty. Second, compared to our deep BiLSTM with shortcut connections, we hypothesize that for KB relation detection, \emph{training deep BiLSTMs is more difficult without shortcut connections}. Our experiments suggest that a deeper BiLSTM does not always result in higher training accuracy: in the experiments, a two-layer BiLSTM converges to 94.99\%, even lower than the 95.25\% achieved by a single-layer BiLSTM. Since under our setting the two-layer model subsumes the single-layer model as a special case (so it could potentially fit the training data better), this result suggests that the deep BiLSTM without shortcut connections suffers more from training difficulty. Finally, we hypothesize that \emph{HR-BiLSTM is more than a combination of two BiLSTMs with residual connections, because it encourages the hierarchical architecture to learn different levels of abstraction}. To verify this, we replace the deep BiLSTM question encoder with two single-layer BiLSTMs (both on words) with shortcut connections between their hidden states. This decreases test accuracy to 76.11\%. It gives training accuracy similar to that of HR-BiLSTM, indicating a more serious over-fitting problem. This shows that the residual and deep structures both contribute to the good performance of HR-BiLSTM. \subsection{KBQA End-Task Results} Table \ref{tab:overall_results} compares our system with two published baselines: (1) STAGG \cite{yih2015semantic}, the state-of-the-art on WebQSP\footnote{{The STAGG score on SQ is from \cite{bao-EtAl:2016:COLING}.}} and (2) AMPCNN \cite{yin2016simple}, the state-of-the-art on SimpleQuestions. Since these two baselines are specially designed/tuned for one particular dataset, they do not generalize well when applied to the other dataset. In order to highlight the effect of different relation detection models on the KBQA end task, we also implemented another baseline that uses our KBQA system but replaces HR-BiLSTM with our implementation of AMPCNN (for SimpleQuestions) or the char-3-gram BiCNN (for WebQSP) relation detectors (second block in Table \ref{tab:overall_results}). Compared to the \emph{baseline relation detector} (3rd row of results), our method, which includes an improved relation detector (HR-BiLSTM), improves the KBQA end task by 2-3\% (4th row). Note that in contrast to previous KBQA systems, our system does not use a joint-inference or feature-based re-ranking step; nevertheless, it still achieves results better than or comparable to the state-of-the-art.% which shows the importance of our proposed improved relation detector. The third block of the table details two ablation tests for the proposed components in our KBQA system: (1) Removing the entity re-ranking step significantly decreases the scores. Since the re-ranking step relies on the relation detection models, this shows that our HR-BiLSTM model contributes to the good performance in multiple ways. Appendix C gives the detailed performance of the re-ranking step. (2) In contrast to the conclusion in \cite{yih2015semantic}, constraint detection is crucial for our system\footnote{Note that another reason is that we are evaluating on accuracy here. When evaluating on F1 the gap would be smaller.}. This is probably because our joint performance on topic entity and core-chain detection is more accurate (77.5\% top-1 accuracy), leaving a huge potential (77.5\% vs.
58.0\%) for the constraint detection module to improve. Finally, like STAGG, which uses multiple relation detectors (see \newcite{yih2015semantic} for the three models used), we also try using the top-3 relation detectors from Section \ref{ssec:exp_re}. As shown in the last row of Table \ref{tab:overall_results}, this gives a significant performance boost, resulting in a new state-of-the-art result on SimpleQuestions and a result comparable to the state-of-the-art on WebQSP. \section{Conclusion} KB relation detection is a key step in KBQA and is significantly different from general relation extraction tasks. We propose a novel KB relation detection model, HR-BiLSTM, that performs hierarchical matching between questions and KB relations. Our model outperforms the previous methods on KB relation detection tasks and allows our KBQA system to achieve state-of-the-art results. For future work, we will investigate the integration of our HR-BiLSTM into end-to-end systems. For example, our model could be integrated into the decoder in \cite{liang2016neural} to provide better sequence prediction. We will also investigate newly emerging datasets like GraphQuestions \cite{su-EtAl:2016:EMNLP2016} and ComplexQuestions \cite{bao-EtAl:2016:COLING} to handle more characteristics of general QA. \bibliographystyle{acl_natbib} \clearpage \newpage \input{acl2017_appendix} \end{document} \section*{Appendix A: Detailed Feature List for \emph{SimpleLinker}} Given an input question $q$ and an entity name $e$ in the KB, we denote the lengths of the question and the entity name as $\vert q \vert$ and $\vert e \vert$. For a mention $m$ of the entity $e$ which is an $n$-gram in $q$, we compute the length of the longest consecutive common sub-sequence between $m$ and $e$, and denote it as $\vert m \cap e \vert$. All the lengths above are measured in characters. The features we used in the \emph{SimpleLinker} include: \begin{enumerate} \item The proportion of the overlap between the entity mention and the entity name (in characters) relative to the entity name, $\frac{\vert m \cap e \vert}{\vert e \vert}$, and relative to the question, $\frac{\vert m \cap e \vert}{\vert q \vert}$; \item The relative position of the entity mention in the question. We denote the beginning position of $m$ in $q$ as $p_m$ (in characters); the feature is then $\frac{p_m}{\vert q\vert }$. \end{enumerate} The final score for the question having a mention that links to $e$ is \begin{align} s_{linker}(e;q) = \max_m \frac{\vert m \cap e \vert}{\vert q \vert} + \frac{\vert m \cap e \vert}{\vert e \vert} + \frac{p_m}{\vert q\vert } \nonumber \end{align} \section*{Appendix B: Mathematics for the Relation Detection Models} \paragraph{BiCNN}: given an input pair of question $q$ and relation $r$, we use two CNNs (with shared parameters in the experiments) to get the hidden states for each position of $q$ and $r$: $h_{1:N}^q$ and $h_{1:M}^r$, where $N$ and $M$ are the lengths of $q$ and $r$, respectively. By applying max-pooling on $h_{1:N}^q$ and $h_{1:M}^r$ we get the question representation $h_{max}^q$, where $h_{max}^q[i] = \max_{1 \leq j \leq N} h_j^q[i]$. Similarly, we get the relation representation $h_{max}^r$. Then the score of $r$ given $q$ is defined as $s_{rel}(r;q)=\cos(h_{max}^q, h_{max}^r)$. \paragraph{APCNN}: given the same representations $h_{1:N}^q$ and $h_{1:M}^r$ as above, we compute the alignment score $a_{ij} = (h_{i}^r)^T \mathbf{U} h_{j}^q$. In the experiments, we use the identity matrix $\mathbf{U}=\mathbf{I}$.
Based on the alignment score, we can compute the attention score for each $h_i^r$ as $w^r_i=\frac{e^{max_j a_{ij}}}{\sum_{k=1:M} e^{max_j a_{kj}}}$; and the attention score for each $h_j^q$ as $w^q_j=\frac{e^{max_i a_{ij}}}{\sum_{k=1:N} e^{max_i a_{ik}}}$. Then the score of relation $r$ given $q$ is \begin{align} s_{rel}(r;q)=cos(\sum_{1\leq j \leq N} w^q_j h_{j}^q, \sum_{1 \leq i \leq M} w^r_i h_{i}^r) \nonumber \end{align} \paragraph{Entity Replacement}: when we use entity replacement, the representation vector $h_{max}^q$ depends on the topic entity $e$. Therefore we denote the similarity score as $s_{rel}(r;e, q)$.
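As a compact illustration of the relation scorers above, the following numpy sketch implements the APCNN scoring with $\mathbf{U}=\mathbf{I}$, assuming the position-wise hidden states $h^q_{1:N}$ and $h^r_{1:M}$ have already been produced by the CNNs; it is a sketch of the formulas, not the original implementation.

\begin{verbatim}
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def apcnn_score(h_q, h_r):
    """h_q: (N, d) question hidden states; h_r: (M, d) relation hidden states.
    Returns cos(sum_j w_j^q h_j^q, sum_i w_i^r h_i^r) with U = I."""
    a = h_r @ h_q.T                  # alignment scores a_ij, shape (M, N)
    w_r = softmax(a.max(axis=1))     # attention over relation positions i
    w_q = softmax(a.max(axis=0))     # attention over question positions j
    q_vec = w_q @ h_q                # attention-weighted question vector
    r_vec = w_r @ h_r                # attention-weighted relation vector
    return float(q_vec @ r_vec
                 / (np.linalg.norm(q_vec) * np.linalg.norm(r_vec)))
\end{verbatim}

The BiCNN score is obtained analogously by replacing the attention-weighted sums with element-wise max-pooling of $h^q_{1:N}$ and $h^r_{1:M}$ over positions before taking the cosine.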
Semi-supervised sequence tagging with bidirectional language models
1705.00108
Table 5: Comparison of CoNLL-2003 test set F1 when the LM embeddings are included at different layers in the baseline tagger.
[ "[BOLD] Use LM embeddings at", "[ITALIC] F1± [BOLD] std" ]
[ [ "input to the first RNN layer", "91.55±0.21" ], [ "output of the first RNN layer", "[BOLD] 91.93± [BOLD] 0.19" ], [ "output of the second RNN layer", "91.72±0.13" ] ]
We speculate that the second RNN layer in the sequence tagging model is able to capture interactions between task specific context as expressed in the first RNN layer and general context as expressed in the LM embeddings in a way that improves overall system performance. These results are consistent with Søgaard and Goldberg
\documentclass[11pt,a4paper]{article} \usepackage[nohyperref]{acl2017} \usepackage[normalem]{ulem} \aclfinalcopy % Uncomment this line for the final submission \def\aclpaperid{561} % Enter the acl Paper ID here \newcommand\wacomment[1]{\textcolor{blue}{\textbf{[#1] --\textsc{Waleed}}}} \newcommand\wadelete[1]{} \newcommand\waadd[1]{#1} \newcommand\mpcomment[1]{\textcolor{blue}{\textbf{[#1] --\textsc{Matt}}}} \title{Semi-supervised sequence tagging with bidirectional language models} \author{Matthew E. Peters, Waleed Ammar, Chandra Bhagavatula, Russell Power \\ Allen Institute for Artificial Intelligence \\ {\tt \{matthewp,waleeda,chandrab,russellp\}@allenai.org} } \date{} \begin{document} \maketitle \begin{abstract} Pre-trained word embeddings learned from unlabeled text have become a standard component of neural network architectures for NLP tasks. However, in most cases, the recurrent network that operates on word-level representations to produce context sensitive representations is trained on relatively little labeled data. In this paper, we demonstrate a general semi-supervised approach for adding pre-trained context embeddings from bidirectional language models to NLP systems and apply it to sequence labeling tasks. We evaluate our model on two standard datasets for named entity recognition (NER) and chunking, and in both cases achieve state of the art results, surpassing previous systems that use other forms of transfer or joint learning with additional labeled data and task specific gazetteers. \end{abstract} \section{Introduction} Due to their simplicity and efficacy, pre-trained word embedding have become ubiquitous in NLP systems. Many prior studies have shown that they capture useful semantic and syntactic information \citep{word2vec,Pennington2014GloveGV} and including them in NLP systems has been shown to be enormously helpful for a variety of downstream tasks \citep{NLPfromScratch:Collobert2011}. However, in many NLP tasks it is essential to represent not just the meaning of a word, but also the word in context. For example, in the two phrases ``A Central Bank spokesman'' and ``The Central African Republic'', the word `Central' is used as part of both an Organization and Location. Accordingly, current state of the art sequence tagging models typically include a bidirectional recurrent neural network (RNN) that encodes token sequences into a context sensitive representation before making token specific predictions \citep{yang-transfer-iclr07,Ma2016EndtoendSL,lample-EtAl:2016:N16-1,joint-many-iclr07}. Although the token representation is initialized with pre-trained embeddings, the parameters of the bidirectional RNN are typically learned only on labeled data. Previous work has explored methods for jointly learning the bidirectional RNN with supplemental labeled data from other tasks \citep[e.g.,][]{Sgaard2016DeepML,yang-transfer-iclr07}. In this paper, we explore an alternate semi-supervised approach which does not require additional labeled data. We use a neural language model (LM), pre-trained on a large, unlabeled corpus to compute an encoding of the context at each position in the sequence (hereafter an \textit{LM embedding}) and use it in the supervised sequence tagging model. Since the LM embeddings are used to compute the probability of future words in a neural LM, they are likely to encode both the semantic and syntactic roles of words in context. 
Our main contribution is to show that the context sensitive representation captured in the LM embeddings is useful in the supervised sequence tagging setting. When we include the LM embeddings in our system, overall performance increases from 90.87\% to 91.93\% $F_1$ for the CoNLL 2003 NER task, a more than 1\% absolute $F_1$ increase, and a substantial improvement over the previous state of the art. We also establish a new state of the art result (96.37\% $F_1$) for the CoNLL 2000 Chunking task. As a secondary contribution, we show that using both forward and backward LM embeddings boosts performance over a forward-only LM. We also demonstrate that domain specific pre-training is not necessary by applying a LM trained in the news domain to scientific papers. \section{Language model augmented sequence taggers (TagLM)} \subsection{Overview} The main components in our language-model-augmented sequence tagger (TagLM) are illustrated in Fig.~\ref{fig:major_components}. After pre-training word embeddings and a neural LM on large, unlabeled corpora (Step 1), we extract the word and LM embeddings for every token in a given input sequence (Step 2) and use them in the supervised sequence tagging model (Step 3). \subsection{Baseline sequence tagging model} \label{sec:baseline} Our baseline sequence tagging model is a hierarchical neural tagging model, closely following a number of recent studies \citep{Ma2016EndtoendSL,lample-EtAl:2016:N16-1,yang-transfer-iclr07,chiu-nichols-2016} (left side of Figure \ref{overview-figure}). Given a sentence of tokens $(t_1, t_2, \ldots, t_N)$, it first forms a representation, $\mathbf{x}_k$, for each token by concatenating a character-based representation $\mathbf{c}_k$ with a token embedding $\mathbf{w}_k$: \begin{align} \mathbf{c}_k & = C(t_k; \mathbf{\theta}_c) \nonumber \\ \mathbf{w}_k & = E(t_k; \mathbf{\theta}_w) \nonumber \\ \mathbf{x}_k & = [\mathbf{c}_k; \mathbf{w}_k] \label{eqn:token_rep} \end{align} The character representation $\mathbf{c}_k$ captures morphological information and is either a convolutional neural network (CNN) \citep{Ma2016EndtoendSL,chiu-nichols-2016} or RNN \citep{yang-transfer-iclr07,lample-EtAl:2016:N16-1}. It is parameterized by $C(\cdot, \mathbf{\theta}_c)$ with parameters $\mathbf{\theta}_c$. The token embeddings, $\mathbf{w}_k$, are obtained as a lookup $E(\cdot, \mathbf{\theta}_w)$, initialized using pre-trained word embeddings, and fine-tuned during training \citep{NLPfromScratch:Collobert2011}. To learn a context sensitive representation, we employ multiple layers of bidirectional RNNs. For each token position, $k$, the hidden state $\mathbf{h}_{k,i}$ of RNN layer $i$ is formed by concatenating the hidden states from the forward ($\overrightarrow{\mathbf{h}}_{k,i}$) and backward ($\overleftarrow{\mathbf{h}}_{k,i}$) RNNs. As a result, the bidirectional RNN is able to use both past and future information to make a prediction at token $k$.
More formally, for the first RNN layer that operates on $\mathbf{x}_k$ to output $\mathbf{h}_{k,1}$: \begin{align} \overrightarrow{\mathbf{h}}_{k,1} & = \overrightarrow{R}_1 (\mathbf{x}_k, \overrightarrow{\mathbf{h}}_{k-1,1}; \theta_{\overrightarrow{R}_1}) \nonumber \\ \overleftarrow{\mathbf{h}}_{k,1} & = \overleftarrow{R}_1 (\mathbf{x}_k, \overleftarrow{\mathbf{h}}_{k+1,1}; \theta_{\overleftarrow{R}_1}) \nonumber \\ \mathbf{h}_{k,1} & = [\overrightarrow{\mathbf{h}}_{k,1}; \overleftarrow{\mathbf{h}}_{k,1}] \label{eqn:rnn1} \end{align} The second RNN layer is similar and uses $\mathbf{h}_{k,1}$ to output $\mathbf{h}_{k,2}$. In this paper, we use $L=2$ layers of RNNs in all experiments and parameterize $R_i$ as either Gated Recurrent Units (GRU) \citep{GRU:Cho2014} or Long Short-Term Memory units (LSTM) \citep{LSTM:Hochreiter1997} depending on the task. Finally, the output of the final RNN layer $\mathbf{h}_{k,L}$ is used to predict a score for each possible tag using a single dense layer. Due to the dependencies between successive tags in our sequence labeling tasks (e.g., using the BIOES labeling scheme, it is not possible for \texttt{I-PER} to follow \texttt{B-LOC}), it is beneficial to model and decode each sentence jointly instead of independently predicting the label for each token. Accordingly, we add another layer with parameters for each label bigram, computing the sentence conditional random field (CRF) loss \citep{CRF:Lafferty2001} using the forward-backward algorithm at training time, and using the Viterbi algorithm to find the most likely tag sequence at test time, similar to \citet{NLPfromScratch:Collobert2011}. \subsection{Bidirectional LM} \label{sec:bidirectional_lm} A language model computes the probability of a token sequence $(t_1, t_2, \ldots, t_N)$ \[ p(t_1, t_2, \ldots, t_N) = \prod_{k=1}^N p({t_k} \mid t_1, t_2, \ldots, t_{k-1}). \] Recent state of the art neural language models \citep{Jzefowicz2016ExploringTL} use a similar architecture to our baseline sequence tagger, where they pass a token representation (either from a CNN over characters or as token embeddings) through multiple layers of LSTMs to embed the history $(t_1, t_2, \ldots, t_k)$ into a fixed dimensional vector $\overrightarrow{\mathbf{h}}^{LM}_k$. This is the \textit{forward LM embedding} of the token at position $k$ and is the output of the top LSTM layer in the language model. Finally, the language model predicts the probability of token $t_{k+1}$ using a softmax layer over words in the vocabulary. The need to capture future context in the LM embeddings suggests it is beneficial to also consider a \textit{backward} LM in addition to the traditional \textit{forward} LM. A backward LM predicts the previous token given the future context. Given a sentence with $N$ tokens, it computes \[ p(t_1, t_2, \ldots, t_N) = \prod_{k=1}^N p(t_k \mid t_{k+1}, t_{k+2}, \ldots, t_N). \] A backward LM can be implemented in a way analogous to a forward LM and produces the \textit{backward LM embedding} $\overleftarrow{\mathbf{h}}^{LM}_k$ for the sequence $(t_k, t_{k+1}, \ldots, t_N)$, taken from the output of the top layer LSTM. In our final system, after pre-training the forward and backward LMs separately, we remove the top layer softmax and concatenate the forward and backward LM embeddings to form bidirectional LM embeddings, i.e., $\mathbf{h}^{LM}_k = [\overrightarrow{\mathbf{h}}^{LM}_k; \overleftarrow{\mathbf{h}}^{LM}_k]$.
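As a minimal sketch of this step, assuming two pre-trained, frozen LM encoders whose hypothetical \texttt{top\_layer\_states} method returns the top-layer LSTM state for every token position, aligned with the original token order (the object and method names below are illustrative, not the released models' API):

\begin{verbatim}
import numpy as np

def bidirectional_lm_embeddings(tokens, forward_lm, backward_lm):
    """tokens: list of N strings.
    forward_lm reads the sentence left-to-right, backward_lm right-to-left;
    both are assumed to return an (N, d) array of top-layer states whose
    k-th row corresponds to token t_k."""
    fwd = forward_lm.top_layer_states(tokens)   # forward LM embeddings
    bwd = backward_lm.top_layer_states(tokens)  # backward LM embeddings
    # h_k^LM = [forward state; backward state], shape (N, 2d)
    return np.concatenate([fwd, bwd], axis=-1)
\end{verbatim}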
Note that in our formulation, the forward and backward LMs are independent, without any shared parameters. \subsection{Combining LM with sequence model} \label{sec:combining} Our combined system, TagLM, uses the LM embeddings as additional inputs to the sequence tagging model. In particular, we concatenate the LM embeddings $\mathbf{h}^{LM}$ with the output from one of the bidirectional RNN layers in the sequence model. In our experiments, we found that introducing the LM embeddings at the output of the first layer performed the best. More formally, we simply replace (\ref{eqn:rnn1}) with \begin{equation} \mathbf{h}_{k,1} = [\overrightarrow{\mathbf{h}}_{k,1}; \overleftarrow{\mathbf{h}}_{k,1}; \mathbf{h}_k^{LM}]. \label{eqn:lm_rnn1} \end{equation} There are alternate possibilities for adding the LM embeddings to the sequence model. One possibility adds a non-linear mapping after the concatenation and before the second RNN (e.g., replacing (\ref{eqn:lm_rnn1}) with $f([\overrightarrow{\mathbf{h}}_{k,1}; \overleftarrow{\mathbf{h}}_{k,1}; \mathbf{h}_k^{LM}])$ where $f$ is a non-linear function). Another possibility introduces an attention-like mechanism that weights all the LM embeddings in a sentence before including them in the sequence model. Our initial results with the simple concatenation were encouraging, so we did not explore these alternatives in this study, preferring to leave them for future work. \section{Experiments} We evaluate our approach on two well-benchmarked sequence tagging tasks, the CoNLL 2003 NER task \citep{CoNLL2003NER} and the CoNLL 2000 Chunking task \citep{CoNLL2000Chunking}. We report the official evaluation metric (micro-averaged $F_1$). In both cases, we use the BIOES labeling scheme for the output tags, following previous work which showed it outperforms other options \citep[e.g.,][]{Ratinov2009DesignCA}. Following \citet{chiu-nichols-2016}, we use the Senna word embeddings \citep{NLPfromScratch:Collobert2011} and pre-process the text by lowercasing all tokens and replacing all digits with 0. \paragraph{CoNLL 2003 NER.} The CoNLL 2003 NER task consists of newswire from the Reuters RCV1 corpus tagged with four different entity types (\texttt{PER}, \texttt{LOC}, \texttt{ORG}, \texttt{MISC}). It includes standard train, development and test sets. Following previous work \citep{yang-transfer-iclr07,chiu-nichols-2016}, we trained on both the train and development sets after tuning hyperparameters on the development set. The hyperparameters for our baseline model are similar to \citet{yang-transfer-iclr07}. We use two bidirectional GRUs with 80 hidden units and 25-dimensional character embeddings for the token character encoder. The sequence layer uses two bidirectional GRUs with 300 hidden units each. For regularization, we add 25\% dropout to the input of each GRU, but not to the recurrent connections. \paragraph{CoNLL 2000 chunking.} The CoNLL 2000 chunking task uses sections 15-18 from the Wall Street Journal corpus for training and section 20 for testing. It defines 11 syntactic chunk types (e.g., \texttt{NP}, \texttt{VP}, \texttt{ADJP}) in addition to \texttt{other}. We randomly sampled 1000 sentences from the training set as a held-out development set. The baseline sequence tagger uses 30-dimensional character embeddings and a CNN with 30 filters of width 3 characters followed by a tanh non-linearity for the token character encoder. The sequence layer uses two bidirectional LSTMs with 200 hidden units.
Following \citet{Ma2016EndtoendSL}, we added 50\% dropout to the character embeddings, to the input of each LSTM layer (but not the recurrent connections) and to the output of the final LSTM layer. \paragraph{Pre-trained language models.} \label{sec:language_models} The primary bidirectional LMs we used in this study were trained on the 1B Word Benchmark \cite{Chelba2014OneBW}, a publicly available benchmark for large-scale language modeling. The training split has approximately 800 million tokens, about a 4000X increase over the number of training tokens in the CoNLL datasets. \citet{Jzefowicz2016ExploringTL} explored several model architectures and released their best single model and training recipes. Following \citet{Sak2014LongSM}, they used linear projection layers at the output of each LSTM layer to reduce the computation time but still maintain a large LSTM state. Their single best model took three weeks to train on 32 GPUs and achieved 30.0 test perplexity. It uses a character CNN with 4096 filters for input, followed by two stacked LSTMs, each with 8192 hidden units and a 1024-dimensional projection layer. We use \texttt{CNN-BIG-LSTM} to refer to this language model in our results. In addition to \texttt{CNN-BIG-LSTM} from \citet{Jzefowicz2016ExploringTL},\footnote{\url{https://github.com/tensorflow/models/tree/master/lm_1b}} we used the same corpus to train two additional language models with fewer parameters: forward \texttt{LSTM-2048-512} and backward \texttt{LSTM-2048-512}. Both language models use token embeddings as input to a single-layer LSTM with 2048 units and a 512-dimensional projection layer. We closely followed the procedure outlined in \citet{Jzefowicz2016ExploringTL}, except we used synchronous parameter updates across four GPUs instead of asynchronous updates across 32 GPUs and ended training after 10 epochs. The test set perplexities for our forward and backward \texttt{LSTM-2048-512} language models are 47.7 and 47.3, respectively.\footnote{Due to different implementations, the perplexity of the forward LM with similar configurations in \citet{Jzefowicz2016ExploringTL} is different (45.0 vs. 47.7).} \paragraph{Training.} All experiments use the Adam optimizer \cite{Kingma2014AdamAM} with gradient norms clipped at 5.0. In all experiments, we fine-tune the pre-trained Senna word embeddings but fix all weights in the pre-trained language models. In addition to explicit dropout regularization, we also use early stopping to prevent over-fitting, and use the following process to determine when to stop training. We first train with a constant learning rate $\alpha = 0.001$ on the training data and monitor the development set performance at each epoch. Then, at the epoch with the highest development performance, we start a simple learning rate annealing schedule: decrease $\alpha$ by an order of magnitude (i.e., divide by ten), train for five epochs, decrease $\alpha$ by an order of magnitude again, train for five more epochs and stop. Following \citet{chiu-nichols-2016}, we train each final model configuration ten times with different random seeds and report the mean and standard deviation of $F_1$. It is important to estimate the variance of model performance since the test data sets are relatively small. \subsection{Overall system results} \label{sec:overall_system_results} Tables \ref{2003-table} and \ref{2000-table} compare results from TagLM with previously published state of the art results without additional labeled data or task specific gazetteers.
Tables \ref{2003-table-aux} and \ref{2000-table-aux} compare results of TagLM to other systems that include additional labeled data or gazetteers. In both tasks, TagLM establishes a new state of the art using bidirectional LMs (the forward \texttt{CNN-BIG-LSTM} and the backward \texttt{LSTM-2048-512}). In the CoNLL 2003 NER task, our model scores 91.93 mean $F_1$, which is a statistically significant increase over the previous best result of 91.62 $\pm 0.33$ from \citet{chiu-nichols-2016} that used gazetteers (at 95\%, two-sided Welch t-test, $p=0.021$). In the CoNLL 2000 Chunking task, TagLM achieves 96.37 mean $F_1$, exceeding all previously published results without additional labeled data by more than 1\% absolute $F_1$. The improvement over the previous best result of 95.77 in \citet{joint-many-iclr07}, which jointly trains with Penn Treebank (PTB) POS tags, is statistically significant at 95\% ($p < 0.001$ assuming a standard deviation of $0.1$). Importantly, adding the LM embeddings amounts to an average absolute improvement of 1.06 and 1.37 $F_1$ in the NER and Chunking tasks, respectively. \paragraph{Adding external resources.} Although we do not use external labeled data or gazetteers, we found that TagLM outperforms previous state of the art results in both tasks even when external resources (labeled data or task specific gazetteers) are available. Furthermore, Tables \ref{2003-table-aux} and \ref{2000-table-aux} show that, in most cases, the improvements we obtain by adding LM embeddings are larger than the improvements previously obtained by adding other forms of transfer or joint learning. For example, \citet{yang-transfer-iclr07} noted an improvement of only 0.06 $F_1$ in the NER task when transfer learning from both CoNLL 2000 chunks and PTB POS tags, and \citet{chiu-nichols-2016} reported an increase of 0.71 $F_1$ when adding gazetteers to their baseline. In the Chunking task, previous work has reported improvements of 0.28 to 0.75 in $F_1$ when including supervised labels from the PTB POS tags or CoNLL 2003 entities \citep{yang-transfer-iclr07,Sgaard2016DeepML,joint-many-iclr07}. \subsection{Analysis} To elucidate the characteristics of our LM augmented sequence tagger, we ran a number of additional experiments on the CoNLL 2003 NER task. \paragraph{How to use LM embeddings?} In this experiment, we concatenate the LM embeddings at different locations in the baseline sequence tagger. In particular, we used the LM embeddings $\mathbf{h}_k^{LM}$ to: \begin{itemize} \item augment the \textit{input} of the \textit{first} RNN layer; i.e., \\ $\mathbf{x}_k = [\mathbf{c}_k; \mathbf{w}_k; \mathbf{h}_k^{LM}]$, \item augment the \textit{output} of the \textit{first} RNN layer; i.e., $\mathbf{h}_{k,1} = [\overrightarrow{\mathbf{h}}_{k,1}; \overleftarrow{\mathbf{h}}_{k,1}; \mathbf{h}_k^{LM}]$,\footnote{This configuration is the same as Eq.~\ref{eqn:lm_rnn1} in \S\ref{sec:combining}. It is reproduced here for convenience.} and \item augment the \textit{output} of the \textit{second} RNN layer; i.e., $\mathbf{h}_{k,2} = [\overrightarrow{\mathbf{h}}_{k,2}; \overleftarrow{\mathbf{h}}_{k,2}; \mathbf{h}_k^{LM}]$. \end{itemize} Table \ref{2003-table-lm-level} shows that the second alternative performs best (a schematic sketch of the three variants is given below). We speculate that the second RNN layer in the sequence tagging model is able to capture interactions between task specific context as expressed in the first RNN layer and general context as expressed in the LM embeddings in a way that improves overall system performance.
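For concreteness, the three placements can be written schematically as follows; this is a sketch, not the authors' implementation: the \texttt{rnn1} and \texttt{rnn2} callables stand in for the bidirectional GRU/LSTM layers of the tagger, and the returned sequence is what would be fed to the dense and CRF layers.

\begin{verbatim}
import numpy as np

def tag_hidden_states(x, h_lm, rnn1, rnn2, use_lm_at="output_of_first"):
    """x: (N, d_in) token representations [c_k; w_k];
    h_lm: (N, d_lm) bidirectional LM embeddings h_k^LM;
    rnn1, rnn2: stand-ins mapping an (N, *) sequence to (N, *) hidden states."""
    if use_lm_at == "input_to_first":
        # x_k = [c_k; w_k; h_k^LM]
        h1 = rnn1(np.concatenate([x, h_lm], axis=-1))
        h2 = rnn2(h1)
    elif use_lm_at == "output_of_first":
        # h_{k,1} = [h_{k,1}; h_k^LM]  (the best-performing variant)
        h1 = np.concatenate([rnn1(x), h_lm], axis=-1)
        h2 = rnn2(h1)
    else:  # "output_of_second"
        # h_{k,2} = [h_{k,2}; h_k^LM]
        h1 = rnn1(x)
        h2 = np.concatenate([rnn2(h1), h_lm], axis=-1)
    return h2  # input to the dense layer and CRF
\end{verbatim}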
These results are consistent with \citet{Sgaard2016DeepML}, who found that chunking performance was sensitive to the level at which additional POS supervision was added. \paragraph{Does it matter which language model to use?} In this experiment, we compare six different configurations of the forward and backward language models (including the baseline model which does not use any language models). The results are reported in Table \ref{2003-table-lm-size}. We find that adding backward LM embeddings consistently outperforms forward-only LM embeddings, with $F_1$ improvements between 0.22 and 0.27\%, even with the relatively small backward \texttt{LSTM-2048-512} LM. LM size is important, and replacing the forward \texttt{LSTM-2048-512} with \texttt{CNN-BIG-LSTM} (test perplexities of 47.7 vs. 30.0 on the 1B Word Benchmark) improves $F_1$ by 0.26 - 0.31\%, about as much as adding a backward LM. Accordingly, we hypothesize (but have not tested) that replacing the backward \texttt{LSTM-2048-512} with a backward LM analogous to the \texttt{CNN-BIG-LSTM} would further improve performance. To highlight the importance of including language models trained on large-scale data, we also experimented with training a language model on just the CoNLL 2003 training and development data. Due to the much smaller size of this data set, we decreased the model size to 512 hidden units with a 256-dimensional projection and normalized tokens in the same manner as the input to the sequence tagging model (lower-cased, with all digits replaced with 0). The test set perplexities for the forward and backward models (measured on the CoNLL 2003 test data) were 106.9 and 104.2, respectively. Including embeddings from these language models \textit{decreased} performance slightly compared to the baseline system without any LM. This result supports the hypothesis that adding language models helps because they learn composition functions (i.e., the RNN parameters in the language model) from much larger data, compared to the composition functions in the baseline tagger, which are only learned from labeled data. \paragraph{Importance of task specific RNN.} To understand the importance of including a task specific sequence RNN, we ran an experiment that removed the task specific sequence RNN and used only the LM embeddings with a dense layer and CRF to predict output tags. In this setup, performance was very low, 88.17 $F_1$, well below our baseline. This result confirms that the RNNs in the baseline tagger encode essential information which is not encoded in the LM embeddings. This is unsurprising since the RNNs in the baseline tagger are trained on labeled examples, unlike the RNN in the language model, which is only trained on unlabeled examples. Note that the LM weights are fixed in this experiment. \paragraph{Dataset size.} \textit{A priori}, we expect the addition of LM embeddings to be most beneficial in cases where the task specific annotated datasets are small. To test this hypothesis, we replicated the setup from \citet{yang-transfer-iclr07} that samples 1\% of the CoNLL 2003 training set and compared the performance of TagLM to our baseline without a LM. In this scenario, test $F_1$ increased by 3.35\% (from 67.66 to 71.01\%), compared to an increase of 1.06\% $F_1$ for a similar comparison with the full training dataset. The analogous increases in \citet{yang-transfer-iclr07} are 3.97\% for cross-lingual transfer from CoNLL 2002 Spanish NER and 6.28\% $F_1$ for transfer from PTB POS tags.
However, they found only a 0.06\% $F_1$ increase when using the full training data and transferring from both CoNLL 2000 chunks and PTB POS tags. Taken together, this suggests that for very small labeled training sets, transferring from other tasks yields a large improvement, but this improvement almost disappears when the training data is large. On the other hand, our approach is less dependent on the training set size and significantly improves performance even with larger training sets. \paragraph{Number of parameters.} Our TagLM formulation increases the number of parameters in the second RNN layer $R_2$ due to the increase in the input dimension $\mathbf{h}_1$ if all other hyperparameters are held constant. To confirm that this did not have a material impact on the results, we ran two additional experiments. In the first, we trained a system without a LM but increased the second RNN layer hidden dimension so that the number of parameters was the same as in TagLM. In this case, performance \textit{decreased} slightly (by 0.15\% $F_1$) compared to the baseline model, indicating that solely increasing parameters does not improve performance. In the second experiment, we decreased the hidden dimension of the second RNN layer in TagLM to give it the same number of parameters as the baseline model without a LM. In this case, test $F_1$ \textit{increased} slightly to $92.00 \pm 0.11$, indicating that the additional parameters in TagLM are slightly hurting performance.\footnote{A similar experiment for the Chunking task did not improve $F_1$, so this conclusion is task dependent.} \paragraph{Does the LM transfer across domains?} One artifact of our evaluation framework is that both the labeled data in the chunking and NER tasks and the unlabeled text in the 1 Billion Word Benchmark used to train the bidirectional LMs are derived from news articles. To test the sensitivity to the LM training domain, we also applied TagLM with a LM trained on news articles to the SemEval 2017 Shared Task 10, ScienceIE.\footnote{\url{https://scienceie.github.io/}} ScienceIE requires end-to-end joint entity and relationship extraction from scientific publications across three diverse fields (computer science, material sciences, and physics) and defines three broad entity types (Task, Material and Process). For this task, TagLM increased $F_1$ on the development set by 4.12\% (from 49.93 to 54.05\%) for entity extraction over our baseline without LM embeddings, and it was a major component in our winning submission to ScienceIE, Scenario 1 \citep{scienceie}. We conclude that LM embeddings can improve the performance of a sequence tagger even when the data comes from a different domain. \section{Related work} \paragraph{Unlabeled data.} TagLM was inspired by the widespread use of pre-trained word embeddings in supervised sequence tagging models. Besides pre-trained word embeddings, our method is most closely related to \citet{li:05}. Instead of using a LM, \citet{li:05} uses a probabilistic generative model to infer context-sensitive latent variables for each token, which are then used as extra features in a supervised CRF tagger \citep{CRF:Lafferty2001}. Other semi-supervised learning methods for structured prediction problems include co-training \citep{blum:98,pierce:01}, expectation maximization \citep{nigam:00,mohit:05}, structural learning \citep{ando:05} and maximum discriminant functions \citep{suzuki:07,suzuki:08}.
It is easy to combine TagLM with any of the above methods by including LM embeddings as additional features in the discriminative components of the model (except for expectation maximization). A detailed discussion of semi-supervised learning methods in NLP can be found in \cite{sogaard:13}. \citet{Melamud2016context2vecLG} learned a context encoder from unlabeled data with an objective function similar to a bi-directional LM and applied it to several NLP tasks closely related to the unlabeled objective function: sentence completion, lexical substitution and word sense disambiguation. LM embeddings are related to a class of methods \citep[e.g.,][]{Le2014DistributedRO,Kiros2015SkipThoughtV,Hill2016LearningDR} for learning sentence and document encoders from unlabeled data, which can be used for text classification and textual entailment among other tasks. \citet{Dai2015SemisupervisedSL} pre-trained LSTMs using language models and sequence autoencoders then fine tuned the weights for classification tasks. In contrast to our method that uses unlabeled data to learn token-in-context embeddings, all of these methods use unlabeled data to learn an encoder for an entire text sequence (sentence or document). \paragraph{Neural language models.} LMs have always been a critical component in statistical machine translation systems \citep{koehn:09}. Recently, neural LMs \citep{bengio:03,mikolov:10} have also been integrated in neural machine translation systems \citep[e.g.,][]{kalchbrenner:13,devlin:14} to score candidate translations. In contrast, TagLM uses neural LMs to encode words in the input sequence. Unlike forward LMs, bidirectional LMs have received little prior attention. Most similar to our formulation, \citet{Peris2015ABR} used a bidirectional neural LM in a statistical machine translation system for instance selection. They tied the input token embeddings and softmax weights in the forward and backward directions, unlike our approach which uses two distinct models without any shared parameters. \citet{frinken:12} also used a bidirectional n-gram LM for handwriting recognition. \paragraph{Interpreting RNN states.} Recently, there has been some interest in interpreting the activations of RNNs. \citet{Linzen2016AssessingTA} showed that single LSTM units can learn to predict singular-plural distinctions. \citet{Karpathy2015VisualizingAU} visualized character level LSTM states and showed that individual cells capture long-range dependencies such as line lengths, quotes and brackets. Our work complements these studies by showing that LM states are useful for downstream tasks as a way of interpreting what they learn. \paragraph{Other sequence tagging models.} Current state of the art results in sequence tagging problems are based on bidirectional RNN models. However, many other sequence tagging models have been proposed in the literature for this class of problems \citep[e.g.,][]{CRF:Lafferty2001,collins:02}. LM embeddings could also be used as additional features in other models, although it is not clear whether the model complexity would be sufficient to effectively make use of them. \section{Conclusion} In this paper, we proposed a simple and general semi-supervised method using pre-trained neural language models to augment token representations in sequence tagging models. Our method significantly outperforms current state of the art models in two popular datasets for NER and Chunking. 
Our analysis shows that adding a backward LM in addition to traditional forward LMs consistently improves performance. The proposed method is robust even when the LM is trained on unlabeled data from a different domain, or when the baseline model is trained on a large number of labeled examples. \section*{Acknowledgments} We thank Chris Dyer, Julia Hockenmaier, Jayant Krishnamurthy, Matt Gardner and Oren Etzioni for comments on earlier drafts that led to substantial improvements in the final version. \bibliographystyle{acl_natbib} \end{document}
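As an addendum to the Training paragraph above, the following is a minimal sketch of one reading of the early-stopping and learning-rate annealing schedule; the \texttt{train\_epoch} and \texttt{evaluate\_dev} callables, the epoch budget and the checkpointing via \texttt{deepcopy} are illustrative placeholders, not the authors' code.

\begin{verbatim}
import copy

def train_with_annealing(model, train_epoch, evaluate_dev,
                         lr=0.001, constant_epochs=50):
    """Train at a constant learning rate while monitoring dev performance,
    roll back to the best epoch, then cut the learning rate by 10x twice,
    training five epochs after each cut."""
    best_dev, best_model = float("-inf"), copy.deepcopy(model)
    for _ in range(constant_epochs):
        train_epoch(model, lr)
        dev_score = evaluate_dev(model)
        if dev_score > best_dev:
            best_dev, best_model = dev_score, copy.deepcopy(model)
    model = best_model                # resume from the best dev epoch
    for _ in range(2):                # two annealing stages
        lr /= 10.0
        for _ in range(5):
            train_epoch(model, lr)
    return model
\end{verbatim}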
Binary Paragraph Vectors
1611.01116
Table 2: Information retrieval results for 32-bit binary codes constructed by first inferring 32d real-valued paragraph vectors and then employing a separate hashing algorithm for binarization. Paragraph vectors were inferred using PV-DBOW with bigrams.
[ "Hashing algorithm", "20 Newsgroups MAP", "20 Newsgroups NDCG@10", "RCV1 MAP", "RCV1 NDCG@10", "English Wikipedia MAP", "English Wikipedia NDCG@10" ]
[ [ "Random hyperplane projection", "0.27", "0.53", "0.21", "0.66", "0.16", "0.44" ], [ "Iterative quantization", "0.31", "0.58", "0.23", "0.68", "0.17", "0.46" ] ]
We also compared binary paragraph vectors against codes constructed by first inferring short, real-valued paragraph vectors and then using a separate hashing algorithm for binarization. When the dimensionality of the paragraph vectors is equal to the size of binary codes, the number of network parameters in this approach is similar to that of Binary PV models. Paragraph vectors in these experiments were inferred using PV-DBOW with bigrams. On the 20 Newsgroups and RCV1 datasets Binary PV-DBOW yielded higher MAP than the two baseline approaches. On English Wikipedia iterative quantization achieved MAP equal to Binary PV-DBOW, while random hyperplane projection yielded lower MAP. Some gain in precision of top hits can be observed for iterative quantization, as indicated by NDCG@10. It is also worth noting that end-to-end inference in Binary PV models is more convenient than inferring real-valued vectors and then using another algorithm for hashing.
\documentclass[11pt,a4paper]{article} \pdfoutput=1 \usepackage[hyperref]{acl2017} \usepackage[pdftex]{graphicx} \def\aclpaperid{43} % Enter the acl Paper ID here \aclfinalcopy % Uncomment this line for the final submission \newcommand{\figref}[1]{Figure~\ref{fig:#1}} \newcommand{\tabref}[1]{Table~\ref{tab:#1}} \newcommand{\sectionref}[1]{Section~\ref{sec:#1}} \newcommand{\equationref}[1]{Eq.~\ref{eq:#1}} \newcommand{\rot}[1]{\rotatebox{90}{#1}} \renewcommand{\arraystretch}{1.2} \bibliographystyle{acl_natbib} \title{Binary Paragraph Vectors} \author{Karol Grzegorczyk \and Marcin Kurdziel\\ AGH University of Science and Technology \\ Department of Computer Science \\ Krakow, Poland \\ \texttt{\{kgr,kurdziel\}@agh.edu.pl}} \date{} \begin{document} \maketitle \begin{abstract} Recently Le \& Mikolov described two log-linear models, called Paragraph Vector, that can be used to learn state-of-the-art distributed representations of documents. Inspired by this work, we present \emph{Binary Paragraph Vector} models: simple neural networks that learn short binary codes for fast information retrieval. We show that binary paragraph vectors outperform autoencoder-based binary codes, despite using fewer bits. We also evaluate their precision in transfer learning settings, where binary codes are inferred for documents unrelated to the training corpus. Results from these experiments indicate that binary paragraph vectors can capture semantics relevant for various domain-specific documents. Finally, we present a model that simultaneously learns short binary codes and longer, real-valued representations. This model can be used to rapidly retrieve a short list of highly relevant documents from a large document collection. \end{abstract} \section{Introduction}\label{sec:introduction} One of the significant challenges in contemporary information processing is the sheer volume of available data. \citet{gantz2012digital}, for example, claim that the amount of digital data in the world doubles every two years. This trend underpins efforts to develop algorithms that can efficiently search for relevant information in huge datasets. One class of such algorithms, represented by, e.g., Locality Sensitive Hashing~\citep{indyk1998approximate}, relies on hashing data into short, locality-preserving binary codes~\citep{wang2014hashing}. The codes can then be used to group the data into buckets, thereby enabling sublinear search for relevant information, or for fast comparison of data items. Most of the algorithms from this family are data-oblivious, i.e. can generate hashes for any type of data. Nevertheless, some methods target specific kind of input data, like text or image. In this work we focus on learning binary codes for text documents. An important work in this direction has been presented by~\citet{salakhutdinov2009semantic}. Their \emph{semantic hashing} leverages autoencoders with sigmoid bottleneck layer to learn binary codes from a word-count bag-of-words~(BOW) representation. Salakhutdinov~\&~Hinton report that binary codes allow for up to $20$-fold improvement in document ranking speed, compared to real-valued representation of the same dimensionality. Moreover, they demonstrate that semantic hashing codes used as an initial document filter can improve precision of TF-IDF-based retrieval. Learning binary representation from BOW, however, has its disadvantages. First, word-count representation, and in turn the learned codes, are not in itself stronger than TF-IDF. 
Second, BOW is an inefficient representation: even for moderate-size vocabularies BOW vectors can have thousands of dimensions. Learning fully-connected autoencoders for such high-dimensional vectors is impractical. Salakhutdinov~\&~Hinton restricted the BOW vocabulary in their experiments to the 2000 most frequent words. Binary codes have also been applied to cross-modal retrieval where text is one of the modalities. Specifically, \citet{wang2013semantic} incorporated tag information that often accompanies text documents, while \citet{masci2014multimodal} employed siamese neural networks to learn a single binary representation for text and image data. Recently, several works have explored simple neural models for unsupervised learning of distributed representations of words, sentences and documents. \citet{mikolov2013efficient} proposed log-linear models that learn distributed representations of words by predicting a central word from its context (CBOW model) or by predicting context words given the central word (Skip-gram model). The CBOW model was then extended by \citet{le2014distributed} to learn distributed representations of documents. Specifically, they proposed the Paragraph Vector Distributed Memory (PV-DM) model, in which the central word is predicted given the context words and the document vector. During training, PV-DM learns the word embeddings and the parameters of the softmax that models the conditional probability distribution for the central words. During inference, word embeddings and softmax weights are fixed, but the gradients are backpropagated to the inferred document vector. In addition to PV-DM, Le~\&~Mikolov also studied a simpler model, namely Paragraph Vector Distributed Bag of Words (PV-DBOW). This model predicts words in the document given only the document vector. It therefore disregards context surrounding the predicted word and does not learn word embeddings. Le~\&~Mikolov demonstrated that paragraph vectors outperform BOW and bag-of-bigrams in an information retrieval task, while using only a few hundred dimensions. These models are also amenable to learning and inference over large vocabularies. The original CBOW network used hierarchical softmax to model the probability distribution for the central word. One can also use noise-contrastive estimation~\citep{gutmann2010noise} or importance sampling~\citep{cho2015using} to approximate the gradients with respect to the softmax logits. An alternative approach to learning representations of pieces of text has recently been described by \citet{kiros2015skip}. Networks proposed therein, inspired by the Skip-gram model, learn to predict surrounding sentences given the center sentence. To this end, the center sentence is encoded by an encoder network and the surrounding sentences are predicted by a decoder network conditioned on the center sentence code. Once trained, these models can encode sentences without resorting to backpropagation inference. However, they learn representations at the sentence level but not at the document level. In this work we present Binary Paragraph Vector models, extensions of PV-DBOW and PV-DM that learn short binary codes for text documents. One inspiration for binary paragraph vectors comes from recent work by~\citet{lin2015deep} on learning binary codes for images. Specifically, we introduce a sigmoid layer to the paragraph vector models, and train it in a way that encourages binary activations.
We demonstrate that the resultant binary paragraph vectors significantly outperform semantic hashing codes. We also evaluate binary paragraph vectors in transfer learning settings, where training and inference are carried out on unrelated text corpora. Finally, we study models that simultaneously learn short binary codes for document filtering and longer, real-valued representations for ranking. While~\citet{lin2015deep} employed a supervised criterion to learn image codes, binary paragraph vectors remain unsupervised models: they learn to predict words in documents. \section{Binary paragraph vector models}\label{sec:models} The basic idea in binary paragraph vector models is to introduce a sigmoid nonlinearity before the softmax that models the conditional probability of words given the context. If we then enforce binary or near-binary activations in this nonlinearity, the probability distribution over words will be conditioned on a bit vector context, rather than real-valued representation. The inference in the model proceeds like in Paragraph Vector, except the document code is constructed from the sigmoid activations. After rounding, this code can be seen as a distributed binary representation of the document. In the simplest Binary PV-DBOW model~(\figref{pv-dbow-bin}) the dimensionality of the real-valued document embeddings is equal to the length of the binary codes. Despite this low dimensional representation~--~a~useful binary hash will typically have 128 or fewer bits~--~this model performed surprisingly well in our experiments. Note that we cannot simply increase the embedding dimensionality in Binary PV-DBOW in order to learn better codes: binary vectors learned in this way would be too long to be useful in document hashing. The retrieval performance can, however, be improved by using binary codes for initial filtering of documents, and then using a representation with higher capacity to rank the remaining documents by their similarity to the query. \citet{salakhutdinov2009semantic}, for example, used semantic hashing codes for initial filtering and TF-IDF for ranking. A similar document retrieval strategy can be realized with binary paragraph vectors. Furthermore, we can extend the Binary PV-DBOW model to simultaneously learn short binary codes and higher-dimensional real-valued representations. Specifically, in the Real-Binary PV-DBOW model~(\figref{pv-dbow-bin-projections}) we introduce a linear projection between the document embedding matrix and the sigmoid nonlinearity. During training, we learn the softmax parameters and the projection matrix. During inference, softmax weights and the projection matrix are fixed. This way, we simultaneously obtain a high-capacity representation of a document in the embedding matrix, e.g. \mbox{$300$-dimensional} real-valued vector, and a short binary representation from the sigmoid activations. One advantage of using the Real-Binary PV-DBOW model over two separate networks is that we need to store only one set of softmax parameters (and a small projection matrix) in the memory, instead of two large weight matrices. Additionally, only one model needs to be trained, rather than two distinct networks. Binary document codes can also be learned by extending distributed memory models. \citet{le2014distributed} suggest that in PV-DM, a context of the central word can be constructed by either concatenating or averaging the document vector and the embeddings of the surrounding words. 
However, in Binary PV-DM~(\figref{pv-dm-bin}) we always construct the context by concatenating the relevant vectors before applying the sigmoid nonlinearity. This way, the length of binary codes is not tied to the dimensionality of word embeddings. Softmax layers in the models described above should be trained to predict words in documents given binary context vectors. Training should therefore encourage binary activations in the preceding sigmoid layers. This can be done in several ways. In semantic hashing autoencoders~\citet{salakhutdinov2009semantic} added noise to the sigmoid coding layer. Error backpropagation then countered the noise, by forcing the activations to be close to 0 or 1. Another approach was used by~\citet{krizhevsky2011using} in autoencoders that learned binary codes for small images. During the forward pass, activations in the coding layer were rounded to 0 or 1. Original (i.e. not rounded) activations were used when backpropagating errors. Alternatively, one could model the document codes with stochastic binary neurons. Learning in this case can still proceed with error backpropagation, provided that a suitable gradient estimator is used alongside stochastic activations. We experimented with the methods used in semantic hashing and Krizhevsky's autoencoders, as well as with the two biased gradient estimators for stochastic binary neurons discussed by~\citet{bengio2013estimating}. We also investigated the slope annealing trick~\citep{chung2016hierarchical} when training networks with stochastic binary activations. From our experience, binary paragraph vector models with rounded activations are easy to train and learn better codes than models with noise-based binarization or stochastic neurons. We therefore use Krizhevsky's binarization in our models. \section{Experiments}\label{sec:experiments} To assess the performance of binary paragraph vectors, we carried out experiments on three datasets: 20 Newsgroups\footnote{Available at \url{http://qwone.com/~jason/20Newsgroups}}, a cleansed version (also called v2) of Reuters Corpus Volume~1\footnote{Available at \url{http://trec.nist.gov/data/reuters/reuters.html}}~(RCV1) and English Wikipedia\footnote{A snapshot from April 5th, 2016}. As paragraph vectors can be trained with relatively large vocabularies, we did not perform any stemming of the source text. However, we removed stop words as well as words shorter than two characters and longer than 15 characters. Results reported by~\cite{li2015learning} indicate that performance\ of PV-DBOW can be improved by including \emph{n-grams} in the model. We therefore evaluated two variants of Binary PV-DBOW: one predicting words in documents and one predicting words and bigrams. Since 20 Newsgroups is a relatively small dataset, we used all words and bigrams from its documents. This amounts to a vocabulary with slightly over one million elements. For the RCV1 dataset we used words and bigrams with at least 10 occurrences in the text, which gives a vocabulary with approximately 800 thousands elements. In case of English Wikipedia we used words and bigrams with at least 100 occurrences, which gives a vocabulary with approximately 1.5 million elements. The 20 Newsgroups dataset comes with reference train/test sets. In case of RCV1 we used half of the documents for training and the other half for evaluation. In case of English Wikipedia we held out for testing randomly selected 10\% of the documents. 
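The Krizhevsky-style binarization adopted above can be summarised as a coding layer that rounds activations in the forward pass but backpropagates through the original sigmoid. The sketch below is a schematic illustration in plain numpy, not the authors' implementation; it returns the rounded code together with a closure that applies the backward rule.

import numpy as np

def binary_coding_layer(pre_activations):
    """Rounding binarization: the forward pass uses 0/1 codes, while the
    backward pass differentiates the original (un-rounded) sigmoid."""
    a = 1.0 / (1.0 + np.exp(-pre_activations))
    code = (a >= 0.5).astype(pre_activations.dtype)
    def backward(grad_wrt_code):
        # rounding is treated as the identity; only the sigmoid is differentiated
        return grad_wrt_code * a * (1.0 - a)
    return code, backward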
We perform document retrieval by selecting queries from the test set and ordering other test documents according to the similarity of the inferred codes. We use Hamming distance for binary codes and cosine similarity for real-valued representations. Results are averaged over queries. We assess the performance of our models with precision-recall curves and two popular information retrieval metrics, namely mean average precision (MAP) and the normalized discounted cumulative gain at the 10th result (NDCG@10)~\citep{jarvelin2002cumulated}. The results depend, of course, on the chosen document relevancy measure. Relevancy measure for the 20 Newsgroups dataset is straightforward: a retrieved document is relevant to the query if they both belong to the same newsgroup. In RCV1 each document belongs to a hierarchy of topics, making the definition of relevancy less obvious. In this case we adopted the relevancy measure used by~\citet{salakhutdinov2009semantic}. That is, the relevancy is calculated as the fraction of overlapping labels in a retrieved document and the query document. Overall, our selection of test datasets and relevancy measures for~20~Newsgroups and RCV1 follows~\citet{salakhutdinov2009semantic}, enabling comparison with semantic hashing codes. To assess the relevancy of articles in English Wikipedia we can employ categories assigned to them. However, unlike in RCV1, Wikipedia categories can have multiple parent categories and cyclic dependencies. Therefore, for this dataset we adopted a simplified relevancy measure: two articles are relevant if they share at least one category. We also removed from the test set categories with less than 20 documents as well as documents that were left with no categories. Overall, the relevancy is measured over more than $11,800$ categories, making English Wikipedia harder than the other two benchmarks. We use AdaGrad~\citep{duchi2011adaptive} for training and inference in all experiments reported in this work. During training we employ dropout~\citep{srivastava2014dropout} in the embedding layer. To facilitate models with large vocabularies, we approximate the gradients with respect to the softmax logits using the method described by~\citet{cho2015using}. Binary PV-DM networks use the same number of dimensions for document codes and word embeddings. Performance of $128$- and $32$-bit binary paragraph vector codes is reported in~\tabref{bpv_main_results} and in~\figref{bpv_precision_recall}. For comparison we also report performance of real-valued paragraph vectors. Note that the binary codes perform very well, despite their far lower capacity: on 20 Newsgroups and RCV1 the $128$-bit Binary PV-DBOW trained with bigrams approaches the performance of the real-valued paragraph vectors, while on English Wikipedia its performance is slightly lower. Furthermore, Binary PV-DBOW with bigrams outperforms semantic hashing codes: comparison of precision-recall curves from Figures~\ref{fig:bpv_precision_recall}a and~\ref{fig:bpv_precision_recall}b with~\citet[Figures 6 \& 7]{salakhutdinov2009semantic} shows that \mbox{$128$-bit} codes learned with this model outperform $128$-bit semantic hashing codes on 20 Newsgroups and RCV1. Moreover, the $32$-bit codes from this model outperform $128$-bit semantic hashing codes on the RCV1 dataset, and on the 20 Newsgroups dataset give similar precision up to approximately 3\% recall and better precision for higher recall levels. 
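As an aside on the evaluation protocol used throughout this section, ranking by Hamming distance and scoring the ranked lists with average precision and NDCG@10 can be sketched as follows; this is one common formulation of these metrics, with graded relevance and tie-breaking handled in simplified form, and is not taken from the authors' code.

import numpy as np

def rank_by_hamming(query_code, codes):
    """Order documents by Hamming distance of their binary codes to the query."""
    distances = np.count_nonzero(codes != query_code, axis=1)
    return np.argsort(distances, kind="stable")

def average_precision(relevance):
    """Average precision of a ranked binary relevance list."""
    rel = np.asarray(relevance, dtype=float)
    if rel.sum() == 0:
        return 0.0
    precision_at_k = np.cumsum(rel) / np.arange(1, len(rel) + 1)
    return float((precision_at_k * rel).sum() / rel.sum())

def ndcg_at_10(relevance):
    """NDCG@10 of a ranked (possibly graded) relevance list."""
    rel = np.asarray(relevance, dtype=float)
    discounts = 1.0 / np.log2(np.arange(2, 12))
    dcg = float((rel[:10] * discounts[: len(rel[:10])]).sum())
    ideal = np.sort(rel)[::-1][:10]
    idcg = float((ideal * discounts[: len(ideal)]).sum())
    return dcg / idcg if idcg > 0 else 0.0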
Note that the difference in this case lies not only in retrieval precision: the short $32$-bit Binary PV-DBOW codes are more efficient for indexing than long~\mbox{$128$-bit} semantic hashing codes. We also compared binary paragraph vectors against codes constructed by first inferring short, real-valued paragraph vectors and then using a separate hashing algorithm for binarization. When the dimensionality of the paragraph vectors is equal to the size of binary codes, the number of network parameters in this approach is similar to that of Binary PV models. We experimented with two standard hashing algorithms, namely random hyperplane projection \citep{charikar2002similarity} and iterative quantization \citep{gong2011iterative}. Paragraph vectors in these experiments were inferred using PV-DBOW with bigrams. Results reported in~\tabref{bpv_baseline} show no benefit from using a separate algorithm for binarization. On the 20 Newsgroups and RCV1 datasets Binary PV-DBOW yielded higher MAP than the two baseline approaches. On English Wikipedia iterative quantization achieved MAP equal to Binary PV-DBOW, while random hyperplane projection yielded lower MAP. Some gain in precision of top hits can be observed for iterative quantization, as indicated by NDCG@10. However, precision of top hits can also be improved by querying with Real-Binary PV-DBOW model (\sectionref{real-binary-retrieval}). It is also worth noting that end-to-end inference in Binary PV models is more convenient than inferring real-valued vectors and then using another algorithm for hashing. \citet{li2015learning} argue that PV-DBOW outperforms PV-DM on a sentiment classification task, and demonstrate that the performance of PV-DBOW can be improved by including bigrams in the vocabulary. We observed similar results with Binary PV models. That is, including bigrams in the vocabulary usually improved retrieval precision. Also, codes learned with Binary PV-DBOW provided higher retrieval precision than Binary PV-DM codes. Furthermore, to choose the context size for the Binary PV-DM models, we evaluated several networks on validation sets taken out of the training data. The best results were obtained with a minimal one-word, one-sided context window. This is the distributed memory architecture most similar to the Binary PV-DBOW model. \subsection{Transfer learning} In the experiments presented thus far we had at our disposal training sets with documents similar to the documents for which we inferred binary codes. One could ask a question, if it is possible to use binary paragraph vectors without collecting a domain-specific training set? For example, what if we needed to hash documents that are not associated with any available domain-specific corpus? One solution could be to train the model with a big generic text corpus, that covers a wide variety of domains. \citet{lau2016empirical} evaluated this approach for real-valued paragraph vectors, with promising results. It is not obvious, however, whether short binary codes would also perform well in similar settings. To shed light on this question we trained Binary PV-DBOW with bigrams on the English Wikipedia, and then inferred binary codes for the test parts of the 20 Newsgroups and RCV1 datasets. The results are presented in~\tabref{bpv_transfer_learning} and in~\figref{bpv_transfer_learning}. The model trained on an unrelated text corpus gives lower retrieval precision than models with domain-specific training sets, which is not surprising. 
However, it still performs remarkably well, indicating that the semantics it captured can be useful for different text collections. Importantly, these results were obtained without domain-specific finetuning. \subsection{Retrieval with Real-Binary models}\label{sec:real-binary-retrieval} As pointed out by~\citet{salakhutdinov2009semantic}, when working with large text collections one can use short binary codes for indexing and a representation with more capacity for ranking. Following this idea, we proposed Real-Binary PV-DBOW model (\sectionref{models}) that can simultaneously learn short binary codes and high-dimensional real-valued representations. We begin evaluation of this model by comparing retrieval precision of real-valued and binary representations learned by it. To this end, we trained a Real-Binary PV-DBOW model with $28$-bit binary codes and $300$-dimensional real-valued representations on the 20 Newsgroups and RCV1 datasets. Results are reported in~\figref{bpv_real_binary_compare}. The real-valued representations learned with this model give lower precision than PV-DBOW vectors but, importantly, improve precision over binary codes for top ranked documents. This justifies their use alongside binary codes. Using short binary codes for initial filtering of documents comes with a tradeoff between the retrieval performance and the recall level. For example, one can select a small subset of similar documents by using 28--32 bit codes and retrieving documents within small Hamming distance to the query. This will improve retrieval performance, and possibly also precision, at the cost of recall. Conversely, short codes provide a less fine-grained hashing and can be used to index documents within larger Hamming distance to the query. They can therefore be used to improve recall at the cost of retrieval performance, and possibly also precision. For these reasons, we evaluated Real-Binary PV-DBOW models with different code sizes and under different limits on the Hamming distance to the query. In general, we cannot expect these models to achieve 100\% recall under the test settings. Furthermore, recall will vary on query-by-query basis. We therefore decided to focus on the NDCG@10 metric in this evaluation, as it is suited for measuring model performance when a short list of relevant documents is sought, and the recall level is not known. MAP and precision-recall curves are not applicable in these settings. Information retrieval results for Real-Binary PV-DBOW are summarized in~\tabref{bpv_real_binary_performance}. The model gives higher NDCG@10 than $32$-bit Binary PV-DBOW codes (\tabref{bpv_main_results}). The difference is large when the initial filtering is restrictive, e.g. when using $28$-bit codes and $1$-$2$ bit Hamming distance limit. Real-Binary PV-DBOW can therefore be useful when one needs to quickly find a short list of relevant documents in a large text collection, and the recall level is not of primary importance. If needed, precision can be further improved by using plain Binary PV-DBOW codes for filtering and standard DBOW representation for raking (\tabref{bpv_real_binary_performance}, column~B). Note, however, that PV-DBOW model would then use approximately~$10$~times more parameters than Real-Binary PV-DBOW. \section{Conclusion} In this article we presented simple neural networks that learn short binary codes for text documents. Our networks extend Paragraph Vector by introducing a sigmoid nonlinearity before the softmax that predicts words in documents. 
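The filter-then-rank retrieval strategy evaluated with the Real-Binary model reduces to a few lines; the sketch below is illustrative only (names, the Hamming radius, and cosine ranking are assumptions consistent with the description above), shortlisting by the short binary code and re-ranking the shortlist with the higher-capacity real-valued vectors.

import numpy as np

def filter_then_rank(query_code, query_vec, codes, vecs, radius=2):
    """Shortlist by Hamming distance on short binary codes, then rank the
    shortlist by cosine similarity of the real-valued representations."""
    dists = np.count_nonzero(codes != query_code, axis=1)
    shortlist = np.flatnonzero(dists <= radius)
    if shortlist.size == 0:
        return shortlist
    candidates = vecs[shortlist]
    sims = candidates @ query_vec / (
        np.linalg.norm(candidates, axis=1) * np.linalg.norm(query_vec) + 1e-12)
    return shortlist[np.argsort(-sims)]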
Binary codes inferred with the proposed networks achieve higher retrieval precision than semantic hashing codes on two popular information retrieval benchmarks. They also retain a lot of their precision when trained on an unrelated text corpus. Finally, we presented a network that simultaneously learns short binary codes and longer, real-valued representations. The best codes in our experiments were inferred with Binary PV-DBOW networks. The Binary PV-DM model did not perform so well. \citet{li2015learning} made similar observations for Paragraph Vector models, and argue that in distributed memory model the word context takes a lot of the burden of predicting the central word from the document code. An interesting line of future research could, therefore, focus on models that account for word order, while learning good binary codes. It is also worth noting that \citet{le2014distributed} constructed paragraph vectors by combining DM and DBOW representations. This strategy may proof useful also with binary codes, when employed with hashing algorithms designed for longer codes, e.g. with multi-index hashing~\citep{norouzi2012fast}. \section*{Acknowledgments} This research is supported by National Science Centre, Poland grant no.~\mbox{2013/09/B/ST6/01549} ``Interactive Visual Text Analytics~(IVTA): Development of novel, user-driven text mining and visualization methods for large text corpora exploration.'' This research was carried out with the support of the ``HPC Infrastructure for Grand Challenges of Science and Engineering'' project, co-financed by the European Regional Development Fund under the Innovative Economy Operational Programme. This research was supported in part by PL-Grid Infrastructure. \appendix \section{Visualization of Binary PV codes} For an additional comparison with semantic hashing, we used t-distributed Stochastic Neighbor Embedding~\citep{maaten2008visualizing} to construct two-dimensional visualizations of codes learned by Binary PV-DBOW with bigrams. We used the same subsets of newsgroups and RCV1 topics that were used by~\citet[Figure 5]{salakhutdinov2009semantic}. Codes learned by Binary PV-DBOW (\figref{binpv_tsne}) appear slightly more clustered. ~ \end{document}
Binary Paragraph Vectors
1611.01116
Table 3: Information retrieval results for the Binary PV-DBOW model trained on an unrelated text corpus. Results are reported for 128-bit codes.
[ "[EMPTY]", "MAP", "NDCG@10" ]
[ [ "20 Newsgroups", "0.24", "0.51" ], [ "RCV1", "0.18", "0.66" ] ]
In the experiments presented thus far we had at our disposal training sets with documents similar to the documents for which we inferred binary codes. One could ask whether it is possible to use binary paragraph vectors without collecting a domain-specific training set, for example when the documents to be hashed are not associated with any available domain-specific corpus. One solution could be to train the model on a large generic text corpus that covers a wide variety of domains. It is not obvious, however, whether short binary codes would also perform well in such a setting. To shed light on this question we trained Binary PV-DBOW with bigrams on English Wikipedia, and then inferred binary codes for the test parts of the 20 Newsgroups and RCV1 datasets. The model trained on an unrelated text corpus gives lower retrieval precision than models with domain-specific training sets, which is not surprising. However, it still performs remarkably well, indicating that the semantics it captured can be useful across different text collections. Importantly, these results were obtained without any domain-specific fine-tuning.
\documentclass[11pt,a4paper]{article} \pdfoutput=1 \usepackage[hyperref]{acl2017} \usepackage[pdftex]{graphicx} \def\aclpaperid{43} % Enter the acl Paper ID here \aclfinalcopy % Uncomment this line for the final submission \newcommand{\figref}[1]{Figure~\ref{fig:#1}} \newcommand{\tabref}[1]{Table~\ref{tab:#1}} \newcommand{\sectionref}[1]{Section~\ref{sec:#1}} \newcommand{\equationref}[1]{Eq.~\ref{eq:#1}} \newcommand{\rot}[1]{\rotatebox{90}{#1}} \renewcommand{\arraystretch}{1.2} \bibliographystyle{acl_natbib} \title{Binary Paragraph Vectors} \author{Karol Grzegorczyk \and Marcin Kurdziel\\ AGH University of Science and Technology \\ Department of Computer Science \\ Krakow, Poland \\ \texttt{\{kgr,kurdziel\}@agh.edu.pl}} \date{} \begin{document} \maketitle \begin{abstract} Recently Le \& Mikolov described two log-linear models, called Paragraph Vector, that can be used to learn state-of-the-art distributed representations of documents. Inspired by this work, we present \emph{Binary Paragraph Vector} models: simple neural networks that learn short binary codes for fast information retrieval. We show that binary paragraph vectors outperform autoencoder-based binary codes, despite using fewer bits. We also evaluate their precision in transfer learning settings, where binary codes are inferred for documents unrelated to the training corpus. Results from these experiments indicate that binary paragraph vectors can capture semantics relevant for various domain-specific documents. Finally, we present a model that simultaneously learns short binary codes and longer, real-valued representations. This model can be used to rapidly retrieve a short list of highly relevant documents from a large document collection. \end{abstract} \section{Introduction}\label{sec:introduction} One of the significant challenges in contemporary information processing is the sheer volume of available data. \citet{gantz2012digital}, for example, claim that the amount of digital data in the world doubles every two years. This trend underpins efforts to develop algorithms that can efficiently search for relevant information in huge datasets. One class of such algorithms, represented by, e.g., Locality Sensitive Hashing~\citep{indyk1998approximate}, relies on hashing data into short, locality-preserving binary codes~\citep{wang2014hashing}. The codes can then be used to group the data into buckets, thereby enabling sublinear search for relevant information, or for fast comparison of data items. Most of the algorithms from this family are data-oblivious, i.e. can generate hashes for any type of data. Nevertheless, some methods target specific kind of input data, like text or image. In this work we focus on learning binary codes for text documents. An important work in this direction has been presented by~\citet{salakhutdinov2009semantic}. Their \emph{semantic hashing} leverages autoencoders with sigmoid bottleneck layer to learn binary codes from a word-count bag-of-words~(BOW) representation. Salakhutdinov~\&~Hinton report that binary codes allow for up to $20$-fold improvement in document ranking speed, compared to real-valued representation of the same dimensionality. Moreover, they demonstrate that semantic hashing codes used as an initial document filter can improve precision of TF-IDF-based retrieval. Learning binary representation from BOW, however, has its disadvantages. First, word-count representation, and in turn the learned codes, are not in itself stronger than TF-IDF. 
Second, BOW is an inefficient representation: even for moderate-size vocabularies BOW vectors can have thousands of dimensions. Learning fully-connected autoencoders for such high-dimensional vectors is impractical. Salakhutdinov~\&~Hinton restricted the BOW vocabulary in their experiments to 2000 most frequent words. Binary codes have also been applied to cross-modal retrieval where text is one of the modalities. Specifically, \citet{wang2013semantic} incorporated tag information that often accompany text documents, while \citet{masci2014multimodal} employed siamese neural networks to learn single binary representation for text and image data. Recently several works explored simple neural models for unsupervised learning of distributed representations of words, sentences and documents. \citet{mikolov2013efficient} proposed log-linear models that learn distributed representations of words by predicting a central word from its context (CBOW model) or by predicting context words given the central word (Skip-gram model). The CBOW model was then extended by \citet{le2014distributed} to learn distributed representations of documents. Specifically, they proposed Paragraph Vector Distributed Memory (PV-DM) model, in which the central word is predicted given the context words and the document vector. During training, PV-DM learns the word embeddings and the parameters of the softmax that models the conditional probability distribution for the central words. During inference, word embeddings and softmax weights are fixed, but the gradients are backpropagated to the inferred document vector. In addition to PV-DM, Le~\&~Mikolov studied also a simpler model, namely Paragraph Vector Distributed Bag of Words (PV-DBOW). This model predicts words in the document given only the document vector. It therefore disregards context surrounding the predicted word and does not learn word embeddings. Le~\&~Mikolov demonstrated that paragraph vectors outperform BOW and bag-of-bigrams in information retrieval task, while using only few hundreds of dimensions. These models are also amendable to learning and inference over large vocabularies. Original CBOW network used hierarchical softmax to model the probability distribution for the central word. One can also use noise-contrastive estimation~\citep{gutmann2010noise} or importance sampling~\citep{cho2015using} to approximate the gradients with respect to the softmax logits. An alternative approach to learning representation of pieces of text has been recently described by \citet{kiros2015skip}. Networks proposed therein, inspired by the Skip-gram model, learn to predict surrounding sentences given the center sentence. To this end, the center sentence is encoded by an encoder network and the surrounding sentences are predicted by a decoder network conditioned on the center sentence code. Once trained, these models can encode sentences without resorting to backpropagation inference. However, they learn representations at the sentence level but not at the document level. In this work we present Binary Paragraph Vector models, an extensions to PV-DBOW and PV-DM that learn short binary codes for text documents. One inspiration for binary paragraph vectors comes from a recent work by~\citet{lin2015deep} on learning binary codes for images. Specifically, we introduce a sigmoid layer to the paragraph vector models, and train it in a way that encourages binary activations. 
We demonstrate that the resultant binary paragraph vectors significantly outperform semantic hashing codes. We also evaluate binary paragraph vectors in transfer learning settings, where training and inference are carried out on unrelated text corpora. Finally, we study models that simultaneously learn short binary codes for document filtering and longer, real-valued representations for ranking. While~\citet{lin2015deep} employed a supervised criterion to learn image codes, binary paragraph vectors remain unsupervised models: they learn to predict words in documents. \section{Binary paragraph vector models}\label{sec:models} The basic idea in binary paragraph vector models is to introduce a sigmoid nonlinearity before the softmax that models the conditional probability of words given the context. If we then enforce binary or near-binary activations in this nonlinearity, the probability distribution over words will be conditioned on a bit vector context, rather than real-valued representation. The inference in the model proceeds like in Paragraph Vector, except the document code is constructed from the sigmoid activations. After rounding, this code can be seen as a distributed binary representation of the document. In the simplest Binary PV-DBOW model~(\figref{pv-dbow-bin}) the dimensionality of the real-valued document embeddings is equal to the length of the binary codes. Despite this low dimensional representation~--~a~useful binary hash will typically have 128 or fewer bits~--~this model performed surprisingly well in our experiments. Note that we cannot simply increase the embedding dimensionality in Binary PV-DBOW in order to learn better codes: binary vectors learned in this way would be too long to be useful in document hashing. The retrieval performance can, however, be improved by using binary codes for initial filtering of documents, and then using a representation with higher capacity to rank the remaining documents by their similarity to the query. \citet{salakhutdinov2009semantic}, for example, used semantic hashing codes for initial filtering and TF-IDF for ranking. A similar document retrieval strategy can be realized with binary paragraph vectors. Furthermore, we can extend the Binary PV-DBOW model to simultaneously learn short binary codes and higher-dimensional real-valued representations. Specifically, in the Real-Binary PV-DBOW model~(\figref{pv-dbow-bin-projections}) we introduce a linear projection between the document embedding matrix and the sigmoid nonlinearity. During training, we learn the softmax parameters and the projection matrix. During inference, softmax weights and the projection matrix are fixed. This way, we simultaneously obtain a high-capacity representation of a document in the embedding matrix, e.g. \mbox{$300$-dimensional} real-valued vector, and a short binary representation from the sigmoid activations. One advantage of using the Real-Binary PV-DBOW model over two separate networks is that we need to store only one set of softmax parameters (and a small projection matrix) in the memory, instead of two large weight matrices. Additionally, only one model needs to be trained, rather than two distinct networks. Binary document codes can also be learned by extending distributed memory models. \citet{le2014distributed} suggest that in PV-DM, a context of the central word can be constructed by either concatenating or averaging the document vector and the embeddings of the surrounding words. 
However, in Binary PV-DM~(\figref{pv-dm-bin}) we always construct the context by concatenating the relevant vectors before applying the sigmoid nonlinearity. This way, the length of binary codes is not tied to the dimensionality of word embeddings. Softmax layers in the models described above should be trained to predict words in documents given binary context vectors. Training should therefore encourage binary activations in the preceding sigmoid layers. This can be done in several ways. In semantic hashing autoencoders~\citet{salakhutdinov2009semantic} added noise to the sigmoid coding layer. Error backpropagation then countered the noise, by forcing the activations to be close to 0 or 1. Another approach was used by~\citet{krizhevsky2011using} in autoencoders that learned binary codes for small images. During the forward pass, activations in the coding layer were rounded to 0 or 1. Original (i.e. not rounded) activations were used when backpropagating errors. Alternatively, one could model the document codes with stochastic binary neurons. Learning in this case can still proceed with error backpropagation, provided that a suitable gradient estimator is used alongside stochastic activations. We experimented with the methods used in semantic hashing and Krizhevsky's autoencoders, as well as with the two biased gradient estimators for stochastic binary neurons discussed by~\citet{bengio2013estimating}. We also investigated the slope annealing trick~\citep{chung2016hierarchical} when training networks with stochastic binary activations. From our experience, binary paragraph vector models with rounded activations are easy to train and learn better codes than models with noise-based binarization or stochastic neurons. We therefore use Krizhevsky's binarization in our models. \section{Experiments}\label{sec:experiments} To assess the performance of binary paragraph vectors, we carried out experiments on three datasets: 20 Newsgroups\footnote{Available at \url{http://qwone.com/~jason/20Newsgroups}}, a cleansed version (also called v2) of Reuters Corpus Volume~1\footnote{Available at \url{http://trec.nist.gov/data/reuters/reuters.html}}~(RCV1) and English Wikipedia\footnote{A snapshot from April 5th, 2016}. As paragraph vectors can be trained with relatively large vocabularies, we did not perform any stemming of the source text. However, we removed stop words as well as words shorter than two characters and longer than 15 characters. Results reported by~\cite{li2015learning} indicate that performance\ of PV-DBOW can be improved by including \emph{n-grams} in the model. We therefore evaluated two variants of Binary PV-DBOW: one predicting words in documents and one predicting words and bigrams. Since 20 Newsgroups is a relatively small dataset, we used all words and bigrams from its documents. This amounts to a vocabulary with slightly over one million elements. For the RCV1 dataset we used words and bigrams with at least 10 occurrences in the text, which gives a vocabulary with approximately 800 thousands elements. In case of English Wikipedia we used words and bigrams with at least 100 occurrences, which gives a vocabulary with approximately 1.5 million elements. The 20 Newsgroups dataset comes with reference train/test sets. In case of RCV1 we used half of the documents for training and the other half for evaluation. In case of English Wikipedia we held out for testing randomly selected 10\% of the documents. 
We perform document retrieval by selecting queries from the test set and ordering other test documents according to the similarity of the inferred codes. We use Hamming distance for binary codes and cosine similarity for real-valued representations. Results are averaged over queries. We assess the performance of our models with precision-recall curves and two popular information retrieval metrics, namely mean average precision (MAP) and the normalized discounted cumulative gain at the 10th result (NDCG@10)~\citep{jarvelin2002cumulated}. The results depend, of course, on the chosen document relevancy measure. Relevancy measure for the 20 Newsgroups dataset is straightforward: a retrieved document is relevant to the query if they both belong to the same newsgroup. In RCV1 each document belongs to a hierarchy of topics, making the definition of relevancy less obvious. In this case we adopted the relevancy measure used by~\citet{salakhutdinov2009semantic}. That is, the relevancy is calculated as the fraction of overlapping labels in a retrieved document and the query document. Overall, our selection of test datasets and relevancy measures for~20~Newsgroups and RCV1 follows~\citet{salakhutdinov2009semantic}, enabling comparison with semantic hashing codes. To assess the relevancy of articles in English Wikipedia we can employ categories assigned to them. However, unlike in RCV1, Wikipedia categories can have multiple parent categories and cyclic dependencies. Therefore, for this dataset we adopted a simplified relevancy measure: two articles are relevant if they share at least one category. We also removed from the test set categories with less than 20 documents as well as documents that were left with no categories. Overall, the relevancy is measured over more than $11,800$ categories, making English Wikipedia harder than the other two benchmarks. We use AdaGrad~\citep{duchi2011adaptive} for training and inference in all experiments reported in this work. During training we employ dropout~\citep{srivastava2014dropout} in the embedding layer. To facilitate models with large vocabularies, we approximate the gradients with respect to the softmax logits using the method described by~\citet{cho2015using}. Binary PV-DM networks use the same number of dimensions for document codes and word embeddings. Performance of $128$- and $32$-bit binary paragraph vector codes is reported in~\tabref{bpv_main_results} and in~\figref{bpv_precision_recall}. For comparison we also report performance of real-valued paragraph vectors. Note that the binary codes perform very well, despite their far lower capacity: on 20 Newsgroups and RCV1 the $128$-bit Binary PV-DBOW trained with bigrams approaches the performance of the real-valued paragraph vectors, while on English Wikipedia its performance is slightly lower. Furthermore, Binary PV-DBOW with bigrams outperforms semantic hashing codes: comparison of precision-recall curves from Figures~\ref{fig:bpv_precision_recall}a and~\ref{fig:bpv_precision_recall}b with~\citet[Figures 6 \& 7]{salakhutdinov2009semantic} shows that \mbox{$128$-bit} codes learned with this model outperform $128$-bit semantic hashing codes on 20 Newsgroups and RCV1. Moreover, the $32$-bit codes from this model outperform $128$-bit semantic hashing codes on the RCV1 dataset, and on the 20 Newsgroups dataset give similar precision up to approximately 3\% recall and better precision for higher recall levels. 
Note that the difference in this case lies not only in retrieval precision: the short $32$-bit Binary PV-DBOW codes are more efficient for indexing than long~\mbox{$128$-bit} semantic hashing codes. We also compared binary paragraph vectors against codes constructed by first inferring short, real-valued paragraph vectors and then using a separate hashing algorithm for binarization. When the dimensionality of the paragraph vectors is equal to the size of binary codes, the number of network parameters in this approach is similar to that of Binary PV models. We experimented with two standard hashing algorithms, namely random hyperplane projection \citep{charikar2002similarity} and iterative quantization \citep{gong2011iterative}. Paragraph vectors in these experiments were inferred using PV-DBOW with bigrams. Results reported in~\tabref{bpv_baseline} show no benefit from using a separate algorithm for binarization. On the 20 Newsgroups and RCV1 datasets Binary PV-DBOW yielded higher MAP than the two baseline approaches. On English Wikipedia iterative quantization achieved MAP equal to Binary PV-DBOW, while random hyperplane projection yielded lower MAP. Some gain in precision of top hits can be observed for iterative quantization, as indicated by NDCG@10. However, precision of top hits can also be improved by querying with Real-Binary PV-DBOW model (\sectionref{real-binary-retrieval}). It is also worth noting that end-to-end inference in Binary PV models is more convenient than inferring real-valued vectors and then using another algorithm for hashing. \citet{li2015learning} argue that PV-DBOW outperforms PV-DM on a sentiment classification task, and demonstrate that the performance of PV-DBOW can be improved by including bigrams in the vocabulary. We observed similar results with Binary PV models. That is, including bigrams in the vocabulary usually improved retrieval precision. Also, codes learned with Binary PV-DBOW provided higher retrieval precision than Binary PV-DM codes. Furthermore, to choose the context size for the Binary PV-DM models, we evaluated several networks on validation sets taken out of the training data. The best results were obtained with a minimal one-word, one-sided context window. This is the distributed memory architecture most similar to the Binary PV-DBOW model. \subsection{Transfer learning} In the experiments presented thus far we had at our disposal training sets with documents similar to the documents for which we inferred binary codes. One could ask a question, if it is possible to use binary paragraph vectors without collecting a domain-specific training set? For example, what if we needed to hash documents that are not associated with any available domain-specific corpus? One solution could be to train the model with a big generic text corpus, that covers a wide variety of domains. \citet{lau2016empirical} evaluated this approach for real-valued paragraph vectors, with promising results. It is not obvious, however, whether short binary codes would also perform well in similar settings. To shed light on this question we trained Binary PV-DBOW with bigrams on the English Wikipedia, and then inferred binary codes for the test parts of the 20 Newsgroups and RCV1 datasets. The results are presented in~\tabref{bpv_transfer_learning} and in~\figref{bpv_transfer_learning}. The model trained on an unrelated text corpus gives lower retrieval precision than models with domain-specific training sets, which is not surprising. 
However, it still performs remarkably well, indicating that the semantics it captured can be useful for different text collections. Importantly, these results were obtained without domain-specific finetuning. \subsection{Retrieval with Real-Binary models}\label{sec:real-binary-retrieval} As pointed out by~\citet{salakhutdinov2009semantic}, when working with large text collections one can use short binary codes for indexing and a representation with more capacity for ranking. Following this idea, we proposed Real-Binary PV-DBOW model (\sectionref{models}) that can simultaneously learn short binary codes and high-dimensional real-valued representations. We begin evaluation of this model by comparing retrieval precision of real-valued and binary representations learned by it. To this end, we trained a Real-Binary PV-DBOW model with $28$-bit binary codes and $300$-dimensional real-valued representations on the 20 Newsgroups and RCV1 datasets. Results are reported in~\figref{bpv_real_binary_compare}. The real-valued representations learned with this model give lower precision than PV-DBOW vectors but, importantly, improve precision over binary codes for top ranked documents. This justifies their use alongside binary codes. Using short binary codes for initial filtering of documents comes with a tradeoff between the retrieval performance and the recall level. For example, one can select a small subset of similar documents by using 28--32 bit codes and retrieving documents within small Hamming distance to the query. This will improve retrieval performance, and possibly also precision, at the cost of recall. Conversely, short codes provide a less fine-grained hashing and can be used to index documents within larger Hamming distance to the query. They can therefore be used to improve recall at the cost of retrieval performance, and possibly also precision. For these reasons, we evaluated Real-Binary PV-DBOW models with different code sizes and under different limits on the Hamming distance to the query. In general, we cannot expect these models to achieve 100\% recall under the test settings. Furthermore, recall will vary on query-by-query basis. We therefore decided to focus on the NDCG@10 metric in this evaluation, as it is suited for measuring model performance when a short list of relevant documents is sought, and the recall level is not known. MAP and precision-recall curves are not applicable in these settings. Information retrieval results for Real-Binary PV-DBOW are summarized in~\tabref{bpv_real_binary_performance}. The model gives higher NDCG@10 than $32$-bit Binary PV-DBOW codes (\tabref{bpv_main_results}). The difference is large when the initial filtering is restrictive, e.g. when using $28$-bit codes and $1$-$2$ bit Hamming distance limit. Real-Binary PV-DBOW can therefore be useful when one needs to quickly find a short list of relevant documents in a large text collection, and the recall level is not of primary importance. If needed, precision can be further improved by using plain Binary PV-DBOW codes for filtering and standard DBOW representation for raking (\tabref{bpv_real_binary_performance}, column~B). Note, however, that PV-DBOW model would then use approximately~$10$~times more parameters than Real-Binary PV-DBOW. \section{Conclusion} In this article we presented simple neural networks that learn short binary codes for text documents. Our networks extend Paragraph Vector by introducing a sigmoid nonlinearity before the softmax that predicts words in documents. 
Binary codes inferred with the proposed networks achieve higher retrieval precision than semantic hashing codes on two popular information retrieval benchmarks. They also retain a lot of their precision when trained on an unrelated text corpus. Finally, we presented a network that simultaneously learns short binary codes and longer, real-valued representations. The best codes in our experiments were inferred with Binary PV-DBOW networks. The Binary PV-DM model did not perform so well. \citet{li2015learning} made similar observations for Paragraph Vector models, and argue that in distributed memory model the word context takes a lot of the burden of predicting the central word from the document code. An interesting line of future research could, therefore, focus on models that account for word order, while learning good binary codes. It is also worth noting that \citet{le2014distributed} constructed paragraph vectors by combining DM and DBOW representations. This strategy may proof useful also with binary codes, when employed with hashing algorithms designed for longer codes, e.g. with multi-index hashing~\citep{norouzi2012fast}. \section*{Acknowledgments} This research is supported by National Science Centre, Poland grant no.~\mbox{2013/09/B/ST6/01549} ``Interactive Visual Text Analytics~(IVTA): Development of novel, user-driven text mining and visualization methods for large text corpora exploration.'' This research was carried out with the support of the ``HPC Infrastructure for Grand Challenges of Science and Engineering'' project, co-financed by the European Regional Development Fund under the Innovative Economy Operational Programme. This research was supported in part by PL-Grid Infrastructure. \appendix \section{Visualization of Binary PV codes} For an additional comparison with semantic hashing, we used t-distributed Stochastic Neighbor Embedding~\citep{maaten2008visualizing} to construct two-dimensional visualizations of codes learned by Binary PV-DBOW with bigrams. We used the same subsets of newsgroups and RCV1 topics that were used by~\citet[Figure 5]{salakhutdinov2009semantic}. Codes learned by Binary PV-DBOW (\figref{binpv_tsne}) appear slightly more clustered. ~ \end{document}
Generating and Exploiting Large-scale Pseudo Training Data for Zero Pronoun Resolution
1606.01603
Table 4: Performance comparison with and without the proposed unknown words processing.
[ "[EMPTY]", "F-score" ]
[ [ "Without UNK replacement", "52.2" ], [ "With UNK replacement", "[BOLD] 55.3" ] ]
∙ Effect of UNK processing As we have mentioned in the previous section, traditional unknown word replacement methods are vulnerable in real-world tests. To alleviate this issue, we proposed the UNK processing mechanism to recover the UNK tokens to the real words. As we can see, by applying our UNK processing mechanism, the model does learn positional features for these low-frequency words and brings an improvement of over 3% in F-score, which indicates the effectiveness of our UNK processing approach.
\documentclass[11pt,a4paper]{article} \usepackage[hyperref]{acl2017} \aclfinalcopy % Uncomment this line for the final submission \def\aclpaperid{19} % Enter the acl Paper ID here \newcommand\BibTeX{B{\sc ib}\TeX} \title{Generating and Exploiting Large-scale Pseudo Training Data for Zero Pronoun Resolution} \author{Ting Liu$^\dag$, Yiming Cui$^\ddag$, Qingyu Yin$^\dag$, Weinan Zhang$^\dag$, Shijin Wang$^\ddag$ \and Guoping Hu$^\ddag$\\ {$^\dag$Research Center for Social Computing and Information Retrieval,}\\ {Harbin Institute of Technology, Harbin, China}\\ {$^\ddag$iFLYTEK Research, Beijing, China}\\ {$^\dag$\tt\{tliu,qyyin,wnzhang\}@ir.hit.edu.cn}\\ {$^\ddag$\tt\{ymcui,sjwang3,gphu\}@iflytek.com}\\ } \date{} \begin{document} \begin{CJK*}{UTF8}{gbsn} \maketitle \begin{abstract} Most existing approaches for zero pronoun resolution are heavily relying on annotated data, which is often released by shared task organizers. Therefore, the lack of annotated data becomes a major obstacle in the progress of zero pronoun resolution task. Also, it is expensive to spend manpower on labeling the data for better performance. To alleviate the problem above, in this paper, we propose a simple but novel approach to automatically generate large-scale pseudo training data for zero pronoun resolution. Furthermore, we successfully transfer the cloze-style reading comprehension neural network model into zero pronoun resolution task and propose a two-step training mechanism to overcome the gap between the pseudo training data and the real one. Experimental results show that the proposed approach significantly outperforms the state-of-the-art systems with an absolute improvements of 3.1\% F-score on OntoNotes 5.0 data. \end{abstract} \section{Introduction}\label{introduction} Previous works on zero pronoun (ZP) resolution mainly focused on the supervised learning approaches~\cite{han2006korean,zhao2007,iida2007zero,kong2010,iida2011cross,chen2013}. However, a major obstacle for training the supervised learning models for ZP resolution is the lack of annotated data. An important step is to organize the shared task on anaphora and coreference resolution, such as the ACE evaluations, SemEval-2010 shared task on Coreference Resolution in Multiple Languages~\cite{w8} and CoNLL-2012 shared task on Modeling Multilingual Unrestricted Coreference in OntoNotes~\cite{w6}. Following these shared tasks, the annotated evaluation data can be released for the following researches. Despite the success and contributions of these shared tasks, it still faces the challenge of spending manpower on labeling the extended data for better training performance and domain adaptation. To address the problem above, in this paper, we propose a simple but novel approach to automatically generate large-scale pseudo training data for zero pronoun resolution. Inspired by data generation on cloze-style reading comprehension, we can treat the zero pronoun resolution task as a special case of reading comprehension problem. So we can adopt similar data generation methods of reading comprehension to the zero pronoun resolution task. For the noun or pronoun in the document, which has the frequency equal to or greater than 2, we randomly choose one position where the noun or pronoun is located on, and replace it with a specific symbol $\langle blank \rangle$. 
Let query $\mathcal{Q}$ and answer $\mathcal{A}$ denote the sentence that contains a $\langle blank \rangle$, and the noun or pronoun which is replaced by the $\langle blank \rangle$, respectively. Thus, a pseudo training sample can be represented as a triple: \begin{equation} \nonumber \langle \mathcal{D}, \mathcal{Q}, \mathcal{A} \rangle \end{equation} For the zero pronoun resolution task, a $\langle blank \rangle$ represents a zero pronoun (ZP) in query $\mathcal{Q}$, and $\mathcal{A}$ indicates the corresponding antecedent of the ZP. In this way, tremendous pseudo training samples can be generated from the various documents, such as news corpus. Towards the shortcomings of the previous approaches that are based on feature engineering, we propose a neural network architecture, which is an attention-based neural network model, for zero pronoun resolution. Also we propose a two-step training method, which benefit from both large-scale pseudo training data and task-specific data, showing promising performance. To sum up, the contributions of this paper are listed as follows. \begin{itemize} \item To our knowledge, this is the first time that utilizing reading comprehension neural network model into zero pronoun resolution task. \item We propose a two-step training approach, namely pre-training-then-adaptation, which benefits from both the large-scale automatically generated pseudo training data and task-specific data. \item Towards the shortcomings of the feature engineering approaches, we first propose an attention-based neural network model for zero pronoun resolution. \end{itemize} \section{The Proposed Approach} In this section, we will describe our approach in detail. First, we will describe our method of generating large-scale pseudo training data for zero pronoun resolution. Then we will introduce two-step training approach to alleviate the gaps between pseudo and real training data. Finally, the attention-based neural network model as well as associated unknown words processing techniques will be described. \subsection{Generating Pseudo Training Data}\label{pseudo} In order to get large quantities of training data for neural network model, we propose an approach, which is inspired by~\cite{hermann-etal-2015}, to automatically generate large-scale pseudo training data for zero pronoun resolution. However, our approach is much more simple and general than that of~\cite{hermann-etal-2015}. We will introduce the details of generating the pseudo training data for zero pronoun resolution as follows. First, we collect a large number of documents that are relevant (or homogenous in some sense) to the released OntoNote 5.0 data for zero pronoun resolution task in terms of its domain. In our experiments, we used large-scale news data for training. Given a certain document $\mathcal{D}$, which is composed by a set of sentences $\mathcal{D}=\{s_1,s_2,...,s_n\}$, we randomly choose an answer word $\mathcal{A}$ in the document. Note that, we restrict $\mathcal{A}$ to be either a noun or pronoun, where the part-of-speech is identified using LTP Toolkit \cite{che2010ltp}, as well as the answer word should appear at least twice in the document. Second, after the answer word $\mathcal{A}$ is chosen, the sentence that contains $\mathcal{A}$ is defined as a query $\mathcal{Q}$, in which the answer word $\mathcal{A}$ is replaced by a specific symbol $\langle blank \rangle$. 
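A minimal sketch of the pseudo-sample generation just described is given below; tokenisation, the POS tag labels, and the random number source are illustrative assumptions (the paper identifies nouns and pronouns with the LTP toolkit).

import random

def make_pseudo_sample(document, pos_tags, rng=random):
    """Build one (document, query, answer) pseudo training triple.
    document: a list of sentences, each a list of tokens
    pos_tags: matching tag lists; nouns/pronouns are assumed to carry
              the labels 'n' / 'r' (LTP-style, an illustrative assumption)."""
    counts = {}
    for sent, tags in zip(document, pos_tags):
        for tok, tag in zip(sent, tags):
            if tag in ("n", "r"):
                counts[tok] = counts.get(tok, 0) + 1
    candidates = [w for w, c in counts.items() if c >= 2]    # appears at least twice
    if not candidates:
        return None
    answer = rng.choice(candidates)
    positions = [(i, j) for i, sent in enumerate(document)
                 for j, tok in enumerate(sent) if tok == answer]
    i, j = rng.choice(positions)                             # one occurrence, chosen at random
    query = list(document[i])
    query[j] = "<blank>"                                     # the answer word becomes the blank
    return document, query, answer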
In this way, given the query $\mathcal{Q}$ and document $\mathcal{D}$, the target of the prediction is to recover the answer $\mathcal{A}$. That is quite similar to the zero pronoun resolution task. Therefore, the automatically generated training samples is called \emph{\textbf{pseudo}} training data. Figure~\ref{samp} shows an example of a pseudo training sample. In this way, we can generate tremendous triples of $\langle \mathcal{D}, \mathcal{Q}, \mathcal{A} \rangle$ for training neural network, without making any assumptions on the nature of the original corpus. \subsection{Two-step Training}\label{training} It should be noted that, though we have generated large-scale pseudo training data for neural network training, there is still a gap between pseudo training data and the real zero pronoun resolution task in terms of the query style. So we should do some adaptations to our model to deal with the zero pronoun resolution problems ideally. In this paper, we used an effective approach to deal with the mismatch between pseudo training data and zero pronoun resolution task-specific data. Generally speaking, in the first stage, we use a large amount of the pseudo training data to train a fundamental model, and choose the best model according to the validation accuracy. Then we continue to train from the previous best model using the zero pronoun resolution task-specific training data, which is exactly the same domain and query type as the standard zero pronoun resolution task data. The using of the combination of proposed pseudo training data and task-specific data, i.e. zero pronoun resolution task data, is far more effective than using either of them alone. Though there is a gap between these two data, they share many similar characteristics to each other as illustrated in the previous part, so it is promising to utilize these two types of data together, which will compensate to each other. The two-step training procedure can be concluded as, \begin{itemize} \item Pre-training stage: by using large-scale training data to train the neural network model, we can learn richer word embeddings, as well as relatively reasonable weights in neural networks than just training with a small amount of zero pronoun resolution task training data; \item Adaptation stage: after getting the best model that is produced in the previous step, we continue to train the model with task-specific data, which can force the previous model to adapt to the new data, without losing much knowledge that has learned in the previous stage (such as word embeddings). \end{itemize} As we will see in the experiment section that the proposed two-step training approach is effective and brings significant improvements. \subsection{Attention-based Neural Network Model}\label{model} Our model is primarily an attention-based neural network model, which is similar to {\em Attentive Reader} proposed by \cite{hermann-etal-2015}. Formally, when given a set of training triple $\langle \mathcal{D}, \mathcal{Q}, \mathcal{A} \rangle$, we will construct our network in the following way. Firstly, we project one-hot representation of document $\mathcal{D}$ and query $\mathcal{Q}$ into a continuous space with the shared embedding matrix $W_e$. Then we input these embeddings into different bi-directional RNN to get their contextual representations respectively. In our model, we used the bidirectional Gated Recurrent Unit (GRU) as RNN implementation \cite{cho2014learning}. 
\begin{equation} e(x) = W_e \cdot x,~where~~x\in \mathcal{D},\mathcal{Q} \end{equation} \begin{equation} \overrightarrow{h_s} = \overrightarrow{GRU}(e(x)) ; \overleftarrow{h_s} = \overleftarrow{GRU}(e(x)) \end{equation} \begin{equation}h_s = [\overrightarrow{h_s}; \overleftarrow{h_s}] \end{equation} For the query representation, instead of concatenating the final forward and backward states as its representations, we directly get an averaged representations on all bi-directional RNN slices, which can be illustrated as \begin{equation} h_{query}= \frac{1}{n}\sum_{t=1}^{n}h_{query}(t) \end{equation} For the document, we place a soft attention over all words in document \cite{bahdanau2014neural}, which indicate the degree to which part of document is attended when filling the blank in the query sentence. Then we calculate a weighted sum of all document tokens to get the attended representation of document. \begin{equation} m(t) = \tanh(W \cdot h_{doc}(t) + U \cdot h_{query}) \end{equation} \begin{equation} \alpha(t) = \frac{\exp(W_s \cdot m(t))}{\sum\limits_{j=1}^{n}\exp(W_s \cdot m(j))} \end{equation} \begin{equation} h_{doc\_att}= h_{doc} \cdot \alpha \end{equation} where variable $\alpha(t)$ is the normalized attention weight at $t$th word in document, $h_{doc}$ is a matrix that concatenate all $h_{doc}(t)$ in sequence. \begin{equation} h_{doc} =concat[h_{doc}(1),h_{doc}(2),...,h_{doc}(t)] \end{equation} Then we use attended document representation and query representation to estimate the final answer, which can be illustrated as follows, where $V$ is the vocabulary, \begin{equation} r = concat [h_{doc\_att}, h_{query}] \end{equation} \begin{equation} P(\mathcal{A}|\mathcal{D},\mathcal{Q}) \propto softmax(W_r \cdot r)~~, s.t.~~\mathcal{A} \in V \end{equation} Figure~\ref{nn-arch} shows the proposed neural network architecture. Note that, for zero pronoun resolution task, antecedents of zero pronouns are always noun phrases (NPs), while our model generates only one word (a noun or a pronoun) as the result. To better adapt our model to zero pronoun resolution task, we further process the output result in the following procedure. First, for a given zero pronoun, we extract a set of NPs as its candidates utilizing the same strategy as \cite{chen2015}. Then, we use our model to generate an answer (one word) for the zero pronoun. After that, we go through all the candidates from the nearest to the far-most. For an NP candidate, if the produced answer is its head word, we then regard this NP as the antecedent of the given zero pronoun. By doing so, for a given zero pronoun, we generate an NP as the prediction of its antecedent. \subsection{Unknown Words Processing}\label{unk} Because of the restriction on both memory occupation and training time, it is usually suggested to use a shortlist of vocabulary in neural network training. However, we often replace the out-of-vocabularies to a unique special token, such as $\langle unk \rangle$. But this may place an obstacle in real world test. When the model predicts the answer as $\langle unk \rangle$, we do not know what is the exact word it represents in the document, as there may have many $\langle unk \rangle$s in the document. In this paper, we propose to use a simple but effective way to handle unknown words issue. The idea is straightforward, which can be illustrated as follows. 
\subsection{Unknown Words Processing}\label{unk}
Because of restrictions on both memory consumption and training time, it is common practice to use a vocabulary shortlist in neural network training, with out-of-vocabulary words replaced by a single special token such as $\langle unk \rangle$. However, this creates an obstacle at test time: when the model predicts $\langle unk \rangle$ as the answer, we do not know which word it actually represents, as there may be many $\langle unk \rangle$s in the document. In this paper, we propose a simple but effective way to handle the unknown word issue. The idea is straightforward and can be summarised as follows.
\begin{itemize}
\item Identify all unknown words inside each $\langle \mathcal{D}, \mathcal{Q}, \mathcal{A} \rangle$ triple;
\item Instead of replacing all these unknown words with one unique token $\langle unk \rangle$, we build a hash table that maps each distinct unknown word to a numbered token $\langle unk1 \rangle, \langle unk2 \rangle, ..., \langle unkN \rangle$ according to its order of occurrence in the document. Note that identical words are mapped to the same unknown word token, and the mapping is only valid inside the current sample. For example, $\langle unk1 \rangle$ may indicate the first unknown word, say ``apple'', in the current sample, while in another sample $\langle unk1 \rangle$ may indicate the unknown word ``orange''. That is, the unknown word tokens encode positional features rather than the exact words;
\item Insert these unknown word tokens into the vocabulary. They only take up dozens of slots, which is negligible compared to the size of the shortlist (usually 30K $\sim$ 100K).
\end{itemize}
We take the sentence ``The weather of today is not as pleasant as the weather of yesterday.'' as an example to illustrate our unknown word processing method, as shown in Figure~\ref{unk-proc}. If we did not discriminate between the unknown words and assigned them all the same token $\langle unk \rangle$, it would be impossible to know which word $\langle unk \rangle$ represents at test time. With our method, however, if the model predicts a $\langle unkX \rangle$ as the answer, we can simply scan the original document, locate the position indicated by the unknown word number $X$, and replace $\langle unkX \rangle$ with the real word. For example, in Figure~\ref{unk-proc}, under the original unknown word processing method we could not tell whether $\langle unk \rangle$ refers to the word ``weather'' or ``pleasant''; with our approach, if the model predicts $\langle unk1 \rangle$ as the answer, we know from the original text that $\langle unk1 \rangle$ represents the word ``weather''.
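
A minimal sketch of the per-sample positional mapping described above is given below. The function and variable names, and the fallback to a plain $\langle unk \rangle$ once the budget of numbered tokens is used up, are illustrative assumptions rather than details of the released code.
\begin{verbatim}
def map_unknown_words(tokens, vocab, max_unk=20):
    # Replace out-of-vocabulary tokens with positional markers
    # <unk1>, <unk2>, ... in order of first occurrence. The mapping
    # is only valid within the current sample, so a predicted
    # marker can be translated back to its surface form.
    unk_table = {}   # surface form -> positional marker
    mapped = []
    for tok in tokens:
        if tok in vocab:
            mapped.append(tok)
            continue
        if tok not in unk_table:
            if len(unk_table) < max_unk:
                unk_table[tok] = '<unk%d>' % (len(unk_table) + 1)
            else:
                unk_table[tok] = '<unk>'   # marker budget exhausted
        mapped.append(unk_table[tok])
    recover = {m: w for w, m in unk_table.items() if m != '<unk>'}
    return mapped, recover

# Running example; "weather" and "pleasant" are assumed to be
# outside the vocabulary shortlist.
sent = "The weather of today is not as pleasant as the weather " \
       "of yesterday ."
vocab = {"The", "of", "today", "is", "not", "as", "the",
         "yesterday", "."}
mapped, recover = map_unknown_words(sent.split(), vocab)
# Both occurrences of "weather" become <unk1> and "pleasant"
# becomes <unk2>; if the model predicts <unk1> as the answer,
# recover['<unk1>'] gives back "weather".
\end{verbatim}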
\section{Experiments}
\subsection{Data}
In our experiments, we choose a selection of public news data to generate large-scale pseudo training data for pre-training our neural network model (the pre-training step)\footnote{The news data is available at \url{http://www.sogou.com/labs/dl/cs.html}}. In the adaptation step, we use the official OntoNotes Release 5.0 dataset\footnote{\url{http://catalog.ldc.upenn.edu/LDC2013T19}} provided by the CoNLL-2012 shared task to carry out our experiments.
The CoNLL-2012 shared task dataset consists of three parts: a training set, a development set and a test set. The data covers 6 different domains, namely Broadcast News (BN), Newswires (NW), Broadcast Conversations (BC), Telephone Conversations (TC), Web Blogs (WB), and Magazines (MZ).
We closely follow the experimental settings of previous work \citep{kong2010,chen2014,chen2015,chen2016}, using the training set for training and the development set for testing, because only the training and development sets are annotated with ZPs. The statistics of the training and test data are shown in Tables 1 and 2, respectively.
\subsection{Neural Network Setups}
The training details of our neural network models are as follows.
\begin{itemize}
\item Embedding: we use a randomly initialized embedding matrix, drawn from a uniform distribution over the interval [-0.1, 0.1], with 256 units. No pre-trained word embeddings are used.
\item Hidden layer: we use GRUs with 256 units and initialize the recurrent matrices with random orthogonal matrices \cite{saxe2013exact}. As the GRU still suffers from the exploding gradient problem, we set the gradient clipping threshold to 10.
\item Vocabulary: as the full vocabulary is very large (over 800K), we use a shortlist of the 100K most frequent words, and unknown words are mapped to 20 $\langle unkX \rangle$ tokens using the proposed method.
\item Optimization: we use the ADAM update rule \cite{kingma2014adam} with an initial learning rate of 0.001 and negative log-likelihood as the training objective. The batch size is set to 32.
\end{itemize}
All models are trained on a Tesla K40 GPU. Our model is implemented with Theano \cite{theano2016} and Keras \cite{chollet2015keras}.
\subsection{Experimental results}
Following previous research on zero pronoun resolution, we evaluate our system performance in terms of F-score (F). We focus on the AZP resolution step, where we assume that gold AZPs and gold parse trees are given\footnote{All gold information is provided by the CoNLL-2012 shared task dataset}. The same experimental setting is used in \cite{chen2014,chen2015,chen2016}. The overall results are shown in Table~\ref{result}, which lists the performance on each domain in detail as well as the overall performance in the last column.
\subsubsection*{$\bullet$~~ Overall Performance}
We compare against four Chinese ZP resolution baseline systems on the OntoNotes 5.0 dataset. As we can see, our model significantly outperforms the previous state-of-the-art system \cite{chen2016} by 3.1\% in overall F-score, and outperforms the other systems by a substantial margin. Looking at the individual domains, our approach also gives relatively consistent improvements, except for slight drops on BN and TC. All these results confirm that our proposed approach is effective and achieves significant improvements in AZP resolution.
In our quantitative analysis, we investigated the reasons for the declines in the BN and TC domains. A primary observation is that the word distributions in these domains are fairly different from the others. The average document length in BN and TC is considerably longer than in the other domains, which suggests a higher chance of encountering unknown words and adds difficulty to model training. We also found that the texts in the BN and TC domains are often in spoken form, with many irregular expressions in the context. Such expressions add noise, and it is difficult for the model to extract useful information from these contexts. These observations suggest that further improvements could be obtained by filtering stop words in the context or by increasing the amount of task-specific data; we leave this for future work.
\subsubsection*{$\bullet$~~ Effect of UNK processing}
As mentioned in the previous section, the traditional unknown word replacement method is vulnerable at test time on real data. To alleviate this issue, we proposed the UNK processing mechanism, which recovers UNK tokens to the real words. In Table~\ref{unk-result}, we compare the performance with and without the proposed UNK processing to assess whether it is effective.
As we can see, by applying our UNK processing mechanism, the model does learn positional features for these low-frequency words, bringing an improvement of over 3\% in F-score, which indicates the effectiveness of our UNK processing approach.
\subsubsection*{$\bullet$~~ Effect of Domain Adaptation}
We also tested whether our domain adaptation method is effective. In these experiments, we used three different types of training data: only pseudo training data, only task-specific data, and our adaptation method, i.e. pseudo training data in the pre-training step and task-specific data in the domain adaptation step. The results are given in Table~\ref{da-result}. As we can see, using either the pseudo training data or the task-specific data alone does not yield satisfactory results. By adopting our domain adaptation method, the model gives significant improvements over the other settings, which demonstrates the effectiveness of the proposed two-step training approach.
The intuition behind this is that, although the pseudo training data is large enough to train reliable model parameters, there is still a gap to the real zero pronoun resolution task; conversely, although the task-specific training data is exactly the same type as the real test data, its quantity is not sufficient to train a reasonable model (for example, the word embeddings). It is therefore better to make use of both and take full advantage of each.
However, as the original task-specific data is fairly small compared to the pseudo training data, we also wondered whether the large-scale pseudo training data only provides rich word embedding information. We therefore trained word embeddings on the large-scale pseudo training data with the GloVe toolkit \citep{pennington-etal-2014} and used them to initialize the word embeddings of the ``only task-specific data'' system. From the results we can see that the pseudo training data provides more than word embedding information: even with GloVe embeddings, the ``only task-specific data'' system still cannot outperform the system that uses domain adaptation, which supports our claim.
\section{Error Analysis}
To better evaluate our proposed approach, we performed a qualitative analysis of errors, which revealed two major error types, discussed below.
\subsection{Effect of Unknown Words}
Our approach does not do well when there are many $\langle unk \rangle$s in the context of a ZP, especially when the $\langle unk \rangle$s appear near the ZP. An example is given below, where words marked with $\#$ are treated as $\langle unk \rangle$s by our model.
\begin{quote}\small
{\bf $\phi$ }\ 登上$^\#$\ 太平山$^\#$ \ 顶 \ , \ 将 \ 香港岛$^\#$ \ 和 \ 维多利亚港$^\#$ \ 的 \ 美景 \ 尽收眼底\ 。\\
{\bf $\phi$ } Successfully climbed$^\#$ the peak of [Taiping Mountain]$^\#$, to have a panoramic view of the beauty of [Hong Kong Island]$^\#$ and [Victoria Harbour]$^\#$.
\end{quote}
In this case, the words ``登上/climbed'' and ``太平山/Taiping Mountain'' that appear immediately after the ZP ``$\phi$'' are all treated as $\langle unk \rangle$s by our model. As we model the word sequence with an RNN, the $\langle unk \rangle$s make it more difficult for the model to capture the semantic information of the sentence, which in turn hurts the overall performance. This is especially true for the words near the ZP, which play an important role in modeling the context of the ZP. Looking at the word ``顶/peak'', it is hard to comprehend the context information because of the several surrounding $\langle unk \rangle$s.
Though our proposed unknown word processing method is effective in the empirical evaluation, we believe that more advanced methods for handling unknown words would further improve comprehension of the context.
\subsection{Long Distance Antecedents}
Our model also makes incorrect decisions when the correct antecedent of a ZP is far away. As our model chooses the answer from words in the context, if there are many words between the ZP and its antecedent, more noise is introduced, which makes it harder to choose the right answer. For example:
\begin{quote}\small
我\ 帮\ 不\ 了\ 那个\ 人\ ...\ ...\ 那\ 天\ 结束\ 后\ {\bf $\phi$ }\ 回到\ 家中\ 。\\
I can't help that guy ... ... After that day, {\bf $\phi$ } return home.
\end{quote}
In this case, the correct antecedent of the ZP ``$\phi$'' is the NP candidate ``我/I''. Looking at the context, we observe that there are over 30 words between the ZP and its antecedent. Although our model is not designed to fill the ZP gap only with words near the ZP, most antecedents appear just a few words before their ZPs, so the model prefers nearer words as antecedents. Hence, when there are many words between a ZP and its true antecedent, our model sometimes makes wrong decisions. To handle such cases correctly, our model would need to learn to filter out useless words and to better capture long-term dependencies.
\section{Related Work}
\subsection{Zero pronoun resolution}
For Chinese zero pronoun (ZP) resolution, early studies applied heuristic rules. \citet{converse2006} proposes a rule-based method that resolves zero pronouns with the Hobbs algorithm \cite{hobbs1978resolving} on CTB documents. Subsequently, supervised approaches to this task have been extensively explored. \citet{zhao2007} first present a supervised machine learning approach to the identification and resolution of Chinese ZPs. \citet{kong2010} develop a tree-kernel based approach for Chinese ZP resolution. More recently, unsupervised approaches have been proposed. \citet{chen2014} develop an unsupervised language-independent approach, utilizing integer linear programming and ten overt pronouns. \citet{chen2015} propose an end-to-end unsupervised probabilistic model for Chinese ZP resolution that uses a salience model to capture discourse information.
There has also been much work on ZP resolution for other languages, which can be divided into rule-based and supervised machine learning approaches. \citet{ferrandez2000} proposed a set of hand-crafted rules for Spanish ZP resolution. Recently, supervised approaches have been exploited for ZP resolution in Korean \cite{han2006korean} and Japanese \cite{isozaki2003japanese,iida2006,iida2007zero,sasano2011}. \citet{iida2011cross} developed a cross-lingual approach for Japanese and Italian ZPs in which an ILP-based model is applied to zero anaphora detection and resolution.
In sum, most recent research on ZP resolution uses supervised approaches, whose performance relies heavily on large-scale annotated data. Even the unsupervised approach of \citet{chen2014} makes use of a supervised pronoun resolver to resolve ZPs. The advantage of our proposed approach is therefore clear: we are able to generate large-scale pseudo training data for ZP resolution, while still benefiting from task-specific data for fine-tuning via the proposed two-step training approach.
\subsection{Cloze-style Reading Comprehension}
Our neural network model is mainly motivated by recent research on cloze-style reading comprehension, which aims to predict a one-word answer given a document and a query. These models can be seen as a general way of mining the relations between a document and a query, so it is promising to apply them to specific domains.
A representative work on cloze-style reading comprehension is that of \citet{hermann-etal-2015}, who proposed a methodology for obtaining large quantities of $\langle \mathcal{D}, \mathcal{Q}, \mathcal{A} \rangle$ triples. With this method, a large amount of training data can be obtained without much human intervention, making it possible to train a reliable neural network. They used attention-based neural networks for this task, and evaluation on the CNN/DailyMail datasets showed that their approach is much more effective than traditional baseline systems.
While our work is similar to that of \citet{hermann-etal-2015}, there are several differences. Firstly, although we both utilize a large-scale corpus, they require each document to be accompanied by a brief summary, which is not available for most documents and is an obstacle to generating unlimited training data. In our work, we do not place any prerequisites on the training data and directly extract queries from the document, which makes it easy to generate large-scale training data. Secondly, their work mainly focuses on reading comprehension in the general domain, whereas we exploit large-scale training data to solve problems in a specific domain, and our proposed two-step training method can easily be adapted to other domains as well.
\section{Conclusion}
In this study, we propose an effective way to generate and exploit large-scale pseudo training data for the zero pronoun resolution task. The main idea behind our approach is to automatically generate large-scale pseudo training data and then utilize an attention-based neural network model to resolve zero pronouns. For training, a two-step approach is employed, i.e. a {\bf pre-training} step followed by an {\bf adaptation} step, and this can easily be applied to other tasks as well. The experimental results on the OntoNotes 5.0 corpus are encouraging, showing that the proposed model and the accompanying approaches significantly outperform the state-of-the-art systems.
Future work will be carried out in two main directions. First, as the experimental results show that unknown word processing is a critical part of comprehending the context, we will explore more effective ways to handle the UNK issue. Second, we will develop other neural network architectures to make them more appropriate for the zero pronoun resolution task.
\section*{Acknowledgements}
We would like to thank the anonymous reviewers for their thorough reviews and thoughtful comments, which helped improve our paper. This work was supported by the National 863 Leading Technology Research Project via grant 2015AA015407, Key Projects of National Natural Science Foundation of China via grant 61632011, and National Natural Science Youth Foundation of China via grant 61502120.
\bibliographystyle{acl_natbib}
\end{CJK*}
\end{document}
Generating and Exploiting Large-scale Pseudo Training Data for Zero Pronoun Resolution
1606.01603
Table 3: Experimental results (F-score) on the OntoNotes 5.0 test data. The best results are marked in bold face. † indicates that our approach is statistically significantly better than the baselines (t-test, p<0.05). The numbers in brackets indicate the number of AZPs.
[ "[EMPTY]", "NW (84)", "MZ (162)", "WB (284)", "BN (390)", "BC (510)", "TC (283)", "[BOLD] Overall" ]
[ [ "Kong and Zhou ( 2010 )", "34.5", "32.7", "45.4", "51.0", "43.5", "48.4", "44.9" ], [ "Chen and Ng ( 2014 )", "38.1", "31.0", "50.4", "45.9", "53.8", "[BOLD] 54.9", "48.7" ], [ "Chen and Ng ( 2015 )", "46.4", "39.0", "51.8", "53.8", "49.4", "52.7", "50.2" ], [ "Chen and Ng ( 2016 )", "48.8", "41.5", "56.3", "[BOLD] 55.4", "50.8", "53.1", "52.2" ], [ "Our Approach†", "[BOLD] 59.2", "[BOLD] 51.3", "[BOLD] 60.5", "53.9", "[BOLD] 55.5", "52.9", "[BOLD] 55.3" ] ]
Following previous research on zero pronoun resolution, we evaluate our system performance in terms of F-score (F).
Generating and Exploiting Large-scale Pseudo Training Data for Zero Pronoun Resolution
1606.01603
Table 5: Performance comparison of using different training data.
[ "[EMPTY]", "F-score" ]
[ [ "Only Pseudo Training Data", "41.1" ], [ "Only Task-Specific Data", "44.2" ], [ "Only Task-Specific Data + GloVe", "50.9" ], [ "Domain Adaptation", "[BOLD] 55.3" ] ]
∙ Effect of Domain Adaptation. We also tested whether our domain adaptation method is effective. In these experiments, we used three different types of training data: only pseudo training data, only task-specific data, and our adaptation method, i.e. pseudo training data in the pre-training step and task-specific data in the domain adaptation step. As we can see, using either the pseudo training data or the task-specific data alone does not yield satisfactory results. By adopting our domain adaptation method, the model gives significant improvements over the other settings, which demonstrates the effectiveness of the proposed two-step training approach. The intuition behind this is that, although the pseudo training data is large enough to train reliable model parameters, there is still a gap to the real zero pronoun resolution task; conversely, although the task-specific training data is exactly the same type as the real test data, its quantity is not sufficient to train a reasonable model (for example, the word embeddings). It is therefore better to make use of both and take full advantage of each.
\documentclass[11pt,a4paper]{article} \usepackage[hyperref]{acl2017} \aclfinalcopy % Uncomment this line for the final submission \def\aclpaperid{19} % Enter the acl Paper ID here \newcommand\BibTeX{B{\sc ib}\TeX} \title{Generating and Exploiting Large-scale Pseudo Training Data for Zero Pronoun Resolution} \author{Ting Liu$^\dag$, Yiming Cui$^\ddag$, Qingyu Yin$^\dag$, Weinan Zhang$^\dag$, Shijin Wang$^\ddag$ \and Guoping Hu$^\ddag$\\ {$^\dag$Research Center for Social Computing and Information Retrieval,}\\ {Harbin Institute of Technology, Harbin, China}\\ {$^\ddag$iFLYTEK Research, Beijing, China}\\ {$^\dag$\tt\{tliu,qyyin,wnzhang\}@ir.hit.edu.cn}\\ {$^\ddag$\tt\{ymcui,sjwang3,gphu\}@iflytek.com}\\ } \date{} \begin{document} \begin{CJK*}{UTF8}{gbsn} \maketitle \begin{abstract} Most existing approaches for zero pronoun resolution are heavily relying on annotated data, which is often released by shared task organizers. Therefore, the lack of annotated data becomes a major obstacle in the progress of zero pronoun resolution task. Also, it is expensive to spend manpower on labeling the data for better performance. To alleviate the problem above, in this paper, we propose a simple but novel approach to automatically generate large-scale pseudo training data for zero pronoun resolution. Furthermore, we successfully transfer the cloze-style reading comprehension neural network model into zero pronoun resolution task and propose a two-step training mechanism to overcome the gap between the pseudo training data and the real one. Experimental results show that the proposed approach significantly outperforms the state-of-the-art systems with an absolute improvements of 3.1\% F-score on OntoNotes 5.0 data. \end{abstract} \section{Introduction}\label{introduction} Previous works on zero pronoun (ZP) resolution mainly focused on the supervised learning approaches~\cite{han2006korean,zhao2007,iida2007zero,kong2010,iida2011cross,chen2013}. However, a major obstacle for training the supervised learning models for ZP resolution is the lack of annotated data. An important step is to organize the shared task on anaphora and coreference resolution, such as the ACE evaluations, SemEval-2010 shared task on Coreference Resolution in Multiple Languages~\cite{w8} and CoNLL-2012 shared task on Modeling Multilingual Unrestricted Coreference in OntoNotes~\cite{w6}. Following these shared tasks, the annotated evaluation data can be released for the following researches. Despite the success and contributions of these shared tasks, it still faces the challenge of spending manpower on labeling the extended data for better training performance and domain adaptation. To address the problem above, in this paper, we propose a simple but novel approach to automatically generate large-scale pseudo training data for zero pronoun resolution. Inspired by data generation on cloze-style reading comprehension, we can treat the zero pronoun resolution task as a special case of reading comprehension problem. So we can adopt similar data generation methods of reading comprehension to the zero pronoun resolution task. For the noun or pronoun in the document, which has the frequency equal to or greater than 2, we randomly choose one position where the noun or pronoun is located on, and replace it with a specific symbol $\langle blank \rangle$. 
Let query $\mathcal{Q}$ and answer $\mathcal{A}$ denote the sentence that contains a $\langle blank \rangle$, and the noun or pronoun which is replaced by the $\langle blank \rangle$, respectively. Thus, a pseudo training sample can be represented as a triple: \begin{equation} \nonumber \langle \mathcal{D}, \mathcal{Q}, \mathcal{A} \rangle \end{equation} For the zero pronoun resolution task, a $\langle blank \rangle$ represents a zero pronoun (ZP) in query $\mathcal{Q}$, and $\mathcal{A}$ indicates the corresponding antecedent of the ZP. In this way, tremendous pseudo training samples can be generated from the various documents, such as news corpus. Towards the shortcomings of the previous approaches that are based on feature engineering, we propose a neural network architecture, which is an attention-based neural network model, for zero pronoun resolution. Also we propose a two-step training method, which benefit from both large-scale pseudo training data and task-specific data, showing promising performance. To sum up, the contributions of this paper are listed as follows. \begin{itemize} \item To our knowledge, this is the first time that utilizing reading comprehension neural network model into zero pronoun resolution task. \item We propose a two-step training approach, namely pre-training-then-adaptation, which benefits from both the large-scale automatically generated pseudo training data and task-specific data. \item Towards the shortcomings of the feature engineering approaches, we first propose an attention-based neural network model for zero pronoun resolution. \end{itemize} \section{The Proposed Approach} In this section, we will describe our approach in detail. First, we will describe our method of generating large-scale pseudo training data for zero pronoun resolution. Then we will introduce two-step training approach to alleviate the gaps between pseudo and real training data. Finally, the attention-based neural network model as well as associated unknown words processing techniques will be described. \subsection{Generating Pseudo Training Data}\label{pseudo} In order to get large quantities of training data for neural network model, we propose an approach, which is inspired by~\cite{hermann-etal-2015}, to automatically generate large-scale pseudo training data for zero pronoun resolution. However, our approach is much more simple and general than that of~\cite{hermann-etal-2015}. We will introduce the details of generating the pseudo training data for zero pronoun resolution as follows. First, we collect a large number of documents that are relevant (or homogenous in some sense) to the released OntoNote 5.0 data for zero pronoun resolution task in terms of its domain. In our experiments, we used large-scale news data for training. Given a certain document $\mathcal{D}$, which is composed by a set of sentences $\mathcal{D}=\{s_1,s_2,...,s_n\}$, we randomly choose an answer word $\mathcal{A}$ in the document. Note that, we restrict $\mathcal{A}$ to be either a noun or pronoun, where the part-of-speech is identified using LTP Toolkit \cite{che2010ltp}, as well as the answer word should appear at least twice in the document. Second, after the answer word $\mathcal{A}$ is chosen, the sentence that contains $\mathcal{A}$ is defined as a query $\mathcal{Q}$, in which the answer word $\mathcal{A}$ is replaced by a specific symbol $\langle blank \rangle$. 
In this way, given the query $\mathcal{Q}$ and document $\mathcal{D}$, the target of the prediction is to recover the answer $\mathcal{A}$. This is quite similar to the zero pronoun resolution task. Therefore, the automatically generated training samples are called \emph{\textbf{pseudo}} training data. Figure~\ref{samp} shows an example of a pseudo training sample. In this way, we can generate a tremendous number of $\langle \mathcal{D}, \mathcal{Q}, \mathcal{A} \rangle$ triples for training the neural network, without making any assumptions about the nature of the original corpus. \subsection{Two-step Training}\label{training} It should be noted that, though we have generated large-scale pseudo training data for neural network training, there is still a gap between the pseudo training data and the real zero pronoun resolution task in terms of query style, so we need to adapt our model to handle the zero pronoun resolution problem properly. In this paper, we use an effective approach to deal with the mismatch between the pseudo training data and the zero pronoun resolution task-specific data. Generally speaking, in the first stage, we use a large amount of pseudo training data to train a fundamental model, and choose the best model according to validation accuracy. We then continue to train from the previous best model using the zero pronoun resolution task-specific training data, which has exactly the same domain and query type as the standard zero pronoun resolution task data. Using the combination of the proposed pseudo training data and the task-specific data, i.e. the zero pronoun resolution task data, is far more effective than using either of them alone. Though there is a gap between the two datasets, they share many characteristics, as illustrated above, so it is promising to utilize the two types of data together, as they compensate for each other. The two-step training procedure can be summarized as follows: \begin{itemize} \item Pre-training stage: by using large-scale training data to train the neural network model, we can learn richer word embeddings, as well as more reasonable weights in the neural network, than by training with only a small amount of zero pronoun resolution task training data; \item Adaptation stage: after obtaining the best model produced in the previous step, we continue to train the model with task-specific data, which forces the previous model to adapt to the new data without losing much of the knowledge learned in the previous stage (such as word embeddings). \end{itemize} As we will see in the experiment section, the proposed two-step training approach is effective and brings significant improvements. \subsection{Attention-based Neural Network Model}\label{model} Our model is primarily an attention-based neural network model, similar to the {\em Attentive Reader} proposed by \cite{hermann-etal-2015}. Formally, given a training triple $\langle \mathcal{D}, \mathcal{Q}, \mathcal{A} \rangle$, we construct our network in the following way. First, we project the one-hot representations of document $\mathcal{D}$ and query $\mathcal{Q}$ into a continuous space with the shared embedding matrix $W_e$. We then feed these embeddings into separate bi-directional RNNs to get their contextual representations. In our model, we use the bidirectional Gated Recurrent Unit (GRU) as the RNN implementation \cite{cho2014learning}. 
\begin{equation} e(x) = W_e \cdot x,~where~~x\in \mathcal{D},\mathcal{Q} \end{equation} \begin{equation} \overrightarrow{h_s} = \overrightarrow{GRU}(e(x)) ; \overleftarrow{h_s} = \overleftarrow{GRU}(e(x)) \end{equation} \begin{equation}h_s = [\overrightarrow{h_s}; \overleftarrow{h_s}] \end{equation} For the query representation, instead of concatenating the final forward and backward states, we directly average the representations over all bi-directional RNN slices, which can be written as \begin{equation} h_{query}= \frac{1}{n}\sum_{t=1}^{n}h_{query}(t) \end{equation} For the document, we place a soft attention over all words in the document \cite{bahdanau2014neural}, which indicates the degree to which each part of the document is attended to when filling the blank in the query sentence. We then calculate a weighted sum of all document tokens to get the attended representation of the document. \begin{equation} m(t) = \tanh(W \cdot h_{doc}(t) + U \cdot h_{query}) \end{equation} \begin{equation} \alpha(t) = \frac{\exp(W_s \cdot m(t))}{\sum\limits_{j=1}^{n}\exp(W_s \cdot m(j))} \end{equation} \begin{equation} h_{doc\_att}= h_{doc} \cdot \alpha \end{equation} where $\alpha(t)$ is the normalized attention weight of the $t$th word in the document, and $h_{doc}$ is a matrix that concatenates all $h_{doc}(t)$ in sequence. \begin{equation} h_{doc} =concat[h_{doc}(1),h_{doc}(2),...,h_{doc}(t)] \end{equation} We then use the attended document representation and the query representation to estimate the final answer, as follows, where $V$ is the vocabulary: \begin{equation} r = concat [h_{doc\_att}, h_{query}] \end{equation} \begin{equation} P(\mathcal{A}|\mathcal{D},\mathcal{Q}) \propto softmax(W_r \cdot r)~~, s.t.~~\mathcal{A} \in V \end{equation} Figure~\ref{nn-arch} shows the proposed neural network architecture. Note that, for the zero pronoun resolution task, antecedents of zero pronouns are always noun phrases (NPs), while our model generates only one word (a noun or a pronoun) as the result. To better adapt our model to the zero pronoun resolution task, we further process the output with the following procedure. First, for a given zero pronoun, we extract a set of NPs as its candidates, using the same strategy as \cite{chen2015}. Then, we use our model to generate an answer (one word) for the zero pronoun. After that, we go through all the candidates from the nearest to the farthest. For an NP candidate, if the produced answer is its head word, we regard this NP as the antecedent of the given zero pronoun. By doing so, for a given zero pronoun, we generate an NP as the prediction of its antecedent. \subsection{Unknown Words Processing}\label{unk} Because of restrictions on both memory occupation and training time, it is common to use a shortlist vocabulary in neural network training, with out-of-vocabulary words replaced by a unique special token such as $\langle unk \rangle$. However, this places an obstacle in real-world testing: when the model predicts the answer as $\langle unk \rangle$, we do not know which exact word it represents in the document, as there may be many $\langle unk \rangle$s in the document. In this paper, we propose a simple but effective way to handle the unknown word issue. The idea is straightforward and can be illustrated as follows. 
\begin{itemize} \item Identify all unknown words inside each $\langle \mathcal{D}, \mathcal{Q}, \mathcal{A} \rangle$; \item Instead of replacing all these unknown words with one unique token $\langle unk \rangle$, we build a hash table to project the distinct unknown words to numbered tokens, such as $\langle unk1 \rangle, \langle unk2 \rangle, ..., \langle unkN \rangle$, according to their order of occurrence in the document. Note that the same word is projected to the same unknown word token, and all these projections are only valid inside the current sample. For example, $\langle unk1 \rangle$ indicates the first unknown word, say ``apple'', in the current sample, but in another sample $\langle unk1 \rangle$ may indicate the unknown word ``orange''. That is, the unknown word labels indicate positional features rather than the exact words; \item Insert these unknown word markers into the vocabulary. These markers may only take up dozens of slots, which is negligible compared to the size of the shortlist (usually 30K $\sim$ 100K). \end{itemize} We take the sentence ``The weather of today is not as pleasant as the weather of yesterday.'' as an example to show our unknown word processing method, which is shown in Figure 3. If we do not discriminate between the unknown words and assign different unknown words the same token $\langle unk \rangle$, it would be impossible to know which exact word $\langle unk \rangle$ represents in the real test. However, when using our proposed unknown word processing method, if the model predicts a $\langle unkX \rangle$ as the answer, we can simply scan through the original document, identify its position according to its unknown word number $X$, and replace the $\langle unkX \rangle$ with the real word. For example, in Figure \ref{unk-proc}, if we adopt the original unknown word processing method, we cannot know whether the $\langle unk \rangle$ is the word ``weather'' or ``pleasant''. However, when using our approach, if the model predicts an answer as $\langle unk1 \rangle$, we can tell from the original text that $\langle unk1 \rangle$ represents the word ``weather''. \section{Experiments} \subsection{Data} In our experiments, we choose a selection of public news data to generate large-scale pseudo training data for pre-training our neural network model (pre-training step)\footnote{The news data is available at \url{http://www.sogou.com/labs/dl/cs.html}}. In the adaptation step, we used the official dataset OntoNotes Release 5.0\footnote{\url{http://catalog.ldc.upenn.edu/LDC2013T19}}, which is provided by the CoNLL-2012 shared task, to carry out our experiments. The CoNLL-2012 shared task dataset consists of three parts: a training set, a development set and a test set. The datasets are drawn from 6 different domains, namely Broadcast News (BN), Newswires (NW), Broadcast Conversations (BC), Telephone Conversations (TC), Web Blogs (WB), and Magazines (MZ). We closely follow the experimental settings of previous work \citep{kong2010,chen2014,chen2015,chen2016}, where the training set is used for training and the development set for testing, because only the training and development sets are annotated with ZPs. The statistics of the training and testing data are shown in Tables 1 and 2, respectively. \subsection{Neural Network Setups} Training details of our neural network models are as follows. \begin{itemize} \item Embedding: We use a randomly initialized embedding matrix with a uniform distribution in the interval [-0.1,0.1], and set the number of units to 256. 
No pre-trained word embeddings are used. \item Hidden Layer: We use a GRU with 256 units, and initialize the internal matrices with random orthogonal matrices \cite{saxe2013exact}. As the GRU still suffers from the exploding gradient problem, we set the gradient clipping threshold to 10. \item Vocabulary: As the whole vocabulary is very large (over 800K), we use a shortlist of 100K words according to word frequency, and unknown words are mapped to 20 $\langle unkX \rangle$ tokens using the proposed method. \item Optimization: We used the ADAM update rule \cite{kingma2014adam} with an initial learning rate of 0.001, and used negative log-likelihood as the training objective. The batch size is set to 32. \end{itemize} All models are trained on a Tesla K40 GPU. Our model is implemented with Theano \cite{theano2016} and Keras \cite{chollet2015keras}. \subsection{Experimental results} As in previous research on zero pronoun resolution, we evaluate system performance in terms of F-score (F). We focus on the AZP resolution process, where we assume that gold AZPs and gold parse trees are given\footnote{All gold information is provided by the CoNLL-2012 shared task dataset}. The same experimental setting is used in \cite{chen2014,chen2015,chen2016}. The overall results are shown in Table~\ref{result}, where the performance for each domain is listed in detail and the overall performance is shown in the last column. \subsubsection*{$\bullet$~~ Overall Performance} We employ four Chinese ZP resolution baseline systems on the OntoNotes 5.0 dataset. As we can see, our model significantly outperforms the previous state-of-the-art system \cite{chen2016} by 3.1\% in overall F-score, and substantially outperforms the other systems by a large margin. When observing the performance across domains, our approach also gives relatively consistent improvements, except for BN and TC, where there is a slight drop. All these results confirm that our proposed approach is effective and achieves significant improvements in AZP resolution. In our quantitative analysis, we investigated the reasons for the declines in the BN and TC domains. A primary observation is that the word distributions in these domains are fairly different from the others. The average document lengths of BN and TC are considerably longer than those of the other domains, which suggests a higher chance of encountering unknown words and adds difficulty to model training. We also found that in the BN and TC domains, the texts are often in spoken form, which means that there are many irregular expressions in the context. Such expressions add noise to the model, and it is difficult for the model to extract useful information from these contexts. These phenomena indicate that further improvements could be obtained by filtering stop words in contexts or increasing the size of the task-specific data, which we leave for future work. \subsubsection*{$\bullet$~~ Effect of UNK processing} As we mentioned in the previous section, traditional unknown word replacement methods are vulnerable in real-world testing. To alleviate this issue, we proposed the UNK processing mechanism to recover UNK tokens to their real words. In Table \ref{unk-result}, we compare the performance with and without the proposed UNK processing, to show whether the proposed UNK processing method is effective. 
As we can see, by applying our UNK processing mechanism, the model does learn positional features for these low-frequency words, bringing an improvement of over 3\% in F-score, which indicates the effectiveness of our UNK processing approach. \subsubsection*{$\bullet$~~ Effect of Domain Adaptation} We also tested whether our domain adaptation method is effective. In these experiments, we used three different types of training data: only pseudo training data, only task-specific data, and our adaptation method, i.e. using pseudo training data in the pre-training step and task-specific data in the domain adaptation step. The results are given in Table \ref{da-result}. As we can see, using either the pseudo training data or the task-specific data alone does not give satisfactory results. By adopting our domain adaptation method, the model gives significant improvements over the other models, which demonstrates the effectiveness of our proposed two-step training approach. An intuition behind this phenomenon is that, though the pseudo training data is large enough to train reliable model parameters, there is still a gap to the real zero pronoun resolution task. On the contrary, though the task-specific training data is exactly the same type as the real test data, its quantity is not enough to train a reasonable model (for example, the word embeddings). So it is better to make use of both and take full advantage of them. However, as the original task-specific data is fairly small compared to the pseudo training data, we also wondered whether the large-scale pseudo training data is only providing rich word embedding information. So we used the large-scale pseudo training data to train embeddings with the GloVe toolkit \citep{pennington-etal-2014}, and initialized the word embeddings in the ``only task-specific data'' system with them. From the result we can see that the pseudo training data provides more than word embedding information: though we used GloVe embeddings in the ``only task-specific data'' system, it still cannot outperform the system that uses domain adaptation, which supports our claim. \section{Error Analysis} To better evaluate our proposed approach, we performed a qualitative analysis of errors, which revealed two major types of errors, as discussed below. \subsection{Effect of Unknown Words} Our approach does not do well when there are many $\langle unk \rangle$s in the context of a ZP, especially when the $\langle unk \rangle$s appear near the ZP. An example is given below, where words marked with $\#$ are regarded as $\langle unk \rangle$s by our model. \begin{quote}\small {\bf $\phi$ }\ 登上$^\#$\ 太平山$^\#$ \ 顶 \ , \ 将 \ 香港岛$^\#$ \ 和 \ 维多利亚港$^\#$ \ 的 \ 美景 \ 尽收眼底\ 。\\ {\bf $\phi$ } Successfully climbed$^\#$ the peak of [Taiping Mountain]$^\#$, to have a panoramic view of the beauty of [Hong Kong Island]$^\#$ and [Victoria Harbour]$^\#$. \end{quote} In this case, the words ``登上/climbed'' and ``太平山/Taiping Mountain'' that appear immediately after the ZP ``$\phi$'' are all regarded as $\langle unk \rangle$s by our model. As we model the sequence of words with an RNN, the $\langle unk \rangle$s make it more difficult for the model to capture the semantic information of the sentence, which in turn affects the overall performance. This is especially true for words near the ZP, which play an important role in modeling context information for the ZP. Looking at the word ``顶/peak'', it is hard to comprehend the context information due to the several surrounding $\langle unk \rangle$s. 
Though our proposed unknown word processing method is effective in the empirical evaluation, we believe a more advanced method for unknown word processing would be of great help in improving comprehension of the context. \subsection{Long Distance Antecedents} Our model also makes incorrect decisions when the correct antecedent of a ZP is far away. As our model chooses the answer from words in the context, if there are many words between the ZP and its antecedent, more noise is introduced, which adds difficulty in choosing the right answer. For example: \begin{quote}\small 我\ 帮\ 不\ 了\ 那个\ 人\ ...\ ...\ 那\ 天\ 结束\ 后\ {\bf $\phi$ }\ 回到\ 家中\ 。\\ I can't help that guy ... ... After that day, {\bf $\phi$ } return home. \end{quote} In this case, the correct antecedent of the ZP ``$\phi$'' is the NP candidate ``我/I''. Looking at the context, we observe that there are over 30 words between the ZP and its antecedent. Although our model does not intend to fill the ZP gap only with words near the ZP, since most antecedents appear just a few words before their ZPs, our model prefers nearer words as antecedents. Hence, when there are many words between a ZP and its nearest antecedent, our model can sometimes make wrong decisions. To correctly handle such cases, our model should learn how to filter out useless words and better capture long-term dependencies. \section{Related Work} \subsection{Zero pronoun resolution} For Chinese zero pronoun (ZP) resolution, early studies employed heuristic rules. \citet{converse2006} proposes a rule-based method to resolve zero pronouns by utilizing the Hobbs algorithm \cite{hobbs1978resolving} on the CTB documents. Supervised approaches to this task have since been extensively explored. \citet{zhao2007} first present a supervised machine learning approach to the identification and resolution of Chinese ZPs. \citet{kong2010} develop a tree-kernel based approach for Chinese ZP resolution. More recently, unsupervised approaches have been proposed. \citet{chen2014} develop an unsupervised language-independent approach, utilizing integer linear programming and ten overt pronouns. \citet{chen2015} propose an end-to-end unsupervised probabilistic model for Chinese ZP resolution, using a salience model to capture discourse information. There has also been much work on ZP resolution for other languages. These studies can be divided into rule-based and supervised machine learning approaches. \citet{ferrandez2000} proposed a set of hand-crafted rules for Spanish ZP resolution. More recently, supervised approaches have been exploited for ZP resolution in Korean \cite{han2006korean} and Japanese \cite{isozaki2003japanese,iida2006,iida2007zero,sasano2011}. \citet{iida2011cross} developed a cross-lingual approach for Japanese and Italian ZPs, where an ILP-based model was employed for zero anaphora detection and resolution. In sum, most recent research on ZP resolution relies on supervised approaches, which means that performance depends heavily on large-scale annotated data. Even the unsupervised approach of \citet{chen2014} utilizes a supervised pronoun resolver to resolve ZPs. Therefore, the advantage of our proposed approach is obvious: we are able to generate large-scale pseudo training data for ZP resolution, and we can also benefit from the task-specific data for fine-tuning via the proposed two-step training approach. 
\subsection{Cloze-style Reading Comprehension} Our neural network model is mainly motivated by recent research on cloze-style reading comprehension tasks, which aim to predict a one-word answer given a document and query. These models can be seen as general models for mining the relations between a document and a query, so it is promising to apply them to specific domains. A representative work on cloze-style reading comprehension is that of \citet{hermann-etal-2015}. They proposed a methodology for obtaining large quantities of $\langle \mathcal{D}, \mathcal{Q}, \mathcal{A} \rangle$ triples. Using this method, a large amount of training data can be obtained without much human intervention, making it possible to train a reliable neural network. They used attention-based neural networks for this task. Evaluation on the CNN/DailyMail datasets showed that their approach is much more effective than traditional baseline systems. While our work is similar to that of \citet{hermann-etal-2015}, there are several differences, as follows. First, though we both utilize large-scale corpora, they require that each document be accompanied by a brief summary, which is not available for most documents and may place an obstacle in the way of generating unlimited training data. In our work, we do not assume any prerequisites of the training data and directly extract queries from the document, which makes it easy to generate large-scale training data. Second, their work mainly focuses on reading comprehension in the general domain. We are able to exploit large-scale training data to solve problems in a specific domain, and we propose a two-step training method which can easily be adapted to other domains as well. \section{Conclusion} In this study, we propose an effective way to generate and exploit large-scale pseudo training data for the zero pronoun resolution task. The main idea behind our approach is to automatically generate large-scale pseudo training data and then utilize an attention-based neural network model to resolve zero pronouns. For training, a two-step approach is employed, i.e. a {\bf pre-training} step and an {\bf adaptation} step, which can also easily be applied to other tasks. The experimental results on the OntoNotes 5.0 corpus are encouraging, showing that the proposed model and accompanying approaches significantly outperform the state-of-the-art systems. Future work will be carried out in two main directions: First, as the experimental results show that unknown word processing is a critical part of comprehending context, we will explore more effective ways to handle the UNK issue. Second, we will develop other neural network architectures that are more appropriate for the zero pronoun resolution task. \section*{Acknowledgements} We would like to thank the anonymous reviewers for their thorough reviews and thoughtful comments that helped improve our paper. This work was supported by the National 863 Leading Technology Research Project via grant 2015AA015407, Key Projects of National Natural Science Foundation of China via grant 61632011, and National Natural Science Youth Foundation of China via grant 61502120. \bibliographystyle{acl_natbib} \end{CJK*} \end{document}
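To make the numbered-UNK scheme from the Unknown Words Processing subsection above concrete, here is a minimal Python sketch: distinct out-of-vocabulary tokens are renumbered <unk1>, <unk2>, ... by order of first occurrence within a sample, and a reverse table recovers the original word when the model predicts some <unkX>. The toy vocabulary and function names are illustrative assumptions, not taken from the paper's released code.

def number_unknowns(tokens, vocab):
    """Map each distinct out-of-vocabulary token to <unk1>, <unk2>, ...
    in order of first occurrence; the mapping is valid only within this sample."""
    mapping, relabelled = {}, []
    for tok in tokens:
        if tok in vocab:
            relabelled.append(tok)
        else:
            if tok not in mapping:
                mapping[tok] = "<unk%d>" % (len(mapping) + 1)
            relabelled.append(mapping[tok])
    # Reverse table: lets us replace a predicted <unkX> with the real word.
    reverse = {marker: word for word, marker in mapping.items()}
    return relabelled, reverse

sentence = "The weather of today is not as pleasant as the weather of yesterday .".split()
vocab = {"The", "the", "of", "today", "is", "not", "as", "yesterday", "."}
tokens, reverse = number_unknowns(sentence, vocab)
print(tokens)             # both occurrences of 'weather' become <unk1>, 'pleasant' becomes <unk2>
print(reverse["<unk1>"])  # 'weather'

With this mapping, a prediction of <unk1> is unambiguous within the sample, mirroring the paper's worked example with the word "weather".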
From Characters to Words to in Between: Do We Capture Morphology?
1704.08352
Table 10: Average perplexities of words that occur after reduplicated words in the test set.
[ "Model", "all", "frequent", "rare" ]
[ [ "word", "101.71", "91.71", "156.98" ], [ "characters", "[BOLD] 99.21", "[BOLD] 91.35", "[BOLD] 137.42" ], [ "BPE", "117.2", "108.86", "156.81" ] ]
In contrast with the overall results, the BPE bi-LSTM model has the worst perplexities, while the character bi-LSTM has the best, suggesting that character models are more effective for reduplication.
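As a rough illustration of the measurement behind Table 10, the sketch below computes the average perplexity of the word that follows a reduplicated token, approximating reduplication by the presence of a hyphen (e.g. anak-anak), as the paper does. The next_word_logprob callable is a placeholder for whichever language model is being evaluated; it is not part of the original code.

import math

def reduplication_targeted_perplexity(sentences, next_word_logprob):
    """Perplexity restricted to words that follow a hyphenated (reduplicated) token.

    sentences: list of token lists from the test set.
    next_word_logprob(history, word): assumed to return the LM's natural-log
    probability of `word` given the preceding tokens `history`.
    """
    log_probs = []
    for tokens in sentences:
        for i in range(len(tokens) - 1):
            if "-" in tokens[i]:  # heuristic for reduplication, e.g. "buah-buahan"
                log_probs.append(next_word_logprob(tokens[: i + 1], tokens[i + 1]))
    # Perplexity is the exponential of the mean negative log probability.
    return math.exp(-sum(log_probs) / len(log_probs))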
\documentclass[11pt,a4paper]{article} \usepackage[hyperref]{acl2017} \newcommand{\specialcell}[2][c]{ \begin{tabular}[#1]{@{}c@{}}#2\end{tabular}} \newcommand{\hlt}[1]{{\sethlcolor{pink}\hl{#1}}} \aclfinalcopy % Uncomment this line for the final submission \def\aclpaperid{477} % Enter the acl Paper ID here \newcommand\BibTeX{B{\sc ib}\TeX} \newcommand\anonymize[1]{} \title{From Characters to Words to in Between: Do We Capture Morphology?} \author{Clara Vania \and Adam Lopez \\ Institute for Language, Cognition and Computation \\ School of Informatics \\ University of Edinburgh \\ {\tt c.vania@ed.ac.uk, alopez@inf.ed.ac.uk}} \date{} \begin{document} \maketitle \begin{abstract} Words can be represented by composing the representations of subword units such as word segments, characters, and/or character n-grams. While such representations are effective and may capture the morphological regularities of words, they have not been systematically compared, and it is not understood how they interact with different morphological typologies. On a language modeling task, we present experiments that systematically vary (1) the basic unit of representation, (2) the composition of these representations, and (3) the morphological typology of the language modeled. Our results extend previous findings that character representations are effective across typologies, and we find that a previously unstudied combination of character trigram representations composed with bi-LSTMs outperforms most others. But we also find room for improvement: none of the character-level models match the predictive accuracy of a model with access to true morphological analyses, even when learned from an order of magnitude more data. \end{abstract} \section{Introduction} \label{sec:intro} Continuous representations of words learned by neural networks are central to many NLP tasks \cite{cho-EtAl:2014:EMNLP2014,chen-manning:2014:EMNLP2014,dyer-EtAl:2015:ACL-IJCNLP}. However, directly mapping a finite set of word types to a continuous representation has well-known limitations. First, it makes a closed vocabulary assumption, enabling only generic out-of-vocabulary handling. Second, it cannot exploit systematic functional relationships in learning. For example, \emph{cat} and \emph{cats} stand in the same relationship as \emph{dog} and \emph{dogs}. While this relationship might be discovered for these specific frequent words, it does not help us learn that the same relationship also holds for the much rarer words \emph{sloth} and \emph{sloths}. These functional relationships reflect the fact that words are composed from smaller units of meaning, or morphemes. For instance, \emph{cats} consists of two morphemes, \emph{cat} and \emph{-s}, with the latter shared by the words \emph{dogs} and \emph{tarsiers}. Modeling this effect is crucial for languages with rich morphology, where vocabulary sizes are larger, many more words are rare, and many more such functional relationships exist. Hence, some models produce word representations as a function of subword units obtained from morphological segmentation or analysis \cite{luong2013,DBLP:journals/corr/BothaB14,cotterell-schutze:2015:NAACL-HLT}. A downside of these models is that they depend on morphological segmenters or analyzers. Morphemes typically have similar orthographic representations across words. For example, the morpheme \emph{-s} is realized as \emph{-es} in \emph{finches}. 
Since this variation is limited, the general relationship between morphology and orthography can be exploited by composing the representations of characters \cite{ling2015,kim2015}, character n-grams \cite{sperr-niehues-waibel:2013:CVSC,wieting-EtAl:2016:EMNLP2016,DBLP:journals/corr/BojanowskiGJM16,DBLP:journals/corr/BothaB14}, bytes \cite{plank-sogaard-goldberg:2016:P16-2,gillick-EtAl:2016:N16-1}, or combinations thereof \cite{santos14,qiu-EtAl:2014:Coling1}. These models are compact, can represent rare and unknown words, and do not require morphological analyzers. They raise a provocative question: Does NLP benefit from models of morphology, or can they be replaced entirely by models of characters? The relative merits of word, subword, and character-level models are not fully understood because each new model has been compared on different tasks and datasets, and often compared against word-level models. A number of questions remain open: \begin{enumerate} \item How do representations based on morphemes compare with those based on characters? \item What is the best way to compose subword representations? \item Do character-level models capture morphology in terms of predictive utility? \item How do different representations interact with languages of different morphological typologies? \end{enumerate} The last question is raised by \newcite{Bender:13}: languages are typologically diverse, and the behavior of a model on one language may not generalize to others. Character-level models implicitly assume concatenative morphology, but many widely-spoken languages feature non-concatenative morphology, and it is unclear how such models will behave on these languages. To answer these questions, we performed a systematic comparison across different models for the simple and ubiquitous task of language modeling. We present experiments that vary (1) the type of subword unit; (2) the composition function; and (3) morphological typology. To understand the extent to which character-level models capture true morphological regularities, we present oracle experiments using human morphological annotations instead of automatic morphological segments. Our results show that: \begin{enumerate} \item For most languages, character-level representations outperform the standard word representations. Most interestingly, a previously unstudied combination of character trigrams composed with bi-LSTMs performs best on the majority of languages. \item Bi-LSTMs and CNNs are more effective composition functions than addition. \item Character-level models learn functional relationships between orthographically similar words, but don't (yet) match the predictive accuracy of models with access to true morphological analyses. \item Character-level models are effective across a range of morphological typologies, but orthography influences their effectiveness. \end{enumerate} \section{Morphological Typology} \label{sec:morph} A \textbf{morpheme} is the smallest unit of meaning in a word. Some morphemes express core meaning (\textbf{roots}), while others express one or more dependent \textbf{features} of the core meaning, such as person, gender, or aspect. A \textbf{morphological analysis} identifies the lemma and features of a word. A \textbf{morph} is the surface realization of a morpheme \cite{Morley2000-MORSIF}, which may vary from word to word. These distinctions are shown in Table \ref{tab:ex-morphs}. Morphological typology classifies languages based on the processes by which morphemes are composed to form words. 
While most languages will exhibit a variety of such processes, for any given language, some processes are much more frequent than others, and we will broadly identify our experimental languages with these processes. When morphemes are combined sequentially, the morphology is \textbf{concatenative}. However, morphemes can also be composed by \textbf{non-concatenative} processes. We consider four broad categories of both concatenative and non-concatenative processes in our experiments. \begin{asparadesc} \item[Fusional languages] realize multiple features in a single concatenated morpheme. For example, English verbs can express number, person, and tense in a single morpheme: \begin{center} \textit{wanted} (English) \\ \textit{want} + \textit{ed} \\ \textit{want} + \texttt{VB+1st+SG+Past} \end{center} \item[Agglutinative languages] assign one feature per morpheme. Morphemes are concatenated to form a word and the morpheme boundaries are clear. For example \cite{Haspelmath02}: \begin{center} \textit{okursam} (Turkish) \\ \textit{oku+r+sa+m} \\ ``read''+\texttt{AOR+COND+1SG} \\ \end{center} \item[Root and Pattern Morphology] forms words by inserting consonants and vowels of dependent morphemes into a consonantal root based on a given pattern. For example, the Arabic root \textit{ktb} (``write") produces \cite{Roark2007}: \begin{center} \textit{\textbf{k}a\textbf{t}a\textbf{b}} ``wrote" (Arabic) \\ \textit{ta\textbf{k}aa\textbf{t}a\textbf{b}} ``wrote to each other" (Arabic) \end{center} \item[Reduplication] is a process where a word form is produced by repeating part or all of the root to express new features. For example: \begin{center} \textit{anak} ``child" (Indonesian) \\ \textit{anak-anak} ``children" (Indonesian) \\ \textit{buah} ``fruit" (Indonesian) \\ \textit{buah-buahan} ``various fruits" (Indonesian) \end{center} \end{asparadesc} \section{Representation Models} \label{sec:rep-model} We compare ten different models, varying subword units and composition functions that have commonly been used in recent work, but evaluated on various different tasks (Table \ref{tab:prev-work}). Given word $w$, we compute its representation $\textbf{w}$ as: \begin{align} \label{eq:word-emb} \textbf{w} = f(\textbf{W}_s, \sigma(w)) \end{align} where $\sigma$ is a deterministic function that returns a sequence of subword units; $\textbf{W}_s$ is a parameter matrix of representations for the vocabulary of subword units; and $f$ is a composition function which takes $\sigma(w)$ and $\textbf{W}_s$ as input and returns $\textbf{w}$. All of the representations that we consider take this form, varying only in $f$ and $\sigma$. \subsection{Subword Units} We consider four variants of $\sigma$ in Equation~\ref{eq:word-emb}, each returning a different type of subword unit: character, character trigram, or one of two types of morph. Morphs are obtained from Morfessor \cite{smit-EtAl:2014:Demos} or a word segmentation based on Byte Pair Encoding (BPE; \newcite{Gage:1994:NAD:177910.177914}), which has been shown to be effective for handling rare words in neural machine translation \cite{DBLP:journals/corr/SennrichHB15}. BPE works by iteratively replacing frequent pairs of characters with a single unused character. For Morfessor, we use default parameters while for BPE we set the number of merge operations to 10,000.\footnote{BPE takes a single parameter: the number of merge operations. We tried different parameter values (1k, 10k, 100k) and manually examined the resulting segmentation on the English dataset. 
Qualitatively, 10k gave the most plausible segmentation and we used this setting across all languages.} When we segment into character trigrams, we consider all trigrams in the word, including those covering notional beginning and end of word characters, as in \newcite{sperr-niehues-waibel:2013:CVSC}. Example output of $\sigma$ is shown in Table \ref{tab:rep-unit}. \subsection{Composition Functions} We use three variants of $f$ in Eq. \ref{eq:word-emb}. The first constructs the representation $\mathbf{w}$ of word $w$ by adding the representations of its subwords $s_1, \dots, s_n = \sigma(w)$, where the representation of $s_i$ is vector $\mathbf{s}_i$. \begin{align} \textbf{w} = \sum\limits_{i=1}^n \textbf{s}_i \end{align} The only subword unit that we don't compose by addition is characters, since this will produce the same representation for many different words. Our second composition function is a bidirectional long-short-term memory (\textbf{bi-LSTM}), which we adapt based on its use in the character-level model of \newcite{ling2015} and its widespread use in NLP generally. Given $\textbf{s}_i$ and the previous LSTM hidden state $\textbf{h}_{i-1}$, an LSTM \cite{Hochreiter:1997:LSTM} computes the following outputs for the subword at position $i$: \begin{align} \textbf{h}_i = LSTM(\textbf{s}_i, \textbf{h}_{i-1}) \\ \hat{s}_{i+1} = g(\textbf{V}^T \cdot \textbf{h}_i) \end{align} where $\hat{s}_{i+1}$ is the predicted target subword, $g$ is the softmax function and $\textbf{V}$ is a weight matrix. A bi-LSTM \cite{Graves:2005:BLN:1986079.1986220} combines the final state of an LSTM over the input sequence with one over the reversed input sequence. Given the hidden state produced from the final input of the forward LSTM, $\textbf{h}_n^{fw}$ and the hidden state produced from the final input of the backward LSTM $\textbf{h}_0^{bw}$, we compute the word representation as: \begin{align} \textbf{w}_t = \textbf{W}_f \cdot \textbf{h}_n^{fw} + \textbf{W}_b \cdot \textbf{h}_0^{bw} + \textbf{b} \end{align} where $\textbf{W}_f$, $\textbf{W}_b$, and $\textbf{b}$ are parameter matrices and $\textbf{h}_n^{fw}$ and $\textbf{h}_0^{bw}$ are forward and backward LSTM states, respectively. The third composition function is a convolutional neural network (\textbf{CNN}) with highway layers, as in \newcite{kim2015}. Let $c_1,\dots,c_k$ be the sequence of characters of word $w$. The character embedding matrix is $\mathbf{C} \in \mathbb{R}^{d \times k}$, where the $i$-th column corresponds to the embeddings of $c_i$. We first apply a narrow convolution between $\mathbf{C}$ and a filter $\mathbf{F} \in \mathbb{R}^{d \times n}$ of width $n$ to obtain a feature map $\mathbf{f} \in \mathbf{R}^{k-n+1}$. In particular, the computation of the $j$-th element of $\mathbf{f}$ is defined as \begin{align} \mathbf{f}[j] = tanh(\langle \mathbf{C}[*,j:j+n-1], \mathbf{F} \rangle + b) \end{align} where $\langle A,B \rangle = \mathtt{Tr}(\mathbf{AB}^T)$ is the Frobenius inner product and $b$ is a bias. The CNN model applies filters of varying width, representing features of character n-grams. We then calculate the max-over-time of each feature map. \begin{align} y_j = \max_j \mathbf{f}[j] \end{align} and concatenate them to derive the word representation $\textbf{w}_t = [y_1,\dots,y_m]$, where $m$ is the number of filters applied. Highway layers allow some dimensions of $\textbf{w}_t$ to be carried or transformed. Since it can learn character n-grams directly, we only use the CNN with character input. 
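As a concrete illustration of the subword units and the additive composition just described, the short sketch below extracts character trigrams (including those spanning notional beginning- and end-of-word markers) and sums their embeddings into a word vector. The boundary symbols and the toy embedding table are assumptions made for the example; the 200-dimensional vectors and the uniform [-0.1, 0.1] initialization follow the training setup described later in the paper.

import numpy as np

def char_trigrams(word, bow="^", eow="$"):
    """All character trigrams of `word`, padded with notional
    beginning-of-word and end-of-word markers."""
    padded = bow + word + eow
    return [padded[i:i + 3] for i in range(len(padded) - 2)]

def compose_by_addition(units, embeddings, dim=200):
    """Additive composition: the word vector is the sum of its subword vectors."""
    return sum((embeddings.get(u, np.zeros(dim)) for u in units), np.zeros(dim))

rng = np.random.RandomState(0)
units = char_trigrams("wanted")          # ['^wa', 'wan', 'ant', 'nte', 'ted', 'ed$']
table = {u: rng.uniform(-0.1, 0.1, 200) for u in units}
w = compose_by_addition(units, table)    # a single 200-dimensional word representation

The bi-LSTM and CNN composition functions described above would consume the same sequence of subword vectors instead of summing them; only the addition case is shown here because it is the simplest to state in a few lines.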
\subsection{Language Model} We use language models (LMs) because they are simple and fundamental to many NLP applications. Given a sequence of text $s=w_1,\dots,w_T$, our LM computes the probability of $s$ as: \begin{align}\label{eq:lm} P(w_1,\dots,w_T) = \prod\limits_{t=1}^T P(y_t|w_1,\dots,w_{t-1}) \end{align} where $y_t = w_t$ if $w_t$ is in the output vocabulary and $y_t = $ \texttt{UNK} otherwise. Our language model is an LSTM variant of the recurrent neural network language model (RNN LM) \cite{Mikolov2010}. At time step $t$, it receives input $w_t$ and predicts $y_{t+1}$. Using Eq. \ref{eq:word-emb}, it first computes representation $\textbf{w}_t$ of $w_t$. Given this representation and previous state $\textbf{h}_{t-1}$, it produces a new state $\textbf{h}_t$ and predicts $y_{t+1}$: \begin{align} \textbf{h}_t = LSTM(\textbf{w}_t, \textbf{h}_{t-1}) \\ \hat{y}_{t+1} = g(\textbf{V}^T \cdot \textbf{h}_t) \end{align} where $g$ is a softmax function over the vocabulary yielding the probability in Equation~\ref{eq:lm}. Note that this design means that we can \emph{predict} only words from a finite output vocabulary, so our models differ only in their representation of context words. This design makes it possible to compare language models using perplexity, since they have the same event space, though open vocabulary word prediction is an interesting direction for future work. The complete architecture of our system is shown in Figure~\ref{fig:lstm-lm}, showing segmentation function $\sigma$ and composition function $f$ from Equation~\ref{eq:word-emb}. \section{Experiments} We perform experiments on ten languages (Table \ref{tab:corpus-stat}). We use datasets from \newcite{ling2015} for English and Turkish. For Czech and Russian we use Universal Dependencies (UD) v1.3 \cite{UDT-nivre}. For other languages, we use preprocessed Wikipedia data \cite{polyglot:2013:ACL-CoNLL}.\footnote{The Arabic and Hebrew datasets are unvocalized. Japanese mixes Kanji, Katakana, Hiragana, and Latin characters (for foreign words). Hence, a Japanese character can correspond to a character, syllable, or word. The preprocessed dataset is already word-segmented.} For each dataset, we use approximately 1.2M tokens to train, and approximately 150K tokens each for development and testing. Preprocessing involves lowercasing (except for character models) and removing hyperlinks. To ensure that we compared models and not implementations, we reimplemented all models in a single framework using Tensorflow \cite{tensorflow2015-whitepaper}.\footnote{Our implementation of these models can be found at https://github.com/claravania/subword-lstm-lm} We use a common setup for all experiments based on that of \newcite{ling2015}, \newcite{kim2015}, and \newcite{miyamoto-cho:2016:EMNLP2016}. In preliminary experiments, we confirmed that our models produced similar patterns of perplexities for the reimplemented word and character LSTM models of \newcite{ling2015}. Even following detailed discussion with Ling (p.c.), we were unable to reproduce their perplexities exactly---our English reimplementation gives lower perplexities; our Turkish higher---but we do reproduce their general result that character bi-LSTMs outperform word models. We suspect that different preprocessing and stochastic learning explain the differences in perplexities. 
Our final model with bi-LSTM composition follows \newcite{miyamoto-cho:2016:EMNLP2016}, as it gives us the same perplexity results in our preliminary experiments on the Penn Treebank dataset \cite{Marcus:1993:BLA:972470.972475}, preprocessed by \newcite{Mikolov2010}. \subsection{Training and Evaluation} Our LSTM-LM uses two hidden layers with 200 hidden units, and the representation vectors for words, characters, and morphs all have dimension 200. All parameters are initialized uniformly at random from -0.1 to 0.1, trained by stochastic gradient descent with a mini-batch size of 32 and time steps of 20, for 50 epochs. To avoid overfitting, we apply dropout with probability 0.5 on the input-to-hidden layer and all of the LSTM cells (including those in the bi-LSTM, if used). For all models which do not use bi-LSTM composition, we start with a learning rate of 1.0 and decrease it by half if the validation perplexity does not decrease by 0.1 after 3 epochs. For models with bi-LSTM composition, we use a constant learning rate of 0.2 and stop training when validation perplexity does not improve after 3 epochs. For the character CNN model, we use the same settings as the \emph{small model} of \newcite{kim2015}. To make our results comparable to \newcite{ling2015}, for each language we limit the output vocabulary to the most frequent 5,000 training words plus an unknown word token. To learn to predict unknown words, we follow \newcite{ling2015}: in training, words that occur only once are stochastically replaced with the unknown token with probability 0.5. To evaluate the models, we compute perplexity on the test data. \section{Results and Analysis} Table \ref{tab:lm-results} presents our main results. In six of ten languages, character-trigram representations composed with bi-LSTMs achieve the lowest perplexities. As far as we know, this particular model has not been tested before, though it is similar to (but more general than) the model of \newcite{sperr-niehues-waibel:2013:CVSC}. We can see that the performance of characters, character trigrams, and BPE is very competitive. Composition by bi-LSTMs or CNN is more effective than addition, except for Turkish. We also observe that BPE always outperforms Morfessor, even for the agglutinative languages. We now turn to a more detailed analysis by morphological typology. \begin{asparadesc} \item[Fusional languages.] For these languages, character trigrams composed with bi-LSTMs outperformed all other models, particularly for Czech and Russian (up to 20\%), which is unsurprising since both are morphologically richer than English. \item[Agglutinative languages.] We observe different results for each language. For Finnish, character trigrams composed with bi-LSTMs achieve the best perplexity. Surprisingly, for Turkish, character trigrams composed via addition are best, and addition also performs quite well for other representations, which is potentially useful since the addition function is simpler and faster than bi-LSTMs. We suspect that this is due to the fact that Turkish morphemes are reasonably short, hence well-approximated by character trigrams. For Japanese, the improvements from character models are more modest than in other languages. \item[Root and Pattern.] For these languages, character trigrams composed with bi-LSTMs also achieve the best perplexity. 
We had wondered whether CNNs would be more effective for root-and-pattern morphology, but since these data are unvocalized, it is more likely that non-concatenative effects are minimized, though we do still find morphological variants with consonantal inflections that behave more like concatenation. For example, \textit{maktab} (root:\textit{ktb}) is written as \textit{mktb}. We suspect this makes character trigrams quite effective since they match the tri-consonantal root patterns among words which share the same root. \item[Reduplication.] For Indonesian, BPE morphs composed with bi-LSTMs model obtain the best perplexity. For Malay, the character CNN outperforms other models. However, these improvements are small compared to other languages. This likely reflects that Indonesian and Malay are only moderately inflected, where inflection involves both concatenative and non-concatenative processes. \end{asparadesc} \setcounter{topnumber}{3} \subsection{Effects of Morphological Analysis} In the experiments above, we used unsupervised morphological \emph{segmentation} as a proxy for morphological \emph{analysis} (Table \ref{tab:rep-unit}). However, as discussed in Section~\ref{sec:morph}, this is quite approximate, so it is natural to wonder what would happen if we had the true morphological analysis. If character-level models are powerful enough to capture the effects of morphology, then they should have the predictive accuracy of a model with access to this analysis. To find out, we conducted an oracle experiment using the human-annotated morphological analyses provided in the UD datasets for Czech and Russian, the only languages in our set for which these analyses were available. In these experiments we treat the lemma and each morphological feature as a subword unit. The results (Table~\ref{tab:oracle-res}) show that bi-LSTM composition of these representations outperforms all other models for both languages. These results demonstrate that neither character representations nor unsupervised segmentation is a perfect replacement for manual morphological analysis, at least in terms of predictive accuracy. In light of character-level results, they imply that current unsupervised morphological analyzers are poor substitutes for real morphological analysis. However, we can obtain much more unannotated than annotated data, and we might guess that the character-level models would outperform those based on morphological analyses if trained on larger data. To test this, we ran experiments that varied the training data size on three representation models: word, character-trigram bi-LSTM, and character CNN. Since we want to see how much training data is needed to reach perplexity obtained using annotated data, we use the same output vocabulary derived from the original training. While this makes it possible to compare perplexities across models, it is unfavorable to the models trained on larger data, which may focus on other words. This is a limitation of our experimental setup, but does allow us to draw some tentative conclusions. As shown in Table \ref{tab:corpus-exp}, a character-level model trained on an order of magnitude more data still does not match the predictive accuracy of a model with access to morphological analysis. \subsection{Automatic Morphological Analysis} The oracle experiments show promising results if we have annotated data. But these annotations are expensive, so we also investigated the use of automatic morphological analysis. 
We obtained analyses for Arabic with the MADAMIRA analyzer \cite{Pasha-MADAMIRA}.\footnote{We only experimented with Arabic since MADAMIRA disambiguates words in contexts; most other analyzers we found did not do this, and would require additional work to add disambiguation.} As in the experiment using annotations, we treated each morphological feature as a subword unit. The resulting perplexities of \textbf{71.94} and \textbf{42.85} for addition and bi-LSTMs, respectively, are worse than those obtained with character trigrams (\textbf{39.87}), though the bi-LSTM result approaches the best perplexities. \subsection{Targeted Perplexity Results} \label{sec:word-perplexity} A difficulty in interpreting the results of Table~\ref{tab:lm-results} with respect to specific morphological processes is that perplexity is measured for all words. But these processes do not apply to all words, so it may be that the effects of specific morphological processes are washed out. To get a clearer picture, we measured perplexity for only specific subsets of words in our test data: specifically, given target word $w_i$, we measure the perplexity of word $w_{i+1}$. In other words, we analyze the perplexities \emph{when the inflected words of interest are in the most recent history}, exploiting the recency bias of our LSTM-LM. This is the perplexity most likely to be strongly affected by different representations, since we do not vary representations of the predicted word itself. We look at several cases: nouns and verbs in Czech and Russian, where word classes can be identified from annotations, and reduplication in Indonesian, which we can identify mostly automatically. For each analysis, we also distinguish between \textit{frequent} cases, where the inflected word occurs more than ten times in the training data, and \textit{rare} cases, where it occurs fewer than ten times. We compare only bi-LSTM models. For Czech and Russian, we again use the UD annotation to identify words of interest. The results (Table \ref{tab:word-pp-noun-verb}) show that manual morphological analysis uniformly outperforms other subword models, with an especially strong effect for Czech nouns, suggesting that other models do not capture useful predictive properties of a morphological analysis. We do however note that character trigrams achieve low perplexities in most cases, similar to the overall results (Table \ref{tab:lm-results}). We also observe that the subword models are more effective for rare words. For Indonesian, we exploit the fact that the hyphen symbol `-' typically separates the first and second occurrence of a reduplicated morpheme, as in the examples of Section~\ref{sec:morph}. We use the presence of word tokens containing hyphens to estimate the percentage of those exhibiting reduplication. As shown in Table \ref{tab:redup-stat}, the numbers are quite low. Table \ref{tab:word-pp-redup} shows results for reduplication. In contrast with the overall results, the BPE bi-LSTM model has the worst perplexities, while character bi-LSTM has the best, suggesting that these models are more effective for reduplication. Looking more closely at BPE segmentation of reduplicated words, we found that only 6 of 252 reduplicated words have a correct word segmentation, with the reduplicated morpheme often combining differently with the notional start-of-word or hyphen character. 
On the other hand, BPE correctly learns 8 out of 9 Indonesian prefixes and 4 out of 7 Indonesian suffixes.\footnote{We use Indonesian affixes listed in \newcite{Larasati2011}} This analysis supports our intuition that the improvement from BPE might come from its modeling of concatenative morphology. \subsection{Qualitative Analysis} \label{sec:qual-analysis} Table \ref{tab:topNN} presents nearest neighbors under cosine similarity for in-vocabulary, rare, and out-of-vocabulary (OOV) words.\footnote{https://radimrehurek.com/gensim/} For frequent words, standard word embeddings are clearly superior for lexical meaning. Character and morph representations tend to find words that are orthographically similar, suggesting that they are better at modeling dependent than root morphemes. The same pattern holds for rare and OOV words. We suspect that the subword models outperform words on language modeling because they exploit affixes to signal word class. We also noticed similar patterns in Japanese. We analyze reduplication by querying reduplicated words to find their nearest neighbors using the BPE bi-LSTM model. If the model were sensitive to reduplication, we would expect to see morphological variants of the query word among its nearest neighbors. However, from Table \ref{tab:redup-analysis}, this is not so. With the partially reduplicated query \textit{berlembah-lembah}, we do not find the lemma \emph{lembah}. \section{Conclusion} We presented a systematic comparison of word representation models with different levels of morphological awareness, across languages with different morphological typologies. Our results confirm previous findings that character-level models are effective for many languages, but these models do not match the predictive accuracy of a model with explicit knowledge of morphology, even after we increase the training data size tenfold. Moreover, our qualitative analysis suggests that they learn orthographic similarity of affixes, and lose the meaning of root morphemes. Although morphological analyses are available in limited quantities, our results suggest that there might be utility in semi-supervised learning from partially annotated data. Across languages with different typologies, our experiments show that the subword unit models are most effective on agglutinative languages. However, these results do not generalize to all languages, since factors such as morphology and orthography affect the utility of these representations. We plan to explore these effects in future work. \section*{Acknowledgments} Clara Vania is supported by the Indonesian Endowment Fund for Education (LPDP), the Centre for Doctoral Training in Data Science, funded by the UK EPSRC (grant EP/L016427/1), and the University of Edinburgh. We thank Sameer Bansal, Toms Bergmanis, Marco Damonte, Federico Fancellu, Sorcha Gilroy, Sharon Goldwater, Frank Keller, Mirella Lapata, Felicia Liu, Jonathan Mallinson, Joana Ribeiro, Naomi Saphra, Ida Szubert, and the anonymous reviewers for helpful discussion of this work and comments on previous drafts of the paper. \bibliographystyle{acl_natbib} \end{document}
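Since BPE segmentation figures prominently in the analysis above, here is a minimal sketch of the merge-learning loop it relies on: the most frequent adjacent symbol pair across the vocabulary is repeatedly merged into a new symbol. This is a simplified stand-in for the standard toolkit the paper uses with 10,000 merge operations; end-of-word markers and tie-breaking details are omitted.

from collections import Counter

def learn_bpe(word_freqs, num_merges):
    """Learn BPE merges from a {word: frequency} map.
    Each word starts as a sequence of characters; every iteration merges
    the most frequent adjacent symbol pair across the whole vocabulary."""
    vocab = {tuple(word): freq for word, freq in word_freqs.items()}
    merges = []
    for _ in range(num_merges):
        pair_counts = Counter()
        for symbols, freq in vocab.items():
            for pair in zip(symbols, symbols[1:]):
                pair_counts[pair] += freq
        if not pair_counts:
            break
        best = max(pair_counts, key=pair_counts.get)
        merges.append(best)
        new_vocab = {}
        for symbols, freq in vocab.items():
            merged, i = [], 0
            while i < len(symbols):
                if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == best:
                    merged.append(symbols[i] + symbols[i + 1])
                    i += 2
                else:
                    merged.append(symbols[i])
                    i += 1
            new_vocab[tuple(merged)] = freq
        vocab = new_vocab
    return merges, vocab

# Illustrative call on a toy Indonesian-flavoured vocabulary (frequencies invented).
merges, segmented = learn_bpe({"anak": 5, "anak-anak": 3, "makan": 4}, num_merges=4)

Because merges are driven purely by pair frequency within words, nothing in the procedure is aware of reduplication, which is consistent with the paper's observation that BPE rarely segments reduplicated words at the repetition boundary.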
From Characters to Words to in Between: Do We Capture Morphology?
1704.08352
Table 7: Perplexity results on the Czech development data, varying training data size. Perplexity using ~1M tokens of annotated data is 28.83.
[ "#tokens", "word", "char trigram", "char" ]
[ [ "#tokens", "word", "bi-LSTM", "CNN" ], [ "1M", "39.69", "32.34", "35.15" ], [ "2M", "37.59", "36.44", "35.58" ], [ "3M", "36.71", "35.60", "35.75" ], [ "4M", "35.89", "32.68", "35.93" ], [ "5M", "35.20", "34.80", "37.02" ], [ "10M", "35.60", "35.82", "39.09" ] ]
However, we can obtain much more unannotated than annotated data, and we might guess that the character-level models would outperform those based on morphological analyses if trained on larger data. To test this, we ran experiments that varied the training data size on three representation models: word, character-trigram bi-LSTM, and character CNN. Since we want to see how much training data is needed to reach perplexity obtained using annotated data, we use the same output vocabulary derived from the original training. While this makes it possible to compare perplexities across models, it is unfavorable to the models trained on larger data, which may focus on other words. This is a limitation of our experimental setup, but does allow us to draw some tentative conclusions.
\documentclass[11pt,a4paper]{article} \usepackage[hyperref]{acl2017} \newcommand{\specialcell}[2][c]{ \begin{tabular}[#1]{@{}c@{}}#2\end{tabular}} \newcommand{\hlt}[1]{{\sethlcolor{pink}\hl{#1}}} \aclfinalcopy % Uncomment this line for the final submission \def\aclpaperid{477} % Enter the acl Paper ID here \newcommand\BibTeX{B{\sc ib}\TeX} \newcommand\anonymize[1]{} \title{From Characters to Words to in Between: Do We Capture Morphology?} \author{Clara Vania \and Adam Lopez \\ Institute for Language, Cognition and Computation \\ School of Informatics \\ University of Edinburgh \\ {\tt c.vania@ed.ac.uk, alopez@inf.ed.ac.uk}} \date{} \begin{document} \maketitle \begin{abstract} Words can be represented by composing the representations of subword units such as word segments, characters, and/or character n-grams. While such representations are effective and may capture the morphological regularities of words, they have not been systematically compared, and it is not understood how they interact with different morphological typologies. On a language modeling task, we present experiments that systematically vary (1) the basic unit of representation, (2) the composition of these representations, and (3) the morphological typology of the language modeled. Our results extend previous findings that character representations are effective across typologies, and we find that a previously unstudied combination of character trigram representations composed with bi-LSTMs outperforms most others. But we also find room for improvement: none of the character-level models match the predictive accuracy of a model with access to true morphological analyses, even when learned from an order of magnitude more data. \end{abstract} \section{Introduction} \label{sec:intro} Continuous representations of words learned by neural networks are central to many NLP tasks \cite{cho-EtAl:2014:EMNLP2014,chen-manning:2014:EMNLP2014,dyer-EtAl:2015:ACL-IJCNLP}. However, directly mapping a finite set of word types to a continuous representation has well-known limitations. First, it makes a closed vocabulary assumption, enabling only generic out-of-vocabulary handling. Second, it cannot exploit systematic functional relationships in learning. For example, \emph{cat} and \emph{cats} stand in the same relationship as \emph{dog} and \emph{dogs}. While this relationship might be discovered for these specific frequent words, it does not help us learn that the same relationship also holds for the much rarer words \emph{sloth} and \emph{sloths}. These functional relationships reflect the fact that words are composed from smaller units of meaning, or morphemes. For instance, \emph{cats} consists of two morphemes, \emph{cat} and \emph{-s}, with the latter shared by the words \emph{dogs} and \emph{tarsiers}. Modeling this effect is crucial for languages with rich morphology, where vocabulary sizes are larger, many more words are rare, and many more such functional relationships exist. Hence, some models produce word representations as a function of subword units obtained from morphological segmentation or analysis \cite{luong2013,DBLP:journals/corr/BothaB14,cotterell-schutze:2015:NAACL-HLT}. A downside of these models is that they depend on morphological segmenters or analyzers. Morphemes typically have similar orthographic representations across words. For example, the morpheme \emph{-s} is realized as \emph{-es} in \emph{finches}. 
Since this variation is limited, the general relationship between morphology and orthography can be exploited by composing the representations of characters \cite{ling2015,kim2015}, character n-grams \cite{sperr-niehues-waibel:2013:CVSC,wieting-EtAl:2016:EMNLP2016,DBLP:journals/corr/BojanowskiGJM16,DBLP:journals/corr/BothaB14}, bytes \cite{plank-sogaard-goldberg:2016:P16-2,gillick-EtAl:2016:N16-1}, or combinations thereof \cite{santos14,qiu-EtAl:2014:Coling1}. These models are compact, can represent rare and unknown words, and do not require morphological analyzers. They raise a provocative question: Does NLP benefit from models of morphology, or can they be replaced entirely by models of characters? The relative merits of word, subword. and character-level models are not fully understood because each new model has been compared on different tasks and datasets, and often compared against word-level models. A number of questions remain open: \begin{enumerate} \item How do representations based on morphemes compare with those based on characters? \item What is the best way to compose subword representations? \item Do character-level models capture morphology in terms of predictive utility? \item How do different representations interact with languages of different morphological typologies? \end{enumerate} The last question is raised by \newcite{Bender:13}: languages are typologically diverse, and the behavior of a model on one language may not generalize to others. Character-level models implicitly assume concatenative morphology, but many widely-spoken languages feature non-concatenative morphology, and it is unclear how such models will behave on these languages. To answer these questions, we performed a systematic comparison across different models for the simple and ubiquitous task of language modeling. We present experiments that vary (1) the type of subword unit; (2) the composition function; and (3) morphological typology. To understand the extent to which character-level models capture true morphological regularities, we present oracle experiments using human morphological annotations instead of automatic morphological segments. Our results show that: \begin{enumerate} \item For most languages, character-level representations outperform the standard word representations. Most interestingly, a previously unstudied combination of character trigrams composed with bi-LSTMs performs best on the majority of languages. \item Bi-LSTMs and CNNs are more effective composition functions than addition. \item Character-level models learn functional relationships between orthographically similar words, but don't (yet) match the predictive accuracy of models with access to true morphological analyses. \item Character-level models are effective across a range of morphological typologies, but orthography influences their effectiveness. \end{enumerate} \section{Morphological Typology} \label{sec:morph} A \textbf{morpheme} is the smallest unit of meaning in a word. Some morphemes express core meaning (\textbf{roots}), while others express one or more dependent \textbf{features} of the core meaning, such as person, gender, or aspect. A \textbf{morphological analysis} identifies the lemma and features of a word. A \textbf{morph} is the surface realization of a morpheme \cite{Morley2000-MORSIF}, which may vary from word to word. These distinctions are shown in Table \ref{tab:ex-morphs}. Morphological typology classifies languages based on the processes by which morphemes are composed to form words. 
While most languages will exhibit a variety of such processes, for any given language, some processes are much more frequent than others, and we will broadly identify our experimental languages with these processes. When morphemes are combined sequentially, the morphology is \textbf{concatenative}. However, morphemes can also be composed by \textbf{non-concatenative} processes. We consider four broad categories of both concatenative and non-concatenative processes in our experiments. \begin{asparadesc} \item[Fusional languages] realize multiple features in a single concatenated morpheme. For example, English verbs can express number, person, and tense in a single morpheme: \begin{center} \textit{wanted} (English) \\ \textit{want} + \textit{ed} \\ \textit{want} + \texttt{VB+1st+SG+Past} \end{center} \item[Agglutinative languages] assign one feature per morpheme. Morphemes are concatenated to form a word and the morpheme boundaries are clear. For example \cite{Haspelmath02}: \begin{center} \textit{okursam} (Turkish) \\ \textit{oku+r+sa+m} \\ ``read''+\texttt{AOR+COND+1SG} \\ \end{center} \item[Root and Pattern Morphology] forms words by inserting consonants and vowels of dependent morphemes into a consonantal root based on a given pattern. For example, the Arabic root \textit{ktb} (``write") produces \cite{Roark2007}: \begin{center} \textit{\textbf{k}a\textbf{t}a\textbf{b}} ``wrote" (Arabic) \\ \textit{ta\textbf{k}aa\textbf{t}a\textbf{b}} ``wrote to each other" (Arabic) \end{center} \item[Reduplication] is a process where a word form is produced by repeating part or all of the root to express new features. For example: \begin{center} \textit{anak} ``child" (Indonesian) \\ \textit{anak-anak} ``children" (Indonesian) \\ \textit{buah} ``fruit" (Indonesian) \\ \textit{buah-buahan} ``various fruits" (Indonesian) \end{center} \end{asparadesc} \section{Representation Models} \label{sec:rep-model} We compare ten different models, varying subword units and composition functions that have commonly been used in recent work, but evaluated on various different tasks (Table \ref{tab:prev-work}). Given word $w$, we compute its representation $\textbf{w}$ as: \begin{align} \label{eq:word-emb} \textbf{w} = f(\textbf{W}_s, \sigma(w)) \end{align} where $\sigma$ is a deterministic function that returns a sequence of subword units; $\textbf{W}_s$ is a parameter matrix of representations for the vocabulary of subword units; and $f$ is a composition function which takes $\sigma(w)$ and $\textbf{W}_s$ as input and returns $\textbf{w}$. All of the representations that we consider take this form, varying only in $f$ and $\sigma$. \subsection{Subword Units} We consider four variants of $\sigma$ in Equation~\ref{eq:word-emb}, each returning a different type of subword unit: character, character trigram, or one of two types of morph. Morphs are obtained from Morfessor \cite{smit-EtAl:2014:Demos} or a word segmentation based on Byte Pair Encoding (BPE; \newcite{Gage:1994:NAD:177910.177914}), which has been shown to be effective for handling rare words in neural machine translation \cite{DBLP:journals/corr/SennrichHB15}. BPE works by iteratively replacing frequent pairs of characters with a single unused character. For Morfessor, we use default parameters while for BPE we set the number of merge operations to 10,000.\footnote{BPE takes a single parameter: the number of merge operations. We tried different parameter values (1k, 10k, 100k) and manually examined the resulting segmentation on the English dataset. 
Qualitatively, 10k gave the most plausible segmentation and we used this setting across all languages.} When we segment into character trigrams, we consider all trigrams in the word, including those covering notional beginning and end of word characters, as in \newcite{sperr-niehues-waibel:2013:CVSC}. Example output of $\sigma$ is shown in Table \ref{tab:rep-unit}. \subsection{Composition Functions} We use three variants of $f$ in Eq. \ref{eq:word-emb}. The first constructs the representation $\mathbf{w}$ of word $w$ by adding the representations of its subwords $s_1, \dots, s_n = \sigma(w)$, where the representation of $s_i$ is vector $\mathbf{s}_i$. \begin{align} \textbf{w} = \sum\limits_{i=1}^n \textbf{s}_i \end{align} The only subword unit that we don't compose by addition is characters, since this will produce the same representation for many different words. Our second composition function is a bidirectional long-short-term memory (\textbf{bi-LSTM}), which we adapt based on its use in the character-level model of \newcite{ling2015} and its widespread use in NLP generally. Given $\textbf{s}_i$ and the previous LSTM hidden state $\textbf{h}_{i-1}$, an LSTM \cite{Hochreiter:1997:LSTM} computes the following outputs for the subword at position $i$: \begin{align} \textbf{h}_i = LSTM(\textbf{s}_i, \textbf{h}_{i-1}) \\ \hat{s}_{i+1} = g(\textbf{V}^T \cdot \textbf{h}_i) \end{align} where $\hat{s}_{i+1}$ is the predicted target subword, $g$ is the softmax function and $\textbf{V}$ is a weight matrix. A bi-LSTM \cite{Graves:2005:BLN:1986079.1986220} combines the final state of an LSTM over the input sequence with one over the reversed input sequence. Given the hidden state produced from the final input of the forward LSTM, $\textbf{h}_n^{fw}$ and the hidden state produced from the final input of the backward LSTM $\textbf{h}_0^{bw}$, we compute the word representation as: \begin{align} \textbf{w}_t = \textbf{W}_f \cdot \textbf{h}_n^{fw} + \textbf{W}_b \cdot \textbf{h}_0^{bw} + \textbf{b} \end{align} where $\textbf{W}_f$, $\textbf{W}_b$, and $\textbf{b}$ are parameter matrices and $\textbf{h}_n^{fw}$ and $\textbf{h}_0^{bw}$ are forward and backward LSTM states, respectively. The third composition function is a convolutional neural network (\textbf{CNN}) with highway layers, as in \newcite{kim2015}. Let $c_1,\dots,c_k$ be the sequence of characters of word $w$. The character embedding matrix is $\mathbf{C} \in \mathbb{R}^{d \times k}$, where the $i$-th column corresponds to the embeddings of $c_i$. We first apply a narrow convolution between $\mathbf{C}$ and a filter $\mathbf{F} \in \mathbb{R}^{d \times n}$ of width $n$ to obtain a feature map $\mathbf{f} \in \mathbf{R}^{k-n+1}$. In particular, the computation of the $j$-th element of $\mathbf{f}$ is defined as \begin{align} \mathbf{f}[j] = tanh(\langle \mathbf{C}[*,j:j+n-1], \mathbf{F} \rangle + b) \end{align} where $\langle A,B \rangle = \mathtt{Tr}(\mathbf{AB}^T)$ is the Frobenius inner product and $b$ is a bias. The CNN model applies filters of varying width, representing features of character n-grams. We then calculate the max-over-time of each feature map. \begin{align} y_j = \max_j \mathbf{f}[j] \end{align} and concatenate them to derive the word representation $\textbf{w}_t = [y_1,\dots,y_m]$, where $m$ is the number of filters applied. Highway layers allow some dimensions of $\textbf{w}_t$ to be carried or transformed. Since it can learn character n-grams directly, we only use the CNN with character input. 
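To make the segmentation and composition interface concrete, the following is a minimal Python sketch (ours, not the authors' TensorFlow implementation) of a segmentation function $\sigma$ for character trigrams together with the additive composition function; the specific boundary-marker characters and the lazily initialised subword embedding table are illustrative assumptions.
\begin{verbatim}
# Minimal sketch of a segmentation function (sigma) and additive
# composition; illustrative only, not the implementation used in the paper.
import numpy as np

EMB_DIM = 200
rng = np.random.default_rng(0)
subword_vectors = {}   # stands in for the parameter matrix W_s

def sigma_char_trigrams(word):
    """All character trigrams, including those spanning notional
    beginning- and end-of-word markers (here '^' and '$')."""
    padded = "^" + word + "$"
    return [padded[i:i + 3] for i in range(len(padded) - 2)]

def embed(unit):
    """Look up (or lazily create) the vector of a subword unit."""
    if unit not in subword_vectors:
        subword_vectors[unit] = rng.uniform(-0.1, 0.1, EMB_DIM)
    return subword_vectors[unit]

def compose_additive(units):
    """Additive composition: the word vector is the sum of its
    subword vectors."""
    return np.sum([embed(u) for u in units], axis=0)

w = compose_additive(sigma_char_trigrams("okursam"))   # shape (200,)
\end{verbatim}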
\subsection{Language Model} We use language models (LM) because they are simple and fundamental to many NLP applications. Given a sequence of text $s=w_1,\dots,w_T$, our LM computes the probability of $s$ as: \begin{align}\label{eq:lm} P(w_1,\dots,w_T) = \prod\limits_{t=1}^T P(y_t|w_1,\dots,w_{t-1}) \end{align} where $y_t = w_t$ if $w_t$ is in the output vocabulary and $y_t = $ \texttt{UNK} otherwise. Our language model is an LSTM variant of a recurrent neural network (RNN) LM \cite{Mikolov2010}. At time step $t$, it receives input $w_t$ and predicts $y_{t+1}$. Using Eq. \ref{eq:word-emb}, it first computes representation $\textbf{w}_t$ of $w_t$. Given this representation and previous state $\textbf{h}_{t-1}$, it produces a new state $\textbf{h}_t$ and predicts $y_{t+1}$: \begin{align} \textbf{h}_t = LSTM(\textbf{w}_t, \textbf{h}_{t-1}) \\ \hat{y}_{t+1} = g(\textbf{V}^T \cdot \textbf{h}_t) \end{align} where $g$ is a softmax function over the vocabulary yielding the probability in Equation~\ref{eq:lm}. Note that this design means that we can \emph{predict} only words from a finite output vocabulary, so our models differ only in their representation of context words. This design makes it possible to compare language models using perplexity, since they have the same event space, though open vocabulary word prediction is an interesting direction for future work. The complete architecture of our system is shown in Figure~\ref{fig:lstm-lm}, showing segmentation function $\sigma$ and composition function $f$ from Equation~\ref{eq:word-emb}. \section{Experiments} We perform experiments on ten languages (Table \ref{tab:corpus-stat}). We use datasets from \newcite{ling2015} for English and Turkish. For Czech and Russian we use Universal Dependencies (UD) v1.3 \cite{UDT-nivre}. For other languages, we use preprocessed Wikipedia data \cite{polyglot:2013:ACL-CoNLL}.\footnote{The Arabic and Hebrew datasets are unvocalized. Japanese mixes Kanji, Katakana, Hiragana, and Latin characters (for foreign words). Hence, a Japanese character can correspond to a character, syllable, or word. The preprocessed dataset is already word-segmented.} For each dataset, we use approximately 1.2M tokens to train, and approximately 150K tokens each for development and testing. Preprocessing involves lowercasing (except for character models) and removing hyperlinks. To ensure that we compared models and not implementations, we reimplemented all models in a single framework using TensorFlow \cite{tensorflow2015-whitepaper}.\footnote{Our implementation of these models can be found at https://github.com/claravania/subword-lstm-lm} We use a common setup for all experiments based on that of \newcite{ling2015}, \newcite{kim2015}, and \newcite{miyamoto-cho:2016:EMNLP2016}. In preliminary experiments, we confirmed that our models produced similar patterns of perplexities for the reimplemented word and character LSTM models of \newcite{ling2015}. Even following detailed discussion with Ling (p.c.), we were unable to reproduce their perplexities exactly---our English reimplementation gives lower perplexities; our Turkish higher---but we do reproduce their general result that character bi-LSTMs outperform word models. We suspect that different preprocessing and stochastic learning explain the differences in perplexities.
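Since perplexity is the evaluation metric throughout, the short sketch below shows how it can be computed from the LM's per-token probabilities while mapping out-of-output-vocabulary targets to the unknown token; the \texttt{model.next\_prob} interface is hypothetical and only stands in for the softmax output of the LSTM-LM.
\begin{verbatim}
# Sketch of perplexity computation for the LM described above; the
# model.next_prob interface is hypothetical (it stands in for the
# softmax over the output vocabulary).
import math

def perplexity(model, tokens, output_vocab, unk="<unk>"):
    log_prob, count = 0.0, 0
    for t in range(1, len(tokens)):
        target = tokens[t] if tokens[t] in output_vocab else unk
        p = model.next_prob(context=tokens[:t], target=target)
        log_prob += math.log(p)
        count += 1
    return math.exp(-log_prob / count)
\end{verbatim}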
Our final model with bi-LSTMs composition follows \newcite{miyamoto-cho:2016:EMNLP2016} as it gives us the same perplexity results for our preliminary experiments on the Penn Treebank dataset \cite{Marcus:1993:BLA:972470.972475}, preprocessed by \newcite{Mikolov2010}. \subsection{Training and Evaluation} Our LSTM-LM uses two hidden layers with 200 hidden units, and the representation vectors for words, characters, and morphs all have dimension 200. All parameters are initialized uniformly at random from -0.1 to 0.1, trained by stochastic gradient descent with mini-batch size of 32, time steps of 20, for 50 epochs. To avoid overfitting, we apply dropout with probability 0.5 on the input-to-hidden layer and all of the LSTM cells (including those in the bi-LSTM, if used). For all models which do not use bi-LSTM composition, we start with a learning rate of 1.0 and decrease it by half if the validation perplexity does not decrease by 0.1 after 3 epochs. For models with bi-LSTMs composition, we use a constant learning rate of 0.2 and stop training when validation perplexity does not improve after 3 epochs. For the character CNN model, we use the same settings as the \emph{small model} of \newcite{kim2015}. To make our results comparable to \newcite{ling2015}, for each language we limit the output vocabulary to the most frequent 5,000 training words plus an unknown word token. To learn to predict unknown words, we follow \newcite{ling2015}: in training, words that occur only once are stochastically replaced with the unknown token with probability 0.5. To evaluate the models, we compute perplexity on the test data. \section{Results and Analysis} Table \ref{tab:lm-results} presents our main results. In six of ten languages, character-trigram representations composed with bi-LSTMs achieve the lowest perplexities. As far as we know, this particular model has not been tested before, though it is similar to (but more general than) the model of \newcite{sperr-niehues-waibel:2013:CVSC}. We can see that the performance of character, character-trigram, and BPE representations is very competitive. Composition by bi-LSTMs or CNN is more effective than addition, except for Turkish. We also observe that BPE always outperforms Morfessor, even for the agglutinative languages. We now turn to a more detailed analysis by morphological typology. \begin{asparadesc} \item[Fusional languages.] For these languages, character trigrams composed with bi-LSTMs outperformed all other models, particularly for Czech and Russian (up to 20\%), which is unsurprising since both are morphologically richer than English. \item[Agglutinative languages.] We observe different results for each language. For Finnish, character trigrams composed with bi-LSTMs achieve the best perplexity. Surprisingly, for Turkish, character trigrams composed via addition are best, and addition also performs quite well for other representations, which is potentially useful since the addition function is simpler and faster than bi-LSTMs. We suspect that this is due to the fact that Turkish morphemes are reasonably short, hence well-approximated by character trigrams. For Japanese, the improvements from character models are more modest than in other languages. \item[Root and Pattern.] For these languages, character trigrams composed with bi-LSTMs also achieve the best perplexity.
We had wondered whether CNNs would be more effective for root-and-pattern morphology, but since these data are unvocalized, it is more likely that non-concatenative effects are minimized, though we do still find morphological variants with consonantal inflections that behave more like concatenation. For example, \textit{maktab} (root:\textit{ktb}) is written as \textit{mktb}. We suspect this makes character trigrams quite effective since they match the tri-consonantal root patterns among words which share the same root. \item[Reduplication.] For Indonesian, BPE morphs composed with bi-LSTMs model obtain the best perplexity. For Malay, the character CNN outperforms other models. However, these improvements are small compared to other languages. This likely reflects that Indonesian and Malay are only moderately inflected, where inflection involves both concatenative and non-concatenative processes. \end{asparadesc} \setcounter{topnumber}{3} \subsection{Effects of Morphological Analysis} In the experiments above, we used unsupervised morphological \emph{segmentation} as a proxy for morphological \emph{analysis} (Table \ref{tab:rep-unit}). However, as discussed in Section~\ref{sec:morph}, this is quite approximate, so it is natural to wonder what would happen if we had the true morphological analysis. If character-level models are powerful enough to capture the effects of morphology, then they should have the predictive accuracy of a model with access to this analysis. To find out, we conducted an oracle experiment using the human-annotated morphological analyses provided in the UD datasets for Czech and Russian, the only languages in our set for which these analyses were available. In these experiments we treat the lemma and each morphological feature as a subword unit. The results (Table~\ref{tab:oracle-res}) show that bi-LSTM composition of these representations outperforms all other models for both languages. These results demonstrate that neither character representations nor unsupervised segmentation is a perfect replacement for manual morphological analysis, at least in terms of predictive accuracy. In light of character-level results, they imply that current unsupervised morphological analyzers are poor substitutes for real morphological analysis. However, we can obtain much more unannotated than annotated data, and we might guess that the character-level models would outperform those based on morphological analyses if trained on larger data. To test this, we ran experiments that varied the training data size on three representation models: word, character-trigram bi-LSTM, and character CNN. Since we want to see how much training data is needed to reach perplexity obtained using annotated data, we use the same output vocabulary derived from the original training. While this makes it possible to compare perplexities across models, it is unfavorable to the models trained on larger data, which may focus on other words. This is a limitation of our experimental setup, but does allow us to draw some tentative conclusions. As shown in Table \ref{tab:corpus-exp}, a character-level model trained on an order of magnitude more data still does not match the predictive accuracy of a model with access to morphological analysis. \subsection{Automatic Morphological Analysis} The oracle experiments show promising results if we have annotated data. But these annotations are expensive, so we also investigated the use of automatic morphological analysis. 
We obtained analyses for Arabic with the MADAMIRA \cite{Pasha-MADAMIRA}.\footnote{We only experimented with Arabic since MADAMIRA disambiguates words in contexts; most other analyzers we found did not do this, and would require additional work to add disambiguation.} As in the experiment using annotations, we treated each morphological feature as a subword unit. The resulting perplexities of \textbf{71.94} and \textbf{42.85} for addition and bi-LSTMs, respectively, are worse than those obtained with character trigrams (\textbf{39.87}), though it approaches the best perplexities. \subsection{Targeted Perplexity Results} \label{sec:word-perplexity} A difficulty in interpreting the results of Table~\ref{tab:lm-results} with respect to specific morphological processes is that perplexity is measured for all words. But these processes do not apply to all words, so it may be that the effects of specific morphological processes are washed out. To get a clearer picture, we measured perplexity for only specific subsets of words in our test data: specifically, given target word $w_i$, we measure perplexity of word $w_{i+1}$. In other words, we analyze the perplexities \emph{when the inflected words of interest are in the most recent history}, exploiting the recency bias of our LSTM-LM. This is the perplexity most likely to be strongly affected by different representations, since we do not vary representations of the predicted word itself. We look at several cases: nouns and verbs in Czech and Russian, where word classes can be identified from annotations, and reduplication in Indonesian, which we can identify mostly automatically. For each analysis, we also distinguish between \textit{frequent} cases, where the inflected word occurs more than ten times in the training data, and \textit{rare} cases, where it occurs fewer than ten times. We compare only bi-LSTM models. For Czech and Russian, we again use the UD annotation to identify words of interest. The results (Table \ref{tab:word-pp-noun-verb}), show that manual morphological analysis uniformly outperforms other subword models, with an especially strong effect for Czech nouns, suggesting that other models do not capture useful predictive properties of a morphological analysis. We do however note that character trigrams achieve low perplexities in most cases, similar to overall results (Table \ref{tab:lm-results}). We also observe that the subword models are more effective for rare words. For Indonesian, we exploit the fact that the hyphen symbol `-' typically separates the first and second occurrence of a reduplicated morpheme, as in the examples of Section~\ref{sec:morph}. We use the presence of word tokens containing hyphens to estimate the percentage of those exhibiting reduplication. As shown in Table \ref{tab:redup-stat}, the numbers are quite low. Table \ref{tab:word-pp-redup} shows results for reduplication. In contrast with the overall results, the BPE bi-LSTM model has the worst perplexities, while character bi-LSTM has the best, suggesting that these models are more effective for reduplication. Looking more closely at BPE segmentation of reduplicated words, we found that only 6 of 252 reduplicated words have a correct word segmentation, with the reduplicated morpheme often combining differently with the notional start-of-word or hyphen character. 
On the other hand, BPE correctly learns 8 out of 9 Indonesian prefixes and 4 out of 7 Indonesian suffixes.\footnote{We use Indonesian affixes listed in \newcite{Larasati2011}} This analysis supports our intuition that the improvement from BPE might come from its modeling of concatenative morphology. \subsection{Qualitative Analysis} \label{sec:qual-analysis} Table \ref{tab:topNN} presents nearest neighbors under cosine similarity for in-vocabulary, rare, and out-of-vocabulary (OOV) words.\footnote{https://radimrehurek.com/gensim/} For frequent words, standard word embeddings are clearly superior for lexical meaning. Character and morph representations tend to find words that are orthographically similar, suggesting that they are better at modeling dependent than root morphemes. The same pattern holds for rare and OOV words. We suspect that the subword models outperform words on language modeling because they exploit affixes to signal word class. We also noticed similar patterns in Japanese. We analyze reduplication by querying reduplicated words to find their nearest neighbors using the BPE bi-LSTM model. If the model were sensitive to reduplication, we would expect to see morphological variants of the query word among its nearest neighbors. However, from Table \ref{tab:redup-analysis}, this is not so. With the partially reduplicated query \textit{berlembah-lembah}, we do not find the lemma \emph{lembah}. \section{Conclusion} We presented a systematic comparison of word representation models with different levels of morphological awareness, across languages with different morphological typologies. Our results confirm previous findings that character-level models are effective for many languages, but these models do not match the predictive accuracy of a model with explicit knowledge of morphology, even after we increase the training data size tenfold. Moreover, our qualitative analysis suggests that they learn orthographic similarity of affixes, and lose the meaning of root morphemes. Although morphological analyses are available in limited quantities, our results suggest that there might be utility in semi-supervised learning from partially annotated data. Across languages with different typologies, our experiments show that the subword unit models are most effective on agglutinative languages. However, these results do not generalize to all languages, since factors such as morphology and orthography affect the utility of these representations. We plan to explore these effects in future work. \section*{Acknowledgments} Clara Vania is supported by the Indonesian Endowment Fund for Education (LPDP), the Centre for Doctoral Training in Data Science, funded by the UK EPSRC (grant EP/L016427/1), and the University of Edinburgh. We thank Sameer Bansal, Toms Bergmanis, Marco Damonte, Federico Fancellu, Sorcha Gilroy, Sharon Goldwater, Frank Keller, Mirella Lapata, Felicia Liu, Jonathan Mallinson, Joana Ribeiro, Naomi Saphra, Ida Szubert, and the anonymous reviewers for helpful discussion of this work and comments on previous drafts of the paper. \bibliographystyle{acl_natbib} \end{document}
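As a side note on the qualitative analysis above, the nearest-neighbour queries amount to ranking vocabulary words by cosine similarity to a query word's composed representation; a minimal sketch follows, where \texttt{embed\_word} is a hypothetical stand-in for any of the representation models compared in the paper.
\begin{verbatim}
# Sketch of the cosine-similarity nearest-neighbour queries used in the
# qualitative analysis; embed_word is a hypothetical stand-in for any of
# the representation models compared in the paper.
import numpy as np

def nearest_neighbors(query, vocab, embed_word, k=5):
    q = embed_word(query)
    q = q / np.linalg.norm(q)
    scored = []
    for word in vocab:
        v = embed_word(word)
        scored.append((float(q @ (v / np.linalg.norm(v))), word))
    scored.sort(reverse=True)
    return [word for _, word in scored[:k]]
\end{verbatim}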
Dialogue Learning With Human-in-the-Loop
1611.09823
Table 5: Fully Supervised (Imitation Learning) Results on Human Questions
[ "Train data size", "1k", "5k", "10k", "20k", "60k" ]
[ [ "Supervised MemN2N", "0.333", "0.429", "0.476", "0.526", "0.599" ] ]
For comparison purposes, we also ran a supervised (imitation learning) MemN2N on different sized training sets of turker authored questions with gold annotated labels (so, there are no numerical rewards or textual feedback, this is a pure supervised setting). They indicate that RBI+FP and even FP alone get close to the performance of fully supervised learning.
These are the instructions given for the textual feedback mechanical turk task (we also constructed a separate task to collect the initial questions, not described here):\\ Title: Write brief responses to given dialogue exchanges (about 15 min)\\ Description: Write a brief response to a student's answer to a teacher's question, providing feedback to the student on their answer. Instructions:\\ Each task consists of the following triplets: \begin{enumerate} \item a question by the teacher \item the correct answer(s) to the question (separated by ``OR'') \item a proposed answer in reply to the question from the student \end{enumerate} Consider the scenario where you are the teacher and have already asked the question, and received the reply from the student. Please compose a brief response giving feedback to the student about their answer. The correct answers are provided so that you know whether the student was correct or not. For example, given 1) question: ``what is a color in the united states flag?''; 2) correct answer: ``white, blue, red''; 3) student reply: ``red'', your response could be something like ``that's right!''; for 3) reply: ``green'', you might say ``no that's not right'' or ``nope, a correct answer is actually white''. Please vary responses and try to minimize spelling mistakes. If the same responses are copied/pasted or overused, we'll reject the HIT. Avoid naming the student or addressing ``the class'' directly. We will consider bonuses for higher quality responses during review. Experiments are first conducted using our simulator, and then using Amazon Mechanical Turk with real human subjects taking the role of the teacher\footnote{ Code and data are available at \tiny{\url{https://github.com/facebook/MemNN/tree/master/HITL}}. }. \subsection{Simulator} \paragraph{Online Experiments} \label{sec:online_exp} In our first experiments, we considered both the bAbI and WikiMovies tasks and varied batch size, random exploration rate $\epsilon$, and type of model. Figure~\ref{fig:online-babi-task6} and Figure~\ref{fig:online-movieqa-task6} shows (Task 6) results on bAbI and WikiMovies. Other tasks yield similar conclusions and are reported in the appendix. Overall, we obtain the following conclusions: \begin{itemize} \item In general RBI and FP do work in a reinforcement learning setting, but can perform better with random exploration. \item In particular RBI can fail without exploration. RBI needs random noise for exploring labels otherwise it can get stuck predicting a subset of labels and fail. \item REINFORCE obtains similar performance to RBI with optimal $\epsilon$. %, see figure~\ref{fig:online-comparison-rbi-fp-rf}. \item FP with balancing or with exploration via $\epsilon$ both outperform FP alone. \item For both RBI and FP, performance is largely independent of online batch size. \end{itemize} \if 0 \fi \paragraph{Dataset Batch Size Experiments} Given that larger online batch sizes appear to work well, and that this could be important in a real-world data collection setup where the same model is deployed to gather a large amount of feedback from humans, we conducted further experiments where the batch size is exactly equal to the dataset size and for each batch training is completed to convergence. After the model has been trained on the dataset, it is deployed to collect a new dataset of questions and answers, and the process is repeated. 
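The iterative procedure just described can be summarised in a short sketch; the helper methods (\texttt{train\_to\_convergence}, \texttt{sample\_questions}, \texttt{respond}) are hypothetical stand-ins, and accumulating the collected feedback across iterations is an assumption of the sketch rather than a detail stated here.
\begin{verbatim}
# Schematic sketch of the dataset-sized batch setting: train to
# convergence, deploy the updated policy to collect a new dataset of
# feedback, and repeat.  All helper methods are hypothetical stand-ins;
# accumulating data across iterations is an assumption of this sketch.
def dataset_batch_training(policy, initial_data, teacher, num_iterations):
    data = list(initial_data)
    for _ in range(num_iterations):
        policy.train_to_convergence(data)        # train on the current batch
        new_batch = []
        for question in teacher.sample_questions(len(initial_data)):
            answer = policy.predict(question)    # deploy the current policy
            feedback, reward = teacher.respond(question, answer)
            new_batch.append((question, answer, feedback, reward))
        data = data + new_batch                  # next iteration sees the new feedback
    return policy
\end{verbatim}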
Table~\ref{table:dataset-batch-babi} reports test error at each iteration of training, using the bAbI Task $6$ as the case study (see the appendix for results on other tasks). The following conclusions can be made for this setting: \begin{itemize} \item RBI improves in performance as we iterate. Unlike in the online case, RBI does not need random exploration. We believe this is because the first batch, which is collected with a randomly initialized model, contains enough variety of examples with positive rewards that the model does not get stuck predicting a subset of labels. \item FP is not stable in this setting. %, % whereas it was in earlier online experiments. This is because once the model gets very good at making predictions (at the third iteration), it is not exposed to a sufficient number of negative responses anymore. From that point on, learning degenerates and performance drops as the model always predicts the same responses. At the next iteration, it will recover again since it has a more balanced training set, but then it will collapse again in an oscillating behavior. \item FP does work if extended with balancing or random exploration with sufficiently large $\epsilon$. \item RBI+FP also works well and helps with the instability of FP, alleviating the need for random exploration and data balancing. \end{itemize} Overall, our simulation results indicate that while a bot can be effectively trained fully online from bot-teacher interactions, collecting real dialogue data in batches (which is easier to collect and iterate experiments over) is also a viable approach. We hence pursue the latter approach in our next set of experiments. \paragraph{Relation to experiments in \cite{weston2016dialog}} As described in detail in Section \ref{sec:related}, the datasets we use in our experiments were introduced in \citep{weston2015towards}. However, that work involved constructing pre-built fixed policies (and hence, datasets), rather than training the learner in a reinforcement/interactive learning setting using a simulator, as in our work. They achieved this by choosing an omniscient (but deliberately imperfect) labeler that gets $\pi_{acc}$ examples always correct (the paper looked at values 1\%, 10\% and 50\%). In a realistic setting one does not have access to an omniscient labeler; one has to learn a policy completely from scratch, online, starting with a random policy, as we do here. Nevertheless, it is possible to compare our {\em learnt} policies to those results because we use the same train/valid/test splits. The clearest comparison is via Table \ref{table:dataset-batch-babi}, where the policy is learnt using batch iterations of the dataset, updating the policy on each iteration. \cite{weston2015towards} can be viewed as training only one iteration, with a pre-built policy, as explained above, where 59\%, 81\% and 99\% accuracy was obtained for RBI for $\pi_{acc}$ with 1\%, 10\% and 50\% respectively\footnote{Note, this is not the same as a randomly initialized neural network policy, because due to the synthetic construction with an omniscient labeler the labels will be balanced. In our work, we learn the policy from randomly initialized weights which are updated as we learn the policy.}. While $\pi_{acc}$ of 50\% is good enough to solve the task, lower values are not.
In this work a random policy begins with 74\% accuracy on the first iteration, but importantly on each iteration the policy is updated and improves, with values of 87\%, 90\% on iterations 2 and 3 respectively, and 98\% on iteration 6. This is a key differentiator from the work of \citep{weston2015towards}, where such improvement was not shown. We show that such online learning works for both reward-based numerical feedback and for forward prediction methods using textual feedback (as long as balancing or random exploration is performed sufficiently). The final performance outperforms most values of $\pi_{acc}$ from \cite{weston2015towards} unless $\pi_{acc}$ is so large that the task is already solved. This is a key contribution of our work. Similar conclusions can be made for Figures \ref{fig:online-babi-task6} and \ref{fig:online-movieqa-task6}. Despite our initial random policy starting at close to 0\% accuracy, if random exploration $\epsilon \ge 0.2$ is employed then after a number of epochs the performance is better than most values of $\pi_{acc}$ from \cite{weston2015towards}, e.g. compare the accuracies given in the previous paragraph (59\%, 81\% and 99\%) to Figure \ref{fig:online-babi-task6}, top left. \subsection{Human Feedback} \label{sec:mturkexp} We employed Turkers to both ask questions and then give textual feedback on the bot's answers, as described in Section \ref{sec:data-mturk}. Our experimental protocol was as follows. We first trained a MemN2N using supervised (i.e., imitation) learning on a training set of 1000 questions produced by Turkers and using the known correct answers provided by the original dataset (and no textual feedback). Next, using the trained policy, we collected textual feedback for the responses of the bot for an additional 10,000 questions. Examples from the collected dataset are given in Figure \ref{fig:mturk_data}. Given this dataset, we compare various models: RBI, FP and FP+RBI. As we know the correct answers to the additional questions, we can assign a positive reward to questions the bot got correct. We hence measure the impact of the sparseness of this reward signal, where a fraction $r$ of additional examples have rewards. The models are tested on a test set of $\sim$8,000 questions (produced by Turkers), and hyperparameters are tuned on a similarly sized validation set. Note that this is a harder task than the WikiMovies task in the simulator due to the use of natural language from Turkers, hence lower test performance is expected. Results are given in Table \ref{table:mturk-res}. They indicate that both RBI and FP are useful. When rewards are sparse, FP still works via the textual feedback while RBI can only use the initial 1000 examples when $r=0$. As FP does not use numerical rewards at all, it is invariant to the parameter $r$. The combination of FP and RBI outperforms either alone. We also conducted additional experiments comparing with (i) synthetic feedback and (ii) the fully supervised case, which are given in Appendix \ref{sec:appendix-mturk}. They show that the results with human feedback are competitive with these approaches.
A good conversational agent (which we sometimes refer to as a learner or bot\footnote{In this paper, we refer to a learner (either a human or a bot/dialogue agent which is a machine learning algorithm) as the student, and their more knowledgeable dialogue partner as the teacher.}) should have the ability to learn from online feedback from a teacher: adapting its model when making mistakes and reinforcing the model when the teacher's feedback is positive. This is particularly important in the situation where the bot is initially trained in a supervised way on a fixed synthetic, domain-specific or pre-built dataset before release, but will be exposed to a different environment after release (e.g., more diverse natural language utterance usage when talking with real humans, different distributions, special cases, etc.). Most recent research has focused on training a bot from fixed training sets of labeled data but seldom on how the bot can improve through online interaction with humans. Human (rather than machine) language learning happens during communication \citep{bassiri2011interactional,werts1995instructive}, and not from labeled datasets, hence making this an important subject to study. In this work, we explore this direction by training a bot through interaction with teachers in an online fashion. The task is formalized under the general framework of reinforcement learning via the teacher's (dialogue partner's) feedback to the dialogue actions from the bot. The dialogue takes place in the context of question-answering tasks and the bot has to, given either a short story or a set of facts, answer a set of questions from the teacher. We consider two types of feedback: explicit numerical rewards as in conventional reinforcement learning, and textual feedback which is more natural in human dialogue, following \citep{weston2016dialog}. We consider two online training scenarios: (i) where the task is built with a dialogue simulator allowing for easy analysis and repeatability of experiments; and (ii) where the teachers are real humans using Amazon Mechanical Turk. We explore important issues involved in online learning such as how a bot can be most efficiently trained using a minimal amount of teacher feedback, how a bot can harness different types of feedback signal, how to avoid pitfalls such as instability during online learning with different types of feedback via data balancing and exploration, and how to make learning with real humans feasible via data batching. Our findings indicate that it is feasible to build a pipeline that starts from a model trained with fixed data and then learns from interactions with humans to improve itself. %, obtaining accuracy close to a fully supervised dataset. We studied dialogue learning of end-to-end models using textual feedback and numerical rewards. Both fully online and iterative batch settings are viable approaches to policy learning, as long as possible instabilities in the learning algorithms are taken into account. Secondly, we showed for the first time that the recently introduced FP method can work in both an online setting and on real human feedback. Overall, our results indicate that it is feasible to build a practical pipeline that starts with a model trained on an initial fixed dataset, which then learns from interactions with humans in a (semi-)online fashion to improve itself. Future research should work towards doing this in a never-ending learning setup.
The tasks in \cite{weston2016dialog} were specifically:\\ - {\bf Task 1}: The teacher tells the student exactly what they should have said (supervised baseline).\\ - {\bf Task 2}: The teacher replies with positive textual feedback and reward, or negative textual feedback. \\ - {\bf Task 3}: The teacher gives textual feedback containing the answer when the bot is wrong.\\ - {\bf Task 4}: The teacher provides a hint by providing the class of the correct answer, e.g., ``No it's a movie" for the question ``which movie did Forest Gump star in?".\\ - {\bf Task 5}: The teacher provides a reason why the student's answer is wrong by pointing out the relevant supporting fact from the knowledge base.\\ - {\bf Task 6}: The teacher gives positive reward only 50\% of the time. \\ - {\bf Task 7}: Rewards are missing and the teacher only gives natural language feedback.\\ - {\bf Task 8}: Combines Tasks 1 and 2 to see whether a learner can learn successfully from both forms of supervision at once.\\ - {\bf Task 9}: The bot asks questions of the teacher about what it has done wrong.\\ - {\bf Task 10}: The bot will receive a hint rather than the correct answer after asking for help. We refer the readers to \citep{weston2016dialog} for more detailed descriptions and the motivation behind these tasks. The difference in our system is that the model can be trained on-the-fly via the simulator: after receiving feedback and/or rewards, the model can update itself and apply its learning to the next episode. We present results on Tasks 2, 3 and 4 in this appendix \documentclass{article} % For LaTeX2e \newcommand{\taskTwo}{{\it Question Clarification-Verification}} \title{Dialogue Learning With Human-in-the-Loop} \author{Jiwei Li, Alexander H. Miller, Sumit Chopra, Marc'Aurelio Ranzato, Jason Weston \\ Facebook AI Research, \\ New York, USA \\ \texttt{\{jiwel,ahm,spchopra,ranzato,jase\}@fb.com} \\ } \newcommand{\fix}{\marginpar{FIX}} \newcommand{\new}{\marginpar{NEW}} \newcommand{\MINUS}{} \begin{document} \maketitle \begin{abstract} An important aspect of developing conversational agents is to give a bot the ability to improve through communicating with humans and to learn from the mistakes that it makes. Most research has focused on learning from fixed training sets of labeled data rather than interacting with a dialogue partner in an online fashion. In this paper we explore this direction in a reinforcement learning setting where the bot improves its question-answering ability from feedback a teacher gives following its generated responses. We build a simulator that tests various aspects of such learning in a synthetic environment, and introduce models that work in this regime. Finally, real experiments with Mechanical Turk validate the approach. \end{abstract} \section{Introduction} \input{intro} \section{Related Work} \label{sec:related} \input{related} \section{Dataset and Tasks} \input{data} \section{Methods} \subsection{Model Architecture} \input{model} \subsection{Reinforcement Learning} \label{sec:rl} % Algorithms} \input{methods} \section{Experiments} \input{exp} \section{Conclusion} \input{conc} \bibliographystyle{iclr2017_conference} \newpage \appendix \section{Further Simulator Task Details} \label{sec:extra_data} \input{extra_data} \section{Instructions given to Turkers} \label{sec:mturk} \input{mturk} \newpage \section{Additional Experiments} \input{extra_exp} \end{document} We begin by describing the data setup we use. 
In our first set of experiments we build a simulator as a testbed for learning algorithms. In our second set of experiments we use Mechanical Turk to provide real human teachers giving feedback. \subsection{Simulator} The simulator adapts two existing fixed datasets to our online setting. Following \cite{weston2016dialog}, we use (i) the single supporting fact problem from the bAbI datasets \citep{weston2015towards} which consists of 1000 short stories from a simulated world interspersed with questions; and (ii) the WikiMovies dataset \citep{weston2015towards} which consists of roughly 100k (templated) questions over 75k entities based on questions with answers in the open movie database (OMDb). Each dialogue takes place between a teacher, scripted by the simulation, and a bot. The communication protocol is as follows: (1) the teacher first asks a question from the fixed set of questions existing in the dataset, (2) the bot answers the question, and finally (3) the teacher gives feedback on the bot's answer. We follow the paradigm defined in \citep{weston2016dialog} where the teacher's feedback takes the form of either textual feedback, a numerical reward, or both, depending on the task. For each dataset, there are ten tasks, which are further described in Sec.~\ref{sec:extra_data} and illustrated in Figure~\ref{Tasks} of the appendix. We also refer the readers to \citep{weston2016dialog} for more detailed descriptions and the motivation behind these tasks. In the main text of this paper we only consider %Tasks 3 (`answers supplied by teacher') Task 6 (``partial feedback''): % from \cite{weston2016dialog}: the teacher replies with positive textual feedback (6 possible templates) when the bot answers correctly, and positive reward is given only 50\% of the time. When the bot is wrong, the teacher gives textual feedback containing the answer. Descriptions and experiments on the other tasks are detailed in the appendix. Example dialogues are given in Figure \ref{fig:simulator-examples}. The difference between our simulation and the original fixed tasks of~\cite{weston2016dialog} is that models are trained on-the-fly. % via the simulator. After receiving feedback and/or rewards, we update the model (policy) and then deploy it to collect teacher's feedback in the next episode or batch. This means the model's policy affects the data which is used to train it, which was not the case in the previous work. \definecolor{dred}{rgb}{0.7,0.0,0.0} \newcommand{\PLUS}{{\textcolor{blue}{(+)}}} \newcommand{\SPACE}{~~~~~~~~~~~~~~~~~~~~~~} \subsection{Mechanical Turk Experiments} \label{sec:data-mturk} Finally, we extended WikiMovies using Mechanical Turk so that real human teachers are giving feedback rather than using a simulation. As both the questions and feedback are templated in the simulation, they are now both replaced with natural human utterances. Rather than having a set of simulated tasks, we have only one task, and we gave instructions to the teachers that they could give feedback as they see fit. The exact instructions given to the Turkers is given in Appendix \ref{sec:mturk}. In general, each independent response contains feedback like (i) positive or negative sentences; or (ii) a phrase containing the answer or (iii) a hint, which are similar to setups defined in the simulator. However, some human responses cannot be so easily categorized, and the lexical variability is much larger in human responses. Some examples of the collected data are given in Figure~\ref{fig:mturk_data}. 
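As an illustration of the Task 6 teacher behaviour described in the simulator subsection above, the sketch below implements that response rule in a few lines of Python; the feedback strings are placeholders rather than the six templates actually used by the simulator.
\begin{verbatim}
# Sketch of the Task 6 ("partial feedback") teacher rule: positive textual
# feedback when the bot is right, with reward given only 50% of the time;
# textual feedback containing the answer when the bot is wrong.  The
# feedback strings are placeholders, not the simulator's actual templates.
import random

def task6_teacher(correct_answer, bot_answer):
    if bot_answer == correct_answer:
        feedback = random.choice(["Yes, that's right!", "Correct!"])
        reward = 1 if random.random() < 0.5 else 0
    else:
        feedback = "No, the answer is " + correct_answer + "."
        reward = 0
    return feedback, reward
\end{verbatim}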
Reinforcement learning has been widely applied to dialogue, especially in slot filling to solve domain-specific tasks \citep{walker2000application,schatzmann2006survey,singh2000empirical,singh2002optimizing}. Efforts include Markov Decision Processes (MDPs) \citep{levin1997learning,levin2000stochastic,walker2003trainable,pieraccini2009we}, POMDP models \citep{young2010hidden,young2013pomdp,gavsic2013pomdp,gavsic2014incremental} and policy learning \citep{su2016continuously}. Such a line of research focuses mainly on frames with slots to fill, where the bot will use reinforcement learning to model a state transition pattern, generating dialogue utterances to prompt the appropriate user responses to put in the desired slots. This goal is different from ours, where we study end-to-end learning systems and also consider non-reward based setups via textual feedback. Our work is related to the line of research that focuses on supervised learning for question answering (QA) from dialogues \citep{dodge2015evaluating,weston2016dialog}, either given a database of knowledge \citep{bordes2015large,miller2016key} or short texts \citep{weston2015towards,hermann2015teaching,rajpurkar2016squad}. In our work, the discourse includes the statements made in the past, the question and answer, and crucially the response from the teacher. The latter is what makes the setting different from the standard QA setting, i.e. we use methods that leverage this response also, not just answering questions. Further, QA works only consider fixed datasets with gold annotations, i.e. they do not consider a reinforcement learning setting. Our work is closely related to a recent work from \cite{weston2016dialog} that learns through conducting conversations where supervision is given naturally in the response during the conversation. That work introduced the use of forward prediction that learns by predicting the teacher's feedback, in addition to using reward-based learning of correct answers. However, two important issues were not addressed: (i) it did not use a reinforcement learning setting, but instead used pre-built datasets with fixed policies given in advance; and (ii) experiments used only simulated and no real language data. Hence, models that can learn policies from real online communication were not investigated. To make the differences with our work clear, we will now detail these points further. The experiments in \citep{weston2016dialog} involve constructing pre-built fixed datasets, rather than training the learner within a simulator, as in our work. Pre-built datasets can only be made by fixing a prior in advance. They achieve this by choosing an omniscient (but deliberately imperfect) labeler that gets $\pi_{acc}$ examples always correct (the paper looked at values 50\%, 10\% and 1\%). Again, this was not learned, and was fixed to generate the datasets. Note that the paper refers to these answers as coming from ``the learner'' (which should be the model), but since the policy is fixed it actually does not depend on the model. In a realistic setting one does not have access to an omniscient labeler, one has to learn a policy completely from scratch, online, starting with a random policy, so their setting was not practically viable. In our work, when policy training is viewed as batch learning over iterations of the dataset, updating the policy on each iteration, \citep{weston2016dialog} can be viewed as training only one iteration, whereas we perform multiple iterations. 
This is explained further in Sections \ref{sec:rl} and \ref{sec:online_exp}. We show in our experiments that performance improves over the iterations, i.e. it is better than the first iteration. We show that such online learning works for both reward-based numerical feedback and for forward prediction methods using textual feedback (under certain conditions which are detailed). This is a key contribution of our work. Finally, \citep{weston2016dialog} only conducted experiments on synthetic or templated language, and not real language, especially the feedback from the teacher was scripted. While we believe that synthetic datasets are very important for developing understanding (hence we develop a simulator and conduct experiments also with synthetic data), for a new method to gain traction it must be shown to work on real data. We hence employ Mechanical Turk to collect real language data for the questions and importantly for the teacher feedback and construct experiments in this real setting. \if 0 Let's consider Table 1, which reports test accuracy for the dataset batch size case over several iterations (1 to 6). On each iteration the policy that generated the predictions is fixed, but is updated on the next iteration after learning. On the first iteration you have to start with some kind of policy so we start with a random one. There exists a \pi_acc\% policy from Weston'16 that would obtain the same error rate as that chosen random policy on iteration 1. Values of acc\% higher would be getting better accuracy and lower acc\% lower accuracy. However, in Weston'16 the policy is never updated while training, this is like stopping after the first iteration (column 1, Table 1) and that is the final error rate you get (which is why in Weston'16 on page 5 it is stated explicitly “Note that because the policies are fixed the experiments in this paper are not in a reinforcement learning setting.”). However, in the setting in *this * paper we do update the policy and you get iterations 2, 3 and so on. What we want to show is that the accuracy gets *better* on subsequent iterations. And that is indeed the case, see Table 1, 2nd column (RBI), the accuracy goes from 0.74 to 0.87 to 0.90 to 0.96 and so on. Hence, our approaches are doing better than the original policy they started with. So if you started with one of the fake labelers from Weston'16, regardless of the value of the initial \pi_acc, you would improve over them as well. The point is that real random guessing isn't stronger, not initially, but after training and updating/learning the policy one would hope for it to be stronger. Our experiments showed this was the case, which is a positive result. This is a key contribution of the work. \fi \FloatBarrier \subsection{Additional Experiments For Mechanical Turk Setup}\label{sec:appendix-mturk} In the experiment in Section 5.2 %\ref{sec:mturkexp} we conducted experiments with real human feedback. Here, we compare this to a form of synthetic feedback, mostly as a sanity check, but also to see how much improvement we can get if the signal is simpler and cleaner (as it is synthetic). We hence constructed synthetic feedback for the 10,000 responses, using either Task 2 (positive or negative feedback), Task 3 (answers provided by teacher) or a mix (Task 2+3) where we use one or the other for each example (50\% chance of each). The latter makes the synthetic data have a mixed setup of responses, which more closely mimics the real data case. 
The results are given in Table \ref{table:mturk-res-synth}. The RBI+FP combination is better using the synthetic data than the real data with Task 2+3 or Task 3, which is to be expected, but the real data is competitive, despite the difficulty of dealing with its lexical and semantic variability. The real data is better than using Task 2 synthetic data. For comparison purposes, we also ran a supervised (imitation learning) MemN2N on different sized training sets of turker authored questions with gold annotated labels (so, there are no numerical rewards or textual feedback, this is a pure supervised setting). The results are given in Table \ref{table:mturk-supervised}. They indicate that RBI+FP and even FP alone get close to the performance of fully supervised learning. \subsection{Second Iteration of Feedback} We conducted experiments with an additional iteration of data collection for the case of binary rewards and textual feedback using the synthetic Task 2+3 mix. We selected the best model from the previous training, using RBI+FP with $r=1$ which previously gave a test accuracy of 0.478 (see Table \ref{table:mturk-res-synth}). Using that model as a predictor, we collected an additional 10,000 training examples. We then continue to train our model using the original 1k+10k training set, plus the additional 10k. As before, we report the test accuracy varying $r$ on the additional collected set. We also report the performance from varying $\epsilon$, the proportion of random exploration of predictions on the new set. The results are reported in Table \ref{table:mturk-2nd-it}. Overall, performance is improved in the second iteration, with slightly better performance for large $r$ and $\epsilon=0.5$. However, the improvement is mostly invariant to those parameters, likely because FP takes advantage of feedback from incorrect predictions in any case. In this section, we present the algorithms we used to train MemN2N in an online fashion. Our learning setup can be cast as a particular form of Reinforcement Learning. The policy is implemented by the MemN2N model. The state is the dialogue history. %deterministic and it is set by the simulator. The action space corresponds to the set of answers the MemN2N has to choose from to answer the teacher's question. In our setting, the policy chooses only one action for each episode. The reward is either $1$ (a reward from the teacher when the bot answers correctly) or $0$ otherwise. Note that in our experiments, a reward equal to $0$ might mean that the answer is incorrect or that the positive reward is simply missing. The overall setup is closest to standard contextual bandits, except that the reward is binary. When working with real human dialogues, e.g. collecting data via Mechanical Turk, it is easier to set up a task whereby a bot is deployed to respond to a large batch of utterances, as opposed to a single one. The latter would be more difficult to manage and scale up since it would require some form of synchronization between the model replicas interacting with each human. This is comparable to the real world situation where a teacher can either ask a student a single question and give feedback right away, or set up a test that contains many questions and grade all of them at once. Only after the learner completes all questions, it can hear feedback from the teacher. We use {\it batch size} to refer to how many dialogue episodes the current model is used to collect feedback before updating its parameters. 
In the Reinforcement Learning literature, batch size is related to {\em off-policy} learning since the MemN2N policy is trained using episodes collected with a stale version of the model. Our experiments show that our model and base algorithms are very robust to the choice of batch size, alleviating the need for correction terms in the learning algorithm~\citep{bottou-13}. We consider two strategies: (i) online batch size, whereby the target policy is updated after doing a single pass over each batch (a batch size of 1 reverts to the usual on-policy online learning); and (ii) dataset-sized batch, whereby training is continued to convergence on the batch which is the size of the dataset, and then the target policy is updated with the new model, and a new batch is drawn and the procedure iterates. These strategies can be applied to all the methods we use, described below. Next, we discuss the learning algorithms we considered in this work. \subsubsection{Reward-Based Imitation (RBI)} The simplest algorithm we first consider is the one employed in \cite{weston2016dialog}. RBI relies on positive rewards provided by the teacher. It is trained to imitate the correct behavior of the learner, i.e., learning to predict the correct answers (with reward 1) at training time and disregarding the other ones. This is implemented by using a {MemN2N} that maps a dialogue input to a prediction, i.e. using the cross entropy criterion on the positively rewarded subset of the data. In order to make this work in the online setting which requires exploration to find the correct answer, we employ an $\epsilon$-greedy strategy: the learner makes a prediction using its own model (the answer assigned the highest probability) with probability $1-\epsilon$, otherwise it picks a random answer with probability $\epsilon$. The teacher will then give a reward of $+1$ if the answer is correct, otherwise $0$. The bot will then learn to imitate the correct answers: predicting the correct answers while ignoring the incorrect ones. \subsubsection{REINFORCE} The second algorithm we use is the REINFORCE algorithm \citep{williams1992simple}, which maximizes the expected cumulative reward of the episode, in our case the expected reward provided by the teacher. The expectation is approximated by sampling an answer from the model distribution. Let $a$ denote the answer that the learner gives, $p(a)$ denote the probability that current model assigns to $a$, $r$ denote the teacher's reward, and $J(\theta)$ denote the expectation of the reward. We have: \begin{equation} \nabla J(\theta)\approx\nabla\log p(a) [r-b] \end{equation} where $b$ is the baseline value, which is estimated using a linear regression model that takes as input the output of the memory network after the last hop, and outputs a scalar $b$ denoting the estimation of the future reward. The baseline model is trained by minimizing the mean squared loss between the estimated reward $b$ and actual reward $r$, $||r-b||^2$. We refer the readers to \citep{ranzato2015sequence,zaremba2015reinforcement} for more details. The baseline estimator model is independent from the policy model, and its error is not backpropagated through the policy model. The major difference between RBI and REINFORCE is that (i) the learner only tries to imitate correct behavior in RBI while in REINFORCE it also leverages the incorrect behavior, and (ii) the learner explores using an $\epsilon$-greedy strategy in RBI while in REINFORCE it uses the distribution over actions produced by the model itself. 
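For concreteness, the snippet below sketches how the RBI and REINFORCE objectives described above could be written on top of a model that scores the candidate answers (here in PyTorch); this is a hedged sketch rather than our exact implementation, and the tensor shapes in the comments are assumptions.

\begin{verbatim}
import torch
import torch.nn.functional as F

def rbi_loss(scores, actions, rewards):
    """Reward-Based Imitation: cross entropy restricted to positively rewarded
    examples. scores: (B, L) candidate-answer scores, actions: (B,) chosen answer
    indices, rewards: (B,) binary teacher rewards."""
    mask = rewards > 0
    if mask.sum() == 0:
        return scores.sum() * 0.0                  # nothing to imitate in this batch
    return F.cross_entropy(scores[mask], actions[mask])

def reinforce_loss(scores, actions, rewards, baseline):
    """REINFORCE with a baseline: -(r - b) * log p(a). The baseline regressor is
    trained with a separate MSE loss and its error is not backpropagated through
    the policy (hence the detach)."""
    logp = F.log_softmax(scores, dim=-1).gather(1, actions.unsqueeze(1)).squeeze(1)
    advantage = rewards.float() - baseline.detach()
    policy_loss = -(advantage * logp).mean()
    baseline_loss = F.mse_loss(baseline, rewards.float())
    return policy_loss, baseline_loss
\end{verbatim}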
\subsubsection{Forward Prediction (FP)} FP \citep{weston2016dialog} handles the situation where a numerical reward for a bot's answer is not available, meaning that there are no +1 or 0 labels available after a student's utterance. Instead, the model assumes the teacher gives textual feedback $t$ to the bot's answer, taking the form of a dialogue utterance, and the model tries to predict this instead. Suppose that $x$ denotes the teacher's question and $C$=$c_1$, $c_2$, ..., $c_N$ denotes the dialogue history as before. In {\it FP}, the model first maps the teacher's initial question $x$ and dialogue history $C$ to a vector representation $u$ using a memory network with multiple hops. Then the model will perform another hop of attention over all possible student's answers in $\mathbb{A}$, with an additional part that incorporates the information of which candidate (i.e., $a$) was actually selected in the dialogue: \begin{equation} p_{\hat{a}}=\texttt{softmax}(u^T y_{\hat{a}})~~~~~o=\sum_{\hat{a}\in \mathbb{A}} p_{\hat{a}} (y_{\hat{a}}+\beta\cdot {\bf 1}[\hat{a}=a] ) \end{equation} where $y_{\hat{a}}$ denotes the vector representation for the student's answer candidate $\hat{a}$. $\beta$ is a (learned) d-dimensional vector to signify the actual action $a$ that the student chooses. $o$ is then combined with $u$ to predict the teacher's feedback $t$ using a softmax: \begin{equation} u_1=o+u ~~~~ t=\texttt{softmax} (u_1^T x_{r_1}, u_1^T x_{r_2}, ..., u_1^T x_{r_N}) \end{equation} where $x_{r_{i}}$ denotes the embedding for the $i^{th}$ response. In the online setting, the teacher will give textual feedback, and the learner needs to update its model using the feedback. It was shown in \cite{weston2016dialog} that in an off-line setting this procedure can work either on its own, or in conjunction with a method that uses numerical rewards as well for improved performance. In the online setting, we consider two simple extensions: \begin{itemize} \item $\epsilon$-greedy exploration: with probability $\epsilon$ the student will give a random answer, and with probability $1-\epsilon$ it will give the answer that its model assigns the largest probability. This method enables the model to explore the space of actions and to potentially discover correct answers. \item data balancing: cluster the set of teacher responses $t$ and then balance training across the clusters equally.\footnote{In the simulated data, because the responses are templates, this can be implemented by first randomly sampling the response, and then randomly sampling a story with that response; we keep the history of all stories seen from which we sample. For real data slightly more sophisticated clustering should be used.} This is a type of experience replay \citep{mnih2013playing} but sampling with an evened distribution. Balancing stops part of the distribution dominating the learning. For example, if the model is not exposed to sufficient positive and negative feedback, and one class overly dominates, the learning process degenerates to a model that always predicts the same output regardless of its input. \end{itemize} In our experiments, we used variants of the End-to-End Memory Network (MemN2N) model \citep{sukhbaatar2015end} %as a backbone as our underlying architecture for learning from dialogue. % interactions. The input to MemN2N is the last utterance of the dialogue history $x$ as well as a set of memories (context) $C$=$c_1$, $c_2$, ..., $c_N$. 
The memory $C$ encodes both short-term memory, e.g., dialogue histories between the bot and the teacher, and long-term memories, e.g., the knowledge base facts that the bot has access to. Given the input $x$ and $C$, the goal is to produce an output/label $a$. In the first step, the query $x$ is transformed to a vector representation $u_0$ by summing up its constituent word embeddings: $u_0=Ax$. The input $x$ is a bag-of-words vector and $A$ is the $d\times V$ word embedding matrix where $d$ denotes the embedding dimension and $V$ denotes the vocabulary size. Each memory $c_i$ is similarly transformed to a vector $m_{i}$. The model will read information from the memory by comparing the input representation $u_0$ with the memory vectors $m_{i}$ using softmax weights: \begin{equation} o_1=\sum_{i}p_i^1 m_{i} ~~~~~~~~~~p_i^1=\texttt{softmax}(u_0^T m_i) \end{equation} This process selects memories relevant to the last utterance $x$, i.e., the memories with large values of $p_i^1$. The returned memory vector $o_1$ is the weighted sum of memory vectors. This process can be repeated to query the memory $N$ times (so-called ``hops'') by adding $o_n$ to the original input, $u_1=o_1+u_0$, or to the previous state, $u_n=o_n+u_{n-1}$, and then using $u_n$ to query the memories again. In the end, $u_N$ is input to a softmax function for the final prediction: \begin{equation}\label{eq:a} a=\texttt{softmax} (u_N^T y_1,u_N^T y_2,...,u_N^T y_L) \end{equation} where $y_1, \dots, y_L$ denote the set of candidate answers. If the answer is a word, $y_i$ is the corresponding word embedding. If the answer is a sentence, $y_i$ is the embedding for the sentence, obtained in the same way that we obtain embeddings for the query $x$ and memory $C$. The standard way MemN2N is trained is via a cross entropy criterion on known input-output pairs, which we refer to as supervised or imitation learning. As our work is in a reinforcement learning setup where our model must make predictions to learn, this procedure will not work, so we instead consider reinforcement learning algorithms, which we describe next.
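To make the forward computation above concrete, the following is a minimal PyTorch sketch of the multi-hop attention; as a simplification, a single embedding matrix is shared across the query, memories and candidate answers, whereas the full model may use separate matrices per hop. It is meant as an illustration rather than our exact implementation.

\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class MemN2NSketch(nn.Module):
    """Minimal sketch of the multi-hop memory network described above,
    operating on bag-of-words input vectors."""

    def __init__(self, vocab_size, dim, hops=3):
        super().__init__()
        self.A = nn.Linear(vocab_size, dim, bias=False)    # the d x V embedding matrix A
        self.hops = hops

    def forward(self, x, memories, candidates):
        # x: (B, V) bag of words for the last utterance; memories: (B, N, V);
        # candidates: (L, V) bag-of-words vectors for the candidate answers.
        u = self.A(x)                                       # u_0 = A x
        m = self.A(memories)                                # memory vectors m_i
        for _ in range(self.hops):
            p = F.softmax(torch.einsum("bd,bnd->bn", u, m), dim=-1)
            o = torch.einsum("bn,bnd->bd", p, m)            # weighted sum of memories
            u = u + o                                       # u_n = u_{n-1} + o_n
        y = self.A(candidates)                              # candidate embeddings y_i
        return u @ y.t()                                    # scores fed to the final softmax

# Tiny smoke test with random bag-of-words inputs.
model = MemN2NSketch(vocab_size=50, dim=16, hops=2)
print(model(torch.rand(4, 50), torch.rand(4, 10, 50), torch.rand(7, 50)).shape)
\end{verbatim}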
Dialogue Learning With Human-in-the-Loop
1611.09823
Table 2: Incorporating Feedback From Humans via Mechanical Turk. Textual feedback is provided for 10,000 model predictions (from a model trained with 1k labeled training examples), and additional sparse binary rewards (fraction r of examples have rewards). Forward Prediction and Reward-based Imitation are both useful, with their combination performing best.
[ "Model", "[ITALIC] r=0", "[ITALIC] r=0.1", "[ITALIC] r=0.5", "[ITALIC] r=1" ]
[ [ "Reward Based Imitation (RBI)", "0.333", "0.340", "0.365", "0.375" ], [ "Forward Prediction (FP)", "0.358", "0.358", "0.358", "0.358" ], [ "RBI+FP", "0.431", "0.438", "0.443", "0.441" ] ]
They indicate that both RBI and FP are useful. When rewards are sparse, FP still works via the textual feedback while RBI can only use the initial 1000 examples when r=0. As FP does not use numerical rewards at all, it is invariant to the parameter r. The combination of FP and RBI outperforms either alone.
These are the instructions given for the textual feedback mechanical turk task (we also constructed a separate task to collect the initial questions, not described here):\\ Title: Write brief responses to given dialogue exchanges (about 15 min)\\ Description: Write a brief response to a student's answer to a teacher's question, providing feedback to the student on their answer. Instructions:\\ Each task consists of the following triplets: \begin{enumerate} \item a question by the teacher \item the correct answer(s) to the question (separated by ``OR'') \item a proposed answer in reply to the question from the student \end{enumerate} Consider the scenario where you are the teacher and have already asked the question, and received the reply from the student. Please compose a brief response giving feedback to the student about their answer. The correct answers are provided so that you know whether the student was correct or not. For example, given 1) question: ``what is a color in the united states flag?''; 2) correct answer: ``white, blue, red''; 3) student reply: ``red'', your response could be something like ``that's right!''; for 3) reply: ``green'', you might say ``no that's not right'' or ``nope, a correct answer is actually white''. Please vary responses and try to minimize spelling mistakes. If the same responses are copied/pasted or overused, we'll reject the HIT. Avoid naming the student or addressing ``the class'' directly. We will consider bonuses for higher quality responses during review. Experiments are first conducted using our simulator, and then using Amazon Mechanical Turk with real human subjects taking the role of the teacher\footnote{ Code and data are available at \tiny{\url{https://github.com/facebook/MemNN/tree/master/HITL}}. }. \subsection{Simulator} \paragraph{Online Experiments} \label{sec:online_exp} In our first experiments, we considered both the bAbI and WikiMovies tasks and varied batch size, random exploration rate $\epsilon$, and type of model. Figure~\ref{fig:online-babi-task6} and Figure~\ref{fig:online-movieqa-task6} shows (Task 6) results on bAbI and WikiMovies. Other tasks yield similar conclusions and are reported in the appendix. Overall, we obtain the following conclusions: \begin{itemize} \item In general RBI and FP do work in a reinforcement learning setting, but can perform better with random exploration. \item In particular RBI can fail without exploration. RBI needs random noise for exploring labels otherwise it can get stuck predicting a subset of labels and fail. \item REINFORCE obtains similar performance to RBI with optimal $\epsilon$. %, see figure~\ref{fig:online-comparison-rbi-fp-rf}. \item FP with balancing or with exploration via $\epsilon$ both outperform FP alone. \item For both RBI and FP, performance is largely independent of online batch size. \end{itemize} \if 0 \fi \paragraph{Dataset Batch Size Experiments} Given that larger online batch sizes appear to work well, and that this could be important in a real-world data collection setup where the same model is deployed to gather a large amount of feedback from humans, we conducted further experiments where the batch size is exactly equal to the dataset size and for each batch training is completed to convergence. After the model has been trained on the dataset, it is deployed to collect a new dataset of questions and answers, and the process is repeated. 
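The dataset-sized batch procedure just described can be summarised by the following hedged sketch (assumed interfaces, not the actual experiment code): the policy is frozen while it answers a dataset's worth of questions, retrained to convergence on everything collected so far, and only then redeployed.

\begin{verbatim}
def dataset_batch_training(policy, teacher, sample_question, iterations, dataset_size):
    """Iterate: deploy the frozen policy on `dataset_size` questions, record the
    teacher's feedback/rewards, retrain to convergence, redeploy. `policy`,
    `teacher` and `sample_question` are assumed interfaces."""
    collected = []
    for it in range(iterations):
        for _ in range(dataset_size):
            context = sample_question()
            answer = policy.predict(context)       # policy is fixed while collecting
            feedback, reward = teacher.respond(context, answer)
            collected.append((context, answer, feedback, reward))
        policy.fit_to_convergence(collected)       # e.g. an RBI, FP or RBI+FP update
        print("finished iteration", it + 1, "with", len(collected), "episodes")
    return policy
\end{verbatim}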
Table~\ref{table:dataset-batch-babi} reports test error at each iteration of training, using the bAbI Task $6$ as the case study (see the appendix for results on other tasks). The following conclusions can be made for this setting: \begin{itemize} \item RBI improves in performance as we iterate. Unlike in the online case, RBI does not need random exploration. We believe this is because the first batch, which is collected with a randomly initialized model, contains enough variety of examples with positive rewards that the model does not get stuck predicting a subset of labels. \item FP is not stable in this setting. %, % whereas it was in earlier online experiments. This is because once the model gets very good at making predictions (at the third iteration), it is not exposed to a sufficient number of negative responses anymore. From that point on, learning degenerates and performance drops as the model always predicts the same responses. At the next iteration, it will recover again since it has a more balanced training set, but then it will collapse again in an oscillating behavior. \item FP does work if extended with balancing or random exploration with sufficiently large $\epsilon$. \item RBI+FP also works well and helps with the instability of FP, alleviating the need for random exploration and data balancing. \end{itemize} Overall, our simulation results indicate that while a bot can be effectively trained fully online from bot-teacher interactions, collecting real dialogue data in batches (which is easier to collect and iterate experiments over) is also a viable approach. We hence pursue the latter approach in our next set of experiments. \paragraph{Relation to experiments in \cite{weston2016dialog}} As described in detail in Section \ref{sec:related}, the datasets we use in our experiments were introduced in \citep{weston2015towards}. However, that work involved constructing pre-built fixed policies (and hence, datasets), rather than training the learner in a reinforcement/interactive learning setting using a simulator, as in our work. They achieved this by choosing an omniscient (but deliberately imperfect) labeler that gets $\pi_{acc}$ examples always correct (the paper looked at values 1\%, 10\% and 50\%). In a realistic setting one does not have access to an omniscient labeler; one has to learn a policy completely from scratch, online, starting with a random policy, as we do here. Nevertheless, it is possible to compare our {\em learnt} policies to those results because we use the same train/valid/test splits. The clearest comparison is via Table \ref{table:dataset-batch-babi}, where the policy is learnt using batch iterations of the dataset, updating the policy on each iteration. \cite{weston2015towards} can be viewed as training only one iteration, with a pre-built policy, as explained above, where 59\%, 81\% and 99\% accuracy was obtained for RBI for $\pi_{acc}$ with 1\%, 10\% and 50\% respectively\footnote{Note, this is not the same as a randomly initialized neural network policy, because due to the synthetic construction with an omniscient labeler the labels will be balanced. In our work, we learn the policy from randomly initialized weights which are updated as we learn the policy.}. While $\pi_{acc}$ of 50\% is good enough to solve the task, lower values are not. 
In this work a random policy begins with 74\% accuracy on the first iteration, but importantly on each iteration the policy is updated and improves, with values of 87\%, 90\% on iterations 2 and 3 respectively, and 98\% on iteration 6. This is a key differentiator from the work of \citep{weston2015towards} where such improvement was not shown. We show that such online learning works for both reward-based numerical feedback and for forward prediction methods using textual feedback (as long as balancing or random exploration is performed sufficiently). The final performance outperforms most values of $\pi_{acc}$ from \cite{weston2015towards} unless $\pi$ is so large that the task is already solved. This is a key contribution of our work. Similar conclusions can be made for Figures \ref{fig:online-babi-task6} and \ref{fig:online-movieqa-task6}. Despite our initial random policy starting at close to 0\% accuracy, if random exploration $\epsilon \ge 0.2$ is employed then after a number of epochs the performance is better than most values of $\pi_{acc}$ from \cite{weston2015towards}, e.g. compare the accuracies given in the previous paragraph (59\%, 81\% and 99\%) to Figure \ref{fig:online-babi-task6}, top left. \subsection{Human Feedback} \label{sec:mturkexp} We employed Turkers to both ask questions and then give textual feedback on the bot's answers, as described in Section \ref{sec:data-mturk}. Our experimental protocol was as follows. We first trained a MemN2N using supervised (i.e., imitation) learning on a training set of 1000 questions produced by Turkers and using the known correct answers provided by the original dataset (and no textual feedback). Next, using the trained policy, we collected textual feedback for the responses of the bot for an additional 10,000 questions. Examples from the collected dataset are given in Figure \ref{fig:mturk_data}. Given this dataset, we compare various models: RBI, FP and FP+RBI. As we know the correct answers to the additional questions, we can assign a positive reward to questions the bot got correct. We hence measure the impact of the sparseness of this reward signal, where a fraction $r$ of additional examples have rewards. The models are tested on a test set of $\sim$8,000 questions (produced by Turkers), and hyperparameters are tuned on a similarly sized validation set. Note that this is a harder task than the WikiMovies task in the simulator due to the use of natural language from Turkers, hence lower test performance is expected. Results are given in Table \ref{table:mturk-res}. They indicate that both RBI and FP are useful. When rewards are sparse, FP still works via the textual feedback while RBI can only use the initial 1000 examples when $r=0$. As FP does not use numerical rewards at all, it is invariant to the parameter $r$. The combination of FP and RBI outperforms either alone. We also conducted additional experiments comparing with (i) synthetic feedback and (ii) the fully supervised case, which are given in Appendix \ref{sec:appendix-mturk}. They show that the results with human feedback are competitive with these approaches. 
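As an illustration of how the reward-sparsity parameter $r$ is applied to the collected human-feedback data, the following sketch (with an assumed example format) keeps the binary reward for a random fraction $r$ of examples, which are then usable by RBI, while every example keeps its textual feedback for FP.

\begin{verbatim}
import random

def apply_reward_sparsity(examples, r, rng=random):
    """examples: assumed list of (context, answer, feedback, reward) tuples.
    Returns (rewarded, text_only): a fraction r keeps its binary reward
    (contributes to the RBI loss); the rest keep only the textual feedback
    (contributes to the FP loss)."""
    rewarded, text_only = [], []
    for context, answer, feedback, reward in examples:
        if rng.random() < r:
            rewarded.append((context, answer, feedback, reward))
        else:
            text_only.append((context, answer, feedback, None))
    return rewarded, text_only
\end{verbatim}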
A good conversational agent (which we sometimes refer to as a learner or bot\footnote{In this paper, we refer to a learner (either a human or a bot/dialogue agent which is a machine learning algorithm) as the student, and their more knowledgeable dialogue partner as the teacher.}) should have the ability to learn from online feedback from a teacher: adapting its model when making mistakes and reinforcing the model when the teacher's feedback is positive. This is particularly important in the situation where the bot is initially trained in a supervised way on a fixed synthetic, domain-specific or pre-built dataset before release, but will be exposed to a different environment after release (e.g., more diverse natural language usage when talking with real humans, different distributions, special cases, etc.). Most recent research has focused on training a bot from fixed training sets of labeled data but seldom on how the bot can improve through online interaction with humans. Human (rather than machine) language learning happens during communication \citep{bassiri2011interactional,werts1995instructive}, and not from labeled datasets, hence making this an important subject to study. In this work, we explore this direction by training a bot through interaction with teachers in an online fashion. The task is formalized under the general framework of reinforcement learning via the teacher's (dialogue partner's) feedback to the dialogue actions from the bot. The dialogue takes place in the context of question-answering tasks and the bot has to, given either a short story or a set of facts, answer a set of questions from the teacher. We consider two types of feedback: explicit numerical rewards as in conventional reinforcement learning, and textual feedback which is more natural in human dialogue, following \citep{weston2016dialog}. We consider two online training scenarios: (i) where the task is built with a dialogue simulator allowing for easy analysis and repeatability of experiments; and (ii) where the teachers are real humans using Amazon Mechanical Turk. We explore important issues involved in online learning such as how a bot can be most efficiently trained using a minimal amount of the teacher's feedback, how a bot can harness different types of feedback signal, how to avoid pitfalls such as instability during online learning with different types of feedback via data balancing and exploration, and how to make learning with real humans feasible via data batching. Our findings indicate that it is feasible to build a pipeline that starts from a model trained with fixed data and then learns from interactions with humans to improve itself. %, obtaining accuracy close to a fully supervised dataset. We studied dialogue learning of end-to-end models using textual feedback and numerical rewards. Both fully online and iterative batch settings are viable approaches to policy learning, as long as possible instabilities in the learning algorithms are taken into account. Secondly, we showed for the first time that the recently introduced FP method can work in both an online setting and on real human feedback. Overall, our results indicate that it is feasible to build a practical pipeline that starts with a model trained on an initial fixed dataset, which then learns from interactions with humans in a (semi-)online fashion to improve itself. Future research should work towards doing this in a never-ending learning setup. 
The tasks in \cite{weston2016dialog} were specifically:\\ - {\bf Task 1}: The teacher tells the student exactly what they should have said (supervised baseline).\\ - {\bf Task 2}: The teacher replies with positive textual feedback and reward, or negative textual feedback. \\ - {\bf Task 3}: The teacher gives textual feedback containing the answer when the bot is wrong.\\ - {\bf Task 4}: The teacher provides a hint by providing the class of the correct answer, e.g., ``No it's a movie" for the question ``which movie did Forest Gump star in?".\\ - {\bf Task 5}: The teacher provides a reason why the student's answer is wrong by pointing out the relevant supporting fact from the knowledge base.\\ - {\bf Task 6}: The teacher gives positive reward only 50\% of the time. \\ - {\bf Task 7}: Rewards are missing and the teacher only gives natural language feedback.\\ - {\bf Task 8}: Combines Tasks 1 and 2 to see whether a learner can learn successfully from both forms of supervision at once.\\ - {\bf Task 9}: The bot asks questions of the teacher about what it has done wrong.\\ - {\bf Task 10}: The bot will receive a hint rather than the correct answer after asking for help. We refer the readers to \citep{weston2016dialog} for more detailed descriptions and the motivation behind these tasks. The difference in our system is that the model can be trained on-the-fly via the simulator: after receiving feedback and/or rewards, the model can update itself and apply its learning to the next episode. We present results on Tasks 2, 3 and 4 in this appendix \documentclass{article} % For LaTeX2e \newcommand{\taskTwo}{{\it Question Clarification-Verification}} \title{Dialogue Learning With Human-in-the-Loop} \author{Jiwei Li, Alexander H. Miller, Sumit Chopra, Marc'Aurelio Ranzato, Jason Weston \\ Facebook AI Research, \\ New York, USA \\ \texttt{\{jiwel,ahm,spchopra,ranzato,jase\}@fb.com} \\ } \newcommand{\fix}{\marginpar{FIX}} \newcommand{\new}{\marginpar{NEW}} \newcommand{\MINUS}{} \begin{document} \maketitle \begin{abstract} An important aspect of developing conversational agents is to give a bot the ability to improve through communicating with humans and to learn from the mistakes that it makes. Most research has focused on learning from fixed training sets of labeled data rather than interacting with a dialogue partner in an online fashion. In this paper we explore this direction in a reinforcement learning setting where the bot improves its question-answering ability from feedback a teacher gives following its generated responses. We build a simulator that tests various aspects of such learning in a synthetic environment, and introduce models that work in this regime. Finally, real experiments with Mechanical Turk validate the approach. \end{abstract} \section{Introduction} \input{intro} \section{Related Work} \label{sec:related} \input{related} \section{Dataset and Tasks} \input{data} \section{Methods} \subsection{Model Architecture} \input{model} \subsection{Reinforcement Learning} \label{sec:rl} % Algorithms} \input{methods} \section{Experiments} \input{exp} \section{Conclusion} \input{conc} \bibliographystyle{iclr2017_conference} \newpage \appendix \section{Further Simulator Task Details} \label{sec:extra_data} \input{extra_data} \section{Instructions given to Turkers} \label{sec:mturk} \input{mturk} \newpage \section{Additional Experiments} \input{extra_exp} \end{document} We begin by describing the data setup we use. 
In our first set of experiments we build a simulator as a testbed for learning algorithms. In our second set of experiments we use Mechanical Turk to provide real human teachers giving feedback. \subsection{Simulator} The simulator adapts two existing fixed datasets to our online setting. Following \cite{weston2016dialog}, we use (i) the single supporting fact problem from the bAbI datasets \citep{weston2015towards} which consists of 1000 short stories from a simulated world interspersed with questions; and (ii) the WikiMovies dataset \citep{weston2015towards} which consists of roughly 100k (templated) questions over 75k entities based on questions with answers in the open movie database (OMDb). Each dialogue takes place between a teacher, scripted by the simulation, and a bot. The communication protocol is as follows: (1) the teacher first asks a question from the fixed set of questions existing in the dataset, (2) the bot answers the question, and finally (3) the teacher gives feedback on the bot's answer. We follow the paradigm defined in \citep{weston2016dialog} where the teacher's feedback takes the form of either textual feedback, a numerical reward, or both, depending on the task. For each dataset, there are ten tasks, which are further described in Sec.~\ref{sec:extra_data} and illustrated in Figure~\ref{Tasks} of the appendix. We also refer the readers to \citep{weston2016dialog} for more detailed descriptions and the motivation behind these tasks. In the main text of this paper we only consider %Tasks 3 (`answers supplied by teacher') Task 6 (``partial feedback''): % from \cite{weston2016dialog}: the teacher replies with positive textual feedback (6 possible templates) when the bot answers correctly, and positive reward is given only 50\% of the time. When the bot is wrong, the teacher gives textual feedback containing the answer. Descriptions and experiments on the other tasks are detailed in the appendix. Example dialogues are given in Figure \ref{fig:simulator-examples}. The difference between our simulation and the original fixed tasks of~\cite{weston2016dialog} is that models are trained on-the-fly. % via the simulator. After receiving feedback and/or rewards, we update the model (policy) and then deploy it to collect teacher's feedback in the next episode or batch. This means the model's policy affects the data which is used to train it, which was not the case in the previous work. \definecolor{dred}{rgb}{0.7,0.0,0.0} \newcommand{\PLUS}{{\textcolor{blue}{(+)}}} \newcommand{\SPACE}{~~~~~~~~~~~~~~~~~~~~~~} \subsection{Mechanical Turk Experiments} \label{sec:data-mturk} Finally, we extended WikiMovies using Mechanical Turk so that real human teachers are giving feedback rather than using a simulation. As both the questions and feedback are templated in the simulation, they are now both replaced with natural human utterances. Rather than having a set of simulated tasks, we have only one task, and we gave instructions to the teachers that they could give feedback as they see fit. The exact instructions given to the Turkers is given in Appendix \ref{sec:mturk}. In general, each independent response contains feedback like (i) positive or negative sentences; or (ii) a phrase containing the answer or (iii) a hint, which are similar to setups defined in the simulator. However, some human responses cannot be so easily categorized, and the lexical variability is much larger in human responses. Some examples of the collected data are given in Figure~\ref{fig:mturk_data}. 
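For reference, the simulated teacher's Task 6 (``partial feedback'') protocol described above can be sketched as follows; the template wording is an illustrative assumption (the simulator uses six positive-feedback templates of its own).

\begin{verbatim}
import random

POSITIVE_TEMPLATES = ["Yes, that's right.", "Correct!", "Well done.",
                      "That's the answer.", "Yes, exactly.", "Right."]

def task6_teacher(answer, gold_answers, rng=random):
    """Task 6 teacher: positive textual feedback when the bot is right, with a +1
    reward given only 50% of the time; when the bot is wrong, the feedback
    contains a correct answer and no reward is given."""
    if answer in gold_answers:
        reward = 1 if rng.random() < 0.5 else 0
        return rng.choice(POSITIVE_TEMPLATES), reward
    return "No, the answer is {}.".format(rng.choice(sorted(gold_answers))), 0
\end{verbatim}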
Reinforcement learning has been widely applied to dialogue, especially in slot filling to solve domain-specific tasks \citep{walker2000application,schatzmann2006survey,singh2000empirical,singh2002optimizing}. Efforts include Markov Decision Processes (MDPs) \citep{levin1997learning,levin2000stochastic,walker2003trainable,pieraccini2009we}, POMDP models \citep{young2010hidden,young2013pomdp,gavsic2013pomdp,gavsic2014incremental} and policy learning \citep{su2016continuously}. Such a line of research focuses mainly on frames with slots to fill, where the bot will use reinforcement learning to model a state transition pattern, generating dialogue utterances to prompt the appropriate user responses to put in the desired slots. This goal is different from ours, where we study end-to-end learning systems and also consider non-reward based setups via textual feedback. Our work is related to the line of research that focuses on supervised learning for question answering (QA) from dialogues \citep{dodge2015evaluating,weston2016dialog}, either given a database of knowledge \citep{bordes2015large,miller2016key} or short texts \citep{weston2015towards,hermann2015teaching,rajpurkar2016squad}. In our work, the discourse includes the statements made in the past, the question and answer, and crucially the response from the teacher. The latter is what makes the setting different from the standard QA setting, i.e. we use methods that leverage this response also, not just answering questions. Further, QA works only consider fixed datasets with gold annotations, i.e. they do not consider a reinforcement learning setting. Our work is closely related to a recent work from \cite{weston2016dialog} that learns through conducting conversations where supervision is given naturally in the response during the conversation. That work introduced the use of forward prediction that learns by predicting the teacher's feedback, in addition to using reward-based learning of correct answers. However, two important issues were not addressed: (i) it did not use a reinforcement learning setting, but instead used pre-built datasets with fixed policies given in advance; and (ii) experiments used only simulated and no real language data. Hence, models that can learn policies from real online communication were not investigated. To make the differences with our work clear, we will now detail these points further. The experiments in \citep{weston2016dialog} involve constructing pre-built fixed datasets, rather than training the learner within a simulator, as in our work. Pre-built datasets can only be made by fixing a prior in advance. They achieve this by choosing an omniscient (but deliberately imperfect) labeler that gets $\pi_{acc}$ examples always correct (the paper looked at values 50\%, 10\% and 1\%). Again, this was not learned, and was fixed to generate the datasets. Note that the paper refers to these answers as coming from ``the learner'' (which should be the model), but since the policy is fixed it actually does not depend on the model. In a realistic setting one does not have access to an omniscient labeler, one has to learn a policy completely from scratch, online, starting with a random policy, so their setting was not practically viable. In our work, when policy training is viewed as batch learning over iterations of the dataset, updating the policy on each iteration, \citep{weston2016dialog} can be viewed as training only one iteration, whereas we perform multiple iterations. 
Dialogue Learning With Human-in-the-Loop
1611.09823
Table 4: Incorporating Feedback From Humans via Mechanical Turk: comparing real human feedback to synthetic feedback. Textual feedback is provided for 10,000 model predictions (from a model trained with 1k labeled training examples), and additional sparse binary rewards (fraction r of examples have rewards). We compare real feedback (rows 2 and 3) to synthetic feedback when using FP or RBI+FP (rows 4 and 5).
[ "Model", "[ITALIC] r=0", "[ITALIC] r=0.1", "[ITALIC] r=0.5", "[ITALIC] r=1" ]
[ [ "Reward Based Imitation (RBI)", "0.333", "0.340", "0.365", "0.375" ], [ "Forward Prediction (FP) [real]", "0.358", "0.358", "0.358", "0.358" ], [ "RBI+FP [real]", "0.431", "0.438", "0.443", "0.441" ], [ "Forward Prediction (FP) [synthetic Task 2]", "0.188", "0.188", "0.188", "0.188" ], [ "Forward Prediction (FP) [synthetic Task 2+3]", "0.328", "0.328", "0.328", "0.328" ], [ "Forward Prediction (FP) [synthetic Task 3]", "0.361", "0.361", "0.361", "0.361" ], [ "RBI+FP [synthetic Task 2]", "0.382", "0.383", "0.407", "0.408" ], [ "RBI+FP [synthetic Task 2+3]", "0.459", "0.465", "0.464", "0.478" ], [ "RBI+FP [synthetic Task 3]", "0.473", "0.486", "0.490", "0.494" ] ]
In the experiment in Section 5.2 we conducted experiments with real human feedback. Here, we compare this to a form of synthetic feedback, mostly as a sanity check, but also to see how much improvement we can get if the signal is simpler and cleaner (as it is synthetic). We hence constructed synthetic feedback for the 10,000 responses, using either Task 2 (positive or negative feedback), Task 3 (answers provided by teacher) or a mix (Task 2+3) where we use one or the other for each example (50% chance of each). The latter makes the synthetic data have a mixed setup of responses, which more closely mimics the real data case. The RBI+FP combination is better using the synthetic data than the real data with Task 2+3 or Task 3, which is to be expected, but the real data is competitive, despite the difficulty of dealing with its lexical and semantic variability. The real data is better than using Task 2 synthetic data. We conducted experiments with an additional iteration of data collection for the case of binary rewards and textual feedback using the synthetic Task 2+3 mix. Using that model as a predictor, we collected an additional 10,000 training examples. We then continue to train our model using the original 1k+10k training set, plus the additional 10k. As before, we report the test accuracy varying r on the additional collected set. We also report the performance from varying ϵ, the proportion of random exploration of predictions on the new set. Overall, performance is improved in the second iteration, with slightly better performance for large r and ϵ=0.5. However, the improvement is mostly invariant to those parameters, likely because FP takes advantage of feedback from incorrect predictions in any case.
These are the instructions given for the textual feedback mechanical turk task (we also constructed a separate task to collect the initial questions, not described here):\\ Title: Write brief responses to given dialogue exchanges (about 15 min)\\ Description: Write a brief response to a student's answer to a teacher's question, providing feedback to the student on their answer. Instructions:\\ Each task consists of the following triplets: \begin{enumerate} \item a question by the teacher \item the correct answer(s) to the question (separated by ``OR'') \item a proposed answer in reply to the question from the student \end{enumerate} Consider the scenario where you are the teacher and have already asked the question, and received the reply from the student. Please compose a brief response giving feedback to the student about their answer. The correct answers are provided so that you know whether the student was correct or not. For example, given 1) question: ``what is a color in the united states flag?''; 2) correct answer: ``white, blue, red''; 3) student reply: ``red'', your response could be something like ``that's right!''; for 3) reply: ``green'', you might say ``no that's not right'' or ``nope, a correct answer is actually white''. Please vary responses and try to minimize spelling mistakes. If the same responses are copied/pasted or overused, we'll reject the HIT. Avoid naming the student or addressing ``the class'' directly. We will consider bonuses for higher quality responses during review. Experiments are first conducted using our simulator, and then using Amazon Mechanical Turk with real human subjects taking the role of the teacher\footnote{ Code and data are available at \tiny{\url{https://github.com/facebook/MemNN/tree/master/HITL}}. }. \subsection{Simulator} \paragraph{Online Experiments} \label{sec:online_exp} In our first experiments, we considered both the bAbI and WikiMovies tasks and varied batch size, random exploration rate $\epsilon$, and type of model. Figure~\ref{fig:online-babi-task6} and Figure~\ref{fig:online-movieqa-task6} shows (Task 6) results on bAbI and WikiMovies. Other tasks yield similar conclusions and are reported in the appendix. Overall, we obtain the following conclusions: \begin{itemize} \item In general RBI and FP do work in a reinforcement learning setting, but can perform better with random exploration. \item In particular RBI can fail without exploration. RBI needs random noise for exploring labels otherwise it can get stuck predicting a subset of labels and fail. \item REINFORCE obtains similar performance to RBI with optimal $\epsilon$. %, see figure~\ref{fig:online-comparison-rbi-fp-rf}. \item FP with balancing or with exploration via $\epsilon$ both outperform FP alone. \item For both RBI and FP, performance is largely independent of online batch size. \end{itemize} \if 0 \fi \paragraph{Dataset Batch Size Experiments} Given that larger online batch sizes appear to work well, and that this could be important in a real-world data collection setup where the same model is deployed to gather a large amount of feedback from humans, we conducted further experiments where the batch size is exactly equal to the dataset size and for each batch training is completed to convergence. After the model has been trained on the dataset, it is deployed to collect a new dataset of questions and answers, and the process is repeated. 
Table~\ref{table:dataset-batch-babi} reports test error at each iteration of training, using the bAbI Task $6$ as the case study (see the appendix for results on other tasks). The following conclusions can be made for this setting: \begin{itemize} \item RBI improves in performance as we iterate. Unlike in the online case, RBI does not need random exploration. We believe this is because the first batch, which is collected with a randomly initialized model, contains enough variety of examples with positive rewards that the model does not get stuck predicting a subset of labels. \item FP is not stable in this setting. %, % whereas it was in earlier online experiments. This is because once the model gets very good at making predictions (at the third iteration), it is not exposed to a sufficient number of negative responses anymore. From that point on, learning degenerates and performance drops as the model always predicts the same responses. At the next iteration, it will recover again since it has a more balanced training set, but then it will collapse again in an oscillating behavior. \item FP does work if extended with balancing or random exploration with sufficiently large $\epsilon$. \item RBI+FP also works well and helps with the instability of FP, alleviating the need for random exploration and data balancing. \end{itemize} Overall, our simulation results indicate that while a bot can be effectively trained fully online from bot-teacher interactions, collecting real dialogue data in batches (which is easier to collect and iterate experiments over) is also a viable approach. We hence pursue the latter approach in our next set of experiments. \paragraph{Relation to experiments in \cite{weston2016dialog}} As described in detail in Section \ref{sec:related} the datasets we use in our experiments were introduced in \citep{weston2015towards}. However, that work involved constructing pre-built fixed policies (and hence, datasets), rather than training the learner in a reinforcement/interactive learning using a simulator, as in our work. They achieved this by choosing an omniscient (but deliberately imperfect) labeler that gets $\pi_{acc}$ examples always correct (the paper looked at values 1\%, 10\% and 50\%). In a realistic setting one does not have access to an omniscient labeler, one has to learn a policy completely from scratch, online, starting with a random policy, as we do here. Nevertheless, it is possible to compare our {\em learnt} policies to those results because we use the same train/valid/test splits. The clearest comparison comparison is via Table \ref{table:dataset-batch-babi}, where the policy is learnt using batch iterations of the dataset, updating the policy on each iteration. \cite{weston2015towards} can be viewed as training only one iteration, with a pre-built policy, as explained above, where 59\%, 81\% and 99\% accuracy was obtained for RBI for $\pi_{acc}$ with 1\%, 10\% and 50\% respectively\footnote{Note, this is not the same as a randomly initialized neural network policy, because due to the synthetic construction with an omniscient labeler the labels will be balanced. In our work, we learn the policy from randomly initialized weights which are updated as we learn the policy.}. While $\pi_{acc}$ of 50\% is good enough to solve the task, lower values are not. 
In this work a random policy begins with 74\% accuracy on the first iteration, but importantly on each iteration the policy is updated and improves, with values of 87\%, 90\% on iterations 2 and 3 respectively, and 98\% on iteration 6. This is a key differentiator from the work of \citep{weston2015towards} where such improvement was not shown. We show that such online learning works for both reward-based numerical feedback and for forward prediction methods using textual feedback (as long as balancing or random exploration is performed sufficiently). The final performance outperforms most values of $\pi_{acc}$ from \cite{weston2015towards} unless $\pi_{acc}$ is so large that the task is already solved. This is a key contribution of our work. Similar conclusions can be made for Figures \ref{fig:online-babi-task6} and \ref{fig:online-movieqa-task6}. Despite our initial random policy starting at close to 0\% accuracy, if random exploration $\epsilon \ge 0.2$ is employed then after a number of epochs the performance is better than most values of $\pi_{acc}$ from \cite{weston2015towards}, e.g. compare the accuracies given in the previous paragraph (59\%, 81\% and 99\%) to Figure \ref{fig:online-babi-task6}, top left. \subsection{Human Feedback} \label{sec:mturkexp} We employed Turkers to both ask questions and then give textual feedback on the bot's answers, as described in Section \ref{sec:data-mturk}. Our experimental protocol was as follows. We first trained a MemN2N using supervised (i.e., imitation) learning on a training set of 1000 questions produced by Turkers and using the known correct answers provided by the original dataset (and no textual feedback). Next, using the trained policy, we collected textual feedback for the responses of the bot for an additional 10,000 questions. Examples from the collected dataset are given in Figure \ref{fig:mturk_data}. Given this dataset, we compare various models: RBI, FP and FP+RBI. As we know the correct answers to the additional questions, we can assign a positive reward to questions the bot got correct. We hence measure the impact of the sparseness of this reward signal, where a fraction $r$ of additional examples have rewards. The models are tested on a test set of $\sim$8,000 questions (produced by Turkers), and hyperparameters are tuned on a similarly sized validation set. Note this is a harder task than the WikiMovies task in the simulator due to the use of natural language from Turkers, hence lower test performance is expected. Results are given in Table \ref{table:mturk-res}. They indicate that both RBI and FP are useful. When rewards are sparse, FP still works via the textual feedback while RBI can only use the initial 1000 examples when $r=0$. As FP does not use numerical rewards at all, it is invariant to the parameter $r$. The combination of FP and RBI outperforms either alone. We also conducted additional experiments comparing with (i) synthetic feedback and (ii) the fully supervised case, which are given in Appendix \ref{sec:appendix-mturk}. They show that the results with human feedback are competitive with these approaches. 
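As an illustration only, the snippet below sketches how a reward signal that is present for just a fraction $r$ of the collected examples can be simulated; the field name \texttt{is\_correct} is hypothetical and does not correspond to our actual data format.
\begin{verbatim}
import random
from typing import List, Optional

def sparsify_rewards(examples: List[dict], r: float, seed: int = 0) -> List[dict]:
    """Attach a binary reward to a fraction r of examples; the rest keep only
    textual feedback. RBI can then imitate only the reward==1 subset, while FP
    learns from the teacher feedback of every example and is thus unaffected by r."""
    rng = random.Random(seed)
    out = []
    for ex in examples:
        reward: Optional[int] = None
        if rng.random() < r:                      # reward observed for a fraction r
            reward = 1 if ex["is_correct"] else 0
        out.append({**ex, "reward": reward})
    return out
\end{verbatim}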
A good conversational agent (which we sometimes refer to as a learner or bot\footnote{In this paper, we refer to a learner (either a human or a bot/dialogue agent which is a machine learning algorithm) as the student, and their more knowledgeable dialogue partner as the teacher.}) should have the ability to learn from the online feedback from a teacher: adapting its model when making mistakes and reinforcing the model when the teacher's feedback is positive. This is particularly important in the situation where the bot is initially trained in a supervised way on a fixed synthetic, domain-specific or pre-built dataset before release, but will be exposed to a different environment after release (e.g., more diverse natural language utterance usage when talking with real humans, different distributions, special cases, etc.). Most recent research has focused on training a bot from fixed training sets of labeled data but seldom on how the bot can improve through online interaction with humans. Human (rather than machine) language learning happens during communication \citep{bassiri2011interactional,werts1995instructive}, and not from labeled datasets, hence making this an important subject to study. In this work, we explore this direction by training a bot through interaction with teachers in an online fashion. The task is formalized under the general framework of reinforcement learning via the teacher's (dialogue partner's) feedback to the dialogue actions from the bot. The dialogue takes place in the context of question-answering tasks and the bot has to, given either a short story or a set of facts, answer a set of questions from the teacher. We consider two types of feedback: explicit numerical rewards as in conventional reinforcement learning, and textual feedback which is more natural in human dialogue, following \citep{weston2016dialog}. We consider two online training scenarios: (i) where the task is built with a dialogue simulator allowing for easy analysis and repeatability of experiments; and (ii) where the teachers are real humans using Amazon Mechanical Turk. We explore important issues involved in online learning such as how a bot can be most efficiently trained using a minimal amount of teacher's feedback, how a bot can harness different types of feedback signal, how to avoid pitfalls such as instability during online learning with different types of feedback via data balancing and exploration, and how to make learning with real humans feasible via data batching. Our findings indicate that it is feasible to build a pipeline that starts from a model trained with fixed data and then learns from interactions with humans to improve itself. %, obtaining accuracy close to a fully supervised dataset. We studied dialogue learning of end-to-end models using textual feedback and numerical rewards. Both fully online and iterative batch settings are viable approaches to policy learning, as long as possible instabilities in the learning algorithms are taken into account. Secondly, we showed for the first time that the recently introduced FP method can work in both an online setting and on real human feedback. Overall, our results indicate that it is feasible to build a practical pipeline that starts with a model trained on an initial fixed dataset, which then learns from interactions with humans in a (semi-)online fashion to improve itself. Future research should work towards doing this in a never-ending learning setup. 
The tasks in \cite{weston2016dialog} were specifically:\\ - {\bf Task 1}: The teacher tells the student exactly what they should have said (supervised baseline).\\ - {\bf Task 2}: The teacher replies with positive textual feedback and reward, or negative textual feedback. \\ - {\bf Task 3}: The teacher gives textual feedback containing the answer when the bot is wrong.\\ - {\bf Task 4}: The teacher provides a hint by providing the class of the correct answer, e.g., ``No it's a movie" for the question ``which movie did Forest Gump star in?".\\ - {\bf Task 5}: The teacher provides a reason why the student's answer is wrong by pointing out the relevant supporting fact from the knowledge base.\\ - {\bf Task 6}: The teacher gives positive reward only 50\% of the time. \\ - {\bf Task 7}: Rewards are missing and the teacher only gives natural language feedback.\\ - {\bf Task 8}: Combines Tasks 1 and 2 to see whether a learner can learn successfully from both forms of supervision at once.\\ - {\bf Task 9}: The bot asks questions of the teacher about what it has done wrong.\\ - {\bf Task 10}: The bot will receive a hint rather than the correct answer after asking for help. We refer the readers to \citep{weston2016dialog} for more detailed descriptions and the motivation behind these tasks. The difference in our system is that the model can be trained on-the-fly via the simulator: after receiving feedback and/or rewards, the model can update itself and apply its learning to the next episode. We present results on Tasks 2, 3 and 4 in this appendix \documentclass{article} % For LaTeX2e \newcommand{\taskTwo}{{\it Question Clarification-Verification}} \title{Dialogue Learning With Human-in-the-Loop} \author{Jiwei Li, Alexander H. Miller, Sumit Chopra, Marc'Aurelio Ranzato, Jason Weston \\ Facebook AI Research, \\ New York, USA \\ \texttt{\{jiwel,ahm,spchopra,ranzato,jase\}@fb.com} \\ } \newcommand{\fix}{\marginpar{FIX}} \newcommand{\new}{\marginpar{NEW}} \newcommand{\MINUS}{} \begin{document} \maketitle \begin{abstract} An important aspect of developing conversational agents is to give a bot the ability to improve through communicating with humans and to learn from the mistakes that it makes. Most research has focused on learning from fixed training sets of labeled data rather than interacting with a dialogue partner in an online fashion. In this paper we explore this direction in a reinforcement learning setting where the bot improves its question-answering ability from feedback a teacher gives following its generated responses. We build a simulator that tests various aspects of such learning in a synthetic environment, and introduce models that work in this regime. Finally, real experiments with Mechanical Turk validate the approach. \end{abstract} \section{Introduction} \input{intro} \section{Related Work} \label{sec:related} \input{related} \section{Dataset and Tasks} \input{data} \section{Methods} \subsection{Model Architecture} \input{model} \subsection{Reinforcement Learning} \label{sec:rl} % Algorithms} \input{methods} \section{Experiments} \input{exp} \section{Conclusion} \input{conc} \bibliographystyle{iclr2017_conference} \newpage \appendix \section{Further Simulator Task Details} \label{sec:extra_data} \input{extra_data} \section{Instructions given to Turkers} \label{sec:mturk} \input{mturk} \newpage \section{Additional Experiments} \input{extra_exp} \end{document} We begin by describing the data setup we use. 
In our first set of experiments we build a simulator as a testbed for learning algorithms. In our second set of experiments we use Mechanical Turk to provide real human teachers giving feedback. \subsection{Simulator} The simulator adapts two existing fixed datasets to our online setting. Following \cite{weston2016dialog}, we use (i) the single supporting fact problem from the bAbI datasets \citep{weston2015towards} which consists of 1000 short stories from a simulated world interspersed with questions; and (ii) the WikiMovies dataset \citep{weston2015towards} which consists of roughly 100k (templated) questions over 75k entities based on questions with answers in the open movie database (OMDb). Each dialogue takes place between a teacher, scripted by the simulation, and a bot. The communication protocol is as follows: (1) the teacher first asks a question from the fixed set of questions existing in the dataset, (2) the bot answers the question, and finally (3) the teacher gives feedback on the bot's answer. We follow the paradigm defined in \citep{weston2016dialog} where the teacher's feedback takes the form of either textual feedback, a numerical reward, or both, depending on the task. For each dataset, there are ten tasks, which are further described in Sec.~\ref{sec:extra_data} and illustrated in Figure~\ref{Tasks} of the appendix. We also refer the readers to \citep{weston2016dialog} for more detailed descriptions and the motivation behind these tasks. In the main text of this paper we only consider %Tasks 3 (`answers supplied by teacher') Task 6 (``partial feedback''): % from \cite{weston2016dialog}: the teacher replies with positive textual feedback (6 possible templates) when the bot answers correctly, and positive reward is given only 50\% of the time. When the bot is wrong, the teacher gives textual feedback containing the answer. Descriptions and experiments on the other tasks are detailed in the appendix. Example dialogues are given in Figure \ref{fig:simulator-examples}. The difference between our simulation and the original fixed tasks of~\cite{weston2016dialog} is that models are trained on-the-fly. % via the simulator. After receiving feedback and/or rewards, we update the model (policy) and then deploy it to collect teacher's feedback in the next episode or batch. This means the model's policy affects the data which is used to train it, which was not the case in the previous work. \definecolor{dred}{rgb}{0.7,0.0,0.0} \newcommand{\PLUS}{{\textcolor{blue}{(+)}}} \newcommand{\SPACE}{~~~~~~~~~~~~~~~~~~~~~~} \subsection{Mechanical Turk Experiments} \label{sec:data-mturk} Finally, we extended WikiMovies using Mechanical Turk so that real human teachers are giving feedback rather than using a simulation. As both the questions and feedback are templated in the simulation, they are now both replaced with natural human utterances. Rather than having a set of simulated tasks, we have only one task, and we gave instructions to the teachers that they could give feedback as they see fit. The exact instructions given to the Turkers is given in Appendix \ref{sec:mturk}. In general, each independent response contains feedback like (i) positive or negative sentences; or (ii) a phrase containing the answer or (iii) a hint, which are similar to setups defined in the simulator. However, some human responses cannot be so easily categorized, and the lexical variability is much larger in human responses. Some examples of the collected data are given in Figure~\ref{fig:mturk_data}. 
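Returning to the simulated setting, one Task 6 teacher turn as described above can be sketched as follows; this is illustrative Python, and the feedback templates are stand-ins for the six templates used by the simulator rather than the exact strings.
\begin{verbatim}
import random

POSITIVE_TEMPLATES = [          # illustrative; the simulator uses 6 such templates
    "Yes, that's right!", "Correct!", "That's correct.",
    "Yes, that's it.", "Well done.", "Right answer.",
]

def task6_teacher_turn(bot_answer: str, correct_answers: set, rng: random.Random):
    """Task 6 ("partial feedback"): positive textual feedback plus a +1 reward
    given only 50% of the time when the bot is right; textual feedback that
    contains the answer (and no reward) when the bot is wrong."""
    if bot_answer in correct_answers:
        feedback = rng.choice(POSITIVE_TEMPLATES)
        reward = 1 if rng.random() < 0.5 else 0
    else:
        feedback = "No, the answer is " + rng.choice(sorted(correct_answers)) + "."
        reward = 0
    return feedback, reward
\end{verbatim}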
Reinforcement learning has been widely applied to dialogue, especially in slot filling to solve domain-specific tasks \citep{walker2000application,schatzmann2006survey,singh2000empirical,singh2002optimizing}. Efforts include Markov Decision Processes (MDPs) \citep{levin1997learning,levin2000stochastic,walker2003trainable,pieraccini2009we}, POMDP models \citep{young2010hidden,young2013pomdp,gavsic2013pomdp,gavsic2014incremental} and policy learning \citep{su2016continuously}. Such a line of research focuses mainly on frames with slots to fill, where the bot will use reinforcement learning to model a state transition pattern, generating dialogue utterances to prompt the appropriate user responses to put in the desired slots. This goal is different from ours, where we study end-to-end learning systems and also consider non-reward based setups via textual feedback. Our work is related to the line of research that focuses on supervised learning for question answering (QA) from dialogues \citep{dodge2015evaluating,weston2016dialog}, either given a database of knowledge \citep{bordes2015large,miller2016key} or short texts \citep{weston2015towards,hermann2015teaching,rajpurkar2016squad}. In our work, the discourse includes the statements made in the past, the question and answer, and crucially the response from the teacher. The latter is what makes the setting different from the standard QA setting, i.e. we use methods that leverage this response also, not just answering questions. Further, QA works only consider fixed datasets with gold annotations, i.e. they do not consider a reinforcement learning setting. Our work is closely related to a recent work from \cite{weston2016dialog} that learns through conducting conversations where supervision is given naturally in the response during the conversation. That work introduced the use of forward prediction that learns by predicting the teacher's feedback, in addition to using reward-based learning of correct answers. However, two important issues were not addressed: (i) it did not use a reinforcement learning setting, but instead used pre-built datasets with fixed policies given in advance; and (ii) experiments used only simulated and no real language data. Hence, models that can learn policies from real online communication were not investigated. To make the differences with our work clear, we will now detail these points further. The experiments in \citep{weston2016dialog} involve constructing pre-built fixed datasets, rather than training the learner within a simulator, as in our work. Pre-built datasets can only be made by fixing a prior in advance. They achieve this by choosing an omniscient (but deliberately imperfect) labeler that gets $\pi_{acc}$ examples always correct (the paper looked at values 50\%, 10\% and 1\%). Again, this was not learned, and was fixed to generate the datasets. Note that the paper refers to these answers as coming from ``the learner'' (which should be the model), but since the policy is fixed it actually does not depend on the model. In a realistic setting one does not have access to an omniscient labeler, one has to learn a policy completely from scratch, online, starting with a random policy, so their setting was not practically viable. In our work, when policy training is viewed as batch learning over iterations of the dataset, updating the policy on each iteration, \citep{weston2016dialog} can be viewed as training only one iteration, whereas we perform multiple iterations. 
This is explained further in Sections \ref{sec:rl} and \ref{sec:online_exp}. We show in our experiments that performance improves over the iterations, i.e. it is better than the first iteration. We show that such online learning works for both reward-based numerical feedback and for forward prediction methods using textual feedback (under certain conditions which are detailed). This is a key contribution of our work. Finally, \citep{weston2016dialog} only conducted experiments on synthetic or templated language, and not real language, especially the feedback from the teacher was scripted. While we believe that synthetic datasets are very important for developing understanding (hence we develop a simulator and conduct experiments also with synthetic data), for a new method to gain traction it must be shown to work on real data. We hence employ Mechanical Turk to collect real language data for the questions and importantly for the teacher feedback and construct experiments in this real setting. \if 0 Let's consider Table 1, which reports test accuracy for the dataset batch size case over several iterations (1 to 6). On each iteration the policy that generated the predictions is fixed, but is updated on the next iteration after learning. On the first iteration you have to start with some kind of policy so we start with a random one. There exists a \pi_acc\% policy from Weston'16 that would obtain the same error rate as that chosen random policy on iteration 1. Values of acc\% higher would be getting better accuracy and lower acc\% lower accuracy. However, in Weston'16 the policy is never updated while training, this is like stopping after the first iteration (column 1, Table 1) and that is the final error rate you get (which is why in Weston'16 on page 5 it is stated explicitly “Note that because the policies are fixed the experiments in this paper are not in a reinforcement learning setting.”). However, in the setting in *this * paper we do update the policy and you get iterations 2, 3 and so on. What we want to show is that the accuracy gets *better* on subsequent iterations. And that is indeed the case, see Table 1, 2nd column (RBI), the accuracy goes from 0.74 to 0.87 to 0.90 to 0.96 and so on. Hence, our approaches are doing better than the original policy they started with. So if you started with one of the fake labelers from Weston'16, regardless of the value of the initial \pi_acc, you would improve over them as well. The point is that real random guessing isn't stronger, not initially, but after training and updating/learning the policy one would hope for it to be stronger. Our experiments showed this was the case, which is a positive result. This is a key contribution of the work. \fi \FloatBarrier \subsection{Additional Experiments For Mechanical Turk Setup}\label{sec:appendix-mturk} In the experiment in Section 5.2 %\ref{sec:mturkexp} we conducted experiments with real human feedback. Here, we compare this to a form of synthetic feedback, mostly as a sanity check, but also to see how much improvement we can get if the signal is simpler and cleaner (as it is synthetic). We hence constructed synthetic feedback for the 10,000 responses, using either Task 2 (positive or negative feedback), Task 3 (answers provided by teacher) or a mix (Task 2+3) where we use one or the other for each example (50\% chance of each). The latter makes the synthetic data have a mixed setup of responses, which more closely mimics the real data case. 
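A sketch of how such a mixed synthetic feedback set can be constructed is shown below; the templates are illustrative, not the exact strings used in our data.
\begin{verbatim}
import random

def mixed_synthetic_feedback(is_correct: bool, correct_answer: str,
                             rng: random.Random) -> str:
    """Task 2+3 mix: for each example, flip a coin and produce either plain
    positive/negative feedback (Task 2 style) or feedback that contains the
    correct answer when the bot is wrong (Task 3 style)."""
    if rng.random() < 0.5:                       # Task 2 style
        return "That's right!" if is_correct else "No, that's incorrect."
    return "That's right!" if is_correct else (  # Task 3 style
        "No, the answer is " + correct_answer + ".")
\end{verbatim}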
The results are given in Table \ref{table:mturk-res-synth}. The RBI+FP combination is better using the synthetic data than the real data with Task 2+3 or Task 3, which is to be expected, but the real data is competitive, despite the difficulty of dealing with its lexical and semantic variability. The real data is better than using Task 2 synthetic data. For comparison purposes, we also ran a supervised (imitation learning) MemN2N on different sized training sets of turker authored questions with gold annotated labels (so, there are no numerical rewards or textual feedback, this is a pure supervised setting). The results are given in Table \ref{table:mturk-supervised}. They indicate that RBI+FP and even FP alone get close to the performance of fully supervised learning. \subsection{Second Iteration of Feedback} We conducted experiments with an additional iteration of data collection for the case of binary rewards and textual feedback using the synthetic Task 2+3 mix. We selected the best model from the previous training, using RBI+FP with $r=1$ which previously gave a test accuracy of 0.478 (see Table \ref{table:mturk-res-synth}). Using that model as a predictor, we collected an additional 10,000 training examples. We then continue to train our model using the original 1k+10k training set, plus the additional 10k. As before, we report the test accuracy varying $r$ on the additional collected set. We also report the performance from varying $\epsilon$, the proportion of random exploration of predictions on the new set. The results are reported in Table \ref{table:mturk-2nd-it}. Overall, performance is improved in the second iteration, with slightly better performance for large $r$ and $\epsilon=0.5$. However, the improvement is mostly invariant to those parameters, likely because FP takes advantage of feedback from incorrect predictions in any case. In this section, we present the algorithms we used to train MemN2N in an online fashion. Our learning setup can be cast as a particular form of Reinforcement Learning. The policy is implemented by the MemN2N model. The state is the dialogue history. %deterministic and it is set by the simulator. The action space corresponds to the set of answers the MemN2N has to choose from to answer the teacher's question. In our setting, the policy chooses only one action for each episode. The reward is either $1$ (a reward from the teacher when the bot answers correctly) or $0$ otherwise. Note that in our experiments, a reward equal to $0$ might mean that the answer is incorrect or that the positive reward is simply missing. The overall setup is closest to standard contextual bandits, except that the reward is binary. When working with real human dialogues, e.g. collecting data via Mechanical Turk, it is easier to set up a task whereby a bot is deployed to respond to a large batch of utterances, as opposed to a single one. The latter would be more difficult to manage and scale up since it would require some form of synchronization between the model replicas interacting with each human. This is comparable to the real world situation where a teacher can either ask a student a single question and give feedback right away, or set up a test that contains many questions and grade all of them at once. Only after the learner completes all questions, it can hear feedback from the teacher. We use {\it batch size} to refer to how many dialogue episodes the current model is used to collect feedback before updating its parameters. 
In the Reinforcement Learning literature, batch size is related to {\em off-policy} learning since the MemN2N policy is trained using episodes collected with a stale version of the model. Our experiments show that our model and base algorithms are very robust to the choice of batch size, alleviating the need for correction terms in the learning algorithm~\citep{bottou-13}. We consider two strategies: (i) online batch size, whereby the target policy is updated after doing a single pass over each batch (a batch size of 1 reverts to the usual on-policy online learning); and (ii) dataset-sized batch, whereby training is continued to convergence on the batch which is the size of the dataset, and then the target policy is updated with the new model, and a new batch is drawn and the procedure iterates. These strategies can be applied to all the methods we use, described below. Next, we discuss the learning algorithms we considered in this work. \subsubsection{Reward-Based Imitation (RBI)} The simplest algorithm we first consider is the one employed in \cite{weston2016dialog}. RBI relies on positive rewards provided by the teacher. It is trained to imitate the correct behavior of the learner, i.e., learning to predict the correct answers (with reward 1) at training time and disregarding the other ones. This is implemented by using a {MemN2N} that maps a dialogue input to a prediction, i.e. using the cross entropy criterion on the positively rewarded subset of the data. In order to make this work in the online setting which requires exploration to find the correct answer, we employ an $\epsilon$-greedy strategy: the learner makes a prediction using its own model (the answer assigned the highest probability) with probability $1-\epsilon$, otherwise it picks a random answer with probability $\epsilon$. The teacher will then give a reward of $+1$ if the answer is correct, otherwise $0$. The bot will then learn to imitate the correct answers: predicting the correct answers while ignoring the incorrect ones. \subsubsection{REINFORCE} The second algorithm we use is the REINFORCE algorithm \citep{williams1992simple}, which maximizes the expected cumulative reward of the episode, in our case the expected reward provided by the teacher. The expectation is approximated by sampling an answer from the model distribution. Let $a$ denote the answer that the learner gives, $p(a)$ denote the probability that current model assigns to $a$, $r$ denote the teacher's reward, and $J(\theta)$ denote the expectation of the reward. We have: \begin{equation} \nabla J(\theta)\approx\nabla\log p(a) [r-b] \end{equation} where $b$ is the baseline value, which is estimated using a linear regression model that takes as input the output of the memory network after the last hop, and outputs a scalar $b$ denoting the estimation of the future reward. The baseline model is trained by minimizing the mean squared loss between the estimated reward $b$ and actual reward $r$, $||r-b||^2$. We refer the readers to \citep{ranzato2015sequence,zaremba2015reinforcement} for more details. The baseline estimator model is independent from the policy model, and its error is not backpropagated through the policy model. The major difference between RBI and REINFORCE is that (i) the learner only tries to imitate correct behavior in RBI while in REINFORCE it also leverages the incorrect behavior, and (ii) the learner explores using an $\epsilon$-greedy strategy in RBI while in REINFORCE it uses the distribution over actions produced by the model itself. 
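The two reward-based updates can be summarised with the following PyTorch-style sketch (illustrative code, not our released implementation); \texttt{logits} denotes the candidate-answer scores produced by the memory network.
\begin{verbatim}
import torch
import torch.nn.functional as F

def rbi_select_action(logits: torch.Tensor, epsilon: float) -> int:
    """Epsilon-greedy exploration used by RBI: the model's top answer with
    probability 1-epsilon, otherwise a uniformly random candidate."""
    if torch.rand(1).item() < epsilon:
        return int(torch.randint(logits.size(-1), (1,)).item())
    return int(logits.argmax(dim=-1).item())

def rbi_loss(logits: torch.Tensor, action: int, reward: float) -> torch.Tensor:
    """RBI imitates only positively rewarded answers (cross entropy on the
    reward==1 subset); unrewarded or incorrect answers contribute nothing."""
    if reward > 0:
        return F.cross_entropy(logits.unsqueeze(0), torch.tensor([action]))
    return 0.0 * logits.sum()

def reinforce_loss(logits: torch.Tensor, action: int,
                   reward: float, baseline: float) -> torch.Tensor:
    """REINFORCE with a baseline: -(r - b) * log p(a), where the action is
    sampled from the model's own distribution and b comes from a separate
    regressor (treated as a constant here, i.e. not backpropagated through)."""
    log_probs = F.log_softmax(logits, dim=-1)
    return -(reward - baseline) * log_probs[action]
\end{verbatim}
In practice the losses are averaged over a (mini-)batch of episodes, and the baseline regressor is fitted to the observed rewards with a mean squared error loss, as described above.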
\subsubsection{Forward Prediction (FP)} FP \citep{weston2016dialog} handles the situation where a numerical reward for a bot's answer is not available, meaning that there are no +1 or 0 labels available after a student's utterance. Instead, the model assumes the teacher gives textual feedback $t$ to the bot's answer, taking the form of a dialogue utterance, and the model tries to predict this instead. Suppose that $x$ denotes the teacher's question and $C$=$c_1$, $c_2$, ..., $c_N$ denotes the dialogue history as before. In {\it FP}, the model first maps the teacher's initial question $x$ and dialogue history $C$ to a vector representation $u$ using a memory network with multiple hops. Then the model will perform another hop of attention over all possible student's answers in $\mathbb{A}$, with an additional part that incorporates the information of which candidate (i.e., $a$) was actually selected in the dialogue: \begin{equation} p_{\hat{a}}=\texttt{softmax}(u^T y_{\hat{a}})~~~~~o=\sum_{\hat{a}\in \mathbb{A}} p_{\hat{a}} (y_{\hat{a}}+\beta\cdot {\bf 1}[\hat{a}=a] ) \end{equation} where $y_{\hat{a}}$ denotes the vector representation for the student's answer candidate $\hat{a}$. $\beta$ is a (learned) d-dimensional vector to signify the actual action $a$ that the student chooses. $o$ is then combined with $u$ to predict the teacher's feedback $t$ using a softmax: \begin{equation} u_1=o+u ~~~~ t=\texttt{softmax} (u_1^T x_{r_1}, u_1^T x_{r_2}, ..., u_1^T x_{r_N}) \end{equation} where $x_{r_{i}}$ denotes the embedding for the $i^{th}$ response. In the online setting, the teacher will give textual feedback, and the learner needs to update its model using the feedback. It was shown in \cite{weston2016dialog} that in an off-line setting this procedure can work either on its own, or in conjunction with a method that uses numerical rewards as well for improved performance. In the online setting, we consider two simple extensions: \begin{itemize} \item $\epsilon$-greedy exploration: with probability $\epsilon$ the student will give a random answer, and with probability $1-\epsilon$ it will give the answer that its model assigns the largest probability. This method enables the model to explore the space of actions and to potentially discover correct answers. \item data balancing: cluster the set of teacher responses $t$ and then balance training across the clusters equally.\footnote{In the simulated data, because the responses are templates, this can be implemented by first randomly sampling the response, and then randomly sampling a story with that response; we keep the history of all stories seen from which we sample. For real data slightly more sophisticated clustering should be used.} This is a type of experience replay \citep{mnih2013playing} but sampling with an evened distribution. Balancing stops part of the distribution dominating the learning. For example, if the model is not exposed to sufficient positive and negative feedback, and one class overly dominates, the learning process degenerates to a model that always predicts the same output regardless of its input. \end{itemize} In our experiments, we used variants of the End-to-End Memory Network (MemN2N) model \citep{sukhbaatar2015end} %as a backbone as our underlying architecture for learning from dialogue. % interactions. The input to MemN2N is the last utterance of the dialogue history $x$ as well as a set of memories (context) $C$=$c_1$, $c_2$, ..., $c_N$. 
The memory $C$ encodes both short-term memory, e.g., dialogue histories between the bot and the teacher, and long-term memories, e.g., the knowledge base facts that the bot has access to. Given the input $x$ and $C$, the goal is to produce an output/label $a$. In the first step, the query $x$ is transformed to a vector representation $u_0$ by summing up its constituent word embeddings: $u_0=Ax$. The input $x$ is a bag-of-words vector and $A$ is the $d\times V$ word embedding matrix where $d$ denotes the embedding dimension and $V$ denotes the vocabulary size. Each memory $c_i$ is similarly transformed to a vector $m_{i}$. The model will read information from the memory by comparing input representation $u_0$ with memory vectors $m_{i}$ using softmax weights: \begin{equation} o_1=\sum_{i}p_i^1 m_{i} ~~~~~~~~~~p_i^1=\texttt{softmax}(u_0^T m_i) \end{equation} This process selects memories relevant to the last utterance $x$, i.e., the memories with large values of $p_i^1$. The returned memory vector $o_1$ is the weighted sum of memory vectors. This process can be repeated to query the memory N times (so-called ``hops'') by adding $o_n$ to the original input, $u_1=o_1+u_0$, or to the previous state, $u_n=o_n+u_{n-1}$, and then using $u_n$ to query the memories again. In the end, $u_N$ is input to a softmax function for the final prediction: \begin{equation}\label{eq:a} a=\texttt{softmax} (u_N^T y_1,u_N^T y_2,...,u_N^T y_L) \end{equation} where $y_1, \dots, y_L$ denote the set of candidate answers. If the answer is a word, $y_i$ is the corresponding word embedding. If the answer is a sentence, $y_i$ is the embedding for the sentence obtained in the same way that we obtain embeddings for query $x$ and memory $C$. The standard way MemN2N is trained is via a cross entropy criterion on known input-output pairs, which we refer to as supervised or imitation learning. As our work is in a reinforcement learning setup where our model must make predictions to learn, this procedure will not work, so we instead consider reinforcement learning algorithms which we describe next.
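For concreteness, a minimal PyTorch-style sketch of the multi-hop read described above, together with the FP prediction head described earlier, is given below; it assumes bag-of-words token-id tensors and a single shared embedding matrix purely for brevity, and is illustrative rather than a faithful reimplementation of our model.
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class MemN2NSketch(nn.Module):
    """Bag-of-words MemN2N with an answer scorer and an FP feedback scorer.
    A single embedding matrix A is shared across query, memories, candidate
    answers and candidate feedback purely to keep the sketch short."""

    def __init__(self, vocab_size: int, dim: int, hops: int = 3):
        super().__init__()
        self.A = nn.Embedding(vocab_size, dim)      # d x V word embeddings
        self.beta = nn.Parameter(torch.zeros(dim))  # marks the chosen answer (FP)
        self.hops = hops

    def _bow(self, token_ids: torch.Tensor) -> torch.Tensor:
        # Sum of word embeddings of a bag-of-words input: (..., T) -> (..., d)
        return self.A(token_ids).sum(dim=-2)

    def encode(self, query: torch.Tensor, memories: torch.Tensor) -> torch.Tensor:
        """Multi-hop read: p_i = softmax(u^T m_i), o = sum_i p_i m_i, u <- o + u."""
        u = self._bow(query)          # (d,)
        m = self._bow(memories)       # (N, d)
        for _ in range(self.hops):
            p = F.softmax(m @ u, dim=0)
            u = p @ m + u
        return u

    def answer_logits(self, u: torch.Tensor, candidates: torch.Tensor) -> torch.Tensor:
        """Scores u^T y_l over the L candidate answers (softmax applied in the loss)."""
        return self._bow(candidates) @ u            # (L,)

    def feedback_logits(self, u: torch.Tensor, candidates: torch.Tensor,
                        chosen: int, responses: torch.Tensor) -> torch.Tensor:
        """FP head: attend over candidates, add beta to the answer actually
        given, then score the possible teacher responses with u_1 = o + u."""
        y = self._bow(candidates)                   # (L, d)
        p = F.softmax(y @ u, dim=0)
        mark = F.one_hot(torch.tensor(chosen), y.size(0)).to(y.dtype).unsqueeze(-1)
        o = p @ (y + mark * self.beta)
        u1 = o + u
        return self._bow(responses) @ u1            # logits over feedback utterances
\end{verbatim}
In training, \texttt{answer\_logits} would feed the RBI or REINFORCE losses sketched earlier, while \texttt{feedback\_logits} would be trained with cross entropy against the teacher response that was actually observed.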
Dialogue Learning With Human-in-the-Loop
1611.09823
Table 6: Second Iteration of Feedback Using synthetic textual feedback of synthetic Task2+3 with the RBI+FP method, an additional iteration of data collection of 10k examples, varying sparse binary reward fraction r and exploration ϵ. The performance of the first iteration model was 0.478.
[ "[EMPTY]", "[ITALIC] r=0", "[ITALIC] r=0.1", "[ITALIC] r=0.5", "[ITALIC] r=1" ]
[ [ "[ITALIC] ϵ=0", "0.499", "0.502", "0.501", "0.502" ], [ "[ITALIC] ϵ=0.1", "0.494", "0.496", "0.501", "0.502" ], [ "[ITALIC] ϵ=0.25", "0.493", "0.495", "0.496", "0.499" ], [ "[ITALIC] ϵ=0.5", "0.501", "0.499", "0.501", "0.504" ], [ "[ITALIC] ϵ=1", "0.497", "0.497", "0.498", "0.497" ] ]
We conducted experiments with an additional iteration of data collection for the case of binary rewards and textual feedback using the synthetic Task 2+3 mix. Using that model as a predictor, we collected an additional 10,000 training examples. We then continue to train our model using the original 1k+10k training set, plus the additional 10k. As before, we report the test accuracy varying r on the additional collected set. We also report the performance from varying ϵ, the proportion of random exploration of predictions on the new set. Overall, performance is improved in the second iteration, with slightly better performance for large r and ϵ=0.5. However, the improvement is mostly invariant to those parameters, likely because FP takes advantage of feedback from incorrect predictions in any case.
These are the instructions given for the textual feedback mechanical turk task (we also constructed a separate task to collect the initial questions, not described here):\\ Title: Write brief responses to given dialogue exchanges (about 15 min)\\ Description: Write a brief response to a student's answer to a teacher's question, providing feedback to the student on their answer. Instructions:\\ Each task consists of the following triplets: \begin{enumerate} \item a question by the teacher \item the correct answer(s) to the question (separated by ``OR'') \item a proposed answer in reply to the question from the student \end{enumerate} Consider the scenario where you are the teacher and have already asked the question, and received the reply from the student. Please compose a brief response giving feedback to the student about their answer. The correct answers are provided so that you know whether the student was correct or not. For example, given 1) question: ``what is a color in the united states flag?''; 2) correct answer: ``white, blue, red''; 3) student reply: ``red'', your response could be something like ``that's right!''; for 3) reply: ``green'', you might say ``no that's not right'' or ``nope, a correct answer is actually white''. Please vary responses and try to minimize spelling mistakes. If the same responses are copied/pasted or overused, we'll reject the HIT. Avoid naming the student or addressing ``the class'' directly. We will consider bonuses for higher quality responses during review. Experiments are first conducted using our simulator, and then using Amazon Mechanical Turk with real human subjects taking the role of the teacher\footnote{ Code and data are available at \tiny{\url{https://github.com/facebook/MemNN/tree/master/HITL}}. }. \subsection{Simulator} \paragraph{Online Experiments} \label{sec:online_exp} In our first experiments, we considered both the bAbI and WikiMovies tasks and varied batch size, random exploration rate $\epsilon$, and type of model. Figure~\ref{fig:online-babi-task6} and Figure~\ref{fig:online-movieqa-task6} shows (Task 6) results on bAbI and WikiMovies. Other tasks yield similar conclusions and are reported in the appendix. Overall, we obtain the following conclusions: \begin{itemize} \item In general RBI and FP do work in a reinforcement learning setting, but can perform better with random exploration. \item In particular RBI can fail without exploration. RBI needs random noise for exploring labels otherwise it can get stuck predicting a subset of labels and fail. \item REINFORCE obtains similar performance to RBI with optimal $\epsilon$. %, see figure~\ref{fig:online-comparison-rbi-fp-rf}. \item FP with balancing or with exploration via $\epsilon$ both outperform FP alone. \item For both RBI and FP, performance is largely independent of online batch size. \end{itemize} \if 0 \fi \paragraph{Dataset Batch Size Experiments} Given that larger online batch sizes appear to work well, and that this could be important in a real-world data collection setup where the same model is deployed to gather a large amount of feedback from humans, we conducted further experiments where the batch size is exactly equal to the dataset size and for each batch training is completed to convergence. After the model has been trained on the dataset, it is deployed to collect a new dataset of questions and answers, and the process is repeated. 
Table~\ref{table:dataset-batch-babi} reports test error at each iteration of training, using the bAbI Task $6$ as the case study (see the appendix for results on other tasks). The following conclusions can be made for this setting: \begin{itemize} \item RBI improves in performance as we iterate. Unlike in the online case, RBI does not need random exploration. We believe this is because the first batch, which is collected with a randomly initialized model, contains enough variety of examples with positive rewards that the model does not get stuck predicting a subset of labels. \item FP is not stable in this setting. %, % whereas it was in earlier online experiments. This is because once the model gets very good at making predictions (at the third iteration), it is not exposed to a sufficient number of negative responses anymore. From that point on, learning degenerates and performance drops as the model always predicts the same responses. At the next iteration, it will recover again since it has a more balanced training set, but then it will collapse again in an oscillating behavior. \item FP does work if extended with balancing or random exploration with sufficiently large $\epsilon$. \item RBI+FP also works well and helps with the instability of FP, alleviating the need for random exploration and data balancing. \end{itemize} Overall, our simulation results indicate that while a bot can be effectively trained fully online from bot-teacher interactions, collecting real dialogue data in batches (which is easier to collect and iterate experiments over) is also a viable approach. We hence pursue the latter approach in our next set of experiments. \paragraph{Relation to experiments in \cite{weston2016dialog}} As described in detail in Section \ref{sec:related} the datasets we use in our experiments were introduced in \citep{weston2015towards}. However, that work involved constructing pre-built fixed policies (and hence, datasets), rather than training the learner in a reinforcement/interactive learning setting using a simulator, as in our work. They achieved this by choosing an omniscient (but deliberately imperfect) labeler that gets $\pi_{acc}$ examples always correct (the paper looked at values 1\%, 10\% and 50\%). In a realistic setting one does not have access to an omniscient labeler; one has to learn a policy completely from scratch, online, starting with a random policy, as we do here. Nevertheless, it is possible to compare our {\em learnt} policies to those results because we use the same train/valid/test splits. The clearest comparison is via Table \ref{table:dataset-batch-babi}, where the policy is learnt using batch iterations of the dataset, updating the policy on each iteration. \cite{weston2015towards} can be viewed as training only one iteration, with a pre-built policy, as explained above, where 59\%, 81\% and 99\% accuracy was obtained for RBI for $\pi_{acc}$ with 1\%, 10\% and 50\% respectively\footnote{Note, this is not the same as a randomly initialized neural network policy, because due to the synthetic construction with an omniscient labeler the labels will be balanced. In our work, we learn the policy from randomly initialized weights which are updated as we learn the policy.}. While $\pi_{acc}$ of 50\% is good enough to solve the task, lower values are not. 
In this work a random policy begins with 74\% accuracy on the first iteration, but importantly on each iteration the policy is updated and improves, with values of 87\%, 90\% on iterations 2 and 3 respectively, and 98\% on iteration 6. This is a key differentiator from the work of \citep{weston2015towards} where such improvement was not shown. We show that such online learning works for both reward-based numerical feedback and for forward prediction methods using textual feedback (as long as balancing or random exploration is performed sufficiently). The final performance outperforms most values of $\pi_{acc}$ from \cite{weston2015towards} unless $\pi_{acc}$ is so large that the task is already solved. This is a key contribution of our work. Similar conclusions can be made for Figures \ref{fig:online-babi-task6} and \ref{fig:online-movieqa-task6}. Despite our initial random policy starting at close to 0\% accuracy, if random exploration $\epsilon \ge 0.2$ is employed then after a number of epochs the performance is better than most values of $\pi_{acc}$ from \cite{weston2015towards}, e.g. compare the accuracies given in the previous paragraph (59\%, 81\% and 99\%) to Figure \ref{fig:online-babi-task6}, top left. \subsection{Human Feedback} \label{sec:mturkexp} We employed Turkers to both ask questions and then give textual feedback on the bot's answers, as described in Section \ref{sec:data-mturk}. Our experimental protocol was as follows. We first trained a MemN2N using supervised (i.e., imitation) learning on a training set of 1000 questions produced by Turkers and using the known correct answers provided by the original dataset (and no textual feedback). Next, using the trained policy, we collected textual feedback for the responses of the bot for an additional 10,000 questions. Examples from the collected dataset are given in Figure \ref{fig:mturk_data}. Given this dataset, we compare various models: RBI, FP and FP+RBI. As we know the correct answers to the additional questions, we can assign a positive reward to questions the bot got correct. We hence measure the impact of the sparseness of this reward signal, where a fraction $r$ of additional examples have rewards. The models are tested on a test set of $\sim$8,000 questions (produced by Turkers), and hyperparameters are tuned on a similarly sized validation set. Note this is a harder task than the WikiMovies task in the simulator due to the use of natural language from Turkers, hence lower test performance is expected. Results are given in Table \ref{table:mturk-res}. They indicate that both RBI and FP are useful. When rewards are sparse, FP still works via the textual feedback while RBI can only use the initial 1000 examples when $r=0$. As FP does not use numerical rewards at all, it is invariant to the parameter $r$. The combination of FP and RBI outperforms either alone. We also conducted additional experiments comparing with (i) synthetic feedback and (ii) the fully supervised case, which are given in Appendix \ref{sec:appendix-mturk}. They show that the results with human feedback are competitive with these approaches. 
A good conversational agent (which we sometimes refer to as a learner or bot\footnote{In this paper, we refer to a learner (either a human or a bot/dialogue agent which is a machine learning algorithm) as the student, and their more knowledgeable dialogue partner as the teacher.}) should have the ability to learn from the online feedback from a teacher: adapting its model when making mistakes and reinforcing the model when the teacher's feedback is positive. This is particularly important in the situation where the bot is initially trained in a supervised way on a fixed synthetic, domain-specific or pre-built dataset before release, but will be exposed to a different environment after release (e.g., more diverse natural language utterance usage when talking with real humans, different distributions, special cases, etc.). Most recent research has focused on training a bot from fixed training sets of labeled data but seldom on how the bot can improve through online interaction with humans. Human (rather than machine) language learning happens during communication \citep{bassiri2011interactional,werts1995instructive}, and not from labeled datasets, hence making this an important subject to study. In this work, we explore this direction by training a bot through interaction with teachers in an online fashion. The task is formalized under the general framework of reinforcement learning via the teacher's (dialogue partner's) feedback to the dialogue actions from the bot. The dialogue takes place in the context of question-answering tasks and the bot has to, given either a short story or a set of facts, answer a set of questions from the teacher. We consider two types of feedback: explicit numerical rewards as in conventional reinforcement learning, and textual feedback which is more natural in human dialogue, following \citep{weston2016dialog}. We consider two online training scenarios: (i) where the task is built with a dialogue simulator allowing for easy analysis and repeatability of experiments; and (ii) where the teachers are real humans using Amazon Mechanical Turk. We explore important issues involved in online learning such as how a bot can be most efficiently trained using a minimal amount of teacher's feedback, how a bot can harness different types of feedback signal, how to avoid pitfalls such as instability during online learning with different types of feedback via data balancing and exploration, and how to make learning with real humans feasible via data batching. Our findings indicate that it is feasible to build a pipeline that starts from a model trained with fixed data and then learns from interactions with humans to improve itself. %, obtaining accuracy close to a fully supervised dataset. We studied dialogue learning of end-to-end models using textual feedback and numerical rewards. Both fully online and iterative batch settings are viable approaches to policy learning, as long as possible instabilities in the learning algorithms are taken into account. Secondly, we showed for the first time that the recently introduced FP method can work in both an online setting and on real human feedback. Overall, our results indicate that it is feasible to build a practical pipeline that starts with a model trained on an initial fixed dataset, which then learns from interactions with humans in a (semi-)online fashion to improve itself. Future research should work towards doing this in a never-ending learning setup. 
The tasks in \cite{weston2016dialog} were specifically:\\ - {\bf Task 1}: The teacher tells the student exactly what they should have said (supervised baseline).\\ - {\bf Task 2}: The teacher replies with positive textual feedback and reward, or negative textual feedback. \\ - {\bf Task 3}: The teacher gives textual feedback containing the answer when the bot is wrong.\\ - {\bf Task 4}: The teacher provides a hint by providing the class of the correct answer, e.g., ``No it's a movie" for the question ``which movie did Forest Gump star in?".\\ - {\bf Task 5}: The teacher provides a reason why the student's answer is wrong by pointing out the relevant supporting fact from the knowledge base.\\ - {\bf Task 6}: The teacher gives positive reward only 50\% of the time. \\ - {\bf Task 7}: Rewards are missing and the teacher only gives natural language feedback.\\ - {\bf Task 8}: Combines Tasks 1 and 2 to see whether a learner can learn successfully from both forms of supervision at once.\\ - {\bf Task 9}: The bot asks questions of the teacher about what it has done wrong.\\ - {\bf Task 10}: The bot will receive a hint rather than the correct answer after asking for help. We refer the readers to \citep{weston2016dialog} for more detailed descriptions and the motivation behind these tasks. The difference in our system is that the model can be trained on-the-fly via the simulator: after receiving feedback and/or rewards, the model can update itself and apply its learning to the next episode. We present results on Tasks 2, 3 and 4 in this appendix \documentclass{article} % For LaTeX2e \newcommand{\taskTwo}{{\it Question Clarification-Verification}} \title{Dialogue Learning With Human-in-the-Loop} \author{Jiwei Li, Alexander H. Miller, Sumit Chopra, Marc'Aurelio Ranzato, Jason Weston \\ Facebook AI Research, \\ New York, USA \\ \texttt{\{jiwel,ahm,spchopra,ranzato,jase\}@fb.com} \\ } \newcommand{\fix}{\marginpar{FIX}} \newcommand{\new}{\marginpar{NEW}} \newcommand{\MINUS}{} \begin{document} \maketitle \begin{abstract} An important aspect of developing conversational agents is to give a bot the ability to improve through communicating with humans and to learn from the mistakes that it makes. Most research has focused on learning from fixed training sets of labeled data rather than interacting with a dialogue partner in an online fashion. In this paper we explore this direction in a reinforcement learning setting where the bot improves its question-answering ability from feedback a teacher gives following its generated responses. We build a simulator that tests various aspects of such learning in a synthetic environment, and introduce models that work in this regime. Finally, real experiments with Mechanical Turk validate the approach. \end{abstract} \section{Introduction} \input{intro} \section{Related Work} \label{sec:related} \input{related} \section{Dataset and Tasks} \input{data} \section{Methods} \subsection{Model Architecture} \input{model} \subsection{Reinforcement Learning} \label{sec:rl} % Algorithms} \input{methods} \section{Experiments} \input{exp} \section{Conclusion} \input{conc} \bibliographystyle{iclr2017_conference} \newpage \appendix \section{Further Simulator Task Details} \label{sec:extra_data} \input{extra_data} \section{Instructions given to Turkers} \label{sec:mturk} \input{mturk} \newpage \section{Additional Experiments} \input{extra_exp} \end{document} We begin by describing the data setup we use. 
In our first set of experiments we build a simulator as a testbed for learning algorithms. In our second set of experiments we use Mechanical Turk to provide real human teachers giving feedback. \subsection{Simulator} The simulator adapts two existing fixed datasets to our online setting. Following \cite{weston2016dialog}, we use (i) the single supporting fact problem from the bAbI datasets \citep{weston2015towards} which consists of 1000 short stories from a simulated world interspersed with questions; and (ii) the WikiMovies dataset \citep{weston2015towards} which consists of roughly 100k (templated) questions over 75k entities based on questions with answers in the open movie database (OMDb). Each dialogue takes place between a teacher, scripted by the simulation, and a bot. The communication protocol is as follows: (1) the teacher first asks a question from the fixed set of questions existing in the dataset, (2) the bot answers the question, and finally (3) the teacher gives feedback on the bot's answer. We follow the paradigm defined in \citep{weston2016dialog} where the teacher's feedback takes the form of either textual feedback, a numerical reward, or both, depending on the task. For each dataset, there are ten tasks, which are further described in Sec.~\ref{sec:extra_data} and illustrated in Figure~\ref{Tasks} of the appendix. We also refer the readers to \citep{weston2016dialog} for more detailed descriptions and the motivation behind these tasks. In the main text of this paper we only consider %Tasks 3 (`answers supplied by teacher') Task 6 (``partial feedback''): % from \cite{weston2016dialog}: the teacher replies with positive textual feedback (6 possible templates) when the bot answers correctly, and positive reward is given only 50\% of the time. When the bot is wrong, the teacher gives textual feedback containing the answer. Descriptions and experiments on the other tasks are detailed in the appendix. Example dialogues are given in Figure \ref{fig:simulator-examples}. The difference between our simulation and the original fixed tasks of~\cite{weston2016dialog} is that models are trained on-the-fly. % via the simulator. After receiving feedback and/or rewards, we update the model (policy) and then deploy it to collect teacher's feedback in the next episode or batch. This means the model's policy affects the data which is used to train it, which was not the case in the previous work. \definecolor{dred}{rgb}{0.7,0.0,0.0} \newcommand{\PLUS}{{\textcolor{blue}{(+)}}} \newcommand{\SPACE}{~~~~~~~~~~~~~~~~~~~~~~} \subsection{Mechanical Turk Experiments} \label{sec:data-mturk} Finally, we extended WikiMovies using Mechanical Turk so that real human teachers are giving feedback rather than using a simulation. As both the questions and feedback are templated in the simulation, they are now both replaced with natural human utterances. Rather than having a set of simulated tasks, we have only one task, and we gave instructions to the teachers that they could give feedback as they see fit. The exact instructions given to the Turkers is given in Appendix \ref{sec:mturk}. In general, each independent response contains feedback like (i) positive or negative sentences; or (ii) a phrase containing the answer or (iii) a hint, which are similar to setups defined in the simulator. However, some human responses cannot be so easily categorized, and the lexical variability is much larger in human responses. Some examples of the collected data are given in Figure~\ref{fig:mturk_data}. 
Reinforcement learning has been widely applied to dialogue, especially in slot filling to solve domain-specific tasks \citep{walker2000application,schatzmann2006survey,singh2000empirical,singh2002optimizing}. Efforts include Markov Decision Processes (MDPs) \citep{levin1997learning,levin2000stochastic,walker2003trainable,pieraccini2009we}, POMDP models \citep{young2010hidden,young2013pomdp,gavsic2013pomdp,gavsic2014incremental} and policy learning \citep{su2016continuously}. Such a line of research focuses mainly on frames with slots to fill, where the bot will use reinforcement learning to model a state transition pattern, generating dialogue utterances to prompt the appropriate user responses to put in the desired slots. This goal is different from ours, where we study end-to-end learning systems and also consider non-reward based setups via textual feedback. Our work is related to the line of research that focuses on supervised learning for question answering (QA) from dialogues \citep{dodge2015evaluating,weston2016dialog}, either given a database of knowledge \citep{bordes2015large,miller2016key} or short texts \citep{weston2015towards,hermann2015teaching,rajpurkar2016squad}. In our work, the discourse includes the statements made in the past, the question and answer, and crucially the response from the teacher. The latter is what makes the setting different from the standard QA setting, i.e. we use methods that leverage this response also, not just answering questions. Further, QA works only consider fixed datasets with gold annotations, i.e. they do not consider a reinforcement learning setting. Our work is closely related to a recent work from \cite{weston2016dialog} that learns through conducting conversations where supervision is given naturally in the response during the conversation. That work introduced the use of forward prediction that learns by predicting the teacher's feedback, in addition to using reward-based learning of correct answers. However, two important issues were not addressed: (i) it did not use a reinforcement learning setting, but instead used pre-built datasets with fixed policies given in advance; and (ii) experiments used only simulated and no real language data. Hence, models that can learn policies from real online communication were not investigated. To make the differences with our work clear, we will now detail these points further. The experiments in \citep{weston2016dialog} involve constructing pre-built fixed datasets, rather than training the learner within a simulator, as in our work. Pre-built datasets can only be made by fixing a prior in advance. They achieve this by choosing an omniscient (but deliberately imperfect) labeler that gets $\pi_{acc}$ examples always correct (the paper looked at values 50\%, 10\% and 1\%). Again, this was not learned, and was fixed to generate the datasets. Note that the paper refers to these answers as coming from ``the learner'' (which should be the model), but since the policy is fixed it actually does not depend on the model. In a realistic setting one does not have access to an omniscient labeler, one has to learn a policy completely from scratch, online, starting with a random policy, so their setting was not practically viable. In our work, when policy training is viewed as batch learning over iterations of the dataset, updating the policy on each iteration, \citep{weston2016dialog} can be viewed as training only one iteration, whereas we perform multiple iterations. 
This is explained further in Sections \ref{sec:rl} and \ref{sec:online_exp}. We show in our experiments that performance improves over the iterations, i.e. it is better than the first iteration. We show that such online learning works for both reward-based numerical feedback and for forward prediction methods using textual feedback (under certain conditions which are detailed). This is a key contribution of our work. Finally, \citep{weston2016dialog} only conducted experiments on synthetic or templated language, and not real language, especially the feedback from the teacher was scripted. While we believe that synthetic datasets are very important for developing understanding (hence we develop a simulator and conduct experiments also with synthetic data), for a new method to gain traction it must be shown to work on real data. We hence employ Mechanical Turk to collect real language data for the questions and importantly for the teacher feedback and construct experiments in this real setting. \if 0 Let's consider Table 1, which reports test accuracy for the dataset batch size case over several iterations (1 to 6). On each iteration the policy that generated the predictions is fixed, but is updated on the next iteration after learning. On the first iteration you have to start with some kind of policy so we start with a random one. There exists a \pi_acc\% policy from Weston'16 that would obtain the same error rate as that chosen random policy on iteration 1. Values of acc\% higher would be getting better accuracy and lower acc\% lower accuracy. However, in Weston'16 the policy is never updated while training, this is like stopping after the first iteration (column 1, Table 1) and that is the final error rate you get (which is why in Weston'16 on page 5 it is stated explicitly “Note that because the policies are fixed the experiments in this paper are not in a reinforcement learning setting.”). However, in the setting in *this * paper we do update the policy and you get iterations 2, 3 and so on. What we want to show is that the accuracy gets *better* on subsequent iterations. And that is indeed the case, see Table 1, 2nd column (RBI), the accuracy goes from 0.74 to 0.87 to 0.90 to 0.96 and so on. Hence, our approaches are doing better than the original policy they started with. So if you started with one of the fake labelers from Weston'16, regardless of the value of the initial \pi_acc, you would improve over them as well. The point is that real random guessing isn't stronger, not initially, but after training and updating/learning the policy one would hope for it to be stronger. Our experiments showed this was the case, which is a positive result. This is a key contribution of the work. \fi \FloatBarrier \subsection{Additional Experiments For Mechanical Turk Setup}\label{sec:appendix-mturk} In the experiment in Section 5.2 %\ref{sec:mturkexp} we conducted experiments with real human feedback. Here, we compare this to a form of synthetic feedback, mostly as a sanity check, but also to see how much improvement we can get if the signal is simpler and cleaner (as it is synthetic). We hence constructed synthetic feedback for the 10,000 responses, using either Task 2 (positive or negative feedback), Task 3 (answers provided by teacher) or a mix (Task 2+3) where we use one or the other for each example (50\% chance of each). The latter makes the synthetic data have a mixed setup of responses, which more closely mimics the real data case. 
The results are given in Table \ref{table:mturk-res-synth}. The RBI+FP combination is better using the synthetic data than the real data with Task 2+3 or Task 3, which is to be expected, but the real data is competitive, despite the difficulty of dealing with its lexical and semantic variability. The real data is better than using Task 2 synthetic data. For comparison purposes, we also ran a supervised (imitation learning) MemN2N on different sized training sets of turker authored questions with gold annotated labels (so, there are no numerical rewards or textual feedback, this is a pure supervised setting). The results are given in Table \ref{table:mturk-supervised}. They indicate that RBI+FP and even FP alone get close to the performance of fully supervised learning. \subsection{Second Iteration of Feedback} We conducted experiments with an additional iteration of data collection for the case of binary rewards and textual feedback using the synthetic Task 2+3 mix. We selected the best model from the previous training, using RBI+FP with $r=1$ which previously gave a test accuracy of 0.478 (see Table \ref{table:mturk-res-synth}). Using that model as a predictor, we collected an additional 10,000 training examples. We then continue to train our model using the original 1k+10k training set, plus the additional 10k. As before, we report the test accuracy varying $r$ on the additional collected set. We also report the performance from varying $\epsilon$, the proportion of random exploration of predictions on the new set. The results are reported in Table \ref{table:mturk-2nd-it}. Overall, performance is improved in the second iteration, with slightly better performance for large $r$ and $\epsilon=0.5$. However, the improvement is mostly invariant to those parameters, likely because FP takes advantage of feedback from incorrect predictions in any case. In this section, we present the algorithms we used to train MemN2N in an online fashion. Our learning setup can be cast as a particular form of Reinforcement Learning. The policy is implemented by the MemN2N model. The state is the dialogue history. %deterministic and it is set by the simulator. The action space corresponds to the set of answers the MemN2N has to choose from to answer the teacher's question. In our setting, the policy chooses only one action for each episode. The reward is either $1$ (a reward from the teacher when the bot answers correctly) or $0$ otherwise. Note that in our experiments, a reward equal to $0$ might mean that the answer is incorrect or that the positive reward is simply missing. The overall setup is closest to standard contextual bandits, except that the reward is binary. When working with real human dialogues, e.g. collecting data via Mechanical Turk, it is easier to set up a task whereby a bot is deployed to respond to a large batch of utterances, as opposed to a single one. The latter would be more difficult to manage and scale up since it would require some form of synchronization between the model replicas interacting with each human. This is comparable to the real world situation where a teacher can either ask a student a single question and give feedback right away, or set up a test that contains many questions and grade all of them at once. Only after the learner completes all questions, it can hear feedback from the teacher. We use {\it batch size} to refer to how many dialogue episodes the current model is used to collect feedback before updating its parameters. 
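To make this batch-collection protocol concrete, the following is a minimal Python sketch (an illustration rather than the actual system; the \texttt{learner} and \texttt{teacher} objects and their methods are assumed interfaces) of deploying a frozen policy on a batch of episodes before any parameter update:
\begin{verbatim}
import random

def collect_batch(learner, teacher, batch_size, epsilon=0.25):
    """Run `batch_size` episodes with the current (frozen) policy and
    return the collected experience; the policy is only updated after
    the whole batch has been gathered."""
    episodes = []
    for _ in range(batch_size):
        question, context, candidates = teacher.ask()   # teacher poses a question
        if random.random() < epsilon:                   # epsilon-greedy exploration
            answer = random.choice(candidates)
        else:                                           # exploit the current model
            answer = learner.predict(question, context, candidates)
        reward, feedback = teacher.respond(answer)      # binary reward and/or textual feedback
        episodes.append((question, context, answer, reward, feedback))
    return episodes
\end{verbatim}
The update applied after each batch (RBI, REINFORCE or FP) is described below.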
In the Reinforcement Learning literature, batch size is related to {\em off-policy} learning since the MemN2N policy is trained using episodes collected with a stale version of the model. Our experiments show that our model and base algorithms are very robust to the choice of batch size, alleviating the need for correction terms in the learning algorithm~\citep{bottou-13}. We consider two strategies: (i) online batch size, whereby the target policy is updated after doing a single pass over each batch (a batch size of 1 reverts to the usual on-policy online learning); and (ii) dataset-sized batch, whereby training is continued to convergence on the batch which is the size of the dataset, and then the target policy is updated with the new model, and a new batch is drawn and the procedure iterates. These strategies can be applied to all the methods we use, described below. Next, we discuss the learning algorithms we considered in this work. \subsubsection{Reward-Based Imitation (RBI)} The simplest algorithm we first consider is the one employed in \cite{weston2016dialog}. RBI relies on positive rewards provided by the teacher. It is trained to imitate the correct behavior of the learner, i.e., learning to predict the correct answers (with reward 1) at training time and disregarding the other ones. This is implemented by using a {MemN2N} that maps a dialogue input to a prediction, i.e. using the cross entropy criterion on the positively rewarded subset of the data. In order to make this work in the online setting which requires exploration to find the correct answer, we employ an $\epsilon$-greedy strategy: the learner makes a prediction using its own model (the answer assigned the highest probability) with probability $1-\epsilon$, otherwise it picks a random answer with probability $\epsilon$. The teacher will then give a reward of $+1$ if the answer is correct, otherwise $0$. The bot will then learn to imitate the correct answers: predicting the correct answers while ignoring the incorrect ones. \subsubsection{REINFORCE} The second algorithm we use is the REINFORCE algorithm \citep{williams1992simple}, which maximizes the expected cumulative reward of the episode, in our case the expected reward provided by the teacher. The expectation is approximated by sampling an answer from the model distribution. Let $a$ denote the answer that the learner gives, $p(a)$ denote the probability that current model assigns to $a$, $r$ denote the teacher's reward, and $J(\theta)$ denote the expectation of the reward. We have: \begin{equation} \nabla J(\theta)\approx\nabla\log p(a) [r-b] \end{equation} where $b$ is the baseline value, which is estimated using a linear regression model that takes as input the output of the memory network after the last hop, and outputs a scalar $b$ denoting the estimation of the future reward. The baseline model is trained by minimizing the mean squared loss between the estimated reward $b$ and actual reward $r$, $||r-b||^2$. We refer the readers to \citep{ranzato2015sequence,zaremba2015reinforcement} for more details. The baseline estimator model is independent from the policy model, and its error is not backpropagated through the policy model. The major difference between RBI and REINFORCE is that (i) the learner only tries to imitate correct behavior in RBI while in REINFORCE it also leverages the incorrect behavior, and (ii) the learner explores using an $\epsilon$-greedy strategy in RBI while in REINFORCE it uses the distribution over actions produced by the model itself. 
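As a rough illustration of the two objectives just contrasted, here is a small numpy sketch (not the authors' implementation; \texttt{log\_probs} is assumed to be the log of the answer distribution produced by the policy for a batch of episodes):
\begin{verbatim}
import numpy as np

def rbi_loss(log_probs, actions, rewards):
    """Reward-Based Imitation: cross entropy restricted to the
    positively rewarded (reward == 1) answers; all other episodes
    are ignored."""
    mask = (rewards == 1)
    if not mask.any():
        return 0.0
    return -np.mean(log_probs[mask, actions[mask]])

def reinforce_loss(log_probs, actions, rewards, baselines):
    """REINFORCE surrogate loss whose gradient matches
    -(r - b) * grad log p(a); the advantage (r - b) is treated as a
    constant, so the baseline error is not backpropagated into the policy."""
    advantage = rewards - baselines
    chosen = log_probs[np.arange(len(actions)), actions]
    return -np.mean(advantage * chosen)

def baseline_loss(baselines, rewards):
    """The baseline estimator is trained separately with a squared loss."""
    return np.mean((rewards - baselines) ** 2)
\end{verbatim}
Minimising \texttt{rbi\_loss} only imitates rewarded answers, whereas \texttt{reinforce\_loss} also uses zero-reward episodes to push probability mass away from incorrect answers.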
\subsubsection{Forward Prediction (FP)} FP \citep{weston2016dialog} handles the situation where a numerical reward for a bot's answer is not available, meaning that there are no +1 or 0 labels available after a student's utterance. Instead, the model assumes the teacher gives textual feedback $t$ to the bot's answer, taking the form of a dialogue utterance, and the model tries to predict this instead. Suppose that $x$ denotes the teacher's question and $C$=$c_1$, $c_2$, ..., $c_N$ denotes the dialogue history as before. In {\it FP}, the model first maps the teacher's initial question $x$ and dialogue history $C$ to a vector representation $u$ using a memory network with multiple hops. Then the model will perform another hop of attention over all possible student's answers in $\mathbb{A}$, with an additional part that incorporates the information of which candidate (i.e., $a$) was actually selected in the dialogue: \begin{equation} p_{\hat{a}}=\texttt{softmax}(u^T y_{\hat{a}})~~~~~o=\sum_{\hat{a}\in \mathbb{A}} p_{\hat{a}} (y_{\hat{a}}+\beta\cdot {\bf 1}[\hat{a}=a] ) \end{equation} where $y_{\hat{a}}$ denotes the vector representation for the student's answer candidate $\hat{a}$. $\beta$ is a (learned) d-dimensional vector to signify the actual action $a$ that the student chooses. $o$ is then combined with $u$ to predict the teacher's feedback $t$ using a softmax: \begin{equation} u_1=o+u ~~~~ t=\texttt{softmax} (u_1^T x_{r_1}, u_1^T x_{r_2}, ..., u_1^T x_{r_N}) \end{equation} where $x_{r_{i}}$ denotes the embedding for the $i^{th}$ response. In the online setting, the teacher will give textual feedback, and the learner needs to update its model using the feedback. It was shown in \cite{weston2016dialog} that in an off-line setting this procedure can work either on its own, or in conjunction with a method that uses numerical rewards as well for improved performance. In the online setting, we consider two simple extensions: \begin{itemize} \item $\epsilon$-greedy exploration: with probability $\epsilon$ the student will give a random answer, and with probability $1-\epsilon$ it will give the answer that its model assigns the largest probability. This method enables the model to explore the space of actions and to potentially discover correct answers. \item data balancing: cluster the set of teacher responses $t$ and then balance training across the clusters equally.\footnote{In the simulated data, because the responses are templates, this can be implemented by first randomly sampling the response, and then randomly sampling a story with that response; we keep the history of all stories seen from which we sample. For real data slightly more sophisticated clustering should be used.} This is a type of experience replay \citep{mnih2013playing} but sampling with an evened distribution. Balancing stops part of the distribution dominating the learning. For example, if the model is not exposed to sufficient positive and negative feedback, and one class overly dominates, the learning process degenerates to a model that always predicts the same output regardless of its input. \end{itemize} In our experiments, we used variants of the End-to-End Memory Network (MemN2N) model \citep{sukhbaatar2015end} %as a backbone as our underlying architecture for learning from dialogue. % interactions. The input to MemN2N is the last utterance of the dialogue history $x$ as well as a set of memories (context) $C$=$c_1$, $c_2$, ..., $c_N$. 
The memory $C$ encodes both short-term memory, e.g., dialogue histories between the bot and the teacher, and long-term memories, e.g., the knowledge base facts that the bot has access to. Given the input $x$ and $C$, the goal is to produce an output/label $a$. In the first step, the query $x$ is transformed to a vector representation $u_0$ by summing up its constituent word embeddings: $u_0=Ax$. The input $x$ is a bag-of-words vector and $A$ is the $d\times V$ word embedding matrix where $d$ denotes the emebbding dimension and $V$\ denotes the vocabulary size. Each memory $c_i$ is similarly transformed to a vector $m_{i}$. The model will read information from the memory by comparing input representation $u_0$ with memory vectors $m_{i}$ using softmax weights: \begin{equation} o_1=\sum_{i}p_i^1 m_{i} ~~~~~~~~~~p_i^1=\texttt{softmax}(u_0^T m_i) \end{equation} This process selects memories relevant to the last utterance $x$, i.e., the memories with large values of $p_i^1$. The returned memory vector $o_1$ is the weighted sum of memory vectors. This process can be repeated to query the memory N times (so called ``hops'') by adding $o_n$ to the original input, $u_1=o_1+u_0$, or to the previous state, $u_n=o_n+u_{n-1}$, and then using $u_n$ to query the memories again. In the end, $u_N$ is input to a softmax function for the final prediction: \begin{equation}\label{eq:a} a=\texttt{softmax} (u_N^T y_1,u_N^T y_2,...,u_N^T y_L) \end{equation} where $y_1, \dots, y_L$ denote the set of candidate answers. If the answer is a word, $y_i$ is the corresponding word embedding. If the answer is a sentence, $y_i$ is the embedding for the sentence achieved in the same way that we obtain embeddings for query $x$ and memory $C$. The standard way MemN2N is trained is via a cross entropy criterion on known input-output pairs, which we refer to as supervised or imitation learning. As our work is in a reinforcement learning setup where our model must make predictions to learn, this procedure will not work, so we instead consider reinforcement learning algorithms which we describe next.
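For readers who prefer code to equations, a compact numpy sketch of the hop mechanism described above is given here (illustrative only: a single embedding matrix is shared for the query, the memories and the answer candidates, whereas the actual model may use separate matrices per hop):
\begin{verbatim}
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def memn2n_forward(x_bow, memories_bow, candidates_bow, A, n_hops=3):
    """x_bow: (V,) bag-of-words query; memories_bow: (N, V);
    candidates_bow: (L, V); A: (d, V) embedding matrix."""
    u = A @ x_bow                     # u_0 = A x
    m = memories_bow @ A.T            # memory vectors m_i
    y = candidates_bow @ A.T          # candidate answer embeddings y_i
    for _ in range(n_hops):
        p = softmax(m @ u)            # p_i = softmax(u^T m_i)
        o = p @ m                     # o = sum_i p_i m_i
        u = u + o                     # u_n = o_n + u_{n-1}
    return softmax(y @ u)             # distribution over candidate answers

# toy usage with random data
V, d, N, L = 50, 16, 6, 4
rng = np.random.default_rng(0)
A = rng.normal(size=(d, V))
probs = memn2n_forward(rng.integers(0, 2, V).astype(float),
                       rng.integers(0, 2, (N, V)).astype(float),
                       rng.integers(0, 2, (L, V)).astype(float), A)
\end{verbatim}
The final softmax corresponds to the answer distribution of Eq.~(\ref{eq:a}).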
Revisiting Batch Normalization For Practical Domain Adaptation
1603.04779
Table 3: Single source domain adaptation results on Caltech-Bing (Bergamo & Torresani, 2010) dataset.
[ "Method", "C → B", "B → C", "Avg" ]
[ [ "Inception BN (Ioffe & Szegedy, 2015 )", "35.1", "64.6", "49.9" ], [ "CORAL (Sun et al., 2016 )", "[BOLD] 35.3", "67.2", "51.3" ], [ "AdaBN", "35.2", "[BOLD] 68.1", "[BOLD] 51.7" ], [ "AdaBN + CORAL", "35.0", "67.5", "51.2" ] ]
Compared with CORAL, AdaBN achieves better performance, improving 1.8% over the baseline. Note that all the domain adaptation methods show only minor improvements over the baseline in the task C → B. One hypothesis for this relatively small improvement is that the images in the Bing dataset are collected from the Internet and are therefore more diverse and noisier. Thus, it is not easy to adapt to the Bing dataset from the relatively clean Caltech-256 dataset. Combining CORAL with our method does not offer further improvements. This might be explained by the noise of the Bing dataset and the imbalance in the number of images between the two domains.
\def\A{{\bf A}} \def\a{{\bf a}} \def\B{{\bf B}} \def\bb{{\bf b}} \def\C{{\bf C}} \def\D{{\bf D}} \def\d{{\bf d}} \def\E{{\bf E}} \def\e{{\bf e}} \def\F{{\bf F}} \def\f{{\bf f}} \def\G{{\bf G}} \def\g{{\bf g}} \def\k{{\bf k}} \def\K{{\bf K}} \def\H{{\bf H}} \def\h{{\bf h}} \def\I{{\bf I}} \def\L{{\bf L}} \def\M{{\bf M}} \def\m{{\bf m}} \def\n{{\bf n}} \def\N{{\bf N}} \def\BP{{\bf P}} \def\R{{\bf R}} \def\BS{{\bf S}} \def\s{{\bf s}} \def\t{{\bf t}} \def\T{{\bf T}} \def\U{{\bf U}} \def\u{{\bf u}} \def\V{{\bf V}} \def\v{{\bf v}} \def\W{{\bf W}} \def\w{{\bf w}} \def\X{{\bf X}} \def\Y{{\bf Y}} \def\Q{{\bf Q}} \def\X{{\bf X}} \def\x{{\bf x}} \def\y{{\bf y}} \def\Z{{\bf Z}} \def\z{{\bf z}} \def\0{{\bf 0}} \def\1{{\bf 1}} \def\SS{{\bf S}} \def\ME{{\mathbb E}} \def\MF{{\mathcal F}} \def\MG{{\mathcal G}} \def\MI{{\mathcal I}} \def\ML{{\mathcal L}} \def\MN{{\mathcal N}} \def\MO{{\mathcal O}} \def\MT{{\mathcal T}} \def\MX{{\mathcal X}} \def\SW{{\mathcal {SW}}} \def\MW{{\mathcal W}} \def\MY{{\mathcal Y}} \def\BR{{\mathbb R}} \def\MS{{\mathcal S}} \def\MC{{\mathcal C}} \def\ph{\mbox{\boldmath$\phi$\unboldmath}} \def\vp{\mbox{\boldmath$\varphi$\unboldmath}} \def\pii{\mbox{\boldmath$\pi$\unboldmath}} \def\Ph{\mbox{\boldmath$\Phi$\unboldmath}} \def\pss{\mbox{\boldmath$\psi$\unboldmath}} \def\Ps{\mbox{\boldmath$\Psi$\unboldmath}} \def\muu{\mbox{\boldmath$\mu$\unboldmath}} \def\Si{\mbox{\boldmath$\Sigma$\unboldmath}} \def\lam{\mbox{\boldmath$\lambda$\unboldmath}} \def\Lam{\mbox{\boldmath$\Lambda$\unboldmath}} \def\Gam{\mbox{\boldmath$\Gamma$\unboldmath}} \def\gam{\mbox{\boldmath$\gamma$\unboldmath}} \def\bet{\mbox{\boldmath$\beta$\unboldmath}} \def\Oma{\mbox{\boldmath$\Omega$\unboldmath}} \def\De{\mbox{\boldmath$\Delta$\unboldmath}} \def\de{\mbox{\boldmath$\delta$\unboldmath}} \def\Tha{\mbox{\boldmath$\Theta$\unboldmath}} \def\tha{\mbox{\boldmath$\theta$\unboldmath}} \def\etal{{\em et al.\/}\,} \def\tr{\mathrm{tr}} \def\exp{\mathrm{exp}} \def\rank{\mathrm{rank}} \def\diag{\mathrm{diag}} \def\dg{\mathrm{dg}} \def\argmax{\mathop{\rm argmax}} \def\argmin{\mathop{\rm argmin}} \def\vecd{\mathrm{vec}} \def\diag{\mathrm{diag}} \newcommand{\row}[2] {#1_{#2 \cdot}} \newcommand{\col}[2] {#1_{\cdot #2}} \documentclass{article} % For LaTeX2e \usepackage[table]{xcolor} \graphicspath{{fig/}} \title{Revisiting Batch Normalization For \\Practical Domain Adaptation} \author{Yanghao Li$^\dagger$, Naiyan Wang$^\ddagger$, Jianping Shi$^\diamond$, Jiaying Liu$^\dagger$, Xiaodi Hou$^\ddagger$\\ $^\dagger$ Institute of Computer Science and Technology, Peking University\\ $^\ddagger$ TuSimple ~~~ $^\diamond$ SenseTime\\ {\tt\small lyttonhao@pku.edu.cn}~~~{\tt\small winsty@gmail.com}~~{\tt\small shijianping5000@gmail.com}\\ {\tt\small liujiaying@pku.edu.cn}~~ {\tt\small xiaodi.hou@gmail.com} } \newcommand{\fix}{\marginpar{FIX}} \newcommand{\new}{\marginpar{NEW}} \begin{document} \maketitle \rowcolors{2}{white}{gray!25} \begin{abstract} Deep neural networks (DNN) have shown unprecedented success in various computer vision applications such as image classification and object detection. However, it is still a common annoyance during the training phase, that one has to prepare at least thousands of labeled images to fine-tune a network to a specific domain. %However, it is still a common (yet inconvenient) practice to prepare at least tens of thousands of labeled images to fine-tune a network on every task before the model is ready to use. 
Recent study~\citep{deeper_bias} shows that a DNN has strong dependency towards the training dataset, and the learned features cannot be easily transferred to a different but relevant task without fine-tuning. In this paper, we propose a simple yet powerful remedy, called \emph{Adaptive Batch Normalization} (AdaBN) to increase the generalization ability of a DNN. %, based on the well-known Batch Normalization (BN) technique~\citep{bn} which has become a standard component in modern deep learning. By modulating the statistics in all Batch Normalization layers across the network, our approach achieves deep adaptation effect for domain adaptation tasks. In contrary to other deep learning domain adaptation methods, our method does not require additional components, and is parameter-free. It archives state-of-the-art performance despite its surprising simplicity. Furthermore, we demonstrate that our method is complementary with other existing methods. Combining AdaBN with existing domain adaptation treatments may further improve model performance. \end{abstract} \begin{section}{Introduction} Training a DNN for a new image recognition task is expensive. It requires a large amount of labeled training images that are not easy to obtain. One common practice is to use %a training set from a different source. For instance, one can borrow training data from an existing dataset, or query images from search engines and then label them using Amazon Mechanical Turk. These approaches usually suffer from inferior performance due to dataset discrepancies, or ``dataset bias'', because 1) the distributions of the source domains (third party datasets or Internet images) are often different from the target domain (testing images); and 2) DNN is particularly good at capturing dataset bias in its internal representation~\citep{unbiased}, which eventually leads to overfitting. labeled data from other related source such as a different public dataset, or harvesting images by keywords from a search engine. Because 1) the distributions of the source domains (third party datasets or Internet images) are often different from the target domain (testing images); and 2) DNN is particularly good at capturing dataset bias in its internal representation~\citep{unbiased}, which eventually leads to overfitting, imperfectly paired training and testing sets usually leads to inferior performance. Known as domain adaptation, the effort to bridge the gap between training and testing data distributions has been discussed several times under the context of deep learning~\citep{ddc,dan,joint,revgrad}. To make the connection between the domain of training and the domain of testing, most of these methods require additional optimization steps and extra parameters. Such additional computational burden could greatly complicate the training of a DNN which is already intimidating enough for most people. In this paper, we propose a simple yet effective approach called \emph{AdaBN} for batch normalized DNN domain adaptation. % Our observation suggests a dissociation between bias and variance terms in a batch-normalized DNN. We hypothesize that the label related knowledge is stored in the weight matrix of each layer, whereas domain related knowledge is represented by the statistics of the Batch Normalization (BN)~\citep{bn} layer. Therefore, we can easily transfer the trained model to a new domain by modulating the statistics in the BN layer. 
This approach is straightforward to implement, has zero parameter to tune, and requires minimal computational resources. Moreover, our AdaBN is ready to be extended to more sophisticated scenarios such as multi-source domain adaptation and semi-supervised settings. Fig.~\ref{fig:teaser} illustrates the flowchart of AdaBN. To summarize, our contributions are as follows: \begin{enumerate} \item We propose a novel domain adaptation technique called Adaptive Batch Normalization (AdaBN). We show that AdaBN can naturally dissociate bias and variance of a dataset, which is ideal for domain adaptation tasks. \item We validate the effectiveness of our approach on standard benchmarks for both single source and multi-source domain adaptation. Our method outperforms the state-of-the-art methods. \item We conduct experiments on the cloud detection for remote sensing images to further demonstrate the effectiveness of our approach in practical use. \end{enumerate} \end{section} \begin{section}{Related Work} Domain transfer in visual recognition tasks has gained increasing attention in recent literature~\citep{beijbom2012domain,patel2015visual}. Often referred to as \emph{covariate shift}~\citep{shimodaira2000improving} or \emph{dataset bias}~\citep{unbiased}, this problem poses a great challenge to the generalization ability of a learned model. One key component of domain transfer is to model the difference between source and target distributions. In~\citet{khosla2012undoing}, the authors assign each dataset with an explicit bias vector, and train one discriminative model to handle multiple classification problems with different bias terms. A more explicit way to compute dataset difference is based on Maximum Mean Discrepancy (MMD)~\citep{mmd}. This approach projects each data sample into a Reproducing Kernel Hilbert Space, and then computes the difference of sample means. To reduce dataset discrepancies, many methods are proposed, including sample selections~\citep{huang2006correcting,landmark}, explicit projection learning~\citep{tca,gopalan2011domain,dip} and principal axes alignment~\citep{sa,gfk,aljundi2015landmarks}. All of these methods face the same challenge of %devising an effective domain transfer function in high-dimensional non-linear space. constructing the domain transfer function -- a high-dimensional non-linear function. Due to computational constraints, most of the proposed transfer functions are in the category of simple shallow projections, which are typically composed of kernel transformations and linear mapping functions. In the field of deep learning, feature transferability across different domains is a tantalizing yet generally unsolved topic~\citep{transferable,deeper_bias}. To transfer the learned representations to a new dataset, pre-training plus fine-tuning~\citep{decaf} have become \textit{de facto} procedures. However, adaptation by fine-tuning is far from perfect. It requires a considerable amount of labeled data from the target domain, and non-negligible computational resources to re-train the whole network. A series of progress has been made in DNN to facilitate domain transfer. Early works of domain adaptation either focus on reordering fine-tuning samples~\citep{dlid}, or regularizing MMD~\citep{mmd} in a shallow network~\citep{ae_adaptation}. It is only until recently that the problem is directly attacked under the setting of classification of unlabeled target domain using modern convolutional neural network (CNN) architecture. 
DDC~\citep{ddc} used the classical MMD loss to regularize the representation in the last layer of CNN. DAN~\citep{dan} further extended the method to multiple kernel MMD and multiple layer adaptation. Besides adapting features using MMD, RTN~\citep{long2016unsupervised} also added a gated residual layer for classifier adaptation. RevGrad~\citep{revgrad} devised a gradient reversal layer to %reverse the gradient from the domain classification loss. %that helps to distinguish the domains of each data sample. compensate the back-propagated gradients that are domain specific. Recently, by explicitly modeling both private and shared components of the domain representations in the network, \citet{bousmalis2016domain} proposed a Domain Separation Network to extract better domain-invariant features. Another related work is CORAL~\citep{coral}. This model focuses on the last layer of CNN. CORAL whitens the data in source domain, and then re-correlates the source domain features to target domain. This operation aligns the second order statistics of source domain and target domain distributions. Surprisingly, such simple approach yields state-of-the-arts results in various text classification and visual recognition tasks. Recently, Deep CORAL~\citep{sun2016deep} also extends the method into DNN by incorporating a CORAL loss. \subsection{Batch Normalization}\label{sec:bn} In this section, we briefly review Batch Normalization (BN)~\citep{bn} which is closely related to our AdaBN. The BN layer is originally designed to alleviate the issue of internal covariate shifting -- a common problem while training a very deep neural network. It first standardizes each feature in a mini-batch, and then learns a common slope and bias for each mini-batch. Formally, given the input to a BN layer $\X \in \BR^{n \times p}$, where $n$ denotes the batch size, and $p$ is the feature dimension, BN layer transforms a feature $j \in \{1 \ldots p\}$ into: \begin{equation} \begin{aligned} \hat{x}_j &= \frac{x_j - \ME[\col{\X}{j}]}{\sqrt{\text{Var}[\col{\X}{j}]}}, \\ y_j &= \gamma_j \hat{x}_j + \beta_j, \end{aligned} \end{equation} where $x_j$ and $y_j$ are the input/output scalars of one neuron response in one data sample; $\col{\X}{j}$ denotes the $j^{th}$ column of the input data; and $\gamma_j$ and $\beta_j$ are parameters to be learned. This transformation guarantees that the input distribution of each layer remains unchanged across different mini-batches. For Stochastic Gradient Descent (SGD) optimization, a stable input distribution could greatly facilitate model convergence, leading to much faster training speed for CNN. Moreover, if training data are shuffled at each epoch, the same %training sample is transformed, or augmented differently in each epoch. This property acts as an additional regularization to combat against overfitting. In the testing phase, the global statistics instead of the statistics from one mini-batch are used to stabilize the testing results. training sample will be applied with different transformations, or in other words, more comprehensively augmented throughout the training. During the testing phase, the global statistics of all training samples is used to normalize every mini-batch of test data. Extensive experiments have shown that Batch Normalization significantly reduces the number of iteration to converge, and improves the final performance at the same time. 
BN layer has become a standard component in recent top-performing CNN architectures, such as deep residual network~\citep{resnet}, and Inception V3~\citep{inception_v3}. \end{section} \begin{section}{The Model} In Sec.~\ref{sec:observation}, we first analyze the domain shift in deep neural network, and reveal two key observations. Then in Sec.~\ref{sec:method}, we introduce our Adaptive Batch Normalization (AdaBN) method based on these observations. Finally, we analyze our method in-depth in Sec.~\ref{sec:discuss}. \begin{subsection}{A Pilot Experiment}\label{sec:observation} Although the Batch Normalization (BN) technique is originally proposed to help SGD optimization, its core idea is to align the distribution of training data. From this perspective, it is interesting to examine the BN parameters (batch-wise mean and variance) over different dataset at different layers of the network. In this pilot experiment, we use MXNet implementation~\citep{mxnet} of the Inception-BN model~\citep{bn} pre-trained on ImageNet classification task~\citep{imagenet} as our baseline DNN model. Our image data are drawn from~\citep{bing-caltech}, which contains the same classes of images from both Caltech-256 dataset~\citep{griffin2007caltech} and Bing image search results. For each mini-batch sampled from one dataset, we concatenate the mean and variance of all neurons from one layer to form a feature vector. Using linear SVM, we can almost perfectly classify whether the mini-batch feature vector is from Caltech-256 or Bing dataset. Fig.~\ref{fig:bn_visualize} visualizes the distributions of mini-batch feature vectors from two datasets in 2D. It is clear that BN statistics from different domains are separated into clusters. This pilot experiment suggests: \begin{enumerate} \item Both shallow layers and deep layers of the DNN are influenced by domain shift. Domain adaptation by manipulating the output layer alone is not enough. \item The statistics of BN layer contain the traits of the data domain. \end{enumerate} Both observations motivate us to adapt the representation across different domains by BN layer. \end{subsection} \begin{subsection}{Adaptive Batch Normalization}\label{sec:method} Given the pre-trained DNN model and a target domain, our Adaptive Batch Normalization algorithm is as follows\footnote{In practice we adopt an online algorithm~\citep{donald1999art} to efficiently estimate the mean and variance.}: \floatstyle{plain} \newfloat{myalgo}{tbhp}{mya} \begin{myalgo} \centering \begin{minipage}{8cm} \begin{algorithm}[H] \caption{Adaptive Batch Normalization (AdaBN)} \begin{algorithmic} \FOR{neuron $j$ in DNN} \STATE Concatenate neuron responses on all images of target domain $t$: $\mathbf{x}_j = [\ldots, x_j(m), \ldots]$ \STATE Compute the mean and variance of the target domain: $\mu_j^t = \mathbb{E}(\mathbf{x}_j^t)$, $\sigma_j^t = \sqrt{\text{Var}(\mathbf{x}_j^t)}$. \ENDFOR \FOR{neuron $j$ in DNN, testing image $m$ in target domain} \STATE Compute BN output $y_j(m):= \gamma_j \frac{\big(x_j(m) - \mu_j^t\big)}{\sigma_j^t} + \beta_j$ \ENDFOR \end{algorithmic} \end{algorithm} \end{minipage} \end{myalgo} The intuition behind our method is straightforward: The standardization of each layer by domain ensures that each layer receives data from a similar distribution, no matter it comes from the source domain or the target domain. For $K$ domain adaptation where $K > 2$, we standardize each sample by the statistics in its own domain. 
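To emphasize how little machinery this requires, here is a minimal numpy sketch of the statistic replacement for one BN layer (an illustration rather than the actual MXNet implementation; in practice the running mean and variance stored in each BN layer are simply overwritten with the target-domain estimates, e.g. via the online algorithm mentioned in the footnote):
\begin{verbatim}
import numpy as np

def adabn_statistics(target_features, eps=1e-5):
    """Estimate target-domain statistics for one BN layer.
    target_features: (num_target_samples, p) responses of that layer."""
    mu_t = target_features.mean(axis=0)
    sigma_t = np.sqrt(target_features.var(axis=0) + eps)
    return mu_t, sigma_t

def adabn_forward(x, mu_t, sigma_t, gamma, beta):
    """BN output at test time with the source statistics replaced by the
    target-domain ones: y_j = gamma_j * (x_j - mu_t_j) / sigma_t_j + beta_j."""
    return gamma * (x - mu_t) / sigma_t + beta
\end{verbatim}
Note that the learned parameters gamma and beta are left untouched; only the normalization statistics change between domains.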
During training, the statistics are calculated for every mini-batch, the only thing that we need to make sure is that the samples in every mini-batch are from the same domain. For (semi-)supervised domain adaptation, we may use the labeled data to fine-tune the weights as well. As a result, our method could fit in all different settings of domain adaptation with minimal effort. \end{subsection} \begin{subsection}{Further Thoughts About Adabn}\label{sec:discuss} The simplicity of AdaBN is in sharp contrast to the complication of the domain shift problem. One natural question to ask is whether such simple translation and scaling operations could approximate the intrinsically non-linear domain transfer function. Consider a simple neural network with input $\x \in \BR^{p_1 \times 1}$. It has one BN layer with mean and variance of each feature being $\mu_i$ and $\sigma^2_i$ ($i \in \{1 \ldots p_2\}$), one fully connected layer with weight matrix $\W \in \BR^{p_1 \times p_2}$ and bias $\bb \in \BR^{p_2 \times 1}$, and a non-linear transformation layer $f(\cdot)$, where $p_1$ and $p_2$ correspond to the input and output feature size. The output of this network is $f(\W_a \x + \bb_a)$, where \begin{equation} \begin{aligned} \W_a &= \W^T \Si^{-1},& \bb_a &= -\W^T \Si^{-1} \muu + \bb, \\ \Si &= \diag(\sigma_1, ..., \sigma_{p_1}),& \muu &= (\mu_1, ..., \mu_{p_1}). \end{aligned} \end{equation} The output without BN is simply $f(\W^T \x + \bb)$. We can see that the transformation is highly non-linear even for a simple network with one computation layer. As CNN architecture goes deeper, it will gain increasing power to represent more complicated transformations. Another question is why we transform the neuron responses independently, not decorrelate and then re-correlate the responses as suggested in~\citet{coral}. Under certain conditions, decorrelation could improve the performance. However, in CNN, the mini-batch size is usually smaller than the feature dimension, leading to singular covariance matrices that is hard to be inversed. As a result, the covariance matrix is always singular. In addition, decorrelation requires to compute the inverse of the covariance matrix which is computationally intensive, especially if we plan to apply AdaBN to all layers of the network. \end{subsection} \end{section} \begin{section}{Experiments}\label{sec:exp} In this section, we demonstrate the effectiveness of AdaBN on standard domain adaptation datasets, and empirically analyze the adapted features. We also evaluation our method on a practical application with remote sensing images. %In the sequel, we refer our method as ``BN Adapt'' for short. \subsection{Experimental Settings} We first introduce our experiments on two standard datasets: Office~\citep{office} and Caltech-Bing~\citep{bing-caltech}. \textbf{Office}~\citep{office} is a standard benchmark for domain adaptation, which is a collection of 4652 images in 31 classes from three different domains: \textit{Amazon}(\textbf{A}), \textit{DSRL}(\textbf{D}) and \textit{Webcam}(\textbf{W}). Similar to~\citep{ddc,coral,dan}, we evaluate the pairwise domain adaption performance of AdaBN on all six pairs of domains. For the multi-source setting, we evaluate our method on three transfer tasks \{\textbf{A, W}\} $\rightarrow$ \textbf{D}, \{\textbf{A, D}\} $\rightarrow$ \textbf{W}, \{\textbf{D, W}\} $\rightarrow$ \textbf{A}. 
\textbf{Caltech-Bing}~\citep{bing-caltech} is a much larger domain adaptation dataset, which contains 30,607 and 121,730 images in 256 categories from two domains Caltech-256(\textbf{C}) and Bing(\textbf{B}). The images in the Bing set are collected from Bing image search engine by keyword search. Apparently Bing data contains noise, and its data distribution is dramatically different from that of Caltech-256. We compare our approach with a variety of methods, including four shallow methods: SA~\citep{sa}, LSSA~\citep{aljundi2015landmarks}, GFK~\citep{gfk}, CORAL~\citep{coral}, and four deep methods: DDC~\citep{ddc}, DAN~\citep{dan}, RevGrad~\citep{revgrad}, Deep CORAL~\citep{sun2016deep}. Specifically, GFK models domain shift by integrating an infinite number of subspaces that characterize changes in statistical properties from the source to the target domain. SA, LSSA and CORAL align the source and target subspaces by explicit feature space transformations that would map source distribution into the target one. DDC and DAN are deep learning based methods which maximize domain invariance by adding to AlexNet one or several adaptation layers using MMD. RevGrad incorporates a gradient reversal layer in the deep model to encourage learning domain-invariant features. Deep CORAL extends CORAL to perform end-to-end adaptation in DNN. It should be noted that these deep learning methods have the adaptation layers on top of the output layers of DNNs, which is a sharp contrast to our method that delves into early convolution layers as well with the help of BN layers. We follow the full protocol~\citep{decaf} for the single source setting; while for multiple sources setting, we use all the samples in the source domains as training data, and use all the samples in the target domain as testing data. We fine-tune the Inception-BN~\citep{bn} model on source domain in each task for 100 epochs. The learning rate is set to $0.01$ initially, and then is dropped by a factor $0.1$ every 40 epochs. Since the office dataset is quite small, following the best practice in~\citet{dan}, we freeze the first three groups of Inception modules, and set the learning rate of fourth and fifth group one tenth of the base learning rate to avoid overfitting. For Caltech-Bing dataset, we fine-tune the whole model with the same base learning rate. \subsection{Results} \subsubsection{Office Dataset} Our results on Office dataset is reported in Table~\ref{tbl:result} and Table~\ref{tbl:multi} for single/multi source(s), respectively. Note that the first 5 models of the Table~\ref{tbl:result} are pre-trained on AlexNet~\citep{alexnet} instead of the Inception-BN~\citep{bn} model, due to the lack of publicly available pre-trained Inception BN model in Caffe~\citep{caffe}. Thus, the relative improvements over the baseline (AlexNet/Inception BN) make more sense than the absolute numbers of each algorithm. From Table~\ref{tbl:result}, we first notice that the Inception-BN indeed improves over the AlexNet on average, which means that the CNN pre-trained on ImageNet has learned general features, the improvements on ImageNet can be transferred to new tasks. Among the methods based on Inception-BN features, our method improves the most over the baseline. Moreover, since our method is complementary to other methods, we can simply apply CORAL on the top of AdaBN. Not surprisingly, this simple combination exhibits 0.5\% increase in performance. 
This preliminary test reveals further potential of AdaBN if combined with other advanced domain adaptation methods. Finally, we could improve 1.7\% over the baseline, and advance the state-of-the-art results for this dataset. %Compared to other methods based on AlexNet, our method is better than DDC and RevGrad, and worse than DAN and Deep CORAL in terms of relative improvements over corresponding baselines. None of the compared methods has reported their performance on multi-source domain adaptation. To demonstrate the capacity of AdaBN under multi-domain settings, we compare it against CORAL, which is the best performing algorithm in the single source setting. % here we only compare AdaBN with the best algorithm CORAL in the single source setting. Analyzing the results of the baseline in Table~\ref{tbl:multi}, The result is reported in Table~\ref{tbl:multi}. We find that simply combining two domains does not lead to better performance. The result is generally worse compared to the best performing single domain between the two. This phenomenon suggests that if we cannot properly cope with domain bias, the increase of training samples may be reversely affect to the testing performance. This result confirms the necessity of domain adaptation. In this more challenging setting, AdaBN still outperforms the baseline and CORAL on average. Again, when combined with CORAL, our method demonstrates further improvements. At last, our method archives 2.3\% gain over the baseline. \subsubsection{Caltech-Bing Dataset} To further evaluate our method on the large-scale dataset, we show our results on Caltech-Bing Dataset in Table~\ref{tbl:caltech-bing}. Compared with CORAL, AdaBN achieves better performance, which improves 1.8\% over the baseline. Note that all the domain adaptation methods show minor improvements over the baseline in the task \textbf{C} $\rightarrow$ \textbf{B}. One of the hypotheses to this relatively small improvement is that the images in Bing dataset are collected from Internet, which are more diverse and noisier~\citep{bing-caltech}. Thus, it is not easy to adapt on the Bing dataset from the relatively clean dataset Caltech-256. Combining CORAL with our method does not offer further improvements. This might be explained by the noise of the Bing dataset and the imbalance of the number of images in the two domains. \subsection{Empirical Analysis} In this section, we empirically analyze the features adapted by our method and investigate the influence of the number of samples in target domain to the performance. \subsubsection{Analysis of Feature Divergence.} In this experiment, we analyze the statistics of the output of one shallow layer (the output of second convolution layer) and one deep layer (the output of last Inception module before ReLU) in the network. In particular, we compute the distance of source domain distribution and target domain distribution before and after adaptation. We denote each feature $i$ as $F_i$, and assume that the output of each feature generally follows a Gaussian distribution with mean $\mu_i$ and variance $\sigma_i^2$. Then we use the symmetric KL divergence as our metric: \begin{equation} \begin{aligned} D(F_i \mid\mid F_j) &= \text{KL}(F_i \mid \mid F_j) + \text{KL}(F_j \mid \mid F_i),\\ \text{KL}(F_i \mid \mid F_j) &= \log \frac{\sigma_j}{\sigma_i} + \frac{\sigma_i^2 + (\mu_i - \mu_j)^2}{2\sigma_j^2} - \frac{1}{2}. \end{aligned} \end{equation} We plot the distribution of the distances in Fig.~\ref{fig:distribution}. 
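The divergence used in this analysis is straightforward to reproduce; the following is a small numpy sketch of the symmetric KL divergence between two Gaussian feature distributions (illustrative only; the inputs are the means and variances estimated from the layer responses in each domain):
\begin{verbatim}
import numpy as np

def kl_gauss(mu_i, var_i, mu_j, var_j):
    """KL(F_i || F_j) between two univariate Gaussians."""
    return (0.5 * np.log(var_j / var_i)
            + (var_i + (mu_i - mu_j) ** 2) / (2.0 * var_j) - 0.5)

def symmetric_kl(mu_i, var_i, mu_j, var_j):
    """Symmetric KL divergence D(F_i || F_j) used as the feature distance."""
    return (kl_gauss(mu_i, var_i, mu_j, var_j)
            + kl_gauss(mu_j, var_j, mu_i, var_i))
\end{verbatim}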
Our method reduces the domain discrepancy in both shallow layer and deep layer. We also report the quantitative results in Table.~\ref{tbl:distribution}. This experiment once again verifies the effectiveness of the proposed method. \subsubsection{Sensitivity to Target Domain Size.} Since the key of our method is to calculate the mean and variance of the target domain on different BN layers, it is very natural to ask how many target images is necessary to obtain stable statistics. In this experiment, we randomly select a subset of images in target domain to calculate the statistics and then evaluate the performance on the whole target set. Fig.~\ref{fig:target-number} illustrates the effect of using different number of batches. The results demonstrate that our method can obtain good results when using only a small part of the target examples. It should also be noted that in the extremal case of one batch of target images, our method still achieves better results than the baseline. This is valuable in practical use since a large number of target images are often not available. \begin{subsection}{Practical Application for Cloud Detection in Remote Sensing Images} In this section, we further demonstrate the effectiveness of AdaBN on a practical problem: Cloud Detection in Remote Sensing Images. Since remote sensing images are taken by different satellites with different sensors and resolutions, the captured images are visually different in texture, color, and value range distributions, as shown in Fig.~\ref{fig:rsimage}. How to adapt a model trained on one satellite to another satellite images is naturally a domain adaptation problem. Our task here is to identify cloud from the remote sensing images, which can be regarded as a semantic segmentation task. The experiment is taken under a self-collected dataset, which includes three image sets, from GF2, GF1 and Tianhui satellites. Each image set contains 635, 324 and 113 images with resolution over 6000x6000 pixels respectively. We name the three different datasets following the satellite names. GF2 dataset is used as the training dataset while GF1 and Tianhui datasets are for testing. We use a state-of-art semantic segmentation method \citep{chen2016deeplab} as our baseline model.%Only the GF2 dataset contains labeled training data. We use the training data from GF2 dataset to train a cloud detector for GF2 images. The testing result in GF2 test set can reach to 87.64\% mIOU for our baseline model. The results on GF1 and Tianhui datasets are shown in Table~\ref{tbl:remote}. The relatively low results of the baseline method indicate that there exists large distribution disparity among images from different satellites. Thus, the significant improvement after applying AdaBN reveals the effectiveness of our method. Some of the visual results are shown in Fig.~\ref{fig:rsadabn}. Since other domain adaptation methods require either additional optimization steps and extra components ($e.g.$ MMD) or post-processing distribution alignment (like CORAL), it is very hard to apply these methods from image classification to this large-size (6000x6000) segmentation problem. Comparatively, besides the effective performance, our method needs no extra parameters and very few computations over the whole adaptation process. \end{subsection} \end{section} \section{Conclusion and Future Works} In this paper, we have introduced a simple yet effective approach for domain adaptation on batch normalized neural networks. 
Besides its original uses, we have exploited another functionality of Batch Normalization (BN) layer: domain adaptation. The main idea is to replace the statistics of each BN layer in source domain with those in target domain. The proposed method is easy to implement and parameter-free, and it takes almost no effort to extend to multiple source domains and semi-supervised settings. % Moreover, our method is not sensitive to the target domain size. Thus it is more favorable for practitioners compared with other deep learning based methods. Our method established new state-of-the-art results on both single and multiple source(s) domain adaptation settings on standard benchmarks. At last, the experiments on cloud detection for large-size remote sensing images further demonstrate the effectiveness of our method in practical use. We believe our method opens up a new direction for domain adaptation. In contrary to other methods that use Maximum Mean Discrepancy (MMD) or domain confusion loss to update the weights in CNN for domain adaptation, our method only modifies the statistics of BN layer. Therefore, our method is fully complementary to other existing deep learning based methods. It is interesting to see how these different methods can be unified under one framework. {%\small \bibliographystyle{iclr2017_conference} } \end{document}
Revisiting Batch Normalization For Practical Domain Adaptation
1603.04779
Table 4: The average symmetric KL divergence of the outputs in shallow layer and deep layer, respectively.
[ "[EMPTY]", "A → W shallow", "A → W deep", "A → D shallow", "A → D deep" ]
[ [ "Before Adapt", "0.0716", "0.0614", "0.2307", "0.0502" ], [ "After Adapt", "0.0227", "0.0134", "0.0266", "0.0140" ] ]
In this experiment, we analyze the statistics of the output of one shallow layer (the output of the second convolution layer) and one deep layer (the output of the last Inception module before ReLU) in the network. In particular, we compute the distance between the source domain distribution and the target domain distribution before and after adaptation. We denote each feature $i$ as $F_i$, and assume that the output of each feature generally follows a Gaussian distribution with mean $\mu_i$ and variance $\sigma_i^2$. Then we use the symmetric KL divergence as our metric: $D(F_i \mid\mid F_j) = \text{KL}(F_i \mid\mid F_j) + \text{KL}(F_j \mid\mid F_i)$, where $\text{KL}(F_i \mid\mid F_j) = \log \frac{\sigma_j}{\sigma_i} + \frac{\sigma_i^2 + (\mu_i - \mu_j)^2}{2\sigma_j^2} - \frac{1}{2}$. We plot the distribution of the distances, and our method reduces the domain discrepancy in both the shallow layer and the deep layer. We also report the quantitative results in Table 4. This experiment once again verifies the effectiveness of the proposed method.
\def\A{{\bf A}} \def\a{{\bf a}} \def\B{{\bf B}} \def\bb{{\bf b}} \def\C{{\bf C}} \def\D{{\bf D}} \def\d{{\bf d}} \def\E{{\bf E}} \def\e{{\bf e}} \def\F{{\bf F}} \def\f{{\bf f}} \def\G{{\bf G}} \def\g{{\bf g}} \def\k{{\bf k}} \def\K{{\bf K}} \def\H{{\bf H}} \def\h{{\bf h}} \def\I{{\bf I}} \def\L{{\bf L}} \def\M{{\bf M}} \def\m{{\bf m}} \def\n{{\bf n}} \def\N{{\bf N}} \def\BP{{\bf P}} \def\R{{\bf R}} \def\BS{{\bf S}} \def\s{{\bf s}} \def\t{{\bf t}} \def\T{{\bf T}} \def\U{{\bf U}} \def\u{{\bf u}} \def\V{{\bf V}} \def\v{{\bf v}} \def\W{{\bf W}} \def\w{{\bf w}} \def\X{{\bf X}} \def\Y{{\bf Y}} \def\Q{{\bf Q}} \def\X{{\bf X}} \def\x{{\bf x}} \def\y{{\bf y}} \def\Z{{\bf Z}} \def\z{{\bf z}} \def\0{{\bf 0}} \def\1{{\bf 1}} \def\SS{{\bf S}} \def\ME{{\mathbb E}} \def\MF{{\mathcal F}} \def\MG{{\mathcal G}} \def\MI{{\mathcal I}} \def\ML{{\mathcal L}} \def\MN{{\mathcal N}} \def\MO{{\mathcal O}} \def\MT{{\mathcal T}} \def\MX{{\mathcal X}} \def\SW{{\mathcal {SW}}} \def\MW{{\mathcal W}} \def\MY{{\mathcal Y}} \def\BR{{\mathbb R}} \def\MS{{\mathcal S}} \def\MC{{\mathcal C}} \def\ph{\mbox{\boldmath$\phi$\unboldmath}} \def\vp{\mbox{\boldmath$\varphi$\unboldmath}} \def\pii{\mbox{\boldmath$\pi$\unboldmath}} \def\Ph{\mbox{\boldmath$\Phi$\unboldmath}} \def\pss{\mbox{\boldmath$\psi$\unboldmath}} \def\Ps{\mbox{\boldmath$\Psi$\unboldmath}} \def\muu{\mbox{\boldmath$\mu$\unboldmath}} \def\Si{\mbox{\boldmath$\Sigma$\unboldmath}} \def\lam{\mbox{\boldmath$\lambda$\unboldmath}} \def\Lam{\mbox{\boldmath$\Lambda$\unboldmath}} \def\Gam{\mbox{\boldmath$\Gamma$\unboldmath}} \def\gam{\mbox{\boldmath$\gamma$\unboldmath}} \def\bet{\mbox{\boldmath$\beta$\unboldmath}} \def\Oma{\mbox{\boldmath$\Omega$\unboldmath}} \def\De{\mbox{\boldmath$\Delta$\unboldmath}} \def\de{\mbox{\boldmath$\delta$\unboldmath}} \def\Tha{\mbox{\boldmath$\Theta$\unboldmath}} \def\tha{\mbox{\boldmath$\theta$\unboldmath}} \def\etal{{\em et al.\/}\,} \def\tr{\mathrm{tr}} \def\exp{\mathrm{exp}} \def\rank{\mathrm{rank}} \def\diag{\mathrm{diag}} \def\dg{\mathrm{dg}} \def\argmax{\mathop{\rm argmax}} \def\argmin{\mathop{\rm argmin}} \def\vecd{\mathrm{vec}} \def\diag{\mathrm{diag}} \newcommand{\row}[2] {#1_{#2 \cdot}} \newcommand{\col}[2] {#1_{\cdot #2}} \documentclass{article} % For LaTeX2e \usepackage[table]{xcolor} \graphicspath{{fig/}} \title{Revisiting Batch Normalization For \\Practical Domain Adaptation} \author{Yanghao Li$^\dagger$, Naiyan Wang$^\ddagger$, Jianping Shi$^\diamond$, Jiaying Liu$^\dagger$, Xiaodi Hou$^\ddagger$\\ $^\dagger$ Institute of Computer Science and Technology, Peking University\\ $^\ddagger$ TuSimple ~~~ $^\diamond$ SenseTime\\ {\tt\small lyttonhao@pku.edu.cn}~~~{\tt\small winsty@gmail.com}~~{\tt\small shijianping5000@gmail.com}\\ {\tt\small liujiaying@pku.edu.cn}~~ {\tt\small xiaodi.hou@gmail.com} } \newcommand{\fix}{\marginpar{FIX}} \newcommand{\new}{\marginpar{NEW}} \begin{document} \maketitle \rowcolors{2}{white}{gray!25} \begin{abstract} Deep neural networks (DNN) have shown unprecedented success in various computer vision applications such as image classification and object detection. However, it is still a common annoyance during the training phase, that one has to prepare at least thousands of labeled images to fine-tune a network to a specific domain. %However, it is still a common (yet inconvenient) practice to prepare at least tens of thousands of labeled images to fine-tune a network on every task before the model is ready to use. 
Recent study~\citep{deeper_bias} shows that a DNN has strong dependency towards the training dataset, and the learned features cannot be easily transferred to a different but relevant task without fine-tuning. In this paper, we propose a simple yet powerful remedy, called \emph{Adaptive Batch Normalization} (AdaBN) to increase the generalization ability of a DNN. %, based on the well-known Batch Normalization (BN) technique~\citep{bn} which has become a standard component in modern deep learning. By modulating the statistics in all Batch Normalization layers across the network, our approach achieves deep adaptation effect for domain adaptation tasks. In contrary to other deep learning domain adaptation methods, our method does not require additional components, and is parameter-free. It archives state-of-the-art performance despite its surprising simplicity. Furthermore, we demonstrate that our method is complementary with other existing methods. Combining AdaBN with existing domain adaptation treatments may further improve model performance. \end{abstract} \begin{section}{Introduction} Training a DNN for a new image recognition task is expensive. It requires a large amount of labeled training images that are not easy to obtain. One common practice is to use %a training set from a different source. For instance, one can borrow training data from an existing dataset, or query images from search engines and then label them using Amazon Mechanical Turk. These approaches usually suffer from inferior performance due to dataset discrepancies, or ``dataset bias'', because 1) the distributions of the source domains (third party datasets or Internet images) are often different from the target domain (testing images); and 2) DNN is particularly good at capturing dataset bias in its internal representation~\citep{unbiased}, which eventually leads to overfitting. labeled data from other related source such as a different public dataset, or harvesting images by keywords from a search engine. Because 1) the distributions of the source domains (third party datasets or Internet images) are often different from the target domain (testing images); and 2) DNN is particularly good at capturing dataset bias in its internal representation~\citep{unbiased}, which eventually leads to overfitting, imperfectly paired training and testing sets usually leads to inferior performance. Known as domain adaptation, the effort to bridge the gap between training and testing data distributions has been discussed several times under the context of deep learning~\citep{ddc,dan,joint,revgrad}. To make the connection between the domain of training and the domain of testing, most of these methods require additional optimization steps and extra parameters. Such additional computational burden could greatly complicate the training of a DNN which is already intimidating enough for most people. In this paper, we propose a simple yet effective approach called \emph{AdaBN} for batch normalized DNN domain adaptation. % Our observation suggests a dissociation between bias and variance terms in a batch-normalized DNN. We hypothesize that the label related knowledge is stored in the weight matrix of each layer, whereas domain related knowledge is represented by the statistics of the Batch Normalization (BN)~\citep{bn} layer. Therefore, we can easily transfer the trained model to a new domain by modulating the statistics in the BN layer. 
This approach is straightforward to implement, has no parameters to tune, and requires minimal computational resources. Moreover, AdaBN is ready to be extended to more sophisticated scenarios such as multi-source domain adaptation and semi-supervised settings. Fig.~\ref{fig:teaser} illustrates the flowchart of AdaBN. To summarize, our contributions are as follows: \begin{enumerate} \item We propose a novel domain adaptation technique called Adaptive Batch Normalization (AdaBN). We show that AdaBN can naturally dissociate the bias and variance of a dataset, which is ideal for domain adaptation tasks. \item We validate the effectiveness of our approach on standard benchmarks for both single-source and multi-source domain adaptation. Our method outperforms the state-of-the-art methods. \item We conduct experiments on cloud detection for remote sensing images to further demonstrate the effectiveness of our approach in practical use. \end{enumerate} \end{section} \begin{section}{Related Work} Domain transfer in visual recognition tasks has gained increasing attention in the recent literature~\citep{beijbom2012domain,patel2015visual}. Often referred to as \emph{covariate shift}~\citep{shimodaira2000improving} or \emph{dataset bias}~\citep{unbiased}, this problem poses a great challenge to the generalization ability of a learned model. One key component of domain transfer is to model the difference between the source and target distributions. In~\citet{khosla2012undoing}, the authors assign each dataset an explicit bias vector, and train one discriminative model to handle multiple classification problems with different bias terms. A more explicit way to compute the dataset difference is based on Maximum Mean Discrepancy (MMD)~\citep{mmd}. This approach projects each data sample into a Reproducing Kernel Hilbert Space, and then computes the difference of sample means. To reduce dataset discrepancies, many methods have been proposed, including sample selection~\citep{huang2006correcting,landmark}, explicit projection learning~\citep{tca,gopalan2011domain,dip} and principal axes alignment~\citep{sa,gfk,aljundi2015landmarks}. All of these methods face the same challenge of constructing the domain transfer function -- a high-dimensional non-linear function. Due to computational constraints, most of the proposed transfer functions fall in the category of simple shallow projections, which are typically composed of kernel transformations and linear mapping functions. In the field of deep learning, feature transferability across different domains is a tantalizing yet generally unsolved topic~\citep{transferable,deeper_bias}. To transfer the learned representations to a new dataset, pre-training plus fine-tuning~\citep{decaf} has become the \textit{de facto} procedure. However, adaptation by fine-tuning is far from perfect. It requires a considerable amount of labeled data from the target domain, and non-negligible computational resources to re-train the whole network. A series of advances has been made in DNNs to facilitate domain transfer. Early works on domain adaptation either focus on reordering fine-tuning samples~\citep{dlid}, or on regularizing MMD~\citep{mmd} in a shallow network~\citep{ae_adaptation}. Only recently has the problem been directly attacked under the setting of classifying an unlabeled target domain using a modern convolutional neural network (CNN) architecture.
DDC~\citep{ddc} used the classical MMD loss to regularize the representation in the last layer of a CNN. DAN~\citep{dan} further extended the method to multiple-kernel MMD and multiple-layer adaptation. Besides adapting features using MMD, RTN~\citep{long2016unsupervised} also added a gated residual layer for classifier adaptation. RevGrad~\citep{revgrad} devised a gradient reversal layer to compensate the back-propagated gradients that are domain specific. Recently, by explicitly modeling both private and shared components of the domain representations in the network, \citet{bousmalis2016domain} proposed a Domain Separation Network to extract better domain-invariant features. Another related work is CORAL~\citep{coral}. This model focuses on the last layer of a CNN. CORAL whitens the data in the source domain, and then re-correlates the source domain features to the target domain. This operation aligns the second-order statistics of the source domain and target domain distributions. Surprisingly, such a simple approach yields state-of-the-art results in various text classification and visual recognition tasks. Recently, Deep CORAL~\citep{sun2016deep} also extended the method to DNNs by incorporating a CORAL loss. \subsection{Batch Normalization}\label{sec:bn} In this section, we briefly review Batch Normalization (BN)~\citep{bn}, which is closely related to our AdaBN. The BN layer was originally designed to alleviate the issue of internal covariate shift -- a common problem while training a very deep neural network. It first standardizes each feature in a mini-batch, and then learns a common slope and bias for each mini-batch. Formally, given the input to a BN layer $\X \in \BR^{n \times p}$, where $n$ denotes the batch size and $p$ is the feature dimension, the BN layer transforms a feature $j \in \{1 \ldots p\}$ into: \begin{equation} \begin{aligned} \hat{x}_j &= \frac{x_j - \ME[\col{\X}{j}]}{\sqrt{\text{Var}[\col{\X}{j}]}}, \\ y_j &= \gamma_j \hat{x}_j + \beta_j, \end{aligned} \end{equation} where $x_j$ and $y_j$ are the input/output scalars of one neuron response in one data sample; $\col{\X}{j}$ denotes the $j^{th}$ column of the input data; and $\gamma_j$ and $\beta_j$ are parameters to be learned. This transformation guarantees that the input distribution of each layer remains unchanged across different mini-batches. For Stochastic Gradient Descent (SGD) optimization, a stable input distribution can greatly facilitate model convergence, leading to much faster training of a CNN. Moreover, if the training data are shuffled at each epoch, the same training sample will receive different transformations, or in other words, be more comprehensively augmented throughout the training. During the testing phase, the global statistics of all training samples are used to normalize every mini-batch of test data. Extensive experiments have shown that Batch Normalization significantly reduces the number of iterations needed to converge, and improves the final performance at the same time.
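To make the train/test asymmetry of BN concrete, the following sketch implements the per-feature transformation above in plain NumPy; it is an illustrative re-implementation under our own naming (\texttt{bn\_forward\_train}, \texttt{running\_mean}, etc.), not code released with any of the cited methods.

\begin{verbatim}
import numpy as np

def bn_forward_train(X, gamma, beta, running_mean, running_var,
                     momentum=0.9, eps=1e-5):
    # X has shape (n, p): n samples in the mini-batch, p features.
    mu = X.mean(axis=0)                    # batch-wise mean of each feature
    var = X.var(axis=0)                    # batch-wise variance of each feature
    X_hat = (X - mu) / np.sqrt(var + eps)  # standardized features
    # accumulate the global (training-set) statistics for test time
    running_mean = momentum * running_mean + (1 - momentum) * mu
    running_var = momentum * running_var + (1 - momentum) * var
    return gamma * X_hat + beta, running_mean, running_var

def bn_forward_test(X, gamma, beta, running_mean, running_var, eps=1e-5):
    # At test time the stored global statistics replace the batch statistics.
    X_hat = (X - running_mean) / np.sqrt(running_var + eps)
    return gamma * X_hat + beta
\end{verbatim}

The small constant \texttt{eps} and the moving-average update are common implementation details rather than part of the formulation above.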
BN layer has become a standard component in recent top-performing CNN architectures, such as deep residual network~\citep{resnet}, and Inception V3~\citep{inception_v3}. \end{section} \begin{section}{The Model} In Sec.~\ref{sec:observation}, we first analyze the domain shift in deep neural network, and reveal two key observations. Then in Sec.~\ref{sec:method}, we introduce our Adaptive Batch Normalization (AdaBN) method based on these observations. Finally, we analyze our method in-depth in Sec.~\ref{sec:discuss}. \begin{subsection}{A Pilot Experiment}\label{sec:observation} Although the Batch Normalization (BN) technique is originally proposed to help SGD optimization, its core idea is to align the distribution of training data. From this perspective, it is interesting to examine the BN parameters (batch-wise mean and variance) over different dataset at different layers of the network. In this pilot experiment, we use MXNet implementation~\citep{mxnet} of the Inception-BN model~\citep{bn} pre-trained on ImageNet classification task~\citep{imagenet} as our baseline DNN model. Our image data are drawn from~\citep{bing-caltech}, which contains the same classes of images from both Caltech-256 dataset~\citep{griffin2007caltech} and Bing image search results. For each mini-batch sampled from one dataset, we concatenate the mean and variance of all neurons from one layer to form a feature vector. Using linear SVM, we can almost perfectly classify whether the mini-batch feature vector is from Caltech-256 or Bing dataset. Fig.~\ref{fig:bn_visualize} visualizes the distributions of mini-batch feature vectors from two datasets in 2D. It is clear that BN statistics from different domains are separated into clusters. This pilot experiment suggests: \begin{enumerate} \item Both shallow layers and deep layers of the DNN are influenced by domain shift. Domain adaptation by manipulating the output layer alone is not enough. \item The statistics of BN layer contain the traits of the data domain. \end{enumerate} Both observations motivate us to adapt the representation across different domains by BN layer. \end{subsection} \begin{subsection}{Adaptive Batch Normalization}\label{sec:method} Given the pre-trained DNN model and a target domain, our Adaptive Batch Normalization algorithm is as follows\footnote{In practice we adopt an online algorithm~\citep{donald1999art} to efficiently estimate the mean and variance.}: \floatstyle{plain} \newfloat{myalgo}{tbhp}{mya} \begin{myalgo} \centering \begin{minipage}{8cm} \begin{algorithm}[H] \caption{Adaptive Batch Normalization (AdaBN)} \begin{algorithmic} \FOR{neuron $j$ in DNN} \STATE Concatenate neuron responses on all images of target domain $t$: $\mathbf{x}_j = [\ldots, x_j(m), \ldots]$ \STATE Compute the mean and variance of the target domain: $\mu_j^t = \mathbb{E}(\mathbf{x}_j^t)$, $\sigma_j^t = \sqrt{\text{Var}(\mathbf{x}_j^t)}$. \ENDFOR \FOR{neuron $j$ in DNN, testing image $m$ in target domain} \STATE Compute BN output $y_j(m):= \gamma_j \frac{\big(x_j(m) - \mu_j^t\big)}{\sigma_j^t} + \beta_j$ \ENDFOR \end{algorithmic} \end{algorithm} \end{minipage} \end{myalgo} The intuition behind our method is straightforward: The standardization of each layer by domain ensures that each layer receives data from a similar distribution, no matter it comes from the source domain or the target domain. For $K$ domain adaptation where $K > 2$, we standardize each sample by the statistics in its own domain. 
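A minimal sketch of this adaptation step is given below, assuming the pre-BN neuron responses on the target domain have already been collected layer by layer (in practice the statistics are estimated online during a forward pass over the target images); the object layout and function names are our own illustration rather than an existing framework API.

\begin{verbatim}
import numpy as np

def adabn(bn_layers, target_responses):
    # bn_layers:        objects storing gamma, beta, mean, var per neuron
    # target_responses: one array per BN layer, of shape
    #                   (num_target_images, num_neurons), holding the
    #                   pre-BN responses x_j(m) collected on the target domain
    for layer, x_t in zip(bn_layers, target_responses):
        layer.mean = x_t.mean(axis=0)   # target-domain mean per neuron
        layer.var = x_t.var(axis=0)     # target-domain variance per neuron
    return bn_layers

def bn_inference(layer, x, eps=1e-5):
    # Standard BN output using whatever statistics the layer currently holds;
    # after adabn() these are the target-domain statistics.
    return layer.gamma * (x - layer.mean) / np.sqrt(layer.var + eps) + layer.beta
\end{verbatim}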
During training, the statistics are calculated for every mini-batch; the only thing we need to ensure is that the samples in every mini-batch come from the same domain. For (semi-)supervised domain adaptation, we may use the labeled data to fine-tune the weights as well. As a result, our method can fit all the different settings of domain adaptation with minimal effort. \end{subsection} \begin{subsection}{Further Thoughts About AdaBN}\label{sec:discuss} The simplicity of AdaBN is in sharp contrast to the complication of the domain shift problem. One natural question to ask is whether such simple translation and scaling operations could approximate the intrinsically non-linear domain transfer function. Consider a simple neural network with input $\x \in \BR^{p_1 \times 1}$. It has one BN layer with the mean and variance of each feature being $\mu_i$ and $\sigma^2_i$ ($i \in \{1 \ldots p_1\}$), one fully connected layer with weight matrix $\W \in \BR^{p_1 \times p_2}$ and bias $\bb \in \BR^{p_2 \times 1}$, and a non-linear transformation layer $f(\cdot)$, where $p_1$ and $p_2$ correspond to the input and output feature sizes. The output of this network is $f(\W_a \x + \bb_a)$, where \begin{equation} \begin{aligned} \W_a &= \W^T \Si^{-1},& \bb_a &= -\W^T \Si^{-1} \muu + \bb, \\ \Si &= \diag(\sigma_1, ..., \sigma_{p_1}),& \muu &= (\mu_1, ..., \mu_{p_1}). \end{aligned} \end{equation} The output without BN is simply $f(\W^T \x + \bb)$. We can see that the transformation is highly non-linear even for a simple network with one computation layer. As the CNN architecture goes deeper, it gains increasing power to represent more complicated transformations. Another question is why we transform the neuron responses independently, rather than decorrelating and then re-correlating the responses as suggested in~\citet{coral}. Under certain conditions, decorrelation could improve the performance. However, in a CNN the mini-batch size is usually smaller than the feature dimension, so the covariance matrix is singular and hard to invert. In addition, decorrelation requires computing the inverse of the covariance matrix, which is computationally intensive, especially if we plan to apply AdaBN to all layers of the network. \end{subsection} \end{section} \begin{section}{Experiments}\label{sec:exp} In this section, we demonstrate the effectiveness of AdaBN on standard domain adaptation datasets, and empirically analyze the adapted features. We also evaluate our method on a practical application with remote sensing images. \subsection{Experimental Settings} We first introduce our experiments on two standard datasets: Office~\citep{office} and Caltech-Bing~\citep{bing-caltech}. \textbf{Office}~\citep{office} is a standard benchmark for domain adaptation, which is a collection of 4652 images in 31 classes from three different domains: \textit{Amazon}~(\textbf{A}), \textit{DSLR}~(\textbf{D}) and \textit{Webcam}~(\textbf{W}). Similar to~\citep{ddc,coral,dan}, we evaluate the pairwise domain adaptation performance of AdaBN on all six pairs of domains. For the multi-source setting, we evaluate our method on three transfer tasks \{\textbf{A, W}\} $\rightarrow$ \textbf{D}, \{\textbf{A, D}\} $\rightarrow$ \textbf{W}, \{\textbf{D, W}\} $\rightarrow$ \textbf{A}.
\textbf{Caltech-Bing}~\citep{bing-caltech} is a much larger domain adaptation dataset, which contains 30,607 and 121,730 images in 256 categories from two domains Caltech-256(\textbf{C}) and Bing(\textbf{B}). The images in the Bing set are collected from Bing image search engine by keyword search. Apparently Bing data contains noise, and its data distribution is dramatically different from that of Caltech-256. We compare our approach with a variety of methods, including four shallow methods: SA~\citep{sa}, LSSA~\citep{aljundi2015landmarks}, GFK~\citep{gfk}, CORAL~\citep{coral}, and four deep methods: DDC~\citep{ddc}, DAN~\citep{dan}, RevGrad~\citep{revgrad}, Deep CORAL~\citep{sun2016deep}. Specifically, GFK models domain shift by integrating an infinite number of subspaces that characterize changes in statistical properties from the source to the target domain. SA, LSSA and CORAL align the source and target subspaces by explicit feature space transformations that would map source distribution into the target one. DDC and DAN are deep learning based methods which maximize domain invariance by adding to AlexNet one or several adaptation layers using MMD. RevGrad incorporates a gradient reversal layer in the deep model to encourage learning domain-invariant features. Deep CORAL extends CORAL to perform end-to-end adaptation in DNN. It should be noted that these deep learning methods have the adaptation layers on top of the output layers of DNNs, which is a sharp contrast to our method that delves into early convolution layers as well with the help of BN layers. We follow the full protocol~\citep{decaf} for the single source setting; while for multiple sources setting, we use all the samples in the source domains as training data, and use all the samples in the target domain as testing data. We fine-tune the Inception-BN~\citep{bn} model on source domain in each task for 100 epochs. The learning rate is set to $0.01$ initially, and then is dropped by a factor $0.1$ every 40 epochs. Since the office dataset is quite small, following the best practice in~\citet{dan}, we freeze the first three groups of Inception modules, and set the learning rate of fourth and fifth group one tenth of the base learning rate to avoid overfitting. For Caltech-Bing dataset, we fine-tune the whole model with the same base learning rate. \subsection{Results} \subsubsection{Office Dataset} Our results on Office dataset is reported in Table~\ref{tbl:result} and Table~\ref{tbl:multi} for single/multi source(s), respectively. Note that the first 5 models of the Table~\ref{tbl:result} are pre-trained on AlexNet~\citep{alexnet} instead of the Inception-BN~\citep{bn} model, due to the lack of publicly available pre-trained Inception BN model in Caffe~\citep{caffe}. Thus, the relative improvements over the baseline (AlexNet/Inception BN) make more sense than the absolute numbers of each algorithm. From Table~\ref{tbl:result}, we first notice that the Inception-BN indeed improves over the AlexNet on average, which means that the CNN pre-trained on ImageNet has learned general features, the improvements on ImageNet can be transferred to new tasks. Among the methods based on Inception-BN features, our method improves the most over the baseline. Moreover, since our method is complementary to other methods, we can simply apply CORAL on the top of AdaBN. Not surprisingly, this simple combination exhibits 0.5\% increase in performance. 
This preliminary test reveals the further potential of AdaBN if combined with other advanced domain adaptation methods. Finally, we improve 1.7\% over the baseline, and advance the state-of-the-art results for this dataset. None of the compared methods has reported performance on multi-source domain adaptation. To demonstrate the capacity of AdaBN under multi-domain settings, we compare it against CORAL, which is the best performing algorithm in the single-source setting. The results are reported in Table~\ref{tbl:multi}. Analyzing the results of the baseline, we find that simply combining two domains does not lead to better performance; the result is generally worse than that of the better performing single domain of the two. This phenomenon suggests that if we cannot properly cope with domain bias, increasing the number of training samples may adversely affect the testing performance. This result confirms the necessity of domain adaptation. In this more challenging setting, AdaBN still outperforms the baseline and CORAL on average. Again, when combined with CORAL, our method demonstrates further improvements. In the end, our method achieves a 2.3\% gain over the baseline. \subsubsection{Caltech-Bing Dataset} To further evaluate our method on a large-scale dataset, we show our results on the Caltech-Bing dataset in Table~\ref{tbl:caltech-bing}. Compared with CORAL, AdaBN achieves better performance, improving 1.8\% over the baseline. Note that all the domain adaptation methods show only minor improvements over the baseline in the task \textbf{C} $\rightarrow$ \textbf{B}. One hypothesis for this relatively small improvement is that the images in the Bing dataset are collected from the Internet, and are more diverse and noisier~\citep{bing-caltech}. Thus, it is not easy to adapt to the Bing dataset from the relatively clean Caltech-256 dataset. Combining CORAL with our method does not offer further improvements. This might be explained by the noise of the Bing dataset and the imbalance of the number of images in the two domains. \subsection{Empirical Analysis} In this section, we empirically analyze the features adapted by our method and investigate the influence of the number of samples in the target domain on the performance. \subsubsection{Analysis of Feature Divergence.} In this experiment, we analyze the statistics of the output of one shallow layer (the output of the second convolution layer) and one deep layer (the output of the last Inception module before ReLU) in the network. In particular, we compute the distance between the source domain distribution and the target domain distribution before and after adaptation. We denote each feature $i$ as $F_i$, and assume that the output of each feature generally follows a Gaussian distribution with mean $\mu_i$ and variance $\sigma_i^2$. Then we use the symmetric KL divergence as our metric: \begin{equation} \begin{aligned} D(F_i \mid\mid F_j) &= \text{KL}(F_i \mid \mid F_j) + \text{KL}(F_j \mid \mid F_i),\\ \text{KL}(F_i \mid \mid F_j) &= \log \frac{\sigma_j}{\sigma_i} + \frac{\sigma_i^2 + (\mu_i - \mu_j)^2}{2\sigma_j^2} - \frac{1}{2}. \end{aligned} \end{equation} We plot the distribution of the distances in Fig.~\ref{fig:distribution}.
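For completeness, the symmetric KL distance between two such Gaussian feature distributions can be computed with the small helper below (our own illustrative code, not part of the paper's released implementation).

\begin{verbatim}
import math

def kl_gauss(mu_i, sigma_i, mu_j, sigma_j):
    # KL(F_i || F_j) for univariate Gaussians F_i = N(mu_i, sigma_i^2)
    return (math.log(sigma_j / sigma_i)
            + (sigma_i ** 2 + (mu_i - mu_j) ** 2) / (2 * sigma_j ** 2)
            - 0.5)

def symmetric_kl(mu_i, sigma_i, mu_j, sigma_j):
    # D(F_i || F_j) = KL(F_i || F_j) + KL(F_j || F_i)
    return (kl_gauss(mu_i, sigma_i, mu_j, sigma_j)
            + kl_gauss(mu_j, sigma_j, mu_i, sigma_i))

# e.g. distance between a source feature N(0, 1) and a target feature N(0.5, 1.5^2)
print(symmetric_kl(0.0, 1.0, 0.5, 1.5))
\end{verbatim}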
Our method reduces the domain discrepancy in both the shallow layer and the deep layer. We also report the quantitative results in Table~\ref{tbl:distribution}. This experiment once again verifies the effectiveness of the proposed method. \subsubsection{Sensitivity to Target Domain Size.} Since the key of our method is to calculate the mean and variance of the target domain at different BN layers, it is natural to ask how many target images are necessary to obtain stable statistics. In this experiment, we randomly select a subset of images in the target domain to calculate the statistics and then evaluate the performance on the whole target set. Fig.~\ref{fig:target-number} illustrates the effect of using different numbers of batches. The results demonstrate that our method can obtain good results when using only a small part of the target examples. It should also be noted that in the extreme case of one batch of target images, our method still achieves better results than the baseline. This is valuable in practical use since a large number of target images is often not available. \begin{subsection}{Practical Application for Cloud Detection in Remote Sensing Images} In this section, we further demonstrate the effectiveness of AdaBN on a practical problem: cloud detection in remote sensing images. Since remote sensing images are taken by different satellites with different sensors and resolutions, the captured images are visually different in texture, color, and value range distributions, as shown in Fig.~\ref{fig:rsimage}. How to adapt a model trained on one satellite to images from another satellite is naturally a domain adaptation problem. Our task here is to identify cloud from the remote sensing images, which can be regarded as a semantic segmentation task. The experiment is conducted on a self-collected dataset, which includes three image sets from the GF2, GF1 and Tianhui satellites, containing 635, 324 and 113 images respectively, each with resolution over 6000x6000 pixels. We name the three datasets after the satellites. The GF2 dataset is used as the training dataset while the GF1 and Tianhui datasets are used for testing. We use a state-of-the-art semantic segmentation method \citep{chen2016deeplab} as our baseline model. We use the training data from the GF2 dataset to train a cloud detector for GF2 images. The testing result on the GF2 test set reaches 87.64\% mIOU for our baseline model. The results on the GF1 and Tianhui datasets are shown in Table~\ref{tbl:remote}. The relatively low results of the baseline method indicate that there exists a large distribution disparity among images from different satellites. Thus, the significant improvement after applying AdaBN reveals the effectiveness of our method. Some of the visual results are shown in Fig.~\ref{fig:rsadabn}. Since other domain adaptation methods require either additional optimization steps and extra components ($e.g.$ MMD) or post-processing distribution alignment (like CORAL), it is very hard to carry these methods over from image classification to this large-size (6000x6000) segmentation problem. Comparatively, besides its effective performance, our method needs no extra parameters and very little computation over the whole adaptation process. \end{subsection} \end{section} \section{Conclusion and Future Works} In this paper, we have introduced a simple yet effective approach for domain adaptation on batch normalized neural networks.
Besides its original uses, we have exploited another functionality of the Batch Normalization (BN) layer: domain adaptation. The main idea is to replace the statistics of each BN layer in the source domain with those in the target domain. The proposed method is easy to implement and parameter-free, and it takes almost no effort to extend it to multiple source domains and semi-supervised settings. Our method established new state-of-the-art results in both the single-source and multi-source domain adaptation settings on standard benchmarks. Finally, the experiments on cloud detection for large-size remote sensing images further demonstrate the effectiveness of our method in practical use. We believe our method opens up a new direction for domain adaptation. In contrast to other methods that use Maximum Mean Discrepancy (MMD) or a domain confusion loss to update the weights in a CNN for domain adaptation, our method only modifies the statistics of the BN layer. Therefore, our method is fully complementary to other existing deep learning based methods. It is interesting to see how these different methods can be unified under one framework. \bibliographystyle{iclr2017_conference} \end{document}
Dropout with Expectation-linear Regularization
1609.08017
Table 3: Hyper-parameters for all experiments.
[ "[BOLD] Experiment MNIST", "[BOLD] Hyper-parameter batch size", "200", "[EMPTY]" ]
[ [ "MNIST", "initial learning rate [ITALIC] η0", "0.1", "[EMPTY]" ], [ "MNIST", "decay rate [ITALIC] ρ", "0.025", "[EMPTY]" ], [ "MNIST", "momentum", "0.9", "[EMPTY]" ], [ "MNIST", "momentum type", "standard", "[EMPTY]" ], [ "MNIST", "max-norm constrain", "3.5", "[EMPTY]" ], [ "CIFAR", "[EMPTY]", "[BOLD] 10", "[BOLD] 100" ], [ "CIFAR", "batch size", "100", "100" ], [ "CIFAR", "initial learning rate [ITALIC] η0 for conv layers", "0.001", "0.001" ], [ "CIFAR", "initial learning rate [ITALIC] η0 for dense layers", "0.1", "0.02" ], [ "CIFAR", "decay rate [ITALIC] ρ", "0.005", "0.005" ], [ "CIFAR", "momentum", "0.95", "0.95" ], [ "CIFAR", "momentum type", "standard", "nesterov" ], [ "CIFAR", "max-norm constrain", "4.0", "2.0" ], [ "CIFAR", "L2-norm decay", "0.001", "0.001" ] ]
Most of the hyper-parameters are chosen from Srivastava et al. However, for some experiments we cannot reproduce the performance reported in Srivastava et al. (we suspect one possible reason is that we used a different library for the implementation). For these experiments, we tune the hyper-parameters on the validation sets by random search. Due to time constraints it is infeasible to do a random search across the full hyper-parameter space. Thus, we try to use as many of the hyper-parameters reported in Srivastava et al. as possible.
\documentclass{article} \usepackage[utf8]{inputenc} % allow utf-8 input \usepackage[T1]{fontenc} % use 8-bit T1 fonts % simple URL typesetting % professional-quality tables % blackboard math symbols % compact symbols for 1/2, etc. % microtypography % [disable] \usepackage[ruled,lined]{algorithm2e} \usepackage[lofdepth,lotdepth]{subfig} \theoremstyle{plain} \newcounter{theoremcounter} \newtheorem{theorem}[theoremcounter]{Theorem} \newtheorem{lemma}[theoremcounter]{Lemma} \newcommand{\op}{\mathsf{op}} \theoremstyle{definition} \newcounter{definitioncounter} \newtheorem{definition}[definitioncounter]{Definition} \newcommand{\argmax}{\operatornamewithlimits{argmax}} \newcommand{\argmin}{\operatornamewithlimits{argmin}} \newcommand{\FIXME}[1]{\textcolor{red}{[#1]}} \iclrfinalcopy % Uncomment for camera-ready version \title{Dropout with Expectation-linear \\ Regularization} \author{ Xuezhe Ma, Yingkai Gao \\ Language Technologies Institute \\ Carnegie Mellon University \\ {\tt \{xuezhem, yingkaig\}@cs.cmu.edu} \\ \And Zhiting Hu, Yaoliang Yu \\ Machine Learning Department \\ Carnegie Mellon University \\ {\tt \{zhitinghu, yaoliang\}@cs.cmu.edu} \\ \AND Yuntian Deng \\ School of Engineering and Applied Sciences \\ Harvard University \\ \texttt{dengyuntian@gmail.com} \\ \And Eduard Hovy \\ Language Technologies Institute \\ Carnegie Mellon University \\ \texttt{hovy@cmu.edu} \\ } \begin{document} \maketitle \begin{abstract} Dropout, a simple and effective way to train deep neural networks, has led to a number of impressive empirical successes and spawned many recent theoretical investigations. However, the gap between dropout's training and inference phases, introduced due to tractability considerations, has largely remained under-appreciated. In this work, we first formulate dropout as a tractable approximation of some latent variable model, leading to a clean view of parameter sharing and enabling further theoretical analysis. Then, we introduce (approximate) expectation-linear dropout neural networks, whose inference gap we are able to formally characterize. Algorithmically, we show that our proposed measure of the inference gap can be used to regularize the standard dropout training objective, resulting in an \emph{explicit} control of the gap. Our method is as simple and efficient as standard dropout. We further prove the upper bounds on the loss in accuracy due to expectation-linearization, describe classes of input distributions that expectation-linearize easily. Experiments on three image classification benchmark datasets demonstrate that reducing the inference gap can indeed improve the performance consistently. \end{abstract} \section{Introduction} Deep neural networks \citep[DNNs, e.g.,][]{LeCunBH15,Schmidhuber15}, if trained properly, have been demonstrated to significantly improve the benchmark performances in a wide range of application domains. As neural networks go deeper and deeper, naturally, its model complexity also increases quickly, hence the pressing need to \emph{reduce overfitting} in training DNNs. A number of techniques have emerged over the years to address this challenge, among which dropout~\citep{hinton2012improving,srivastava2013improving} has stood out for its simplicity and effectiveness. In a nutshell, dropout \emph{randomly} ``drops'' neural units during training as a means to prevent feature co-adaptation---a sign of overfitting \citep{hinton2012improving}. 
Simple as it appears to be, dropout has led to several record-breaking performances~\citep{hinton2012improving,ma2016end}, and thus spawned a lot of recent interests in analyzing and justifying dropout from the theoretical perspective, and also in further improving dropout from the algorithmic and practical perspective. In their pioneering work, \citet{hinton2012improving} and \citet{srivastava2014dropout} interpreted dropout as an extreme form of model combination (aka. model ensemble) with extensive parameter/weight sharing, and they proposed to learn the combination through minimizing an appropriate expected loss. Interestingly, they also pointed out that for a single logistic neural unit, the output of dropout is in fact the geometric mean of the outputs of the model ensemble with shared parameters. Subsequently, many theoretical justifications of dropout have been explored, and we can only mention a few here due to space limits. Building on the weight sharing perspective, \citet{baldi2013understanding,baldi2014dropout} analyzed the ensemble averaging property of dropout in deep non-linear logistic networks, and supported the view that dropout is equivalent to applying stochastic gradient descent on some regularized loss function. \citet{wager2013dropout} treated dropout as an adaptive regularizer for generalized linear models (GLMs). \citet{helmbold2016fundamental} discussed the differences between dropout and traditional weight decay regularization. In terms of statistical learning theory, \citet{gao2014dropout} studied the Rademacher complexity of different types of dropout, showing that dropout is able to reduce the Rademacher complexity polynomially for shallow neural networks (with one or no hidden layers) and exponentially for deep neural networks. This latter work~\citep{gao2014dropout} formally demonstrated that dropout, due to its regularizing effect, contributes to reducing the inherent model complexity, in particular the variance component in the generalization error. Seen as a model combination technique, it is intuitive that dropout contributes to reducing the variance of the model performance. Surprisingly, dropout has also been shown to play some role in reducing the model bias. For instance, \citet{jain2015drop} studied the ability of dropout training to escape local minima, hence leading to reduced model bias. Other studies~\citep{chen2014dropout,helmbold2014inductive,wager2014altitude} focus on the effect of the dropout noise on models with shallow architectures. We noted in passing that there are also some work~\citep{kingma2015variational,gal2015dropout,gal2016dropout:rnn} trying to understand dropout from the Bayesian perspective. In this work, we first formulate dropout as a tractable approximation of a latent variable model, and give a clean view of weight sharing (\S 3). Then, we focus on an \emph{inference gap} in dropout that has somehow gotten under-appreciated: In the inference phase, for computational tractability considerations, the model ensemble generated by dropout is approximated by a \emph{single} model with scaled weights, resulting in a gap between training and inference, and rendering the many previous theoretical findings inapplicable. In general, this inference gap can be very large and no attempt (to our best knowledge) has been made to control it. We make three contributions in bridging this gap: Theoretically, we introduce expectation-linear dropout neural networks, through which we are able to explicitly quantify the inference gap (\S 4). 
In particular, our theoretical results explain why the max-norm constraint on the network weights, a standard practice in training DNNs, can lead to a small inference gap and hence potentially improved performance. Algorithmically, we propose to add a sampled version of the inference gap to regularize the standard dropout training objective~(\emph{expectation-linearization}), hence allowing explicit control of the inference gap, and analyze the interaction between expectation-linearization and the model accuracy (\S 5). Experimentally, through three benchmark datasets we show that our regularized dropout is not only as simple and efficient as standard dropout but also consistently leads to improved performance (\S 6). \section{Dropout Neural Networks} In this section we set up the notations, review the dropout neural network model, and discuss the inference gap in standard dropout training that we will attempt to study in the rest of the paper. \subsection{DNNs and Notations} \label{subsec:notation} Throughout we use uppercase letters for random variables (and occasionally for matrices as well), and lowercase letters for realizations of the corresponding random variables. Let $X \in \mathcal{X}$ be the input of the neural network, $Y \in \mathcal{Y}$ be the desired output, and $D = \{(x_1, y_1), \ldots, (x_N, y_N)\}$ be our training sample, where $x_i, i=1,\ldots, N,$ (resp. $y_i$) are usually i.i.d. samples of $X$ (resp. $Y$). Let $\mathbf{M}$ denote a deep neural network with $L$ hidden layers, indexed by $l \in \{1, \ldots, L \}$. Let $\mathbf{h}^{(l)}$ denote the output vector from layer $l$. As usual, $\mathbf{h}^{(0)} = x$ is the input, and $\mathbf{h}^{(L)}$ is the output of the neural network. Denote $\theta = \{\theta_l: l = 1, \ldots, L\}$ as the set of parameters in the network $\mathbf{M}$, where $\theta_l$ assembles the parameters in layer $l$. With dropout, we need to introduce a set of dropout random variables $S = \{\Gamma^{(l)}: l = 1, \ldots, L\}$, where $\Gamma^{(l)}$ is the dropout random variable for layer $l$. Then the deep neural network $\mathbf{M}$ can be described as: \begin{equation}\label{eq:dnn} \mathbf{h}^{(l)} = f_l(\mathbf{h}^{(l - 1)} \odot \gamma^{(l)}; \theta_l), \quad l = 1, \ldots, L, \end{equation} where $\odot$ is the element-wise product and $f_l$ is the transformation function of layer $l$. For example, if layer $l$ is a fully connected layer with weight matrix $W$, bias vector $b$, and sigmoid activation function $\sigma(x) = \frac{1}{1 + \exp(-x)}$, then $f_l(x) = \sigma(W x + b)$. We will also use $\mathbf{h}^{(l)}(x, s; \theta)$ to denote the output of layer $l$ with input $x$ and dropout value $s$, under parameter $\theta$. In the simplest form of dropout, which is also called standard dropout, $\Gamma^{(l)}$ is a vector of independent Bernoulli random variables, each of which has probability $p_l$ of being 1 and $1 - p_l$ of being 0. This corresponds to dropping each of the units independently with probability $1 - p_l$. \subsection{Dropout Training} The standard dropout neural networks can be trained using stochastic gradient descent (SGD), with a sub-network sampled by dropping neural units for each training instance in a mini-batch. The forward and backward passes for that training instance are done only on the sampled sub-network. Intuitively, dropout aims at, simultaneously and jointly, training an ensemble of exponentially many neural networks (one for each configuration of dropped units) while sharing the same weights/parameters.
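As a concrete illustration of Eq.~\eqref{eq:dnn} and of this training-time sampling, the sketch below performs one stochastic forward pass of a toy fully connected network with Bernoulli dropout masks; it is our own simplified example, not the implementation used in the experiments.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dropout_forward(x, layers, retain_probs, train=True):
    # layers:       list of (W, b) pairs, one per fully connected layer
    # retain_probs: p_l = P(Gamma^(l) = 1) for each layer's input
    h = x
    for (W, b), p in zip(layers, retain_probs):
        if train:
            gamma = rng.binomial(1, p, size=h.shape)  # sample dropout variable
            h = sigmoid(W @ (h * gamma) + b)          # f_l(h^(l-1) * gamma; theta_l)
        else:
            h = sigmoid(W @ (h * p) + b)              # scale by E[Gamma] at test time
    return h
\end{verbatim}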
The goal of the stochastic training procedure of dropout can be understood as minimizing an expected loss function, after marginalizing out the dropout variables~\citep{srivastava2013improving,wang2013fast}. In the context of maximal likelihood estimation, dropout training can be formulated as: \begin{equation}\label{eq:expect-loss} \theta^* = \argmin\limits_{\theta} \mathrm{E}_{S_D}[-l(D, S_D; \theta)] = \argmin\limits_{\theta} \mathrm{E}_{S_D}\Big[ -\sum\limits_{i=1}^{N} \log p(y_i|x_i, S_i; \theta)\Big], \end{equation} where recall that $D$ is the training sample, $S_D = \{S_1, \ldots, S_N\}$ is the dropout variable (one for each training instance), and $l(D, S_D; \theta)$ is the (conditional) log-likelihood function defined by the conditional distribution $p(y|x, s; \theta)$ of output $y$ given input $x$, under parameter $\theta$ and dropout variable $s$. Throughout we use the notation $\mathrm{E}_Z$ to denote the conditional expectation where all random variables except $Z$ are conditioned on. Dropout has also been shown to work well with regularization, such as L2 weight decay~\citep{tikhonov1943stability}, Lasso~\citep{tibshirani1996regression}, KL-sparsity\citep{bradley2008differential,hinton2010practical}, and max-norm regularization~\citep{srebro2004maximum}, among which the max-norm regularization --- that constrains the norm of the incoming weight matrix to be bounded by some constant --- was found to be especially useful for dropout~\citep{srivastava2013improving,srivastava2014dropout}. \subsection{Dropout Inference and Gap}\label{subsec:dropout:inference} As mentioned before, dropout is effectively training an ensemble of neural networks with weight sharing. Consequently, at test time, the output of each network in the ensemble should be averaged to deliver the final prediction. This averaging over exponentially many sub-networks is, however, intractable, and standard dropout typically implements an approximation by introducing a \emph{deterministic} scaling factor for each layer to replace the \emph{random} dropout variable: \begin{equation}\label{eq:prediction} \mathrm{E}_S[\mathbf{H}^{(L)}(x, S; \theta)] \stackrel{?}{\approx} \mathbf{h}^{(L)}(x, \mathrm{E}[S]; \theta), \end{equation} where the right-hand side is the output of a single deterministic neural network whose weights are scaled to match the \emph{expected} number of active hidden units on the left-hand side. Importantly, the right-hand side can be easily computed since it only involves a single deterministic network. \citet{bulo2016dropout} combined dropout with knowledge distillation methods~\citep{hinton2015distilling} to better approximate the averaging processing of the left-hand side. However, the quality of the approximation in \eqref{eq:prediction} is largely unknown, and to our best knowledge, no attempt has been made to \emph{explicitly} control this inference gap. The main goal of this work is to explicitly quantify, algorithmically control, and experimentally demonstrate the inference gap in \eqref{eq:prediction}, in the hope of improving the generalization performance of DNNs eventually. To this end, in the next section we first present a latent variable model interpretation of dropout, which will greatly facilitate our later theoretical analysis. \section{Dropout as Latent Variable Models}\label{sec:lvm} With the end goal of studying the inference gap in \eqref{eq:prediction} in mind, in this section, we first formulate dropout neural networks as a latent variable model (LVM) in \S~\ref{subsec:lvm}. 
Then, we point out the relation between the training procedure of LVM and that of standard dropout in \S~\ref{subsec:training}. The advantage of formulating dropout as a LVM is that we need only deal with a single model (with latent structure), instead of an ensemble of exponentially many different models (with weight sharing). This much simplified view of dropout enables us to understand and analyze the model parameter $\theta$ in a much more straightforward and intuitive way. \subsection{An LVM Formulation of Dropout}\label{subsec:lvm} A latent variable model consists of two types of variables: the observed variables that represent the empirical (observed) data and the latent variables that characterize the hidden (unobserved) structure. To formulate dropout as a latent variable model, the input $x$ and output $y$ are regarded as observed variables, while the dropout variable $s$, representing the sub-network structure, is hidden. Then, upon fixing the input space $\mathcal{X}$, the output space $\mathcal{Y}$, and the latent space $\mathcal{S}$ for dropout variables, the conditional probability of $y$ given $x$ under parameter $\theta$ can be written as \begin{equation}\label{eq:lvm} p(y|x; \theta) = \int_{\mathcal{S}} p(y|x, s; \theta) p(s) d\mu(s), \end{equation} where $p(y|x, s; \theta)$ is the conditional distribution modeled by the neutral network with configuration $s$ (same as in Eq.~\eqref{eq:expect-loss}), $p(s)$ is the distribution of dropout variable $S$ (e.g. Bernoulli), here assumed to be independent of the input $x$, and $\mu(s)$ is the base measure on the space $\mathcal{S}$. \subsection{LVM Dropout training vs. Standard Dropout Training}\label{subsec:training} Building on the above latent variable model formulation \eqref{eq:lvm} of dropout, we are now ready to point out a simple relation between the training procedure of LVM and that of standard dropout. Given an i.i.d. training sample $D$, the maximum likelihood estimate for the LVM formulation of dropout in \eqref{eq:lvm} is equivalent to minimizing the following negative log-likelihood function: \begin{equation}\label{eq:lvm:train} \theta^* = \argmin\limits_{\theta} -l(D;\theta) = \argmin\limits_{\theta} -\sum\limits_{i=1}^{N} \log p(y_i|x_i; \theta), \end{equation} where $p(y|x; \theta)$ is given in Eq.~\eqref{eq:lvm}. Recall the dropout training objective $\mathrm{E}_{S_D}[-l(D, S_D; \theta)]$ in Eq.~\eqref{eq:expect-loss}. We have the following theorem as a simple consequence of Jensen's inequality (details in Appendix~\ref{appendix:proof:thm1}): \begin{theorem}\label{thm:loss-bound} The expected loss function of standard dropout (Eq.~\eqref{eq:expect-loss}) is an upper bound of the negative log-likelihood of LVM dropout (Eq.~\eqref{eq:lvm:train}): \begin{equation}\label{eq:train:rel} -l(D;\theta) \leq \mathrm{E}_{S_D}[-l(D, S_D; \theta)]. \end{equation} \end{theorem} Theorem~\ref{thm:loss-bound}, in a rigorous sense, justifies dropout training as a convenient and tractable approximation of the LVM formulation in \eqref{eq:lvm}. Indeed, since directly minimizing the marginalized negative log-likelihood in \eqref{eq:lvm:train} may not be easy, a standard practice is to replace the marginalized (conditional) likelihood $p(y|x;\theta)$ in \eqref{eq:lvm} with its empirical Monte carlo average through drawing samples from the dropout variable $S$. 
The dropout training objective in \eqref{eq:expect-loss} corresponds exactly to this Monte carlo approximation when a \emph{single} sample $S_i$ is drawn for each training instance $(x_i, y_i)$. Importantly, we note that the above LVM formulation involves only a single network parameter $\theta$, which largely simplifies the picture and facilitates our subsequent analysis. \section{Expectation-Linear Dropout Neural Networks}\label{subsec:expect:linearity} Building on the latent variable model formulation in \S~\ref{sec:lvm}, we introduce in this section the notion of expectation-linearity that essentially measures the inference gap in \eqref{eq:prediction}. We then characterize a general class of neural networks that exhibit expectation-linearity, either exactly or approximately over a distribution $p(x)$ on the input space. We start with defining expectation-linearity in the simplest single-layer neural network, then we extend the notion into general deep networks in a natural way. \begin{definition}[Expectation-linear Layer]\label{def:expect-linear:layer} A network layer $\mathbf{h} = f(x\odot\gamma; \theta)$ is \emph{expectation-linear with respect to} a set $\mathcal{X}' \subseteq \mathcal{X}$, if for all $x \in \mathcal{X}'$ we have \begin{equation} \label{eq:el} \big\| \mathrm{E}[f(x \odot \Gamma; \theta)] - f(x \odot \mathrm{E}[\Gamma]; \theta) \big\|_2 = 0. \end{equation} In this case we say that $\mathcal{X}'$ is \emph{expectation-linearizable}, and $\theta$ is \emph{expectation-linearizing} w.r.t $\mathcal{X}'$. \end{definition} Obviously, the condition in \eqref{eq:el} will guarantee no gap in the dropout inference approximation \eqref{eq:prediction}---an admittedly strong condition that we will relax below. Clearly, if $f$ is an affine function, then we can choose $\mathcal{X}' = \mathcal{X}$ and expectation-linearity is trivial. Note that expectation-linearity depends on the network parameter $\theta$ and the dropout distribution $\Gamma$. Expectation-linearity, as defined in \eqref{eq:el}, is overly strong: under standard regularity conditions, essentially the transformation function $f$ has to be affine over the set $\mathcal{X}'$, ruling out for instance the popular sigmoid or tanh activation functions. Moreover, in practice, downstream use of DNNs are usually robust to small errors resulting from \emph{approximate} expectation-linearity (hence the empirical success of dropout), so it makes sense to define an inexact extension. We note also that the definition in \eqref{eq:el} is \emph{uniform} over the set $\mathcal{X}'$, while in a statistical setting it is perhaps more meaningful to have expectation-linearity ``on average,'' since inputs from lower density regions are not going to play a significant role anyway. Taking into account the aforementioned motivations, we arrive at the following inexact extension: \begin{definition}[Approximately Expectation-linear Layer]\label{def:approx-expect-linear:layer} A network layer $\mathbf{h} = f(x\odot\gamma; \theta)$ is \emph{\mbox{$\delta$-approximately} expectation-linear with respect to} a distribution $p(x)$ over $\mathcal{X}$ if \begin{equation} \label{eq:ael} \mathrm{E}_{X}\Big[\big\| \mathrm{E}_{\Gamma}\big[f(X \odot \Gamma; \theta) | X\big] - f(X \odot \mathrm{E}[\Gamma]; \theta) \big\|_2 \Big] < \delta. \end{equation} In this case we say that $p(x)$ is \emph{$\delta$-approximately expectation-linearizable}, and $\theta$ is \emph{$\delta$-approximately expectation-linearizing}. 
\end{definition} To appreciate the power of cutting some slack from exact expectation-linearity, we remark that even non-affine activation functions often have approximately linear regions. For example, the logistic function, a commonly used non-linear activation function in DNNs, is approximately linear around the origin. Naturally, we can ask whether it is sufficient for a target distribution $p(x)$ to be well-approximated by an approximately expectation-linearizable one. We begin by providing an appropriate measurement of the quality of this approximation. \begin{definition}[Closeness, \citep{andreas2015accuracy}] \label{def:closeness} A distribution $p(x)$ is $C$-close to a set $\mathcal{X}' \subseteq \mathcal{X}$ if \begin{equation} \mathrm{E}\Big[ \inf\limits_{x^* \in \mathcal{X}'} \sup\limits_{\gamma \in \mathcal{S}} \| X \odot \gamma - x^* \odot \gamma \|_2 \Big] \leq C, \end{equation} where recall that $\mathcal{S}$ is the (bounded) space that the dropout variable lives in. \end{definition} Intuitively, $p(x)$ is $C$-close to a set $\mathcal{X}'$ if a random sample from $p$ is no more than a distance $C$ from $\mathcal{X}'$ in expectation and under the worst ``dropout perturbation''. For example, a standard normal distribution is close to an interval centering at origin ($[-\alpha, \alpha]$) with some constant $C$. Our definition of closeness is similar to that in \citet{andreas2015accuracy}, who used this notion to analyze self-normalized log-linear models. We are now ready to state our first major result that quantifies approximate expectation-linearity of a single-layered network (proof in Appendix~\ref{appendix:proof:thm2}): \begin{theorem}\label{thm:layer} Given a network layer $\mathbf{h} = f(x\odot\gamma; \theta)$, where $\theta$ is \emph{expectation-linearizing} w.r.t. $\mathcal{X}' \subseteq \mathcal{X}$. Suppose $p(x)$ is $C$-close to $\mathcal{X}'$ and for all $x \in \mathcal{X}, \|\nabla_x f(x)\|_{\op} \leq B$, where $\|\cdot\|_{\op}$ is the usual operator norm. Then, $p(x)$ is $2BC$-approximately expectation-linearizable. \end{theorem} Roughly, Theorem~\ref{thm:layer} states that the input distribution $p(x)$ that place most of its mass on regions close to expectation-linearizable sets are approximately expectation-linearizable on a similar scale. The bounded operator norm assumption on the derivative $\nabla f$ is satisfied in most commonly used layers. For example, for a fully connected layer with weight matrix $W$, bias vector $b$, and activation function $\sigma$, $\| \nabla f(\cdot) \|_{\op} = |\sigma'(\cdot)| \cdot\| W \|_{\op}$ is bounded by $\| W \|_{\op}$ and the supremum of $|\sigma'(\cdot)|$ (1/4 when $\sigma$ is sigmoid and 1 when $\sigma$ is tanh). Next, we extend the notion of approximate expectation-linearity to deep dropout neural networks. \begin{definition}[Approximately Expectation-linear Network]\label{def:approx-expect-linear:network} A deep neural network with $L$ layers (cf. Eq.~\eqref{eq:dnn}) is \emph{\mbox{$\delta$-approximately} expectation-linear with respect to} $p(x)$ over $\mathcal{X}$ if \begin{equation} \mathrm{E}_{X}\Big[\big\| \mathrm{E}_{S}\big[\mathbf{H}^{(L)}(X, S; \theta) |X\big] - \mathbf{h}^{(L)}(X, \mathrm{E}[S]; \theta) \big\|_2 \Big] < \delta. \end{equation} where $\mathbf{h}^{(L)}(X, \mathrm{E}[S]; \theta)$ is the output of the deterministic neural network in standard dropout. 
\end{definition} Lastly, we relate the level of approximate expectation-linearity of a deep neural network to the level of approximate expectation-linearity of each of its layers: \begin{theorem}\label{thm:dnn} Given an $L$-layer neural network as in Eq.~\eqref{eq:dnn}, and suppose that each layer $l \in \{1, \ldots, L\}$ is $\delta$-approximately expectation-linear w.r.t. $p(\mathbf{h}^{(l)})$, $\mathrm{E}[\Gamma^{(l)}] \leq \gamma$, $\sup_{x} \| \nabla f_l(x) \|_{\op} \leq B$, and $\mathrm{E}\big[\mathrm{Var}[\mathbf{H}^{(l)}|X]\big] \leq \sigma^2$. Then the network is $\Delta$-approximately expectation-linear with \begin{equation} \Delta = (B\gamma)^{L-1}\delta + (\delta + B\gamma\sigma)\bigg(\frac{1-(B\gamma)^{L-1}}{1-B\gamma}\bigg). \end{equation} \end{theorem} From Theorem~\ref{thm:dnn} (proof in Appendix~\ref{appendix:proof:thm3}) we observe that the level of approximate expectation-linearity of the network mainly depends on four factors: the level of approximate expecatation-linearity of each layer ($\delta$), the expected variance of each layer ($\sigma$), the operator norm of the derivative of each layer's transformation function ($B$), and the mean of each layer's dropout variable ($\gamma$). In practice, $\gamma$ is often a constant less than or equal to 1. For example, if $\Gamma \sim \mathrm{Bernoulli}(p)$, then $\gamma = p$. According to the theorem, the operator norm of the derivative of each layer's transformation function is an important factor in the level of approximate expectation-linearity: the smaller the operator norm is, the better the approximation. Interestingly, the operator norm of a layer often depends on the norm of the layer's weight (e.g. for fully connected layers). Therefore, adding max-norm constraints to regularize dropout neural networks can lead to better approximate expectation-linearity hence smaller inference gap and the often improved model performance. It should also be noted that when $B\gamma < 1$, the approximation error $\Delta$ tends to be a constant when the network becomes deeper. When $B\gamma = 1$, $\Delta$ grows linearly with $L$, and when $B\gamma > 1$, the growth of $\Delta$ becomes exponential. Thus, it is essential to keep $B\gamma < 1$ to achieve good approximation, particularly for deep neural networks. \section{Expectation-Linear Regularized Dropout}\label{sec:linearization} In the previous section we have managed to bound the approximate expectation-linearity, hence the inference gap in \eqref{eq:prediction}, of dropout neural networks. In this section, we first prove a uniform deviation bound of the \emph{sampled} approximate expectation-linearity measure from its mean, which motivates adding the sampled (hence computable) expectation-linearity measure as a regularization scheme to standard dropout, with the goal of explicitly controlling the inference gap of the learned parameter, hence potentially improving the performance. Then we give the upper bounds on the loss in accuracy due to expectation-linearization, and describe classes of distributions that expectation-linearize easily. \subsection{A Uniform Deviation Bound for the Sampled Expectation-linear Measure} We now show that an expectation-linear network can be found by expectation-linearizing the network on the training sample. To this end, we prove a uniform deviation bound between the empirical expectation-linearization measure using i.i.d. samples~(Eq.~\eqref{eq:empirical:risk}) and its mean~(Eq.~\eqref{eq:risk}). 
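Before stating the bound, we note that the empirical measure in Eq.~\eqref{eq:empirical:risk} is straightforward to estimate in practice: the following sketch (our own illustration; \texttt{h\_L}, \texttt{sample\_mask} and \texttt{mean\_mask} are assumed to be supplied by the user) approximates the ensemble output by Monte Carlo sampling over dropout configurations and averages the resulting per-example gaps.

\begin{verbatim}
import numpy as np

def empirical_gap(h_L, xs, sample_mask, mean_mask, num_samples=100):
    # h_L(x, s):     network output h^(L)(x, s; theta) under dropout configuration s
    # sample_mask(): draws one dropout configuration S
    # mean_mask:     E[S], the deterministic (scaled) configuration
    gaps = []
    for x in xs:
        ensemble = np.mean([h_L(x, sample_mask())
                            for _ in range(num_samples)], axis=0)
        deterministic = h_L(x, mean_mask)
        gaps.append(np.linalg.norm(ensemble - deterministic))
    return float(np.mean(gaps))
\end{verbatim}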
\begin{theorem}\label{thm:complexity} Let $\mathcal{H} = \left\{\mathbf{h}^{(L)}(x, s; \theta): \theta \in \Theta\right\}$ denote a space of $L$-layer dropout neural networks indexed with $\theta$, where $\mathbf{h}^{(L)}: \mathcal{X} \times \mathcal{S} \rightarrow \mathcal{R}$ and $\Theta$ is the space that $\theta$ lives in. Suppose that the neural networks in $\mathcal{H}$ satisfy the constraints: 1) $\forall x \in \mathcal{X}, \|x\|_2 \leq \alpha$; 2) $\forall l \in \{1, \ldots, L\}, \mathrm{E}(\Gamma^{(l)}) \leq \gamma$ and $\|\nabla f_{l}\|_{op} \leq B$; 3) $\|\mathbf{h}^{(L)}\| \leq \beta$. Denote empirical expectation-linearization measure and its mean as: {\small \begin{align} \hat{\Delta} & = \frac{1}{n}\sum\limits_{i=1}^{n} \big\| \mathrm{E}_{S_i}\big[\mathbf{H}^{(L)}(X_i, S_i; \theta)\big] - \mathbf{h}^{(L)}(X_i, \mathrm{E}[S_i]; \theta) \big\|_2 ,\label{eq:empirical:risk} \\ \Delta & = \mathrm{E}_{X}\Big[\big\| \mathrm{E}_{S}\big[\mathbf{H}^{(L)}(X, S; \theta)\big] - \mathbf{h}^{(L)}(X, \mathrm{E}[S]; \theta) \big\|_2 \Big]. \label{eq:risk} \end{align}} Then, with probability at least $1 - \nu$, we have \begin{equation} \sup\limits_{\theta \in \Theta} |\Delta - \hat{\Delta}| < \frac{2\alpha B^{L} (\gamma^{L/2}+1)}{\sqrt{n}} + \beta \sqrt{\frac{\log(1/\nu)}{n}}. \end{equation} \end{theorem} From Theorem~\ref{thm:complexity} (proof in Appendix~\ref{appendix:proof:thm4}) we observe that the deviation bound decreases exponentially with the number of layers $L$ when the operator norm of the derivative of each layer's transformation function ($B)$ is less than 1 (and the contrary if $B \geq 1$). Importantly, the square root dependence on the number of samples ($n$) is standard and cannot be improved without significantly stronger assumptions. It should be noted that Theorem~\ref{thm:complexity} per se does not imply anything between expectation-linearization and the model accuracy (i.e. how well the expectation-linearized neural network actually achieves on modeling the data). Formally studying this relation is provided in \S~\ref{subsec:accuracy}. In addition, we provide some experimental evidences in \S~\ref{sec:experiment} on how improved approximate expectation-linearity (equivalently smaller inference gap) does lead to better empirical performances. \subsection{Expectation-Linearization as Regularization} The uniform deviation bound in Theorem~\ref{thm:complexity} motivates the possibility of obtaining an approximately expectation-linear dropout neural networks through adding the empirical measure \eqref{eq:empirical:risk} as a regularization scheme to the standard dropout training objective, as follows: \begin{equation}\label{eq:regular} loss(D; \theta) = -l(D; \theta) + \lambda V(D; \theta), \end{equation} where $-l(D; \theta)$ is the negative log-likelihood defined in Eq.~\eqref{eq:lvm:train}, $\lambda > 0$ is a regularization constant, and $V(D; \theta)$ measures the level of approximate expectation-linearity: \begin{equation}\label{eq:penalty} V(D; \theta) = \frac{1}{N}\sum\limits_{i=1}^{N} \big\| \mathrm{E}_{S_i}\big[\mathbf{H}^{(L)}(x_i, S_i; \theta)\big] - \mathbf{h}^{(L)}(x_i, \mathrm{E}[S_i]; \theta) \big\|_2^{2}. 
\end{equation} To solve \eqref{eq:regular}, we can minimize $loss(D; \theta)$ via stochastic gradient descent as in standard dropout, and approximate $V(D; \theta)$ using Monte Carlo: \begin{equation}\label{eq:monte-carlo} V(D; \theta) \approx \frac{1}{N}\sum\limits_{i=1}^{N} \big\|\mathbf{h}^{(L)}(x_i, s_i; \theta) - \mathbf{h}^{(L)}(x_i, \mathrm{E}[S_i]; \theta)\big\|_2^{2}, \end{equation} where $s_i$ is the same dropout sample as in $l(D; \theta)$ for each training instance in a mini-batch. Thus, the only additional computational cost comes from the deterministic term $\mathbf{h}^{(L)}(x_i, \mathrm{E}[S_i]; \theta)$. Overall, our regularized dropout \eqref{eq:regular}, in its Monte Carlo approximate form, is as simple and efficient as standard dropout. \subsection{On the Accuracy of Expectation-Linearized Models}\label{subsec:accuracy} So far our discussion has concentrated on the problem of finding expectation-linear neural network models, without any concern for how well they actually perform at modeling the data. In this section, we characterize the trade-off between maximizing ``data likelihood'' and satisfying an expectation-linearization constraint. To achieve this characterization, we measure the \emph{likelihood gap} between the classical maximum likelihood estimator (MLE) and the MLE subject to an expectation-linearization constraint. Formally, given training data $D = \{(x_1, y_1), \ldots, (x_n, y_n)\}$, we define \begin{align} \hat{\theta} & = \quad \, \argmin\limits_{\theta \in \Theta} \quad \, -l(D; \theta) \\ \hat{\theta}_\delta & = \argmin\limits_{\theta \in \Theta,V(D; \theta) \leq \delta} -l(D; \theta) \end{align} where $-l(D; \theta)$ is the negative log-likelihood defined in Eq.~\eqref{eq:lvm:train}, and $V(D; \theta)$ is the level of approximate expectation-linearity in Eq.~\eqref{eq:penalty}. We would like to control the loss of model accuracy by obtaining a bound on the \emph{likelihood gap} defined as: \begin{equation}\label{eq:likelihood:gap} \Delta_l(\hat{\theta}, \hat{\theta}_\delta) = \frac{1}{n} (l(D; \hat{\theta}) - l(D; \hat{\theta}_\delta)) \end{equation} In the following, we focus on neural networks with a \emph{softmax} output layer for classification tasks. \begin{equation}\label{eq:softmax} p(y|x, s; \theta) = \mathbf{h}_y^{(L)}(x, s; \theta) = f_L(\mathbf{h}^{(L - 1)}(x, s); \eta) = \frac{e^{\eta_y^T \mathbf{h}^{(L - 1)}(x, s)}}{\sum\limits_{y' \in \mathcal{Y}} e^{\eta_{y'}^T \mathbf{h}^{(L - 1)}(x,s)}} \end{equation} where $\theta = \{\theta_1, \ldots, \theta_{L-1}, \eta\}$, $\mathcal{Y} = \{1, \ldots, k\}$ and $\eta = \{\eta_y: y \in \mathcal{Y} \}$. We claim: \begin{theorem}\label{thm:det:bound} Given an $L$-layer neural network $\mathbf{h}^{(L)}(x, s; \theta)$ with softmax output layer in \eqref{eq:softmax}, where parameter $\theta \in \Theta$, dropout variable $s \in \mathcal{S}$, input $x \in \mathcal{X}$ and target $y \in \mathcal{Y}$. Suppose that for every $x$ and $s$, $p(y|x, s; \hat{\theta})$ makes a unique best prediction---that is, for each $x\in\mathcal{X}, s \in \mathcal{S}$, there exists a unique $y^* \in \mathcal{Y}$ such that $\forall y\neq y^*$, $\hat{\eta}_y^T\mathbf{h}^{(L-1)}(x, s) < \hat{\eta}_{y^*}^T\mathbf{h}^{(L-1)}(x, s)$. Suppose additionally that $\forall x, s, \,\|\mathbf{h}^{(L-1)}(x, s; \hat{\theta})\| \leq \beta$, and $\forall y, p(y|x; \hat{\theta}) > 0$.
Then \begin{equation} \Delta_l(\hat{\theta}, \hat{\theta}_\delta) \leq c_1 \beta^2 \left(\|\hat{\eta}\|_2 -\frac{\delta}{4\beta}\right)^2 e^{-c_2\delta/4\beta} \end{equation} where $c_1$ and $c_2$ are distribution-dependent constants. \end{theorem} From Theorem~\ref{thm:det:bound} (proof in Appendix~\ref{appendix:proof:thm5}) we observe that, at one extreme, distributions close to deterministic can be expectation-linearized with little loss of likelihood. What about the other extreme --- distributions ``as close to the uniform distribution as possible''? With suitable assumptions about the form of $p(y|x, s; \hat{\theta})$ and $p(y|x; \hat{\theta})$, we can achieve an accuracy loss bound for distributions that are close to uniform: \begin{theorem}\label{thm:uniform:bound} Suppose that $\forall x, s, \,\|\mathbf{h}^{(L-1)}(x, s; \hat{\theta})\| \leq \beta$. Additionally, for each $(x_i, y_i) \in D, s \in \mathcal{S}$, $\log \frac{1}{k} \leq \log p(y_i|x_i, s;\hat{\theta}) \leq \frac{1}{k}\sum\limits_{y\in\mathcal{Y}}\log p(y|x_i, s; \hat{\theta})$. Then asymptotically as $n \rightarrow \infty$: \begin{equation} \Delta_l(\hat{\theta}, \hat{\theta}_\delta) \leq \left(1 - \frac{\delta}{4\beta\|\hat{\eta}\|_2}\right) \mathrm{E}\left[ \mathrm{KL}\left(p(\cdot|X; \theta) \| \mathrm{Unif}(\mathcal{Y})\right) \right] \end{equation} \end{theorem} Theorem~\ref{thm:uniform:bound} (proof in Appendix~\ref{appendix:proof:thm6}) indicates that uniform distributions are also an easy class for expectation-linearization. The next question is whether there exist any classes of conditional distributions $p(y|x)$ for which all distributions are provably hard to expectation-linearize. This remains an open problem and might be an interesting direction for future work. \section{Experiments}\label{sec:experiment} In this section, we evaluate the empirical performance of the proposed regularized dropout in \eqref{eq:regular} on a variety of network architectures for the classification task on three benchmark datasets---MNIST, CIFAR-10 and CIFAR-100. We applied the same data preprocessing procedure as in \citet{srivastava2014dropout}. To make a thorough comparison and provide experimental evidence on how expectation-linearization interacts with the predictive power of the learned model, we also perform experiments with Monte Carlo (MC) dropout, which approximately computes the final prediction (the left-hand side of \eqref{eq:prediction}) via Monte Carlo sampling, both with and without the proposed regularizer. In the case of MC dropout, we average $m = 100$ predictions using randomly sampled configurations. In addition, the network architectures and hyper-parameters for each experiment setup are the same as those in \citet{srivastava2014dropout}, unless explicitly stated otherwise. Following previous work, for each dataset we held out 10,000 random training images for validation to tune the hyper-parameters, including $\lambda$ in Eq.~\eqref{eq:regular}. When the hyper-parameters are fixed, we train the final models with all the training data, including the validation data. A more detailed description of the experiments is provided in Appendix~\ref{appendix:experiment}. For each experiment, we report the mean test errors with corresponding standard deviations over 5 repetitions. \subsection{MNIST} The MNIST dataset~\citep{lecun1998gradient} consists of 70,000 handwritten digit images of size 28$\times$28, where 60,000 images are used for training and the rest for testing.
This task is to classify the images into 10 digit classes. For the purpose of comparison, we train 6 neural networks with different architectures. The experimental results are shown in Table~\ref{tab:results}. \subsection{CIFAR-10 and CIFAR-100} The CIFAR-10 and CIFAR-100 datasets~\citep{krizhevsky2009learning} consist of 60,000 color images of size $32\times32$, drawn from 10 and 100 categories, respectively. 50,000 images are used for training and the rest for testing. The neural network architecture we used for these two datasets has 3 convolutional layers, followed by two fully-connected (dense) hidden layers (again, the same as that in \citet{srivastava2014dropout}). The experimental results are also reported in Table~\ref{tab:results}. From Table~\ref{tab:results} we can see that on MNIST data, dropout network training with expectation-linearization outperforms standard dropout on all 6 neural architectures. On CIFAR data, expectation-linearization reduces the error rate from 12.82\% to 12.20\% for CIFAR-10, a 0.62\% improvement. For CIFAR-100, the error rate is reduced by 0.97\%, from 37.22\% to 36.25\%. From the results we see that, with or without expectation-linearization, the MC dropout networks achieve similar results. This illustrates that enforcing expectation-linearity does not significantly degrade the predictive power of the learned models. Moreover, it is interesting to see that, with the regularization, standard dropout networks achieve even better accuracy than MC dropout on the MNIST dataset. This may be because, with expectation-linearization, standard dropout inference achieves a better approximation of the final prediction than MC dropout with (only) 100 samples. On the CIFAR datasets, MC dropout networks achieve better accuracy than the regularized ones. But MC dropout requires much more inference time than standard dropout~(MC dropout with $m$ samples requires about $m$ times the inference time of standard dropout). \subsection{Effect of Regularization Constant $\lambda$} In this section, we explore the effect of varying the hyper-parameter for the expectation-linearization rate $\lambda$. We train the network architectures in Table~\ref{tab:results} with $\lambda$ ranging from 0.1 to 10.0. Figure~\ref{fig:lambda} shows the test errors obtained as a function of $\lambda$ on the three datasets. In addition, the middle and right panels of Figure~\ref{fig:lambda} also report the empirical expectation-linearization risk $\hat{\Delta}$ of Eq.~\eqref{eq:empirical:risk} with varying $\lambda$ on CIFAR-10 and CIFAR-100, where $\hat{\Delta}$ is computed using Monte Carlo with 100 independent samples. From Figure~\ref{fig:lambda} we can see that as $\lambda$ increases, better expectation-linearity is achieved (i.e., $\hat{\Delta}$ decreases). The model accuracy, however, does not keep improving with increasing $\lambda$, showing that in practice the trade-off between model expectation-linearity and accuracy needs to be considered. \subsection{Comparison with Dropout Distillation} To make a thorough empirical comparison with the recently proposed Dropout Distillation method~\citep{bulo2016dropout}, we also evaluate our regularization method on the CIFAR-10 and CIFAR-100 datasets with the All Convolutional Network~\citep{springenberg2014striving} (AllConv). To facilitate comparison, we adopt the originally reported hyper-parameters and the same setup for training.
Table~\ref{tab:comparison} compares the classification error percentages on test data under AllConv using standard dropout, Monte Carlo dropout, standard dropout with our proposed expectation-linearization, and the recently proposed dropout distillation on CIFAR-10 and CIFAR-100~\footnote{We obtained results similar to those reported in Table~1 of \citet{bulo2016dropout} on CIFAR-10, while we could not reproduce comparable results on CIFAR-100 (around 3\% worse).}. According to Table~\ref{tab:comparison}, our proposed expectation-linear regularization method achieves performance comparable to dropout distillation. \section{Conclusions} In this work, we attempted to establish a theoretical basis for the understanding of dropout, motivated by controlling the gap between dropout's training and inference phases. Through formulating dropout as a latent variable model and introducing the notion of (approximate) expectation-linearity, we have formally studied the inference gap of dropout, and introduced an empirical measure as a regularization scheme to explicitly control the gap. Experiments on three benchmark datasets demonstrate that reducing the inference gap can indeed improve the end performance. In the future, we intend to formally relate the inference gap to the generalization error of the underlying network, hence providing further justification of regularized dropout. \section*{Acknowledgements} This research was supported in part by DARPA grant FA8750-12-2-0342 funded under the DEFT program. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of DARPA. \bibliographystyle{iclr2017_conference} \newpage \section*{Appendix: Dropout with Expectation-linear Regularization} \appendix \setcounter{equation}{0} \section{LVM Dropout Training vs. Standard Dropout Training}\label{appendix:proof:thm1} \paragraph{Proof of Theorem 1} \begin{proof} \begin{displaymath} \begin{array}{rcl} \mathrm{E}_{S_D}[l(D, S_D; \theta)] & = & \bigintss_{\mathcal{S}} \prod\limits_{i=1}^{N}p(s_i) \Big(\sum\limits_{i=1}^{N} \log p(y_i|x_i, s_i; \theta)\Big) d\mu(s_1) \ldots d\mu(s_N) \\ & = & \sum\limits_{i=1}^{N} \bigintsss_{\mathcal{S}} p(s_i) \log p(y_i|x_i, s_i; \theta) d\mu(s_i) \end{array} \end{displaymath} Because $\log(\cdot)$ is a concave function, from Jensen's inequality, \begin{displaymath} \int_{\mathcal{S}} p(s) \log p(y|x, s; \theta) d\mu(s) \leq \log \int_{\mathcal{S}} p(s) p(y|x, s; \theta) d\mu(s) \end{displaymath} Thus \begin{displaymath} \mathrm{E}_{S_D}[-l(D, S_D; \theta)] \geq -\sum\limits_{i=1}^{N} \log \int_{\mathcal{S}} p(s_i) p(y_i|x_i, s_i; \theta) d\mu(s_i) = -l(D;\theta). \end{displaymath} \end{proof} \section{Expectation-Linear Dropout Neural Networks} \subsection{Proof of Theorem 2}\label{appendix:proof:thm2} \begin{proof} Let $\gamma^* = \mathrm{E}[\Gamma]$, and \begin{displaymath} A \stackrel{\Delta}{=} \left\{x: \|\mathrm{E}[f(x \odot \Gamma; \theta)] - f(x \odot \gamma^*; \theta) \|_2 = 0 \right\} \end{displaymath} Let $X^* = \argmin\limits_{x \in A} \sup\limits_{\gamma \in \mathcal{S}} \|X \odot \gamma - x \odot \gamma \|_2$, and $X^- = X - X^*$. Then, \begin{displaymath} X \odot \gamma = X^* \odot \gamma + X^{-} \odot \gamma \end{displaymath} In the following, we omit the parameter $\theta$ for convenience.
Moreover, we denote \begin{displaymath} \mathrm{E}_{\Gamma}\big[f(X \odot \Gamma; \theta)\big] \stackrel{\Delta}{=} \mathrm{E}\big[f(X \odot \Gamma; \theta) | X\big] \end{displaymath} By Taylor expansion, there exist some $X', X'' \in \mathcal{X}$ such that \begin{displaymath} \begin{array}{rcl} f(X \odot \Gamma) & = & f(X^* \odot \Gamma) + f'(X' \odot \Gamma) (X^{-} \odot \Gamma) \\ f(X \odot \gamma^*) & = & f(X^* \odot \gamma^*) + f'(X'' \odot \gamma^*) (X^{-} \odot \gamma^*) \end{array} \end{displaymath} where we denote $f'(x) = (\nabla_x f(x))^{T}$. Then, \begin{displaymath} \begin{array}{rl} & \mathrm{E}_\Gamma[f(X \odot \Gamma) - f(X \odot \gamma^*)] \\ = & \mathrm{E}_\Gamma[f(X^* \odot \Gamma + X^{-} \odot \Gamma) - f(X^* \odot \gamma^* + X^{-} \odot \gamma^*)] \\ = & \mathrm{E}_\Gamma[f(X^* \odot \Gamma) - f(X^* \odot \gamma^*) + f'(X' \odot \Gamma)(X^- \odot \Gamma) - f'(X'' \odot \gamma^*)(X^{-} \odot \gamma^*)] \\ = & \mathrm{E}_\Gamma[f(X^* \odot \Gamma) - f(X^* \odot \gamma^*)] + \mathrm{E}_\Gamma[f'(X' \odot \Gamma)(X^- \odot \Gamma) - f'(X'' \odot \gamma^*)(X^{-} \odot \gamma^*)] \end{array} \end{displaymath} Since $X^* \in A$, we have \begin{displaymath} \mathrm{E}_\Gamma[f(X^* \odot \Gamma) - f(X^* \odot \gamma^*)] = 0. \end{displaymath} Then, \begin{displaymath} \begin{array}{rl} & \mathrm{E}_\Gamma[f(X \odot \Gamma) - f(X \odot \gamma^*)] \\ = & \mathrm{E}_\Gamma[f'(X' \odot \Gamma)(X^- \odot \Gamma) - f'(X'' \odot \gamma^*)(X^{-} \odot \gamma^*)] \\ = & \mathrm{E}_\Gamma[(f'(X' \odot \Gamma) - f'(X'' \odot \gamma^*))(X^- \odot \Gamma)] + \mathrm{E}_\Gamma[f'(X''\odot\gamma^*)(X^- \odot \Gamma - X^- \odot \gamma^*)] \\ = & \mathrm{E}_\Gamma[(f'(X' \odot \Gamma) - f'(X'' \odot \gamma^*))(X^- \odot \Gamma)] \end{array} \end{displaymath} Then, \begin{displaymath} \begin{array}{rl} & \| \mathrm{E}_\Gamma[f(X \odot \Gamma)] - f(X \odot \gamma^*) \|_2 \\ = & \|\mathrm{E}_\Gamma[(f'(X' \odot \Gamma) - f'(X'' \odot \gamma^*))(X^- \odot \Gamma)]\|_2 \end{array} \end{displaymath} Since $\|X^- \odot \gamma'\|_2 \leq \sup\limits_{\gamma \in \mathcal{S}} \|X^- \odot \gamma\|_2 = \inf\limits_{x \in A} \sup\limits_{\gamma \in \mathcal{S}} \|X \odot \gamma - x \odot \gamma\|_2$, and from Jensen's inequality and the property of the operator norm, \begin{displaymath} \begin{array}{rl} & \|\mathrm{E}_\Gamma[(f'(X' \odot \Gamma) - f'(X'' \odot \gamma^*))(X^- \odot \Gamma)]\|_2 \\ \leq & \mathrm{E}_\Gamma\Big[\|f'(X' \odot \Gamma) - f'(X'' \odot \gamma^*)\|_{op} \|X^- \odot \Gamma \|_2\Big] \\ \leq & 2B\mathrm{E}_\Gamma\Big[\|X^- \odot \Gamma \|_2\Big] \\ \leq & 2B\inf\limits_{x \in A} \sup\limits_{\gamma \in \mathcal{S}} \|X \odot \gamma - x \odot \gamma\|_2 \end{array} \end{displaymath} Finally, we have \begin{displaymath} \begin{array}{rl} & \mathrm{E}_X\bigg[ \|\mathrm{E}_\Gamma[(f'(X' \odot \Gamma) - f'(X'' \odot \gamma^*))(X^- \odot \Gamma)]\|_2\bigg] \\ \leq & 2B\mathrm{E}\bigg[\inf\limits_{x \in A} \sup\limits_{\gamma \in \mathcal{S}} \|X \odot \gamma - x \odot \gamma\|_2\bigg] \leq 2BC \end{array} \end{displaymath} \end{proof} \subsection{Proof of Theorem 3}\label{appendix:proof:thm3} \begin{proof} We proceed by induction on the number of layers $L$. As before, we omit the parameter $\theta$. \\ \textbf{Initial step:} when $L=1$, the statement is obviously true. \\ \textbf{Induction on $L$:} Suppose that the statement is true for neural networks with $L$ layers.\\ Now we prove the case $L+1$.
From the inductive assumption, we have, \begin{equation}\label{eq:induction} \mathrm{E}_{X}\Big[\big\| \mathrm{E}_{S_L}\big[\mathbf{H}^{(L)}(X, S_L)\big] - \mathbf{h}^{(L)}(X, \mathrm{E}[S_L]) \big\|_2 \Big] \leq \Delta_L \end{equation} where $S_L = \{\Gamma^{(1)}, \ldots, \Gamma^{(L)}\}$ is the dropout random variables for the first $L$ layers, and \begin{displaymath} \Delta_L = (B\gamma)^{L-1}\delta + (\delta + B\gamma\sigma)\bigg(\frac{1-(B\gamma)^{L-1}}{1-B\gamma}\bigg) \end{displaymath} In addition, the $L+1$ layer is $\delta$-approximately expectation-linear, we have: \begin{equation}\label{eq:step} \mathrm{E}_{\mathbf{H}^{(L)}}\Big[\big\|\mathrm{E}_{\Gamma^{(L+1)}}\big[f_{L+1}(\mathbf{H}^{(L)} \odot \Gamma^{(L+1)})\big] - f_{L+1}(\mathbf{H}^{(L)} \odot \gamma^{(L+1)}) \big\|_2\Big] \leq \delta \end{equation} Let $\mathrm{E}[\Gamma^{(l)}] = \gamma^{(l)}, \forall l \in \{1, \ldots, L+1\}$, and let $\mathbf{H}^{(l)}$ and $\mathbf{h}^{(l)}$ be short for $\mathbf{H}^{(l)}(X, S_l)$ and $\mathbf{h}^{(l)}(X, \mathrm{E}(S_l))$, respectively, when there is no ambiguity. Moreover, we denote \begin{displaymath} \mathrm{E}_{S}\big[\mathbf{H}^{(L)}(X, S; \theta)\big] = \mathrm{E}_{S}\big[\mathbf{H}^{(L)}(X, S; \theta) \big| X\big] \end{displaymath} for convenience. Then, \begin{displaymath} \begin{array}{rl} & \mathrm{E}_{X}\Big[\big\| \mathrm{E}_{S_{L+1}}\big[\mathbf{H}^{(L+1)}\big] - \mathbf{h}^{(L+1)} \big\|_2 \Big] \\ = & \mathrm{E}_{X}\bigg[\Big\| \mathrm{E}_{S_L}\Big[ \mathrm{E}_{\Gamma^{(L+1)}}\big[f_{L+1}(\mathbf{H}^{(L)} \odot\Gamma^{(L+1)})\big] - f_{L+1}(\mathbf{h}^{(L)}\odot\gamma^{(L+1)})\Big] \\ & + \mathrm{E}_{S_L}\Big[f_{L+1}(\mathbf{H}^{(L)}\odot\gamma^{(L+1)}) \Big] - f_{L+1}(\mathbf{h}^{(L)}\odot\gamma^{(L+1)}) \Big\|_2\bigg] \\ \leq & \mathrm{E}_{X}\bigg[\Big\| \mathrm{E}_{S_L}\Big[ \mathrm{E}_{\Gamma^{(L+1)}}\big[f_{L+1}(\mathbf{H}^{(L)} \odot\Gamma^{(L+1)})\big] - f_{L+1}(\mathbf{h}^{(L)}\odot\gamma^{(L+1)})\Big] \Big\|_2\bigg] \\ & + \mathrm{E}_{X}\bigg[\Big\| \mathrm{E}_{S_L}\Big[f_{L+1}(\mathbf{H}^{(L)}\odot\gamma^{(L+1)}) \Big] - f_{L+1}(\mathbf{h}^{(L)}\odot\gamma^{(L+1)}) \Big\|_2\bigg] \end{array} \end{displaymath} From Eq.~\ref{eq:step} and Jensen's inequality, we have \begin{equation}\label{eq:part1} \begin{array}{rl} & \mathrm{E}_{X}\bigg[\Big\| \mathrm{E}_{S_L}\Big[ \mathrm{E}_{\Gamma^{(L+1)}}\big[f_{L+1}(\mathbf{H}^{(L)} \odot\Gamma^{(L+1)})\big] - f_{L+1}(\mathbf{h}^{(L)}\odot\gamma^{(L+1)})\Big] \Big\|_2\bigg] \\ \leq & \mathrm{E}_{\mathbf{H}^{(L)}}\bigg[\Big\|\mathrm{E}_{\Gamma^{(L+1)}}\big[f_{L+1}(\mathbf{H}^{(L)} \odot\Gamma^{(L+1)})\big] - f_{L+1}(\mathbf{h}^{(L)}\odot\gamma^{(L+1)})\Big\|_2\bigg] \leq \delta \end{array} \end{equation} and \begin{equation}\label{eq:part2} \begin{array}{rl} & \mathrm{E}_{X}\bigg[\Big\| \mathrm{E}_{S_L}\Big[f_{L+1}(\mathbf{H}^{(L)}\odot\gamma^{(L+1)}) \Big] - f_{L+1}(\mathbf{h}^{(L)}\odot\gamma^{(L+1)}) \Big\|_2\bigg] \\ = & \mathrm{E}_{X}\bigg[\Big\| \mathrm{E}_{S_L}\Big[f_{L+1}(\mathbf{H}^{(L)}\odot\gamma^{(L+1)}) \Big] - f_{L+1}(\mathrm{E}_{S_L}\big[\mathbf{H}^{(L)}\big] \odot \gamma^{(L+1)}) \\ & + f_{L+1}(\mathrm{E}_{S_L}\big[\mathbf{H}^{(L)}\big] \odot \gamma^{(L+1)}) - f_{L+1}(\mathbf{h}^{(L)}\odot\gamma^{(L+1)})\Big\|_2\bigg] \\ \leq & \mathrm{E}_{X}\bigg[\Big\|\mathrm{E}_{S_L}\Big[f_{L+1}(\mathbf{H}^{(L)}\odot\gamma^{(L+1)}) \Big] - f_{L+1}(\mathrm{E}_{S_L}\big[\mathbf{H}^{(L)}\big] \odot \gamma^{(L+1)})\Big\|_2\bigg] \\ & + \mathrm{E}_{X}\bigg[\Big\| f_{L+1}(\mathrm{E}_{S_L}\big[\mathbf{H}^{(L)}\big] \odot 
\gamma^{(L+1)}) - f_{L+1}(\mathbf{h}^{(L)}\odot\gamma^{(L+1)})\Big\|_2\bigg] \end{array} \end{equation} Using Jensen's inequality, property of operator norm and $\mathrm{E}\big[\mathrm{Var}[\mathbf{H}^{(l)}|X]\big] \leq \sigma^2$, we have \begin{equation}\label{eq:part3} \begin{array}{rl} & \mathrm{E}_{X}\bigg[\Big\|\mathrm{E}_{S_L}\Big[f_{L+1}(\mathbf{H}^{(L)}\odot\gamma^{(L+1)}) \Big] - f_{L+1}(\mathrm{E}_{S_L}\big[\mathbf{H}^{(L)}\big] \odot \gamma^{(L+1)})\Big\|_2\bigg] \\ \leq & \mathrm{E}_{\mathbf{H}^{(L)}}\bigg[\Big\| f_{L+1}(\mathbf{H}^{(L)}\odot\gamma^{(L+1)}) - f_{L+1}(\mathrm{E}_{S_L}\big[\mathbf{H}^{(L)}\big] \odot \gamma^{(L+1)}) \Big\|_2\bigg] \\ \leq & B\gamma\mathrm{E}_{\mathbf{H}^{(L)}}\Big[\big\|\mathbf{H}^{(L)} - \mathrm{E}_{S_L}\big[\mathbf{H}^{(L)}\big]\big\|_2\Big] \\ \leq & B\gamma\left( \mathrm{E}_{\mathbf{H}^{(L)}}\Big[\big\|\mathbf{H}^{(L)} - \mathrm{E}_{S_L}\big[\mathbf{H}^{(L)}\big]\big\|^2_2\Big]\right)^{\frac{1}{2}} \leq B\gamma\sigma \end{array} \end{equation} From Eq.~\ref{eq:induction} \begin{equation}\label{eq:part4} \begin{array}{rl} & \mathrm{E}_{X}\bigg[\Big\| f_{L+1}(\mathrm{E}_{S_L}\big[\mathbf{H}^{(L)}\big] \odot \gamma^{(L+1)}) - f_{L+1}(\mathbf{h}^{(L)}\odot\gamma^{(L+1)})\Big\|_2\bigg] \\ = & B\gamma \mathrm{E}_{X}\Big[\big\| \mathrm{E}_{S_L}\big[\mathbf{H}^{(L)}\big] - \mathbf{h}^{(L)} \big\|_2 \Big] \leq B\gamma\Delta_L \end{array} \end{equation} Finally, to sum up with Eq.~\ref{eq:part1}, Eq.~\ref{eq:part2}, , Eq.~\ref{eq:part3}, , Eq.~\ref{eq:part4}, we have \begin{displaymath} \begin{array}{rl} & \mathrm{E}_{X}\Big[\big\| \mathrm{E}_{S_{L+1}}\big[\mathbf{H}^{(L+1)}\big] - \mathbf{h}^{(L+1)} \big\|_2 \Big] \\ \leq & \delta + B\gamma\sigma + B\gamma\Delta_L \\ = & (B\gamma)^{L}\delta + (\delta + B\gamma\sigma)\bigg(\frac{1-(B\gamma)^{L}}{1-B\gamma}\bigg) = \Delta_{L+1} \end{array} \end{displaymath} \end{proof} \section{Expectation-Linearization} \subsection{Proof of Theorem 4: Uniform Deviation Bound}\label{appendix:proof:thm4} Before proving Theorem~\ref{thm:complexity}, we first define the notations. Let $X^n = \{X_1, \ldots, X_n \}$ be a set of $n$ samples of input $X$. For a function space $\mathcal{F}: \mathcal{X} \rightarrow \mathcal{R}$, we use $Rad_n(\mathcal{F}, X^n)$ to denote the \emph{empirical Rademacher complexity} of $\mathcal{F}$, \begin{displaymath} Rad_n(\mathcal{F}, X^n) = \mathrm{E}_{\sigma}\bigg[ \sup\limits_{f \in \mathcal{F}} \Big( \frac{1}{n}\sum\limits_{i=1}^{n} \sigma_i f(X_i) \Big)\bigg] \end{displaymath} and the \emph{Rademacher complexity} is defined as \begin{displaymath} Rad_n(\mathcal{F}) = \mathrm{E}_{X^n}\Big[ Rad_n(\mathcal{F}, X^n) \Big] \end{displaymath} In addition, we import the definition of \emph{dropout Rademacher complexity} from \citet{gao2014dropout}: \begin{displaymath} \begin{array}{rcl} \mathcal{R}_n (\mathcal{H}, X^n, S^n) & = & \mathrm{E}_{\sigma}\bigg[ \sup\limits_{h \in \mathcal{H}} \Big( \frac{1}{n}\sum\limits_{i=1}^{n} \sigma_i h(X_i, S_i) \Big)\bigg] \\ \mathcal{R}_n (\mathcal{H}) & = & \mathrm{E}_{X^n,S^n}\Big[ Rad_n(\mathcal{H}, X^n, S^n) \Big] \end{array} \end{displaymath} where $\mathcal{H}: \mathcal{X} \times \mathcal{S} \rightarrow \mathcal{R}$ is a function space defined on input space $\mathcal{X}$ and dropout variable space $\mathcal{S}$. $\mathcal{R}_n (\mathcal{H}, X^n, S^n)$ and $\mathcal{R}_n (\mathcal{H})$ are the empirical dropout Rademacher complexity and dropout Rademacher complexity, respectively. 
We further denote $\mathcal{R}_n (\mathcal{H}, X^n) \stackrel{\Delta}{=} \mathrm{E}_{S^n}\Big[ Rad_n(\mathcal{H}, X^n, S^n) \Big]$. Now, we define the following function spaces: \begin{displaymath} \begin{array}{rcl} \mathcal{F} & = & \bigg\{f(x; \theta): f(x; \theta) = \mathrm{E}_{S}\Big[\mathbf{H}^{(L)}(x, S; \theta) \Big], \theta \in \Theta\bigg\} \\ \mathcal{G} & = & \bigg\{g(x; \theta): g(x; \theta) = \mathbf{h}^{(L)}(x, \mathrm{E}[S]; \theta), \theta \in \Theta\bigg\} \\ \mathcal{H} & = & \bigg\{h(x, s; \theta): h(x, s; \theta) = \mathbf{h}^{(L)}(x, s; \theta), \theta \in \Theta\bigg\} \end{array} \end{displaymath} Then, the function space of $v(x) = f(x) - g(x)$ is $\mathcal{V} = \{f(x) - g(x): f \in \mathcal{F}, g \in \mathcal{G}\}$. \begin{lemma}\label{lem:ineq:rad} \begin{displaymath} Rad_n(\mathcal{F}, X^n) \leq \mathcal{R}_n(\mathcal{H}, X^n) \end{displaymath} \end{lemma} \begin{proof} \begin{displaymath} \begin{array}{rl} & \mathcal{R}_n(\mathcal{H}, X^n) = \mathrm{E}_{S^n}\Big[ Rad_n(\mathcal{H}, X^n, S^n) \Big] \\ = & \mathrm{E}_{S^n} \bigg[ \mathrm{E}_{\sigma}\Big[ \sup\limits_{h \in \mathcal{H}} \Big( \frac{1}{n}\sum\limits_{i=1}^{n} \sigma_i h(X_i, S_i) \Big)\Big]\bigg] \\ = & \mathrm{E}_{\sigma} \bigg[ \mathrm{E}_{S^n}\Big[ \sup\limits_{h \in \mathcal{H}} \Big( \frac{1}{n}\sum\limits_{i=1}^{n} \sigma_i h(X_i, S_i) \Big)\Big]\bigg] \\ \geq & \mathrm{E}_{\sigma} \bigg[ \sup\limits_{h \in \mathcal{H}} \mathrm{E}_{S^n}\Big[ \Big( \frac{1}{n}\sum\limits_{i=1}^{n} \sigma_i h(X_i, S_i) \Big)\Big]\bigg] \\ = & \mathrm{E}_{\sigma} \bigg[ \sup\limits_{h \in \mathcal{H}} \Big( \frac{1}{n}\sum\limits_{i=1}^{n} \sigma_i \mathrm{E}_{S_i}\big[h(X_i, S_i)\big] \Big)\bigg] \\ = & \mathrm{E}_{\sigma} \bigg[ \sup\limits_{h \in \mathcal{H}} \Big( \frac{1}{n}\sum\limits_{i=1}^{n} \sigma_i \mathrm{E}_{S_i}\big[\mathbf{H}^{(L)}(X_i, S_i; \theta)\big] \Big)\bigg] = Rad_n(\mathcal{F}, X^n) \end{array} \end{displaymath} \end{proof} From Lemma~\ref{lem:ineq:rad}, we have $Rad_n(\mathcal{F}) \leq \mathcal{R}_n(\mathcal{H})$. \begin{lemma}\label{lem:drop:rad} \begin{displaymath} \begin{array}{rcl} \mathcal{R}_n(\mathcal{H}) & \leq & \frac{\alpha B^{L} \gamma^{L/2}}{\sqrt{n}} \\ Rad_n(\mathcal{G}) & \leq & \frac{\alpha B^{L}}{\sqrt{n}} \end{array} \end{displaymath} \end{lemma} \begin{proof} See Theorem 4 in \citet{gao2014dropout}. \end{proof} Now, we can prove Theorem~\ref{thm:complexity}. \paragraph{Proof of Theorem 4} \begin{proof} From Rademacher-based uniform bounds theorem, with probability $\geq 1 - \delta$, \begin{displaymath} \sup\limits_{v \in \mathcal{V}} |\Delta - \hat{\Delta}| < 2 Rad_n(\mathcal{V}) + \beta \sqrt{\frac{\log(1/\delta)}{n}} \end{displaymath} Since $\mathcal{V} = \mathcal{F} - \mathcal{G}$, we have \begin{displaymath} Rad_n(\mathcal{V}) = Rad_n(\mathcal{F} - \mathcal{G}) \leq Rad_n(\mathcal{F}) + Rad_n(\mathcal{G}) \leq \frac{\alpha B^{L} (\gamma^{L/2}+ 1)}{\sqrt{n}} \end{displaymath} Then, finally, we have that with probability $\geq 1 - \delta$, \begin{displaymath} \sup\limits_{\theta \in \Theta} |\Delta - \hat{\Delta}| < \frac{2\alpha B^{L} (\gamma^{L/2}+ 1)}{\sqrt{n}} + \beta \sqrt{\frac{\log(1/\delta)}{n}} \end{displaymath} \end{proof} \subsection{Proof of Theorem 5: Non-Uniform Bound of Model Accuracy}\label{appendix:proof:thm5} For convenience, we denote $\lambda = \{\theta_1, \ldots, \theta_{L-1}\}$. 
Then $\theta = \{\lambda, \eta\}$, and MLE $\hat{\theta} = \{\hat{\lambda}, \hat{\eta} \}$ \begin{lemma}\label{lem:softmax:op} \begin{equation} \|\nabla f_L(\cdot; \eta)^T \|_{op} \leq 2 \| \eta \|_2 \end{equation} \end{lemma} \begin{proof} denote \begin{displaymath} A = \nabla f_L(\cdot; \eta)^T = \left[ p_y (\eta_y - \overline{\eta})^T \right]\Big|_{y=1}^{k} \end{displaymath} where $p_y = p(y|x, s; \theta)$, $\overline{\eta} = \mathrm{E}\left[\eta_{Y}\right] = \sum\limits_{y=1}^{k}p_y \eta_y$. For each $v$ such that $\|v\|_2 = 1$, \begin{displaymath} \begin{array}{rcl} \|Av\|_2^2 & = & \sum\limits_{y \in \mathcal{Y}} \left( p_y \left( \eta_y - \overline{\eta} \right)^T v\right)^2 \leq \sum\limits_{y \in \mathcal{Y}} \|p_y \left( \eta_y - \overline{\eta} \right) \|_2^2 \|v\|_2^2 = \sum\limits_{y \in \mathcal{Y}} \|p_y \left( \eta_y - \overline{\eta} \right) \|_2^2 \\ & \leq & \sum\limits_{y \in \mathcal{Y}} p_y \|\eta_y - \overline{\eta}\|_2^2 \leq \sum\limits_{y \in \mathcal{Y}} 2 p_y \left( \|\eta\|_2^2 + \sum\limits_{y'\in\mathcal{Y}}p_{y'}\|\eta_{y'}\|_2^2\right) \\ & = & 4 \sum\limits_{y \in \mathcal{Y}} p_y \|\eta_y\|_2^2 \leq 4\|\eta\|_2^2 \end{array} \end{displaymath} So we have $\|A\|_{op} \leq 2\|\eta\|_2$. \end{proof} \begin{lemma}\label{lem:norm:constrain} If parameter $\tilde{\theta} = \{\hat{\lambda}, \eta\}$ satisfies that $\|\eta\|_2 \leq \frac{\delta}{4\beta}$, then $V(D; \tilde{\theta}) \leq \delta$, where $V(D; \theta)$ is defined in Eq.~\eqref{eq:penalty}. \end{lemma} \begin{proof} Let $S_L = \{\Gamma^{(1)}, \ldots, \Gamma^{(L)}\}$, and let $\mathbf{H}^{(l)}$ and $\mathbf{h}^{(l)}$ be short for $\mathbf{H}^{(l)}(X, S_l; \tilde{\theta})$ and $\mathbf{h}^{(l)}(X, \mathrm{E}(S_l); \tilde{\theta})$, respectively. From lemma~\ref{lem:softmax:op}, we have $\|f_L(x; \eta) - f_L(y; \eta)\|_2 \leq 2\|\eta\|_2 \|x - y\|_2$. Then, \begin{displaymath} \begin{array}{rcl} \left\|\mathrm{E}_{S_L}\left[\mathbf{H}^{L}\right] - \mathbf{h}^{L}\right\|_2 & = & \left\|\mathrm{E}_{S_{L-1}}\left[f_{L}(\mathbf{H}^{(L-1)}; \eta)\right] - f_{L}(\mathbf{h}^{(L-1)}; \eta)\right\|_2 \\ & \leq & \mathrm{E}_{S_{L-1}}\left\|f_{L}(\mathbf{H}^{(L-1)}; \eta) - f_{L}(\mathbf{h}^{(L-1)}; \eta) \right\|_2 \\ & \leq & 2 \|\eta\|_2 \left\| \mathbf{H}^{(L-1)} - \mathbf{h}^{(L-1)} \right\|_2 \\ & \leq & 4\beta\|\eta\|_2 \leq \delta \end{array} \end{displaymath} \end{proof} Lemma~\ref{lem:norm:constrain} says that we can get $\theta$ satisfying the expectation-linearization constrain by explicitly scaling down $\hat{\eta}$ while keeping $\hat{\lambda}$. In order to prove Theorem~\ref{thm:det:bound}, we make the following assumptions: \begin{itemize} \item The dimension of $\mathbf{h}^{(L-1)}$ is $d$, i.e. $\mathbf{h}^{(L-1)} \in \mathcal{R}^d$. \item Since $\forall y \in \mathcal{Y}, p(y|x; \hat{\theta}) > 0$, we assume $p(y|x; \hat{\theta}) \geq 1/b$, where $b \geq |\mathcal{Y}| = k$. \item As in the body text, let $p(y|x, s; \hat{\theta})$ be nonuniform, and in particular let \\ $\hat{\eta}_{y^*}^T\mathbf{h}^{(L-1)}(x, s; \hat{\lambda}) - \hat{\eta}_{y}^T\mathbf{h}^{(L-1)}(x, s; \hat{\lambda}) > c\|\hat{\eta}\|_2, \forall y \neq y^*$. \end{itemize} For convenience, we denote $\eta^T\mathbf{h}^{(L-1)}(x, s; \lambda) = \eta^T u_y (x, s; \lambda)$, where $u_y^T (x, s; \lambda) = (v_0^T, \ldots, v_k^T)$ and \begin{displaymath} v_i = \left\{\begin{array}{ll} \mathbf{h}^{(L-1)}(x, s; \lambda) & \textrm{if } i = y \\ 0 & \textrm{otherwise} \end{array}\right. 
\end{displaymath} To prove Theorem~\ref{thm:det:bound}, we first prove the following lemmas. \begin{lemma}\label{lem:prob:low:bound} If $p(y|x; \hat{\theta}) \geq 1/b$, then $\forall \alpha \in [0,1]$, for parameter $\tilde{\theta} = \{\hat{\lambda}, \alpha\hat{\eta}\}$, we have \begin{displaymath} p(y|x; \tilde{\theta}) \geq \frac{1}{b} \end{displaymath} \end{lemma} \begin{proof} We define \begin{displaymath} f(\alpha) \stackrel{\Delta}{=} p(y|x, s; \tilde{\theta}) = \frac{e^{\alpha\eta_y^T \mathbf{h}^{(L - 1)}(x, s; \hat{\lambda})}}{\sum\limits_{y' \in \mathcal{Y}} e^{\alpha\eta_{y'}^T \mathbf{h}^{(L - 1)}(x,s; \hat{\lambda})}} = \frac{\Big(e^{\eta_y^T \mathbf{h}^{(L - 1)}(x, s; \hat{\lambda})}\Big)^\alpha}{\sum\limits_{y' \in \mathcal{Y}} \Big(e^{\eta_{y'}^T \mathbf{h}^{(L - 1)}(x,s; \hat{\lambda})}\Big)^\alpha} \end{displaymath} Since $\mathcal{Y} = \{1, \ldots, k\}$, for fixed $x \in \mathcal{X}, s \in \mathcal{S}$, $\log f(\alpha)$ is a concave function w.r.t.\ $\alpha$. \\ Since $b \geq k$, we have \begin{displaymath} \log f(\alpha) \geq (1-\alpha) \log f(0) + \alpha \log f(1) \geq -\log b \end{displaymath} So we have $\forall x, s$, $p(y|x, s; \tilde{\theta}) \geq 1/b$. Then \begin{displaymath} p(y|x; \tilde{\theta}) = \mathrm{E}_S \left[ p(y|x, S; \tilde{\theta})\right] \geq \frac{1}{b} \end{displaymath} \end{proof} \begin{lemma}\label{lem:prob:up:bound} If $y$ is not the majority class, i.e.\ $y \neq y^*$, then for parameter $\tilde{\theta} = \{\hat{\lambda}, \alpha\hat{\eta}\}$ \begin{displaymath} p(y|x, s, \tilde{\theta}) \leq e^{-c\alpha\|\hat{\eta}\|_2} \end{displaymath} \end{lemma} \begin{proof} \begin{displaymath} p(y|x, s, \tilde{\theta}) = \frac{e^{\alpha \hat{\eta}^T u_y}}{\sum\limits_{y'\in\mathcal{Y}} e^{\alpha \hat{\eta}^T u_{y'}}} \leq \frac{e^{\alpha \hat{\eta}^T u_y}}{e^{\alpha \hat{\eta}^T u_{y^*}}} \leq e^{-c\alpha\|\hat{\eta}\|_2} \end{displaymath} \end{proof} \begin{lemma}\label{lem:hessian1:bound} For fixed $x$ and $s$, under the parameter $\tilde{\theta} = \{\hat{\lambda}, \alpha\hat{\eta}\}$, each entry of the following vector satisfies: \begin{displaymath} |p(y|x, s; \tilde{\theta}) (u_y - \mathrm{E}_Y[u_Y])|_i \leq \beta(k-1)e^{-c\alpha\|\hat{\eta}\|_2} \end{displaymath} \end{lemma} \begin{proof} Suppose $y$ is the majority class of $p(y|x, s; \tilde{\theta})$. Then, \begin{displaymath} u_y - \mathrm{E}_Y[u_Y] = \left(v_{y'}\right)_{y'=1}^{k} \end{displaymath} where \begin{displaymath} v_y = \left\{ \begin{array}{ll} (1 - p(y|x, s; \tilde{\theta})) \mathbf{h}^{(L-1)} & \textrm{if } y = y^* \\ -p(y|x, s; \tilde{\theta}) \mathbf{h}^{(L-1)} & \textrm{otherwise} \end{array}\right. \end{displaymath} From Lemma~\ref{lem:prob:up:bound}, we have \begin{displaymath} |p(y|x, s; \tilde{\theta}) (u_y - \mathrm{E}_Y[u_Y])|_i \leq |(u_y - \mathrm{E}_Y[u_Y])|_i \leq \beta(k-1)e^{-c\alpha\|\hat{\eta}\|_2} \end{displaymath} Now, we suppose $y$ is not the majority class of $p(y|x, s; \tilde{\theta})$. Then, \begin{displaymath} |p(y|x, s; \tilde{\theta}) (u_y - \mathrm{E}_Y[u_Y])|_i \leq p(y|x, s; \tilde{\theta}) \beta \leq \beta e^{-c\alpha\|\hat{\eta}\|_2} \end{displaymath} Overall, the lemma follows.
\end{proof} \begin{lemma}\label{lem:hessian2:bound} We denote the matrix \begin{displaymath} \begin{array}{rl} A \stackrel{\Delta}{=} & \mathrm{E}_S\left[ \frac{p(y|x, s; \tilde{\theta})}{p(y|x; \tilde{\theta})} (u_y - \mathrm{E}_Y[u_Y]) (u_y - \mathrm{E}_Y[u_Y])^T\right] \\ & - \mathrm{E}_S \left[ \frac{p(y|x, s; \tilde{\theta})}{p(y|x; \tilde{\theta})} (u_y - \mathrm{E}_Y[u_Y])\right] \mathrm{E}_S \left[ \frac{p(y|x, s; \tilde{\theta})}{p(y|x; \tilde{\theta})} (u_y - \mathrm{E}_Y[u_Y])\right]^T \end{array} \end{displaymath} Then the absolute value of the entry of $A$ under the parameter $\tilde{\theta} = \{\hat{\lambda}, \alpha\hat{\eta}\}$: \begin{displaymath} |A_{ij}| \leq 2b(k-1)\beta^2 e^{-c\alpha\|\hat{\eta}\|_2} \end{displaymath} \end{lemma} \begin{proof} From Lemma~\ref{lem:prob:low:bound}, we have $p(y|x; \tilde{\theta}) \geq 1/b$. Additionally, the absolute value of the entry of $u_y - \mathrm{E}_Y[u_Y]$ is bounded by $\beta$. We have for each $i$ \begin{displaymath} \left| \mathrm{E}_S \left[ \frac{p(y|x, s; \tilde{\theta})}{p(y|x; \tilde{\theta})} (u_y - \mathrm{E}_Y[u_Y])\right]\right|_i \leq \mathrm{E}_S \left[ \frac{p(y|x, s; \tilde{\theta})}{p(y|x; \tilde{\theta})} \beta\right] = \beta \end{displaymath} Then from Lemma~\ref{lem:hessian1:bound} \begin{displaymath} |A_{ij}| \leq 2b(k-1)\beta^2 e^{-c\alpha\|\hat{\eta}\|_2} \end{displaymath} \end{proof} \begin{lemma}\label{lem:hessian3:bound} We denote the matrix \begin{displaymath} B \stackrel{\Delta}{=} \mathrm{E}_S\left[ \frac{p(y|x, s; \tilde{\theta})}{p(y|x; \tilde{\theta})} \left( \mathrm{E}_Y\left[ u_Y u_Y^T\right] - \mathrm{E}_Y[u_Y] \mathrm{E}_Y[u_Y]^T \right)\right] \end{displaymath} Then the absolute value of the entry of $B$ under the parameter $\tilde{\theta} = \{\hat{\lambda}, \alpha\hat{\eta}\}$: \begin{displaymath} |B_{ij}| \leq 2(k-1)\beta^2 e^{-c\alpha\|\hat{\eta}\|_2} \end{displaymath} \end{lemma} \begin{proof} We only need to prove that for fixed $x$ and $s$, for each $i, j$: \begin{displaymath} \left|\mathrm{E}_Y\left[ u_Y u_Y^T\right] - \mathrm{E}_Y[u_Y] \mathrm{E}_Y[u_Y]^T\right|_{ij} \leq 2(k-1)\beta^2 e^{-c\alpha\|\hat{\eta}\|_2} \end{displaymath} Since \begin{displaymath} \left|\mathrm{E}_Y\left[ u_Y u_Y^T\right] - \mathrm{E}_Y[u_Y] \mathrm{E}_Y[u_Y]^T\right|_{ij} = \left|\mathrm{Cov}_Y [(u_Y)_i, (u_Y)_j]\right| \leq \beta^2 \sum\limits_{y=1}^{k} p(y|x, s; \tilde{\theta}) - p(y|x, s; \tilde{\theta})^2 \end{displaymath} Suppose $y$ is the majority class. Then from Lemma~\ref{lem:prob:up:bound}, \begin{displaymath} p(y|x, s; \tilde{\theta}) - p(y|x, s; \tilde{\theta})^2 \leq 1 - p(y|x, s; \tilde{\theta}) \leq (k-1) e^{-c\alpha\|\hat{\eta}\|_2} \end{displaymath} If $y$ is not the majority class. Then, \begin{displaymath} p(y|x, s; \tilde{\theta}) - p(y|x, s; \tilde{\theta})^2 \leq p(y|x, s; \tilde{\theta}) \leq e^{-c\alpha\|\hat{\eta}\|_2} \end{displaymath} So we have \begin{displaymath} \sum\limits_{y=1}^{k} p(y|x, s; \tilde{\theta}) - p(y|x, s; \tilde{\theta})^2 \leq 2(k-1) e^{-c\alpha\|\hat{\eta}\|_2} \end{displaymath} The lemma follows. 
\end{proof} \begin{lemma}\label{lem:eigen:bound} Under the parameter $\tilde{\theta} = \{\hat{\lambda}, \alpha\hat{\eta}\}$, the largest eigenvalue of the matrix \begin{equation}\label{eq:hessian} \frac{1}{n} \sum\limits_{i=1}^{n} \left(A(x_i, y_i) - B(x_i, y_i)\right) \end{equation} is at most \begin{displaymath} 2dk(k-1)(b+1)\beta^2 e^{-c\alpha\|\hat{\eta}\|_2} \end{displaymath} \end{lemma} \begin{proof} From Lemma~\ref{lem:hessian2:bound} and Lemma~\ref{lem:hessian3:bound}, each entry of the matrix in \eqref{eq:hessian} is at most $2(k-1)(b+1)\beta^2 e^{-c\alpha\|\hat{\eta}\|_2}$. Thus, by Gershgorin's theorem, the maximum eigenvalue of the matrix in \eqref{eq:hessian} is at most $2dk(k-1)(b+1)\beta^2 e^{-c\alpha\|\hat{\eta}\|_2}$. \end{proof} Now, we can prove Theorem~\ref{thm:det:bound} by constructing a scaled version of $\hat{\theta}$ that satisfies the expectation-linearization constraint. \paragraph{Proof of Theorem~\ref{thm:det:bound}} \begin{proof} Consider the likelihood evaluated at $\tilde{\theta} = \{\hat{\lambda}, \alpha\hat{\eta} \}$, where $\alpha = \frac{\delta}{4\beta\|\hat{\eta}\|_2}$. If $\alpha > 1$, then $\|\eta\|_2 > \frac{\delta}{4\beta}$. We know the MLE $\hat{\theta}$ already satisfies the expectation-linearization constraint. So we can assume that $0 \leq \alpha \leq 1$, and we know that $\tilde{\theta}$ satisfies $V(D; \tilde{\theta}) \leq \delta$. Then, \begin{displaymath} \Delta_l(\hat{\theta}, \hat{\theta}_\delta) \leq \Delta_l(\hat{\theta}, \tilde{\theta}) = \frac{1}{n} (l(D; \hat{\theta}) - l(D; \tilde{\theta})) = g(\hat{\lambda}, \hat{\eta}) - g(\hat{\lambda}, \alpha\hat{\eta}) \end{displaymath} where $g(\lambda, \eta) = \frac{1}{n} l(D; (\lambda, \eta))$. Taking the second-order Taylor expansion about $\eta$, we have \begin{displaymath} g(\hat{\lambda}, \alpha\hat{\eta}) = g(\hat{\lambda}, \hat{\eta}) + \nabla_{\eta}^T g(\hat{\lambda}, \hat{\eta}) (\alpha\hat{\eta} - \hat{\eta}) + (\alpha\hat{\eta} - \hat{\eta})^T \nabla_{\eta}^2 g(\hat{\lambda}, \hat{\eta}) (\alpha\hat{\eta} - \hat{\eta}) \end{displaymath} Since $\hat{\theta}$ is the MLE, the first-order term $\nabla_{\eta}^T g(\hat{\lambda}, \hat{\eta}) (\alpha\hat{\eta} - \hat{\eta}) = 0$. The Hessian in the second-order term is just Eq.\eqref{eq:hessian}. Thus, from Lemma~\ref{lem:eigen:bound} we have \begin{displaymath} \begin{array}{rcl} g(\hat{\lambda}, \alpha\hat{\eta}) & \leq & g(\hat{\lambda}, \hat{\eta}) - (1-\alpha)^2\|\hat{\eta}\|_{2}^{2} 2dk(k-1)(b+1)\beta^2 e^{-c\alpha\|\hat{\eta}\|_2} \\ & = & g(\hat{\lambda}, \hat{\eta}) - 2dk(k-1)(b+1)\beta^2 \left(\|\hat{\eta}\|_2 -\frac{\delta}{4\beta}\right)^2 e^{-c\delta/4\beta} \\ & = & g(\hat{\lambda}, \hat{\eta}) - c_1 \beta^2 \left(\|\hat{\eta}\|_2 -\frac{\delta}{4\beta}\right)^2 e^{-c_2\delta/4\beta} \end{array} \end{displaymath} with setting $c1 = 2dk(k-1)(b+1)$ and $c2 = c$. Then the theorem follows. \end{proof} \subsection{Proof of Theorem 6: Uniform Bound of Model Accuracy}\label{appendix:proof:thm6} In the following, we denote $\tilde{\theta} = \{\hat{\lambda}, \alpha\hat{\eta} \}$. \begin{lemma}\label{lem:convex0} For each $y\in \mathcal{Y}$, if $p(y|x, s;\hat{\theta}) \geq 1/k$, then $\forall \alpha \in [0, 1]$ \begin{displaymath} p(y|x, s; \tilde{\theta}) \geq \frac{1}{k} \end{displaymath} \end{lemma} \begin{proof} This lemma can be regarded as a corollary of Lemma~\ref{lem:prob:low:bound}. \end{proof} \begin{lemma}\label{lem:convex1} For a fixed $x$ and $s$, we denote $e^{\hat{\eta}_y^T \mathbf{h}^{(L-1)}(x, s; \hat{\lambda})} = w_y$. 
Then we have \begin{displaymath} p(y|x, s, \tilde{\theta}) = \frac{e^{\alpha\hat{\eta}_y^T \mathbf{h}^{(L-1)}(x, s; \hat{\lambda})}}{\sum\limits_{y'\in \mathcal{Y}} e^{\alpha\hat{\eta}_{y'}^T \mathbf{h}^{(L-1)}(x, s; \hat{\lambda})}} = \frac{(w_y)^\alpha}{\sum\limits_{y'\in\mathcal{Y}} (w_{y'})^\alpha} \end{displaymath} Additionally, we denote $g_s(\alpha) = \sum\limits_{y'\in\mathcal{Y}} p(y'|x, s; \tilde{\theta})\log w_{y'} - \log w_y$. We assume $g_s(0) \geq 0$. Then we have $\forall \alpha \geq 0$ \begin{displaymath} g_s(\alpha) \geq 0 \end{displaymath} \end{lemma} \begin{proof} \begin{displaymath} \frac{\partial{g_s(\alpha)}}{\partial{\alpha}} = \sum\limits_{y'\in \mathcal{Y}}\log w_{y'} \frac{\partial p(y'|x, s; \tilde{\theta})}{\partial \alpha} = \mathrm{Var}_Y\left[\log w_Y|X-x,S=s\right] \geq 0 \end{displaymath} So $g_s(\alpha)$ is non-decreasing. Since $g_s(0) \geq 0$, we have $g_s(\alpha) \geq 0$ when $\alpha \geq 0$. \end{proof} From above lemma, we have for each training instance $(x_i, y_i) \in D$, and $\forall \alpha\in [0,1]$, \begin{equation}\label{eq:msy} \mathrm{E}_Y \left[\log p(Y|x_i, s; \tilde{\theta})\right] \geq \log p(y_i|x_i, s; \tilde{\theta}) \end{equation} For convenience, we define \begin{displaymath} m(s, y) = \log p(y|x, s; \tilde{\theta}) - \mathrm{E}_Y \left[\log p(Y|x, s; \tilde{\theta})\right] \end{displaymath} \begin{lemma}\label{lem:convex2} If $y$ satisfies Lemma~\ref{lem:convex0} and $g_s(\alpha) \geq 0$, then \begin{displaymath} \mathrm{Var}_Y[m(s, Y)] \geq m(s, y)^2 \end{displaymath} \end{lemma} \begin{proof} First we have \begin{displaymath} m(s, y) = \log p(y|x, s; \tilde{\theta}) - \log 1/k - KL\left(p(\cdot|x, s; \tilde{\theta})|\mathrm{Unif}(\mathcal{Y})\right) \leq 0 \end{displaymath} So we have \begin{displaymath} \begin{array}{rcl} \left(\mathrm{Var}_Y \left[ m(s, Y)\right]\right)^{1/2} & = & \sqrt{\mathrm{E}_Y \left[\left(\log p(Y|x, s; \tilde{\theta}) - \mathrm{E}_Y \left[\log p(Y|x, s; \tilde{\theta})\right]\right)^2 \right]} \\ & \geq & \mathrm{E}_Y \left[\left|\log p(Y|x, s; \tilde{\theta}) - \mathrm{E}_Y \left[\log p(Y|x, s; \tilde{\theta})\right]\right| \right] \\ & = & \mathrm{E}_Y \left[\left|KL\left(p(\cdot|x, s; \tilde{\theta})|\mathrm{Unif}(\mathcal{Y})\right) + \log 1/k - \log p(Y|x, s; \tilde{\theta}) \right|\right] \\ & = & \mathrm{E}_Y \left[KL\left(p(\cdot|x, s; \tilde{\theta})|\mathrm{Unif}(\mathcal{Y})\right) + \left|\log 1/k - \log p(Y|x, s; \tilde{\theta}) \right|\right] \\ & \geq & KL\left(p(\cdot|x, s; \tilde{\theta})|\mathrm{Unif}(\mathcal{Y})\right) + \mathrm{E}_Y \left[\log p(Y|x, s; \tilde{\theta}) - \log 1/k\right] \\ & = & 2KL\left(p(\cdot|x, s; \tilde{\theta})|\mathrm{Unif}(\mathcal{Y})\right) \end{array} \end{displaymath} As $KL\left(p(\cdot|x, s; \tilde{\theta})|\mathrm{Unif}(\mathcal{Y})\right) \geq 0$ and $\log p(y|x, s; \tilde{\theta}) \geq \log 1/k$. So we have \begin{displaymath} 2KL\left(p(\cdot|x, s; \tilde{\theta})|\mathrm{Unif}(\mathcal{Y})\right) \geq KL\left(p(\cdot|x, s; \tilde{\theta})|\mathrm{Unif}(\mathcal{Y})\right) + \log 1/k - \log p(y|x, s; \tilde{\theta}) = -m(s, y) \end{displaymath} Then the lemma follows. 
\end{proof} From Lemma~\ref{lem:convex2} and Eq.~\eqref{eq:msy}, we have for each training instance $(x_i, y_i) \in D$, and $\forall \alpha\in [0,1]$, \begin{equation}\label{eq:varsy} \mathrm{Var}_Y[m(s, Y)] \geq m(s, y_i)^2 \end{equation} \begin{lemma}\label{lem:convex3} For each training instance $(x_i, y_i) \in D$, and $\forall \alpha \in [0, 1]$, we have \begin{displaymath} \log p(y_i|x_i; \{\hat{\lambda}, \alpha\hat{\eta}\}) \geq (1-\alpha) \log p(y_i|x_i; \{\hat{\lambda},0\}) + \alpha \log p(y_i|x_i; \{\hat{\lambda}, \hat{\eta}\}) \end{displaymath} \end{lemma} \begin{proof} We define \begin{displaymath} f(\alpha) = \log p(y_i|x_i; \{\hat{\lambda}, \alpha\hat{\eta}\}) - (1-\alpha) \log p(y_i|x_i; \{\hat{\lambda},0\}) - \alpha \log p(y_i|x_i; \{\hat{\lambda}, \hat{\eta}\}) \end{displaymath} Because $f(0) = f(1) = 0$, we only need to prove that $f(\alpha)$ is concave on $[0, 1]$. We have \begin{displaymath} \nabla^2 f(\alpha) = - \mathrm{E}_{S|Y=y_i}\left[\mathrm{Var}_Y \left[ m(S, Y)\right]\right] + \mathrm{Var}_{S|Y=y_i}\left[ m(S, y_i)\right] \end{displaymath} where $S|Y=y_i$ is under the probability distribution $p(s|Y=y_i, x_i; \tilde{\theta}) = \frac{p(y_i|x_i, S; \tilde{\theta})p(s)}{p(y_i|x_i; \tilde{\theta})}$\\ From Eq.~\eqref{eq:varsy}, we have \begin{displaymath} \mathrm{E}_{S|Y=y_i}\left[\mathrm{Var}_Y \left[ m(S, Y)\right]\right] \geq \mathrm{E}_{S|Y=y_i}\left[ m(S, y_i)^2\right] \geq \mathrm{Var}_{S|Y=y_i}\left[ m(S, y_i)\right] \end{displaymath} So we have $\nabla^2 f(\alpha) \leq 0$. The lemma follows. \end{proof} Now, we can prove Theorem~\ref{thm:uniform:bound} by using the same construction of an expectation-linearizing parameter as in Theorem~\ref{thm:det:bound}. \paragraph{Proof of Theorem~\ref{thm:uniform:bound}} \begin{proof} Consider the same parameter $\tilde{\theta} = \{\hat{\lambda}, \alpha\hat{\eta} \}$, where $\alpha = \frac{\delta}{4\beta\|\hat{\eta}\|_2} \leq 1$. we know that $\tilde{\theta}$ satisfies $V(D; \tilde{\theta}) \leq \delta$. Then, \begin{displaymath} \Delta_l(\hat{\theta}, \hat{\theta}_\delta) \leq \Delta_l(\hat{\theta}, \tilde{\theta}) = \frac{1}{n} (l(D; \hat{\theta}) - l(D; \tilde{\theta})) \end{displaymath} From Lemma~\ref{lem:convex3} we have: \begin{displaymath} l(D; \tilde{\theta}) = l(D; \{\hat{\lambda}, \alpha\hat{\eta}\}) \geq (1-\alpha) l(D; \{\hat{\lambda}, 0\}) + \alpha l(D; \{\hat{\lambda}, \hat{\eta}\}) \end{displaymath} So \begin{displaymath} \begin{array}{rcl} \Delta_l(\hat{\theta}, \hat{\theta}_\delta) & \leq & (1-\alpha) \frac{1}{n} \left( l(D; \hat{\theta}) - l(D; \{\hat{\lambda}, 0\})\right) \\ & = & (1 - \alpha) \frac{1}{n} \sum\limits_{i=1}^{n} \log p(y_i|x_i;\hat{\theta}) - \log\mathrm{Unif}(\mathcal{Y}) \\ & \asymp & (1 - \alpha)\mathrm{E}\left[\mathrm{KL}\left(p(\cdot|X; \theta) \| \mathrm{Unif}(\mathcal{Y})\right)\right] \\ & \leq & \left(1 - \frac{\delta}{4\beta\|\hat{\eta}\|_2}\right) \mathrm{E}\left[\mathrm{KL}\left(p(\cdot|X; \theta) \| \mathrm{Unif}(\mathcal{Y})\right) \right] \end{array} \end{displaymath} \end{proof} \section{Detailed Description of Experiments}\label{appendix:experiment} \subsection{Neural Network Architectures}\label{appendix:architec} \paragraph{MNIST} For MNIST, we train 6 different fully-connected (dense) neural networks with 2 or 3 layers (see Table~\ref{tab:results}). For all architectures, we used dropout rate $p=0.5$ for all hidden layers and $p=0.2$ for the input layer. 
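For concreteness, the following is a minimal PyTorch-style sketch of one of these MNIST networks (our own illustrative reconstruction under stated assumptions, not the authors' implementation; the layer sizes follow our reading of the ``3 dense, 1024, relu'' entry in Table~\ref{tab:results}) trained with the regularized objective of Eq.~\eqref{eq:regular}, using the Monte Carlo penalty of Eq.~\eqref{eq:monte-carlo}. With inverted dropout, the deterministic pass $\mathbf{h}^{(L)}(x, \mathrm{E}[S]; \theta)$ reduces to running the same network with dropout disabled, since the rescaled masks have expectation one.
\begin{verbatim}
# Minimal sketch (illustrative, not the authors' code): a 3x1024 ReLU MLP for MNIST
# with dropout rate 0.2 on the input and 0.5 on hidden layers, trained with the
# Monte Carlo expectation-linearization penalty added to the usual dropout loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DropoutMLP(nn.Module):
    def __init__(self, p_in=0.2, p_hid=0.5):
        super().__init__()
        self.p_in, self.p_hid = p_in, p_hid
        self.hidden = nn.ModuleList([nn.Linear(784, 1024),
                                     nn.Linear(1024, 1024),
                                     nn.Linear(1024, 1024)])
        self.out = nn.Linear(1024, 10)

    def forward(self, x, stochastic):
        # stochastic=True samples a dropout sub-network; False gives h(x, E[S]),
        # because inverted dropout folds the 1/(1-p) rescaling into training.
        h = F.dropout(x, self.p_in, training=stochastic)
        for fc in self.hidden:
            h = F.dropout(F.relu(fc(h)), self.p_hid, training=stochastic)
        return self.out(h)

def regularized_loss(model, x, y, lam):
    logits_s = model(x, stochastic=True)    # same dropout sample as in the likelihood term
    logits_d = model(x, stochastic=False)   # deterministic, scaled pass
    nll = F.cross_entropy(logits_s, y)
    probs_s = F.softmax(logits_s, dim=1)    # network output h^(L) is the softmax output
    probs_d = F.softmax(logits_d, dim=1)
    penalty = ((probs_s - probs_d) ** 2).sum(dim=1).mean()
    return nll + lam * penalty
\end{verbatim}
Here \texttt{lam} plays the role of the expectation-linearization rate $\lambda$, tuned on the validation set as described above.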
\paragraph{CIFAR-10 and CIFAR-100} For the two CIFAR datasets, we used the same architecture as in \citet{srivastava2014dropout} --- three convolutional layers followed by two fully-connected hidden layers. The convolutional layers have 96, 128, and 256 filters, respectively, with a $5\times5$ receptive field applied with a stride of 1. Each convolutional layer is followed by a max-pooling layer that pools $3\times3$ regions at strides of 2. The fully-connected layers have 2048 units each. All units use the rectified linear activation function. Dropout was applied to all the layers with dropout rate $p = (0.1, 0.25, 0.25, 0.5, 0.5, 0.5)$ for the layers going from input to convolutional layers to fully-connected layers. \subsection{Neural Network Training} Neural network training in all the experiments is performed with mini-batch stochastic gradient descent (SGD) with momentum. We choose an initial learning rate of $\eta_0$, and the learning rate is updated on each epoch of training as $\eta_t = \eta_0/(1 + \rho t)$, where $\rho$ is the decay rate and $t$ is the number of epochs completed. We run each experiment for 2,000 epochs and choose the parameters achieving the best performance on the validation sets. Table~\ref{tab:hyper-parameter} summarizes the chosen hyper-parameters for all experiments. Most of the hyper-parameters are chosen from \citet{srivastava2014dropout}. But for some experiments, we could not reproduce the performance reported in \citet{srivastava2014dropout} (one possible reason is that we used a different library for the implementation). For these experiments, we tune the hyper-parameters on the validation sets by random search. Due to time constraints, it is infeasible to do a random search across the full hyper-parameter space. Thus, we try to use as many of the hyper-parameters reported in \citet{srivastava2014dropout} as possible. \subsection{Effect of Expectation-linearization Rate $\lambda$} Table~\ref{tab:result:lambda} illustrates the detailed results of the experiments on the effect of $\lambda$. For MNIST, it lists the error rates under different $\lambda$ values for six different network architectures. For the two CIFAR datasets, it gives the error rates under different $\lambda$ values, along with the empirical expectation-linearization risk $\hat{\Delta}$. \end{document}
Dropout with Expectation-linear Regularization
1609.08017
Table 1: Comparison of classification error percentage on test data with and without using expectation-linearization on MNIST, CIFAR-10 and CIFAR-100, under different network architectures (with standard deviations for 5 repetitions).
[ "[BOLD] Data", "[BOLD] Architecture", "[BOLD] w.o. EL Standard", "[BOLD] w.o. EL MC", "[BOLD] w. EL Standard", "[BOLD] w. EL MC" ]
[ [ "MNIST", "3 dense,1024,logistic", "1.23±0.03", "1.06±0.02", "1.07±0.02", "1.06±0.03" ], [ "MNIST", "3 dense,1024,relu", "1.19±0.02", "1.04±0.02", "1.03±0.02", "1.05±0.03" ], [ "MNIST", "3 dense,1024,relu+max-norm", "1.05±0.03", "1.02±0.02", "0.98±0.03", "1.02±0.02" ], [ "MNIST", "3 dense,2048,relu+max-norm", "1.07±0.02", "1.00±0.02", "0.94±0.02", "0.97±0.03" ], [ "MNIST", "2 dense,4096,relu+max-norm", "1.03±0.02", "0.92±0.03", "0.90±0.02", "0.93±0.02" ], [ "MNIST", "2 dense,8192,relu+max-norm", "0.99±0.02", "0.96±0.02", "0.87±0.02", "0.92±0.03" ], [ "CIFAR-10", "3 conv+2 dense,relu+max-norm", "12.82±0.10", "12.16±0.12", "12.20±0.14", "12.21±0.15" ], [ "CIFAR-100", "3 conv+2 dense,relu+max-norm", "37.22±0.22", "36.01±0.21", "36.25±0.12", "36.10±0.18" ] ]
This task is to classify the images into 10 digit classes. For the purpose of comparison, we train 6 neural networks with different architectures. 50,000 images are used for training and the rest for testing. The neural network architecture we used for these two datasets has 3 convolutional layers, followed by two fully-connected (dense) hidden layers (again, same as that in Srivastava et al. On CIFAR data, expectation-linearization reduces error rate from 12.82% to 12.20% for CIFAR-10, achieving 0.62% improvement. For CIFAR-100, the improvement in terms of error rate is 0.97% with reduction from 37.22% to 36.25%. In this section, we explore the effect of varying the hyper-parameter for the expectation-linearization rate λ.
\documentclass{article} \usepackage[utf8]{inputenc} % allow utf-8 input \usepackage[T1]{fontenc} % use 8-bit T1 fonts % simple URL typesetting % professional-quality tables % blackboard math symbols % compact symbols for 1/2, etc. % microtypography % [disable] \usepackage[ruled,lined]{algorithm2e} \usepackage[lofdepth,lotdepth]{subfig} \theoremstyle{plain} \newcounter{theoremcounter} \newtheorem{theorem}[theoremcounter]{Theorem} \newtheorem{lemma}[theoremcounter]{Lemma} \newcommand{\op}{\mathsf{op}} \theoremstyle{definition} \newcounter{definitioncounter} \newtheorem{definition}[definitioncounter]{Definition} \newcommand{\argmax}{\operatornamewithlimits{argmax}} \newcommand{\argmin}{\operatornamewithlimits{argmin}} \newcommand{\FIXME}[1]{\textcolor{red}{[#1]}} \iclrfinalcopy % Uncomment for camera-ready version \title{Dropout with Expectation-linear \\ Regularization} \author{ Xuezhe Ma, Yingkai Gao \\ Language Technologies Institute \\ Carnegie Mellon University \\ {\tt \{xuezhem, yingkaig\}@cs.cmu.edu} \\ \And Zhiting Hu, Yaoliang Yu \\ Machine Learning Department \\ Carnegie Mellon University \\ {\tt \{zhitinghu, yaoliang\}@cs.cmu.edu} \\ \AND Yuntian Deng \\ School of Engineering and Applied Sciences \\ Harvard University \\ \texttt{dengyuntian@gmail.com} \\ \And Eduard Hovy \\ Language Technologies Institute \\ Carnegie Mellon University \\ \texttt{hovy@cmu.edu} \\ } \begin{document} \maketitle \begin{abstract} Dropout, a simple and effective way to train deep neural networks, has led to a number of impressive empirical successes and spawned many recent theoretical investigations. However, the gap between dropout's training and inference phases, introduced due to tractability considerations, has largely remained under-appreciated. In this work, we first formulate dropout as a tractable approximation of some latent variable model, leading to a clean view of parameter sharing and enabling further theoretical analysis. Then, we introduce (approximate) expectation-linear dropout neural networks, whose inference gap we are able to formally characterize. Algorithmically, we show that our proposed measure of the inference gap can be used to regularize the standard dropout training objective, resulting in an \emph{explicit} control of the gap. Our method is as simple and efficient as standard dropout. We further prove the upper bounds on the loss in accuracy due to expectation-linearization, describe classes of input distributions that expectation-linearize easily. Experiments on three image classification benchmark datasets demonstrate that reducing the inference gap can indeed improve the performance consistently. \end{abstract} \section{Introduction} Deep neural networks \citep[DNNs, e.g.,][]{LeCunBH15,Schmidhuber15}, if trained properly, have been demonstrated to significantly improve the benchmark performances in a wide range of application domains. As neural networks go deeper and deeper, naturally, its model complexity also increases quickly, hence the pressing need to \emph{reduce overfitting} in training DNNs. A number of techniques have emerged over the years to address this challenge, among which dropout~\citep{hinton2012improving,srivastava2013improving} has stood out for its simplicity and effectiveness. In a nutshell, dropout \emph{randomly} ``drops'' neural units during training as a means to prevent feature co-adaptation---a sign of overfitting \citep{hinton2012improving}. 
Simple as it appears to be, dropout has led to several record-breaking performances~\citep{hinton2012improving,ma2016end}, and thus spawned a lot of recent interests in analyzing and justifying dropout from the theoretical perspective, and also in further improving dropout from the algorithmic and practical perspective. In their pioneering work, \citet{hinton2012improving} and \citet{srivastava2014dropout} interpreted dropout as an extreme form of model combination (aka. model ensemble) with extensive parameter/weight sharing, and they proposed to learn the combination through minimizing an appropriate expected loss. Interestingly, they also pointed out that for a single logistic neural unit, the output of dropout is in fact the geometric mean of the outputs of the model ensemble with shared parameters. Subsequently, many theoretical justifications of dropout have been explored, and we can only mention a few here due to space limits. Building on the weight sharing perspective, \citet{baldi2013understanding,baldi2014dropout} analyzed the ensemble averaging property of dropout in deep non-linear logistic networks, and supported the view that dropout is equivalent to applying stochastic gradient descent on some regularized loss function. \citet{wager2013dropout} treated dropout as an adaptive regularizer for generalized linear models (GLMs). \citet{helmbold2016fundamental} discussed the differences between dropout and traditional weight decay regularization. In terms of statistical learning theory, \citet{gao2014dropout} studied the Rademacher complexity of different types of dropout, showing that dropout is able to reduce the Rademacher complexity polynomially for shallow neural networks (with one or no hidden layers) and exponentially for deep neural networks. This latter work~\citep{gao2014dropout} formally demonstrated that dropout, due to its regularizing effect, contributes to reducing the inherent model complexity, in particular the variance component in the generalization error. Seen as a model combination technique, it is intuitive that dropout contributes to reducing the variance of the model performance. Surprisingly, dropout has also been shown to play some role in reducing the model bias. For instance, \citet{jain2015drop} studied the ability of dropout training to escape local minima, hence leading to reduced model bias. Other studies~\citep{chen2014dropout,helmbold2014inductive,wager2014altitude} focus on the effect of the dropout noise on models with shallow architectures. We noted in passing that there are also some work~\citep{kingma2015variational,gal2015dropout,gal2016dropout:rnn} trying to understand dropout from the Bayesian perspective. In this work, we first formulate dropout as a tractable approximation of a latent variable model, and give a clean view of weight sharing (\S 3). Then, we focus on an \emph{inference gap} in dropout that has somehow gotten under-appreciated: In the inference phase, for computational tractability considerations, the model ensemble generated by dropout is approximated by a \emph{single} model with scaled weights, resulting in a gap between training and inference, and rendering the many previous theoretical findings inapplicable. In general, this inference gap can be very large and no attempt (to our best knowledge) has been made to control it. We make three contributions in bridging this gap: Theoretically, we introduce expectation-linear dropout neural networks, through which we are able to explicitly quantify the inference gap (\S 4). 
In particular, our theoretical results explain why the max-norm constraint on the network weights, a standard practice in training DNNs, can lead to a small inference gap and hence potentially improved performance. Algorithmically, we propose to add a sampled version of the inference gap to regularize the standard dropout training objective~(\emph{expectation-linearization}), hence allowing explicit control of the inference gap, and analyze the interaction between expectation-linearization and the model accuracy (\S 5). Experimentally, through three benchmark datasets we show that our regularized dropout is not only as simple and efficient as standard dropout but also consistently leads to improved performance (\S 6). \section{Dropout Neural Networks} In this section we set up the notation, review the dropout neural network model, and discuss the inference gap in standard dropout training that we will attempt to study in the rest of the paper. \subsection{DNNs and Notations} \label{subsec:notation} Throughout we use uppercase letters for random variables (and occasionally for matrices as well), and lowercase letters for realizations of the corresponding random variables. Let $X \in \mathcal{X}$ be the input of the neural network, $Y \in \mathcal{Y}$ be the desired output, and $D = \{(x_1, y_1), \ldots, (x_N, y_N)\}$ be our training sample, where $x_i, i=1,\ldots, N,$ (resp. $y_i$) are usually i.i.d. samples of $X$ (resp. $Y$). Let $\mathbf{M}$ denote a deep neural network with $L$ hidden layers, indexed by $l \in \{1, \ldots, L \}$. Let $\mathbf{h}^{(l)}$ denote the output vector from layer $l$. As usual, $\mathbf{h}^{(0)} = x$ is the input, and $\mathbf{h}^{(L)}$ is the output of the neural network. Denote $\theta = \{\theta_l: l = 1, \ldots, L\}$ as the set of parameters in the network $\mathbf{M}$, where $\theta_l$ assembles the parameters in layer $l$. With dropout, we need to introduce a set of dropout random variables $S = \{\Gamma^{(l)}: l = 1, \ldots, L\}$, where $\Gamma^{(l)}$ is the dropout random variable for layer $l$. Then the deep neural network $\mathbf{M}$ can be described as: \begin{equation}\label{eq:dnn} \mathbf{h}^{(l)} = f_l(\mathbf{h}^{(l - 1)} \odot \gamma^{(l)}; \theta_l), \quad l = 1, \ldots, L, \end{equation} where $\odot$ is the element-wise product and $f_l$ is the transformation function of layer $l$. For example, if layer $l$ is a fully connected layer with weight matrix $W$, bias vector $b$, and sigmoid activation function $\sigma(x) = \frac{1}{1 + \exp(-x)}$, then $f_l(x) = \sigma(W x + b)$. We will also use $\mathbf{h}^{(l)}(x, s; \theta)$ to denote the output of layer $l$ with input $x$ and dropout value $s$, under parameter $\theta$. In the simplest form of dropout, which is also called standard dropout, $\Gamma^{(l)}$ is a vector of independent Bernoulli random variables, each of which has probability $p_l$ of being 1 and $1 - p_l$ of being 0. This corresponds to dropping each of the units in the previous layer independently with probability $1 - p_l$. \subsection{Dropout Training} Standard dropout neural networks can be trained using stochastic gradient descent (SGD), with a sub-network sampled by dropping neural units for each training instance in a mini-batch. The forward and backward passes for that training instance are done only on the sampled sub-network. Intuitively, dropout aims at, simultaneously and jointly, training an ensemble of exponentially many neural networks (one for each configuration of dropped units) while sharing the same weights/parameters.
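As a concrete reading of Eq.~\eqref{eq:dnn} and of the sampled sub-network used for a single training instance, consider the following sketch (a NumPy-style illustration of ours, assuming fully connected sigmoid layers; all function and variable names, layer sizes, and retain probabilities are hypothetical):
\begin{verbatim}
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dropout_forward(x, params, keep_probs, rng, sample_masks=True):
    # Eq. (1): h^(l) = f_l(h^(l-1) * gamma^(l); theta_l), with f_l a sigmoid layer.
    # If sample_masks is False, gamma^(l) is replaced by its mean p_l.
    h = x
    for (W, b), p in zip(params, keep_probs):
        gamma = rng.binomial(1, p, size=h.shape) if sample_masks else p
        h = sigmoid(W @ (h * gamma) + b)
    return h

rng = np.random.default_rng(1)
params = [(rng.standard_normal((4, 3)), np.zeros(4)),
          (rng.standard_normal((2, 4)), np.zeros(2))]
x = rng.standard_normal(3)
h_sub = dropout_forward(x, params, [0.8, 0.5], rng)                      # sampled sub-network
h_det = dropout_forward(x, params, [0.8, 0.5], rng, sample_masks=False)  # scaled, deterministic
\end{verbatim}
Setting \texttt{sample\_masks=False} replaces each $\Gamma^{(l)}$ by its mean $p_l$, which is exactly the deterministic approximation that standard dropout uses at inference time (discussed in \S~\ref{subsec:dropout:inference}).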
The goal of the stochastic training procedure of dropout can be understood as minimizing an expected loss function, after marginalizing out the dropout variables~\citep{srivastava2013improving,wang2013fast}. In the context of maximal likelihood estimation, dropout training can be formulated as: \begin{equation}\label{eq:expect-loss} \theta^* = \argmin\limits_{\theta} \mathrm{E}_{S_D}[-l(D, S_D; \theta)] = \argmin\limits_{\theta} \mathrm{E}_{S_D}\Big[ -\sum\limits_{i=1}^{N} \log p(y_i|x_i, S_i; \theta)\Big], \end{equation} where recall that $D$ is the training sample, $S_D = \{S_1, \ldots, S_N\}$ is the dropout variable (one for each training instance), and $l(D, S_D; \theta)$ is the (conditional) log-likelihood function defined by the conditional distribution $p(y|x, s; \theta)$ of output $y$ given input $x$, under parameter $\theta$ and dropout variable $s$. Throughout we use the notation $\mathrm{E}_Z$ to denote the conditional expectation where all random variables except $Z$ are conditioned on. Dropout has also been shown to work well with regularization, such as L2 weight decay~\citep{tikhonov1943stability}, Lasso~\citep{tibshirani1996regression}, KL-sparsity\citep{bradley2008differential,hinton2010practical}, and max-norm regularization~\citep{srebro2004maximum}, among which the max-norm regularization --- that constrains the norm of the incoming weight matrix to be bounded by some constant --- was found to be especially useful for dropout~\citep{srivastava2013improving,srivastava2014dropout}. \subsection{Dropout Inference and Gap}\label{subsec:dropout:inference} As mentioned before, dropout is effectively training an ensemble of neural networks with weight sharing. Consequently, at test time, the output of each network in the ensemble should be averaged to deliver the final prediction. This averaging over exponentially many sub-networks is, however, intractable, and standard dropout typically implements an approximation by introducing a \emph{deterministic} scaling factor for each layer to replace the \emph{random} dropout variable: \begin{equation}\label{eq:prediction} \mathrm{E}_S[\mathbf{H}^{(L)}(x, S; \theta)] \stackrel{?}{\approx} \mathbf{h}^{(L)}(x, \mathrm{E}[S]; \theta), \end{equation} where the right-hand side is the output of a single deterministic neural network whose weights are scaled to match the \emph{expected} number of active hidden units on the left-hand side. Importantly, the right-hand side can be easily computed since it only involves a single deterministic network. \citet{bulo2016dropout} combined dropout with knowledge distillation methods~\citep{hinton2015distilling} to better approximate the averaging processing of the left-hand side. However, the quality of the approximation in \eqref{eq:prediction} is largely unknown, and to our best knowledge, no attempt has been made to \emph{explicitly} control this inference gap. The main goal of this work is to explicitly quantify, algorithmically control, and experimentally demonstrate the inference gap in \eqref{eq:prediction}, in the hope of improving the generalization performance of DNNs eventually. To this end, in the next section we first present a latent variable model interpretation of dropout, which will greatly facilitate our later theoretical analysis. \section{Dropout as Latent Variable Models}\label{sec:lvm} With the end goal of studying the inference gap in \eqref{eq:prediction} in mind, in this section, we first formulate dropout neural networks as a latent variable model (LVM) in \S~\ref{subsec:lvm}. 
Then, we point out the relation between the training procedure of LVM and that of standard dropout in \S~\ref{subsec:training}. The advantage of formulating dropout as an LVM is that we need only deal with a single model (with latent structure), instead of an ensemble of exponentially many different models (with weight sharing). This much-simplified view of dropout enables us to understand and analyze the model parameter $\theta$ in a much more straightforward and intuitive way. \subsection{An LVM Formulation of Dropout}\label{subsec:lvm} A latent variable model consists of two types of variables: the observed variables that represent the empirical (observed) data and the latent variables that characterize the hidden (unobserved) structure. To formulate dropout as a latent variable model, the input $x$ and output $y$ are regarded as observed variables, while the dropout variable $s$, representing the sub-network structure, is hidden. Then, upon fixing the input space $\mathcal{X}$, the output space $\mathcal{Y}$, and the latent space $\mathcal{S}$ for dropout variables, the conditional probability of $y$ given $x$ under parameter $\theta$ can be written as \begin{equation}\label{eq:lvm} p(y|x; \theta) = \int_{\mathcal{S}} p(y|x, s; \theta) p(s) d\mu(s), \end{equation} where $p(y|x, s; \theta)$ is the conditional distribution modeled by the neural network with configuration $s$ (same as in Eq.~\eqref{eq:expect-loss}), $p(s)$ is the distribution of dropout variable $S$ (e.g. Bernoulli), here assumed to be independent of the input $x$, and $\mu(s)$ is the base measure on the space $\mathcal{S}$. \subsection{LVM Dropout training vs. Standard Dropout Training}\label{subsec:training} Building on the above latent variable model formulation \eqref{eq:lvm} of dropout, we are now ready to point out a simple relation between the training procedure of LVM and that of standard dropout. Given an i.i.d. training sample $D$, maximum likelihood estimation for the LVM formulation of dropout in \eqref{eq:lvm} amounts to minimizing the following negative log-likelihood function: \begin{equation}\label{eq:lvm:train} \theta^* = \argmin\limits_{\theta} -l(D;\theta) = \argmin\limits_{\theta} -\sum\limits_{i=1}^{N} \log p(y_i|x_i; \theta), \end{equation} where $p(y|x; \theta)$ is given in Eq.~\eqref{eq:lvm}. Recall the dropout training objective $\mathrm{E}_{S_D}[-l(D, S_D; \theta)]$ in Eq.~\eqref{eq:expect-loss}. We have the following theorem as a simple consequence of Jensen's inequality (details in Appendix~\ref{appendix:proof:thm1}): \begin{theorem}\label{thm:loss-bound} The expected loss function of standard dropout (Eq.~\eqref{eq:expect-loss}) is an upper bound of the negative log-likelihood of LVM dropout (Eq.~\eqref{eq:lvm:train}): \begin{equation}\label{eq:train:rel} -l(D;\theta) \leq \mathrm{E}_{S_D}[-l(D, S_D; \theta)]. \end{equation} \end{theorem} Theorem~\ref{thm:loss-bound}, in a rigorous sense, justifies dropout training as a convenient and tractable approximation of the LVM formulation in \eqref{eq:lvm}. Indeed, since directly minimizing the marginalized negative log-likelihood in \eqref{eq:lvm:train} may not be easy, a standard practice is to replace the marginalized (conditional) likelihood $p(y|x;\theta)$ in \eqref{eq:lvm} with its empirical Monte Carlo average obtained by drawing samples from the dropout variable $S$.
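Concretely, such a Monte Carlo estimate of the marginal likelihood in Eq.~\eqref{eq:lvm} could be computed as in the following sketch (ours, for illustration only; \texttt{predict\_proba}, \texttt{sample\_mask} and the toy single-layer network are hypothetical stand-ins rather than part of the proposed method):
\begin{verbatim}
import numpy as np

def mc_marginal_log_likelihood(predict_proba, sample_mask, x, y, num_samples=16):
    # Estimate log p(y|x; theta) of Eq. (4) by averaging p(y|x, s; theta)
    # over dropout configurations s drawn from p(s).
    probs = [predict_proba(x, sample_mask())[y] for _ in range(num_samples)]
    # Note: log(mean(probs)) >= mean(log(probs)) by Jensen's inequality,
    # which is the instance-level content of Theorem 1.
    return np.log(np.mean(probs))

# Toy usage with a linear-softmax "network" and Bernoulli(0.8) dropout on the input:
rng = np.random.default_rng(2)
W = rng.standard_normal((3, 5))
softmax = lambda z: np.exp(z - np.logaddexp.reduce(z))
predict_proba = lambda x, s: softmax(W @ (x * s))
sample_mask = lambda: rng.binomial(1, 0.8, size=5)
x = rng.standard_normal(5)
print(mc_marginal_log_likelihood(predict_proba, sample_mask, x, y=0))
\end{verbatim}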
The dropout training objective in \eqref{eq:expect-loss} corresponds exactly to this Monte carlo approximation when a \emph{single} sample $S_i$ is drawn for each training instance $(x_i, y_i)$. Importantly, we note that the above LVM formulation involves only a single network parameter $\theta$, which largely simplifies the picture and facilitates our subsequent analysis. \section{Expectation-Linear Dropout Neural Networks}\label{subsec:expect:linearity} Building on the latent variable model formulation in \S~\ref{sec:lvm}, we introduce in this section the notion of expectation-linearity that essentially measures the inference gap in \eqref{eq:prediction}. We then characterize a general class of neural networks that exhibit expectation-linearity, either exactly or approximately over a distribution $p(x)$ on the input space. We start with defining expectation-linearity in the simplest single-layer neural network, then we extend the notion into general deep networks in a natural way. \begin{definition}[Expectation-linear Layer]\label{def:expect-linear:layer} A network layer $\mathbf{h} = f(x\odot\gamma; \theta)$ is \emph{expectation-linear with respect to} a set $\mathcal{X}' \subseteq \mathcal{X}$, if for all $x \in \mathcal{X}'$ we have \begin{equation} \label{eq:el} \big\| \mathrm{E}[f(x \odot \Gamma; \theta)] - f(x \odot \mathrm{E}[\Gamma]; \theta) \big\|_2 = 0. \end{equation} In this case we say that $\mathcal{X}'$ is \emph{expectation-linearizable}, and $\theta$ is \emph{expectation-linearizing} w.r.t $\mathcal{X}'$. \end{definition} Obviously, the condition in \eqref{eq:el} will guarantee no gap in the dropout inference approximation \eqref{eq:prediction}---an admittedly strong condition that we will relax below. Clearly, if $f$ is an affine function, then we can choose $\mathcal{X}' = \mathcal{X}$ and expectation-linearity is trivial. Note that expectation-linearity depends on the network parameter $\theta$ and the dropout distribution $\Gamma$. Expectation-linearity, as defined in \eqref{eq:el}, is overly strong: under standard regularity conditions, essentially the transformation function $f$ has to be affine over the set $\mathcal{X}'$, ruling out for instance the popular sigmoid or tanh activation functions. Moreover, in practice, downstream use of DNNs are usually robust to small errors resulting from \emph{approximate} expectation-linearity (hence the empirical success of dropout), so it makes sense to define an inexact extension. We note also that the definition in \eqref{eq:el} is \emph{uniform} over the set $\mathcal{X}'$, while in a statistical setting it is perhaps more meaningful to have expectation-linearity ``on average,'' since inputs from lower density regions are not going to play a significant role anyway. Taking into account the aforementioned motivations, we arrive at the following inexact extension: \begin{definition}[Approximately Expectation-linear Layer]\label{def:approx-expect-linear:layer} A network layer $\mathbf{h} = f(x\odot\gamma; \theta)$ is \emph{\mbox{$\delta$-approximately} expectation-linear with respect to} a distribution $p(x)$ over $\mathcal{X}$ if \begin{equation} \label{eq:ael} \mathrm{E}_{X}\Big[\big\| \mathrm{E}_{\Gamma}\big[f(X \odot \Gamma; \theta) | X\big] - f(X \odot \mathrm{E}[\Gamma]; \theta) \big\|_2 \Big] < \delta. \end{equation} In this case we say that $p(x)$ is \emph{$\delta$-approximately expectation-linearizable}, and $\theta$ is \emph{$\delta$-approximately expectation-linearizing}. 
\end{definition} To appreciate the power of cutting some slack from exact expectation-linearity, we remark that even non-affine activation functions often have approximately linear regions. For example, the logistic function, a commonly used non-linear activation function in DNNs, is approximately linear around the origin. Naturally, we can ask whether it is sufficient for a target distribution $p(x)$ to be well-approximated by an approximately expectation-linearizable one. We begin by providing an appropriate measurement of the quality of this approximation. \begin{definition}[Closeness, \citep{andreas2015accuracy}] \label{def:closeness} A distribution $p(x)$ is $C$-close to a set $\mathcal{X}' \subseteq \mathcal{X}$ if \begin{equation} \mathrm{E}\Big[ \inf\limits_{x^* \in \mathcal{X}'} \sup\limits_{\gamma \in \mathcal{S}} \| X \odot \gamma - x^* \odot \gamma \|_2 \Big] \leq C, \end{equation} where recall that $\mathcal{S}$ is the (bounded) space that the dropout variable lives in. \end{definition} Intuitively, $p(x)$ is $C$-close to a set $\mathcal{X}'$ if a random sample from $p$ is no more than a distance $C$ from $\mathcal{X}'$ in expectation and under the worst ``dropout perturbation''. For example, a standard normal distribution is close to an interval centering at origin ($[-\alpha, \alpha]$) with some constant $C$. Our definition of closeness is similar to that in \citet{andreas2015accuracy}, who used this notion to analyze self-normalized log-linear models. We are now ready to state our first major result that quantifies approximate expectation-linearity of a single-layered network (proof in Appendix~\ref{appendix:proof:thm2}): \begin{theorem}\label{thm:layer} Given a network layer $\mathbf{h} = f(x\odot\gamma; \theta)$, where $\theta$ is \emph{expectation-linearizing} w.r.t. $\mathcal{X}' \subseteq \mathcal{X}$. Suppose $p(x)$ is $C$-close to $\mathcal{X}'$ and for all $x \in \mathcal{X}, \|\nabla_x f(x)\|_{\op} \leq B$, where $\|\cdot\|_{\op}$ is the usual operator norm. Then, $p(x)$ is $2BC$-approximately expectation-linearizable. \end{theorem} Roughly, Theorem~\ref{thm:layer} states that the input distribution $p(x)$ that place most of its mass on regions close to expectation-linearizable sets are approximately expectation-linearizable on a similar scale. The bounded operator norm assumption on the derivative $\nabla f$ is satisfied in most commonly used layers. For example, for a fully connected layer with weight matrix $W$, bias vector $b$, and activation function $\sigma$, $\| \nabla f(\cdot) \|_{\op} = |\sigma'(\cdot)| \cdot\| W \|_{\op}$ is bounded by $\| W \|_{\op}$ and the supremum of $|\sigma'(\cdot)|$ (1/4 when $\sigma$ is sigmoid and 1 when $\sigma$ is tanh). Next, we extend the notion of approximate expectation-linearity to deep dropout neural networks. \begin{definition}[Approximately Expectation-linear Network]\label{def:approx-expect-linear:network} A deep neural network with $L$ layers (cf. Eq.~\eqref{eq:dnn}) is \emph{\mbox{$\delta$-approximately} expectation-linear with respect to} $p(x)$ over $\mathcal{X}$ if \begin{equation} \mathrm{E}_{X}\Big[\big\| \mathrm{E}_{S}\big[\mathbf{H}^{(L)}(X, S; \theta) |X\big] - \mathbf{h}^{(L)}(X, \mathrm{E}[S]; \theta) \big\|_2 \Big] < \delta. \end{equation} where $\mathbf{h}^{(L)}(X, \mathrm{E}[S]; \theta)$ is the output of the deterministic neural network in standard dropout. 
\end{definition} Lastly, we relate the level of approximate expectation-linearity of a deep neural network to the level of approximate expectation-linearity of each of its layers: \begin{theorem}\label{thm:dnn} Given an $L$-layer neural network as in Eq.~\eqref{eq:dnn}, suppose that each layer $l \in \{1, \ldots, L\}$ is $\delta$-approximately expectation-linear w.r.t. $p(\mathbf{h}^{(l)})$, $\mathrm{E}[\Gamma^{(l)}] \leq \gamma$, $\sup_{x} \| \nabla f_l(x) \|_{\op} \leq B$, and $\mathrm{E}\big[\mathrm{Var}[\mathbf{H}^{(l)}|X]\big] \leq \sigma^2$. Then the network is $\Delta$-approximately expectation-linear with \begin{equation} \Delta = (B\gamma)^{L-1}\delta + (\delta + B\gamma\sigma)\bigg(\frac{1-(B\gamma)^{L-1}}{1-B\gamma}\bigg). \end{equation} \end{theorem} From Theorem~\ref{thm:dnn} (proof in Appendix~\ref{appendix:proof:thm3}) we observe that the level of approximate expectation-linearity of the network mainly depends on four factors: the level of approximate expectation-linearity of each layer ($\delta$), the expected variance of each layer ($\sigma$), the operator norm of the derivative of each layer's transformation function ($B$), and the mean of each layer's dropout variable ($\gamma$). In practice, $\gamma$ is often a constant less than or equal to 1. For example, if $\Gamma \sim \mathrm{Bernoulli}(p)$, then $\gamma = p$. According to the theorem, the operator norm of the derivative of each layer's transformation function is an important factor in the level of approximate expectation-linearity: the smaller the operator norm is, the better the approximation. Interestingly, the operator norm of a layer often depends on the norm of the layer's weights (e.g. for fully connected layers). Therefore, adding max-norm constraints to regularize dropout neural networks can lead to better approximate expectation-linearity, hence a smaller inference gap and often improved model performance. It should also be noted that when $B\gamma < 1$, the approximation error $\Delta$ tends to a constant as the network becomes deeper. When $B\gamma = 1$, $\Delta$ grows linearly with $L$, and when $B\gamma > 1$, the growth of $\Delta$ becomes exponential. Thus, it is essential to keep $B\gamma < 1$ to achieve good approximation, particularly for deep neural networks. \section{Expectation-Linear Regularized Dropout}\label{sec:linearization} In the previous section we have managed to bound the approximate expectation-linearity, hence the inference gap in \eqref{eq:prediction}, of dropout neural networks. In this section, we first prove a uniform deviation bound of the \emph{sampled} approximate expectation-linearity measure from its mean, which motivates adding the sampled (hence computable) expectation-linearity measure as a regularization scheme to standard dropout, with the goal of explicitly controlling the inference gap of the learned parameter, hence potentially improving performance. Then we give upper bounds on the loss in accuracy due to expectation-linearization, and describe classes of distributions that expectation-linearize easily. \subsection{A Uniform Deviation Bound for the Sampled Expectation-linear Measure} We now show that an expectation-linear network can be found by expectation-linearizing the network on the training sample. To this end, we prove a uniform deviation bound between the empirical expectation-linearization measure using i.i.d. samples~(Eq.~\eqref{eq:empirical:risk}) and its mean~(Eq.~\eqref{eq:risk}).
\begin{theorem}\label{thm:complexity} Let $\mathcal{H} = \left\{\mathbf{h}^{(L)}(x, s; \theta): \theta \in \Theta\right\}$ denote a space of $L$-layer dropout neural networks indexed by $\theta$, where $\mathbf{h}^{(L)}: \mathcal{X} \times \mathcal{S} \rightarrow \mathcal{R}$ and $\Theta$ is the space that $\theta$ lives in. Suppose that the neural networks in $\mathcal{H}$ satisfy the constraints: 1) $\forall x \in \mathcal{X}, \|x\|_2 \leq \alpha$; 2) $\forall l \in \{1, \ldots, L\}, \mathrm{E}(\Gamma^{(l)}) \leq \gamma$ and $\|\nabla f_{l}\|_{op} \leq B$; 3) $\|\mathbf{h}^{(L)}\| \leq \beta$. Denote the empirical expectation-linearization measure and its mean by: {\small \begin{align} \hat{\Delta} & = \frac{1}{n}\sum\limits_{i=1}^{n} \big\| \mathrm{E}_{S_i}\big[\mathbf{H}^{(L)}(X_i, S_i; \theta)\big] - \mathbf{h}^{(L)}(X_i, \mathrm{E}[S_i]; \theta) \big\|_2 ,\label{eq:empirical:risk} \\ \Delta & = \mathrm{E}_{X}\Big[\big\| \mathrm{E}_{S}\big[\mathbf{H}^{(L)}(X, S; \theta)\big] - \mathbf{h}^{(L)}(X, \mathrm{E}[S]; \theta) \big\|_2 \Big]. \label{eq:risk} \end{align}} Then, with probability at least $1 - \nu$, we have \begin{equation} \sup\limits_{\theta \in \Theta} |\Delta - \hat{\Delta}| < \frac{2\alpha B^{L} (\gamma^{L/2}+1)}{\sqrt{n}} + \beta \sqrt{\frac{\log(1/\nu)}{n}}. \end{equation} \end{theorem} From Theorem~\ref{thm:complexity} (proof in Appendix~\ref{appendix:proof:thm4}) we observe that the deviation bound decreases exponentially with the number of layers $L$ when the operator norm of the derivative of each layer's transformation function ($B$) is less than 1 (and the opposite holds if $B \geq 1$). Importantly, the square root dependence on the number of samples ($n$) is standard and cannot be improved without significantly stronger assumptions. It should be noted that Theorem~\ref{thm:complexity} per se does not imply anything about the relation between expectation-linearization and model accuracy (i.e., how well the expectation-linearized neural network actually models the data). A formal study of this relation is provided in \S~\ref{subsec:accuracy}. In addition, we provide experimental evidence in \S~\ref{sec:experiment} that improved approximate expectation-linearity (equivalently, a smaller inference gap) does lead to better empirical performance. \subsection{Expectation-Linearization as Regularization} The uniform deviation bound in Theorem~\ref{thm:complexity} motivates the possibility of obtaining an approximately expectation-linear dropout neural network by adding the empirical measure \eqref{eq:empirical:risk} as a regularization scheme to the standard dropout training objective, as follows: \begin{equation}\label{eq:regular} loss(D; \theta) = -l(D; \theta) + \lambda V(D; \theta), \end{equation} where $-l(D; \theta)$ is the negative log-likelihood defined in Eq.~\eqref{eq:lvm:train}, $\lambda > 0$ is a regularization constant, and $V(D; \theta)$ measures the level of approximate expectation-linearity: \begin{equation}\label{eq:penalty} V(D; \theta) = \frac{1}{N}\sum\limits_{i=1}^{N} \big\| \mathrm{E}_{S_i}\big[\mathbf{H}^{(L)}(x_i, S_i; \theta)\big] - \mathbf{h}^{(L)}(x_i, \mathrm{E}[S_i]; \theta) \big\|_2^{2}.
\end{equation} To solve \eqref{eq:regular}, we can minimize $loss(D; \theta)$ via stochastic gradient descent as in standard dropout, and approximate $V(D; \theta)$ using Monte carlo: \begin{equation}\label{eq:monte-carlo} V(D; \theta) \approx \frac{1}{N}\sum\limits_{i=1}^{N} \big\|\mathbf{h}^{(L)}(x_i, s_i; \theta) - \mathbf{h}^{(L)}(x_i, \mathrm{E}[S_i]; \theta)\big\|_2^{2}, \end{equation} where $s_i$ is the same dropout sample as in $l(D; \theta)$ for each training instance in a mini-batch. Thus, the only additional computational cost comes from the deterministic term $\mathbf{h}^{(L)}(x_i, \mathrm{E}[S_i]; \theta)$. Overall, our regularized dropout \eqref{eq:regular}, in its Monte carlo approximate form, is as simple and efficient as the standard dropout. \subsection{On the accuracy of Expectation-linearized Models}\label{subsec:accuracy} So far our discussion has concentrated on the problem of finding expectation-linear neural network models, without any concerns on how well they actually perform at modeling the data. In this section, we characterize the trade-off between maximizing ``data likelihood'' and satisfying an expectation-linearization constraint. To achieve the characterization, we measure the \emph{likelihood gap} between the classical maximum likelihood estimator (MLE) and the MLE subject to a expectation-linearization constraint. Formally, given training data $D = \{(x_1, y_1), \ldots, (x_n, y_n)\}$, we define \begin{align} \hat{\theta} & = \quad \, \argmin\limits_{\theta \in \Theta} \quad \, -l(D; \theta) \\ \hat{\theta}_\delta & = \argmin\limits_{\theta \in \Theta,V(D; \theta) \leq \delta} -l(D; \theta) \end{align} where $-l(D; \theta)$ is the negative log-likelihood defined in Eq.~\eqref{eq:lvm:train}, and $V(D; \theta)$ is the level of approximate expectation-linearity in Eq.~\eqref{eq:penalty}. We would like to control the loss of model accuracy by obtaining a bound on the \emph{likelihood gap} defined as: \begin{equation}\label{eq:likelihood:gap} \Delta_l(\hat{\theta}, \hat{\theta}_\delta) = \frac{1}{n} (l(D; \hat{\theta}) - l(D; \hat{\theta}_\delta)) \end{equation} In the following, we focus on neural networks with \emph{softmax} output layer for classification tasks. \begin{equation}\label{eq:softmax} p(y|x, s; \theta) = \mathbf{h}_y^{(L)}(x, s; \theta) = f_L(\mathbf{h}^{(L - 1)}(x, s); \eta) = \frac{e^{\eta_y^T \mathbf{h}^{(L - 1)}(x, s)}}{\sum\limits_{y' \in \mathcal{Y}} e^{\eta_{y'}^T \mathbf{h}^{(L - 1)}(x,s)}} \end{equation} where $\theta = \{\theta_1, \ldots, \theta_{L-1}, \eta\}$, $\mathcal{Y} = \{1, \ldots, k\}$ and $\eta = \{\eta_y: y \in \mathcal{Y} \}$. We claim: \begin{theorem}\label{thm:det:bound} Given an $L$-layer neural network $\mathbf{h}^{(L)}(x, s; \theta)$ with softmax output layer in \eqref{eq:softmax}, where parameter $\theta \in \Theta$, dropout variable $s \in \mathcal{S}$, input $x \in \mathcal{X}$ and target $y \in \mathcal{Y}$. Suppose that for every $x$ and $s$, $p(y|x, s; \hat{\theta})$ makes a unique best prediction---that is, for each $x\in\mathcal{X}, s \in \mathcal{S}$, there exists a unique $y^* \in \mathcal{Y}$ such that $\forall y\neq y^*$, $\hat{\eta}_y^T\mathbf{h}^{(L-1)}(x, s) < \hat{\eta}_{y^*}^T\mathbf{h}^{(L-1)}(x, s)$. Suppose additionally that $\forall x, s, \,\|\mathbf{h}^{(L-1)}(x, s; \hat{\theta})\| \leq \beta$, and $\forall y, p(y|x; \hat{\theta}) >0$. 
Then \begin{equation} \Delta_l(\hat{\theta}, \hat{\theta}_\delta) \leq c_1 \beta^2 \left(\|\hat{\eta}\|_2 -\frac{\delta}{4\beta}\right)^2 e^{-c_2\delta/4\beta} \end{equation} where $c_1$ and $c_2$ are distribution-dependent constants. \end{theorem} From Theorem~\ref{thm:det:bound} (proof in Appendix~\ref{appendix:proof:thm5}) we observe that, at one extreme, distributions close to deterministic can be expectation-linearized with little loss of likelihood. What about the other extreme --- distributions ``as close to the uniform distribution as possible''? With suitable assumptions about the form of $p(y|x, s; \hat{\theta})$ and $p(y|x; \hat{\theta})$, we can achieve an accuracy loss bound for distributions that are close to uniform: \begin{theorem}\label{thm:uniform:bound} Suppose that $\forall x, s, \,\|\mathbf{h}^{(L-1)}(x, s; \hat{\theta})\| \leq \beta$. Additionally, for each $(x_i, y_i) \in D, s \in \mathcal{S}$, $\log \frac{1}{k} \leq \log p(y_i|x_i, s;\hat{\theta}) \leq \frac{1}{k}\sum\limits_{y\in\mathcal{Y}}\log p(y|x_i, s; \hat{\theta})$. Then asymptotically as $n \rightarrow \infty$: \begin{equation} \Delta_l(\hat{\theta}, \hat{\theta}_\delta) \leq \left(1 - \frac{\delta}{4\beta\|\hat{\eta}\|_2}\right) \mathrm{E}\left[ \mathrm{KL}\left(p(\cdot|X; \theta) \| \mathrm{Unif}(\mathcal{Y})\right) \right] \end{equation} \end{theorem} Theorem~\ref{thm:uniform:bound} (proof in Appendix~\ref{appendix:proof:thm6}) indicates that uniform distributions are also an easy class for expectation-linearization. The next question is whether there exist any classes of conditional distributions $p(y|x)$ for which all distributions are provably hard to expectation-linearize. This remains an open problem and might be an interesting direction for future work. \section{Experiments}\label{sec:experiment} In this section, we evaluate the empirical performance of the proposed regularized dropout in \eqref{eq:regular} on a variety of network architectures for the classification task on three benchmark datasets---MNIST, CIFAR-10 and CIFAR-100. We applied the same data preprocessing procedure as in \citet{srivastava2014dropout}. To make a thorough comparison and provide experimental evidence on how the expectation-linearization interacts with the predictive power of the learned model, we perform experiments with Monte Carlo (MC) dropout, which approximately computes the final prediction (left-hand side of \eqref{eq:prediction}) via Monte Carlo sampling, both with and without the proposed regularizer. In the case of MC dropout, we average $m = 100$ predictions using randomly sampled configurations. In addition, the network architectures and hyper-parameters for each experiment setup are the same as those in \citet{srivastava2014dropout}, unless we explicitly state otherwise. Following previous work, for each dataset we held out 10,000 random training images for validation to tune the hyper-parameters, including $\lambda$ in Eq.~\eqref{eq:regular}. Once the hyper-parameters are fixed, we train the final models with all the training data, including the validation data. A more detailed description of the conducted experiments is provided in Appendix~\ref{appendix:experiment}. For each experiment, we report the mean test errors with corresponding standard deviations over 5 repetitions. \subsection{MNIST} The MNIST dataset~\citep{lecun1998gradient} consists of 70,000 handwritten digit images of size 28$\times$28, where 60,000 images are used for training and the rest for testing.
The task is to classify the images into 10 digit classes. For the purpose of comparison, we train 6 neural networks with different architectures. The experimental results are shown in Table~\ref{tab:results}. \subsection{CIFAR-10 and CIFAR-100} The CIFAR-10 and CIFAR-100 datasets~\citep{krizhevsky2009learning} consist of 60,000 color images of size $32\times32$, drawn from 10 and 100 categories, respectively. 50,000 images are used for training and the rest for testing. The neural network architecture we used for these two datasets has 3 convolutional layers, followed by two fully-connected (dense) hidden layers (again, the same as in \citet{srivastava2014dropout}). The experimental results are also reported in Table~\ref{tab:results}. From Table~\ref{tab:results} we can see that on the MNIST data, dropout network training with expectation-linearization outperforms standard dropout on all 6 neural architectures. On CIFAR data, expectation-linearization reduces the error rate from 12.82\% to 12.20\% for CIFAR-10, a 0.62\% improvement. For CIFAR-100, the error rate improves by 0.97\%, from 37.22\% to 36.25\%. From the results we see that with or without expectation-linearization, the MC dropout networks achieve similar results. This illustrates that enforcing expectation-linearity does not significantly degrade the predictive power of the learned models. Moreover, it is interesting to see that with the regularization, on the MNIST dataset, standard dropout networks achieve even better accuracy than MC dropout. This may be because, with expectation-linearization, standard dropout inference approximates the final prediction better than MC dropout with (only) 100 samples. On the CIFAR datasets, MC dropout networks achieve better accuracy than the networks with the regularization. However, MC dropout requires much more inference time than standard dropout~(MC dropout with $m$ samples requires about $m$ times the inference time of standard dropout). \subsection{Effect of Regularization Constant $\lambda$} In this section, we explore the effect of varying the expectation-linearization regularization constant $\lambda$. We train the network architectures in Table~\ref{tab:results} with the $\lambda$ value ranging from 0.1 to 10.0. Figure~\ref{fig:lambda} shows the test errors obtained as a function of $\lambda$ on the three datasets. In addition, Figure~\ref{fig:lambda} (middle and right panels) also reports the empirical expectation-linearization risk $\hat{\Delta}$ of Eq.~\eqref{eq:empirical:risk} for varying $\lambda$ on CIFAR-10 and CIFAR-100, where $\hat{\Delta}$ is computed using Monte Carlo with 100 independent samples. From Figure~\ref{fig:lambda} we can see that as $\lambda$ increases, better expectation-linearity is achieved (i.e. $\hat{\Delta}$ decreases). The model accuracy, however, does not keep improving with increasing $\lambda$, showing that in practice the trade-off between model expectation-linearity and accuracy needs to be taken into account. \subsection{Comparison with Dropout Distillation} To make a thorough empirical comparison with the recently proposed Dropout Distillation method~\citep{bulo2016dropout}, we also evaluate our regularization method on the CIFAR-10 and CIFAR-100 datasets with the All Convolutional Network~\citep{springenberg2014striving} (AllConv). To facilitate comparison, we adopt the originally reported hyper-parameters and the same setup for training.
Table~\ref{tab:comparison} compares the classification error percentages on test data under AllConv using standard dropout, Monte Carlo dropout, standard dropout with our proposed expectation-linearization, and the recently proposed dropout distillation on CIFAR-10 and CIFAR-100~\footnote{We obtained results similar to those reported in Table~1 of \citet{bulo2016dropout} on the CIFAR-10 corpus, but could not reproduce comparable results on CIFAR-100 (around 3\% worse).}. According to Table~\ref{tab:comparison}, our proposed expectation-linear regularization method achieves comparable performance to dropout distillation. \section{Conclusions} In this work, we attempted to establish a theoretical basis for the understanding of dropout, motivated by controlling the gap between dropout's training and inference phases. Through formulating dropout as a latent variable model and introducing the notion of (approximate) expectation-linearity, we have formally studied the inference gap of dropout, and introduced an empirical measure as a regularization scheme to explicitly control the gap. Experiments on three benchmark datasets demonstrate that reducing the inference gap can indeed improve the end performance. In the future, we intend to formally relate the inference gap to the generalization error of the underlying network, hence providing further justification of regularized dropout. \section*{Acknowledgements} This research was supported in part by DARPA grant FA8750-12-2-0342 funded under the DEFT program. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of DARPA. \bibliographystyle{iclr2017_conference} \newpage \section*{Appendix: Dropout with Expectation-linear Regularization} \appendix \setcounter{equation}{0} \section{LVM Dropout training vs. Standard Dropout Training}\label{appendix:proof:thm1} \paragraph{Proof of Theorem 1} \begin{proof} \begin{displaymath} \begin{array}{rcl} \mathrm{E}_{S_D}[l(D, S_D; \theta)] & = & \bigintss_{\mathcal{S}} \prod\limits_{i=1}^{N}p(s_i) \Big(\sum\limits_{i=1}^{N} \log p(y_i|x_i, s_i; \theta)\Big) d\mu(s_1) \ldots d\mu(s_N) \\ & = & \sum\limits_{i=1}^{N} \bigintsss_{\mathcal{S}} p(s_i) \log p(y_i|x_i, s_i; \theta) d\mu(s_i) \end{array} \end{displaymath} Because $\log(\cdot)$ is a concave function, by Jensen's inequality, \begin{displaymath} \int_{\mathcal{S}} p(s) \log p(y|x, s; \theta) d\mu(s) \leq \log \int_{\mathcal{S}} p(s) p(y|x, s; \theta) d\mu(s) \end{displaymath} Thus \begin{displaymath} \mathrm{E}_{S_D}[-l(D, S_D; \theta)] \geq -\sum\limits_{i=1}^{N} \log \int_{\mathcal{S}} p(s_i) p(y_i|x_i, s_i; \theta) d\mu(s_i) = -l(D;\theta). \end{displaymath} \end{proof} \section{Expectation-Linear Dropout Neural Networks} \subsection{Proof of Theorem 2}\label{appendix:proof:thm2} \begin{proof} Let $\gamma^* = \mathrm{E}[\Gamma]$, and \begin{displaymath} A \stackrel{\Delta}{=} \left\{x: \|\mathrm{E}[f(x \odot \Gamma; \theta)] - f(x \odot \gamma^*; \theta) \|_2 = 0 \right\} \end{displaymath} Let $X^* = \argmin\limits_{x \in A} \sup\limits_{\gamma \in \mathcal{S}} \|X \odot \gamma - x \odot \gamma \|_2$, and $X^- = X - X^*$. Then, \begin{displaymath} X \odot \gamma = X^* \odot \gamma + X^{-} \odot \gamma \end{displaymath} In the following, we omit the parameter $\theta$ for convenience.
Moreover, we denote \begin{displaymath} \mathrm{E}_{\Gamma}\big[f(X \odot \Gamma; \theta)\big] \stackrel{\Delta}{=} \mathrm{E}\big[f(X \odot \Gamma; \theta) | X\big] \end{displaymath} From Taylor Series, there exit some $X', X'' \in \mathcal{X}$ satisfy that \begin{displaymath} \begin{array}{rcl} f(X \odot \Gamma) & = & f(X^* \odot \Gamma) + f'(X' \odot \Gamma) (X^{-} \odot \Gamma) \\ f(X \odot \gamma^*) & = & f(X^* \odot \gamma^*) + f'(X'' \odot \gamma^*) (X^{-} \odot \gamma^*) \end{array} \end{displaymath} where we denote $f'(x) = (\nabla_x f(x))^{T}$. Then, \begin{displaymath} \begin{array}{rl} & \mathrm{E}_\Gamma[f(X \odot \Gamma) - f(X \odot \gamma^*)] \\ = & \mathrm{E}_\Gamma[f(X^* \odot \Gamma + X^{-} \odot \Gamma) - f(X^* \odot \gamma^* + X^{-} \odot \gamma^*)] \\ = & \mathrm{E}_\Gamma[f(X^* \odot \Gamma) - f(X^* \odot \gamma^*) + f'(X' \odot \Gamma)(X^- \odot \Gamma) - f'(X'' \odot \gamma^*)(X^{-} \odot \gamma^*)] \\ = & \mathrm{E}_\Gamma[f(X^* \odot \Gamma) - f(X^* \odot \gamma^*)] + \mathrm{E}_\Gamma[f'(X' \odot \Gamma)(X^- \odot \Gamma) - f'(X'' \odot \gamma^*)(X^{-} \odot \gamma^*)] \end{array} \end{displaymath} Since $X^* \in A$, we have \begin{displaymath} \mathrm{E}_\Gamma[f(X^* \odot \Gamma) - f(X^* \odot \gamma^*)] = 0. \end{displaymath} Then, \begin{displaymath} \begin{array}{rl} & \mathrm{E}_\Gamma[f(X \odot \Gamma) - f(X \odot \gamma^*)] \\ = & \mathrm{E}_\Gamma[f'(X' \odot \Gamma)(X^- \odot \Gamma) - f'(X'' \odot \gamma^*)(X^{-} \odot \gamma^*)] \\ = & \mathrm{E}_\Gamma[(f'(X' \odot \Gamma) - f'(X'' \odot \gamma^*))(X^- \odot \Gamma)] + \mathrm{E}_\Gamma[f'(X''\odot\gamma^*)(X^- \odot \Gamma - X^- \odot \gamma^*)] \\ = & \mathrm{E}_\Gamma[(f'(X' \odot \Gamma) - f'(X'' \odot \gamma^*))(X^- \odot \Gamma)] \end{array} \end{displaymath} Then, \begin{displaymath} \begin{array}{rl} & \| \mathrm{E}_\Gamma[f(X \odot \Gamma)] - f(X \odot \gamma^*) \|_2 \\ = & \|\mathrm{E}_\Gamma[(f'(X' \odot \Gamma) - f'(X'' \odot \gamma^*))(X^- \odot \Gamma)]\|_2 \end{array} \end{displaymath} Since $\|X^- \odot \gamma'\|_2 \leq \sup\limits_{\gamma \in \mathcal{S}} \|X^- \odot \gamma\|_2 = \inf\limits_{x \in A} \sup\limits_{\gamma \in \mathcal{S}} \|X \odot \gamma - x \odot \gamma\|_2$, and from Jensen's inequality and property of operator norm, \begin{displaymath} \begin{array}{rl} & \|\mathrm{E}_\Gamma[(f'(X' \odot \Gamma) - f'(X'' \odot \gamma^*))(X^- \odot \Gamma)]\|_2 \\ \leq & \mathrm{E}_\Gamma\Big[\|f'(X' \odot \Gamma) - f'(X'' \odot \gamma^*)\|_{op} \|X^- \odot \Gamma \|_2\Big] \\ \leq & 2B\mathrm{E}_\Gamma\Big[\|X^- \odot \Gamma \|_2\Big] \\ \leq & 2B\inf\limits_{x \in A} \sup\limits_{\gamma \in \mathcal{S}} \|X \odot \gamma - x \odot \gamma\|_2 \end{array} \end{displaymath} Finally we have, \begin{displaymath} \begin{array}{rl} & \mathrm{E}_X\bigg[ \|\mathrm{E}_\Gamma[(f'(X' \odot \Gamma) - f'(X'' \odot \gamma^*))(X^- \odot \Gamma)]\|_2\bigg] \\ \leq & 2B\mathrm{E}\bigg[\inf\limits_{x \in A} \sup\limits_{\gamma \in \mathcal{S}} \|X \odot \gamma - x \odot \gamma\|_2\bigg] \leq 2BC \end{array} \end{displaymath} \end{proof} \subsection{Proof of Theorem 3}\label{appendix:proof:thm3} \begin{proof} Induction on the number of the layers $L$. As before, we omit the parameter $\theta$. \\ \textbf{Initial step:} when $L=1$, the statement is obviously true. \\ \textbf{Induction on $L$:} Suppose that the statement is true for neural networks with $L$ layers.\\ Now we prove the case $L+1$. 
From the inductive assumption, we have, \begin{equation}\label{eq:induction} \mathrm{E}_{X}\Big[\big\| \mathrm{E}_{S_L}\big[\mathbf{H}^{(L)}(X, S_L)\big] - \mathbf{h}^{(L)}(X, \mathrm{E}[S_L]) \big\|_2 \Big] \leq \Delta_L \end{equation} where $S_L = \{\Gamma^{(1)}, \ldots, \Gamma^{(L)}\}$ is the dropout random variables for the first $L$ layers, and \begin{displaymath} \Delta_L = (B\gamma)^{L-1}\delta + (\delta + B\gamma\sigma)\bigg(\frac{1-(B\gamma)^{L-1}}{1-B\gamma}\bigg) \end{displaymath} In addition, the $L+1$ layer is $\delta$-approximately expectation-linear, we have: \begin{equation}\label{eq:step} \mathrm{E}_{\mathbf{H}^{(L)}}\Big[\big\|\mathrm{E}_{\Gamma^{(L+1)}}\big[f_{L+1}(\mathbf{H}^{(L)} \odot \Gamma^{(L+1)})\big] - f_{L+1}(\mathbf{H}^{(L)} \odot \gamma^{(L+1)}) \big\|_2\Big] \leq \delta \end{equation} Let $\mathrm{E}[\Gamma^{(l)}] = \gamma^{(l)}, \forall l \in \{1, \ldots, L+1\}$, and let $\mathbf{H}^{(l)}$ and $\mathbf{h}^{(l)}$ be short for $\mathbf{H}^{(l)}(X, S_l)$ and $\mathbf{h}^{(l)}(X, \mathrm{E}(S_l))$, respectively, when there is no ambiguity. Moreover, we denote \begin{displaymath} \mathrm{E}_{S}\big[\mathbf{H}^{(L)}(X, S; \theta)\big] = \mathrm{E}_{S}\big[\mathbf{H}^{(L)}(X, S; \theta) \big| X\big] \end{displaymath} for convenience. Then, \begin{displaymath} \begin{array}{rl} & \mathrm{E}_{X}\Big[\big\| \mathrm{E}_{S_{L+1}}\big[\mathbf{H}^{(L+1)}\big] - \mathbf{h}^{(L+1)} \big\|_2 \Big] \\ = & \mathrm{E}_{X}\bigg[\Big\| \mathrm{E}_{S_L}\Big[ \mathrm{E}_{\Gamma^{(L+1)}}\big[f_{L+1}(\mathbf{H}^{(L)} \odot\Gamma^{(L+1)})\big] - f_{L+1}(\mathbf{h}^{(L)}\odot\gamma^{(L+1)})\Big] \\ & + \mathrm{E}_{S_L}\Big[f_{L+1}(\mathbf{H}^{(L)}\odot\gamma^{(L+1)}) \Big] - f_{L+1}(\mathbf{h}^{(L)}\odot\gamma^{(L+1)}) \Big\|_2\bigg] \\ \leq & \mathrm{E}_{X}\bigg[\Big\| \mathrm{E}_{S_L}\Big[ \mathrm{E}_{\Gamma^{(L+1)}}\big[f_{L+1}(\mathbf{H}^{(L)} \odot\Gamma^{(L+1)})\big] - f_{L+1}(\mathbf{h}^{(L)}\odot\gamma^{(L+1)})\Big] \Big\|_2\bigg] \\ & + \mathrm{E}_{X}\bigg[\Big\| \mathrm{E}_{S_L}\Big[f_{L+1}(\mathbf{H}^{(L)}\odot\gamma^{(L+1)}) \Big] - f_{L+1}(\mathbf{h}^{(L)}\odot\gamma^{(L+1)}) \Big\|_2\bigg] \end{array} \end{displaymath} From Eq.~\ref{eq:step} and Jensen's inequality, we have \begin{equation}\label{eq:part1} \begin{array}{rl} & \mathrm{E}_{X}\bigg[\Big\| \mathrm{E}_{S_L}\Big[ \mathrm{E}_{\Gamma^{(L+1)}}\big[f_{L+1}(\mathbf{H}^{(L)} \odot\Gamma^{(L+1)})\big] - f_{L+1}(\mathbf{h}^{(L)}\odot\gamma^{(L+1)})\Big] \Big\|_2\bigg] \\ \leq & \mathrm{E}_{\mathbf{H}^{(L)}}\bigg[\Big\|\mathrm{E}_{\Gamma^{(L+1)}}\big[f_{L+1}(\mathbf{H}^{(L)} \odot\Gamma^{(L+1)})\big] - f_{L+1}(\mathbf{h}^{(L)}\odot\gamma^{(L+1)})\Big\|_2\bigg] \leq \delta \end{array} \end{equation} and \begin{equation}\label{eq:part2} \begin{array}{rl} & \mathrm{E}_{X}\bigg[\Big\| \mathrm{E}_{S_L}\Big[f_{L+1}(\mathbf{H}^{(L)}\odot\gamma^{(L+1)}) \Big] - f_{L+1}(\mathbf{h}^{(L)}\odot\gamma^{(L+1)}) \Big\|_2\bigg] \\ = & \mathrm{E}_{X}\bigg[\Big\| \mathrm{E}_{S_L}\Big[f_{L+1}(\mathbf{H}^{(L)}\odot\gamma^{(L+1)}) \Big] - f_{L+1}(\mathrm{E}_{S_L}\big[\mathbf{H}^{(L)}\big] \odot \gamma^{(L+1)}) \\ & + f_{L+1}(\mathrm{E}_{S_L}\big[\mathbf{H}^{(L)}\big] \odot \gamma^{(L+1)}) - f_{L+1}(\mathbf{h}^{(L)}\odot\gamma^{(L+1)})\Big\|_2\bigg] \\ \leq & \mathrm{E}_{X}\bigg[\Big\|\mathrm{E}_{S_L}\Big[f_{L+1}(\mathbf{H}^{(L)}\odot\gamma^{(L+1)}) \Big] - f_{L+1}(\mathrm{E}_{S_L}\big[\mathbf{H}^{(L)}\big] \odot \gamma^{(L+1)})\Big\|_2\bigg] \\ & + \mathrm{E}_{X}\bigg[\Big\| f_{L+1}(\mathrm{E}_{S_L}\big[\mathbf{H}^{(L)}\big] \odot 
\gamma^{(L+1)}) - f_{L+1}(\mathbf{h}^{(L)}\odot\gamma^{(L+1)})\Big\|_2\bigg] \end{array} \end{equation} Using Jensen's inequality, property of operator norm and $\mathrm{E}\big[\mathrm{Var}[\mathbf{H}^{(l)}|X]\big] \leq \sigma^2$, we have \begin{equation}\label{eq:part3} \begin{array}{rl} & \mathrm{E}_{X}\bigg[\Big\|\mathrm{E}_{S_L}\Big[f_{L+1}(\mathbf{H}^{(L)}\odot\gamma^{(L+1)}) \Big] - f_{L+1}(\mathrm{E}_{S_L}\big[\mathbf{H}^{(L)}\big] \odot \gamma^{(L+1)})\Big\|_2\bigg] \\ \leq & \mathrm{E}_{\mathbf{H}^{(L)}}\bigg[\Big\| f_{L+1}(\mathbf{H}^{(L)}\odot\gamma^{(L+1)}) - f_{L+1}(\mathrm{E}_{S_L}\big[\mathbf{H}^{(L)}\big] \odot \gamma^{(L+1)}) \Big\|_2\bigg] \\ \leq & B\gamma\mathrm{E}_{\mathbf{H}^{(L)}}\Big[\big\|\mathbf{H}^{(L)} - \mathrm{E}_{S_L}\big[\mathbf{H}^{(L)}\big]\big\|_2\Big] \\ \leq & B\gamma\left( \mathrm{E}_{\mathbf{H}^{(L)}}\Big[\big\|\mathbf{H}^{(L)} - \mathrm{E}_{S_L}\big[\mathbf{H}^{(L)}\big]\big\|^2_2\Big]\right)^{\frac{1}{2}} \leq B\gamma\sigma \end{array} \end{equation} From Eq.~\ref{eq:induction} \begin{equation}\label{eq:part4} \begin{array}{rl} & \mathrm{E}_{X}\bigg[\Big\| f_{L+1}(\mathrm{E}_{S_L}\big[\mathbf{H}^{(L)}\big] \odot \gamma^{(L+1)}) - f_{L+1}(\mathbf{h}^{(L)}\odot\gamma^{(L+1)})\Big\|_2\bigg] \\ = & B\gamma \mathrm{E}_{X}\Big[\big\| \mathrm{E}_{S_L}\big[\mathbf{H}^{(L)}\big] - \mathbf{h}^{(L)} \big\|_2 \Big] \leq B\gamma\Delta_L \end{array} \end{equation} Finally, to sum up with Eq.~\ref{eq:part1}, Eq.~\ref{eq:part2}, , Eq.~\ref{eq:part3}, , Eq.~\ref{eq:part4}, we have \begin{displaymath} \begin{array}{rl} & \mathrm{E}_{X}\Big[\big\| \mathrm{E}_{S_{L+1}}\big[\mathbf{H}^{(L+1)}\big] - \mathbf{h}^{(L+1)} \big\|_2 \Big] \\ \leq & \delta + B\gamma\sigma + B\gamma\Delta_L \\ = & (B\gamma)^{L}\delta + (\delta + B\gamma\sigma)\bigg(\frac{1-(B\gamma)^{L}}{1-B\gamma}\bigg) = \Delta_{L+1} \end{array} \end{displaymath} \end{proof} \section{Expectation-Linearization} \subsection{Proof of Theorem 4: Uniform Deviation Bound}\label{appendix:proof:thm4} Before proving Theorem~\ref{thm:complexity}, we first define the notations. Let $X^n = \{X_1, \ldots, X_n \}$ be a set of $n$ samples of input $X$. For a function space $\mathcal{F}: \mathcal{X} \rightarrow \mathcal{R}$, we use $Rad_n(\mathcal{F}, X^n)$ to denote the \emph{empirical Rademacher complexity} of $\mathcal{F}$, \begin{displaymath} Rad_n(\mathcal{F}, X^n) = \mathrm{E}_{\sigma}\bigg[ \sup\limits_{f \in \mathcal{F}} \Big( \frac{1}{n}\sum\limits_{i=1}^{n} \sigma_i f(X_i) \Big)\bigg] \end{displaymath} and the \emph{Rademacher complexity} is defined as \begin{displaymath} Rad_n(\mathcal{F}) = \mathrm{E}_{X^n}\Big[ Rad_n(\mathcal{F}, X^n) \Big] \end{displaymath} In addition, we import the definition of \emph{dropout Rademacher complexity} from \citet{gao2014dropout}: \begin{displaymath} \begin{array}{rcl} \mathcal{R}_n (\mathcal{H}, X^n, S^n) & = & \mathrm{E}_{\sigma}\bigg[ \sup\limits_{h \in \mathcal{H}} \Big( \frac{1}{n}\sum\limits_{i=1}^{n} \sigma_i h(X_i, S_i) \Big)\bigg] \\ \mathcal{R}_n (\mathcal{H}) & = & \mathrm{E}_{X^n,S^n}\Big[ Rad_n(\mathcal{H}, X^n, S^n) \Big] \end{array} \end{displaymath} where $\mathcal{H}: \mathcal{X} \times \mathcal{S} \rightarrow \mathcal{R}$ is a function space defined on input space $\mathcal{X}$ and dropout variable space $\mathcal{S}$. $\mathcal{R}_n (\mathcal{H}, X^n, S^n)$ and $\mathcal{R}_n (\mathcal{H})$ are the empirical dropout Rademacher complexity and dropout Rademacher complexity, respectively. 
We further denote $\mathcal{R}_n (\mathcal{H}, X^n) \stackrel{\Delta}{=} \mathrm{E}_{S^n}\Big[ Rad_n(\mathcal{H}, X^n, S^n) \Big]$. Now, we define the following function spaces: \begin{displaymath} \begin{array}{rcl} \mathcal{F} & = & \bigg\{f(x; \theta): f(x; \theta) = \mathrm{E}_{S}\Big[\mathbf{H}^{(L)}(x, S; \theta) \Big], \theta \in \Theta\bigg\} \\ \mathcal{G} & = & \bigg\{g(x; \theta): g(x; \theta) = \mathbf{h}^{(L)}(x, \mathrm{E}[S]; \theta), \theta \in \Theta\bigg\} \\ \mathcal{H} & = & \bigg\{h(x, s; \theta): h(x, s; \theta) = \mathbf{h}^{(L)}(x, s; \theta), \theta \in \Theta\bigg\} \end{array} \end{displaymath} Then, the function space of $v(x) = f(x) - g(x)$ is $\mathcal{V} = \{f(x) - g(x): f \in \mathcal{F}, g \in \mathcal{G}\}$. \begin{lemma}\label{lem:ineq:rad} \begin{displaymath} Rad_n(\mathcal{F}, X^n) \leq \mathcal{R}_n(\mathcal{H}, X^n) \end{displaymath} \end{lemma} \begin{proof} \begin{displaymath} \begin{array}{rl} & \mathcal{R}_n(\mathcal{H}, X^n) = \mathrm{E}_{S^n}\Big[ Rad_n(\mathcal{H}, X^n, S^n) \Big] \\ = & \mathrm{E}_{S^n} \bigg[ \mathrm{E}_{\sigma}\Big[ \sup\limits_{h \in \mathcal{H}} \Big( \frac{1}{n}\sum\limits_{i=1}^{n} \sigma_i h(X_i, S_i) \Big)\Big]\bigg] \\ = & \mathrm{E}_{\sigma} \bigg[ \mathrm{E}_{S^n}\Big[ \sup\limits_{h \in \mathcal{H}} \Big( \frac{1}{n}\sum\limits_{i=1}^{n} \sigma_i h(X_i, S_i) \Big)\Big]\bigg] \\ \geq & \mathrm{E}_{\sigma} \bigg[ \sup\limits_{h \in \mathcal{H}} \mathrm{E}_{S^n}\Big[ \Big( \frac{1}{n}\sum\limits_{i=1}^{n} \sigma_i h(X_i, S_i) \Big)\Big]\bigg] \\ = & \mathrm{E}_{\sigma} \bigg[ \sup\limits_{h \in \mathcal{H}} \Big( \frac{1}{n}\sum\limits_{i=1}^{n} \sigma_i \mathrm{E}_{S_i}\big[h(X_i, S_i)\big] \Big)\bigg] \\ = & \mathrm{E}_{\sigma} \bigg[ \sup\limits_{h \in \mathcal{H}} \Big( \frac{1}{n}\sum\limits_{i=1}^{n} \sigma_i \mathrm{E}_{S_i}\big[\mathbf{H}^{(L)}(X_i, S_i; \theta)\big] \Big)\bigg] = Rad_n(\mathcal{F}, X^n) \end{array} \end{displaymath} \end{proof} From Lemma~\ref{lem:ineq:rad}, we have $Rad_n(\mathcal{F}) \leq \mathcal{R}_n(\mathcal{H})$. \begin{lemma}\label{lem:drop:rad} \begin{displaymath} \begin{array}{rcl} \mathcal{R}_n(\mathcal{H}) & \leq & \frac{\alpha B^{L} \gamma^{L/2}}{\sqrt{n}} \\ Rad_n(\mathcal{G}) & \leq & \frac{\alpha B^{L}}{\sqrt{n}} \end{array} \end{displaymath} \end{lemma} \begin{proof} See Theorem 4 in \citet{gao2014dropout}. \end{proof} Now, we can prove Theorem~\ref{thm:complexity}. \paragraph{Proof of Theorem 4} \begin{proof} From Rademacher-based uniform bounds theorem, with probability $\geq 1 - \delta$, \begin{displaymath} \sup\limits_{v \in \mathcal{V}} |\Delta - \hat{\Delta}| < 2 Rad_n(\mathcal{V}) + \beta \sqrt{\frac{\log(1/\delta)}{n}} \end{displaymath} Since $\mathcal{V} = \mathcal{F} - \mathcal{G}$, we have \begin{displaymath} Rad_n(\mathcal{V}) = Rad_n(\mathcal{F} - \mathcal{G}) \leq Rad_n(\mathcal{F}) + Rad_n(\mathcal{G}) \leq \frac{\alpha B^{L} (\gamma^{L/2}+ 1)}{\sqrt{n}} \end{displaymath} Then, finally, we have that with probability $\geq 1 - \delta$, \begin{displaymath} \sup\limits_{\theta \in \Theta} |\Delta - \hat{\Delta}| < \frac{2\alpha B^{L} (\gamma^{L/2}+ 1)}{\sqrt{n}} + \beta \sqrt{\frac{\log(1/\delta)}{n}} \end{displaymath} \end{proof} \subsection{Proof of Theorem 5: Non-Uniform Bound of Model Accuracy}\label{appendix:proof:thm5} For convenience, we denote $\lambda = \{\theta_1, \ldots, \theta_{L-1}\}$. 
Then $\theta = \{\lambda, \eta\}$, and MLE $\hat{\theta} = \{\hat{\lambda}, \hat{\eta} \}$ \begin{lemma}\label{lem:softmax:op} \begin{equation} \|\nabla f_L(\cdot; \eta)^T \|_{op} \leq 2 \| \eta \|_2 \end{equation} \end{lemma} \begin{proof} denote \begin{displaymath} A = \nabla f_L(\cdot; \eta)^T = \left[ p_y (\eta_y - \overline{\eta})^T \right]\Big|_{y=1}^{k} \end{displaymath} where $p_y = p(y|x, s; \theta)$, $\overline{\eta} = \mathrm{E}\left[\eta_{Y}\right] = \sum\limits_{y=1}^{k}p_y \eta_y$. For each $v$ such that $\|v\|_2 = 1$, \begin{displaymath} \begin{array}{rcl} \|Av\|_2^2 & = & \sum\limits_{y \in \mathcal{Y}} \left( p_y \left( \eta_y - \overline{\eta} \right)^T v\right)^2 \leq \sum\limits_{y \in \mathcal{Y}} \|p_y \left( \eta_y - \overline{\eta} \right) \|_2^2 \|v\|_2^2 = \sum\limits_{y \in \mathcal{Y}} \|p_y \left( \eta_y - \overline{\eta} \right) \|_2^2 \\ & \leq & \sum\limits_{y \in \mathcal{Y}} p_y \|\eta_y - \overline{\eta}\|_2^2 \leq \sum\limits_{y \in \mathcal{Y}} 2 p_y \left( \|\eta\|_2^2 + \sum\limits_{y'\in\mathcal{Y}}p_{y'}\|\eta_{y'}\|_2^2\right) \\ & = & 4 \sum\limits_{y \in \mathcal{Y}} p_y \|\eta_y\|_2^2 \leq 4\|\eta\|_2^2 \end{array} \end{displaymath} So we have $\|A\|_{op} \leq 2\|\eta\|_2$. \end{proof} \begin{lemma}\label{lem:norm:constrain} If parameter $\tilde{\theta} = \{\hat{\lambda}, \eta\}$ satisfies that $\|\eta\|_2 \leq \frac{\delta}{4\beta}$, then $V(D; \tilde{\theta}) \leq \delta$, where $V(D; \theta)$ is defined in Eq.~\eqref{eq:penalty}. \end{lemma} \begin{proof} Let $S_L = \{\Gamma^{(1)}, \ldots, \Gamma^{(L)}\}$, and let $\mathbf{H}^{(l)}$ and $\mathbf{h}^{(l)}$ be short for $\mathbf{H}^{(l)}(X, S_l; \tilde{\theta})$ and $\mathbf{h}^{(l)}(X, \mathrm{E}(S_l); \tilde{\theta})$, respectively. From lemma~\ref{lem:softmax:op}, we have $\|f_L(x; \eta) - f_L(y; \eta)\|_2 \leq 2\|\eta\|_2 \|x - y\|_2$. Then, \begin{displaymath} \begin{array}{rcl} \left\|\mathrm{E}_{S_L}\left[\mathbf{H}^{L}\right] - \mathbf{h}^{L}\right\|_2 & = & \left\|\mathrm{E}_{S_{L-1}}\left[f_{L}(\mathbf{H}^{(L-1)}; \eta)\right] - f_{L}(\mathbf{h}^{(L-1)}; \eta)\right\|_2 \\ & \leq & \mathrm{E}_{S_{L-1}}\left\|f_{L}(\mathbf{H}^{(L-1)}; \eta) - f_{L}(\mathbf{h}^{(L-1)}; \eta) \right\|_2 \\ & \leq & 2 \|\eta\|_2 \left\| \mathbf{H}^{(L-1)} - \mathbf{h}^{(L-1)} \right\|_2 \\ & \leq & 4\beta\|\eta\|_2 \leq \delta \end{array} \end{displaymath} \end{proof} Lemma~\ref{lem:norm:constrain} says that we can get $\theta$ satisfying the expectation-linearization constrain by explicitly scaling down $\hat{\eta}$ while keeping $\hat{\lambda}$. In order to prove Theorem~\ref{thm:det:bound}, we make the following assumptions: \begin{itemize} \item The dimension of $\mathbf{h}^{(L-1)}$ is $d$, i.e. $\mathbf{h}^{(L-1)} \in \mathcal{R}^d$. \item Since $\forall y \in \mathcal{Y}, p(y|x; \hat{\theta}) > 0$, we assume $p(y|x; \hat{\theta}) \geq 1/b$, where $b \geq |\mathcal{Y}| = k$. \item As in the body text, let $p(y|x, s; \hat{\theta})$ be nonuniform, and in particular let \\ $\hat{\eta}_{y^*}^T\mathbf{h}^{(L-1)}(x, s; \hat{\lambda}) - \hat{\eta}_{y}^T\mathbf{h}^{(L-1)}(x, s; \hat{\lambda}) > c\|\hat{\eta}\|_2, \forall y \neq y^*$. \end{itemize} For convenience, we denote $\eta^T\mathbf{h}^{(L-1)}(x, s; \lambda) = \eta^T u_y (x, s; \lambda)$, where $u_y^T (x, s; \lambda) = (v_0^T, \ldots, v_k^T)$ and \begin{displaymath} v_i = \left\{\begin{array}{ll} \mathbf{h}^{(L-1)}(x, s; \lambda) & \textrm{if } i = y \\ 0 & \textrm{otherwise} \end{array}\right. 
\end{displaymath} To prove Theorem~\ref{thm:det:bound}, we first prove the following lemmas. \begin{lemma}\label{lem:prob:low:bound} If $p(y|x; \hat{\theta}) \geq 1/b$, then $\forall \alpha \in [0,1]$, for parameter $\tilde{\theta} = \{\hat{\lambda}, \alpha\hat{\eta}\}$, we have \begin{displaymath} p(y|x; \tilde{\theta}) \geq \frac{1}{b} \end{displaymath} \end{lemma} \begin{proof} We define \begin{displaymath} f(\alpha) \stackrel{\Delta}{=} (y|x, s; \tilde{\theta}) = \frac{e^{\alpha\eta_y^T \mathbf{h}^{(L - 1)}(x, s; \hat{\lambda})}}{\sum\limits_{y' \in \mathcal{Y}} e^{\alpha\eta_{y'}^T \mathbf{h}^{(L - 1)}(x,s; \hat{\lambda})}} = \frac{\Big(e^{\eta_y^T \mathbf{h}^{(L - 1)}(x, s; \hat{\lambda})}\Big)^\alpha}{\sum\limits_{y' \in \mathcal{Y}} \Big(e^{\eta_{y'}^T \mathbf{h}^{(L - 1)}(x,s; \hat{\lambda})}\Big)^\alpha} \end{displaymath} Since $\mathcal{Y} = \{1, \ldots, k\}$, for fixed $x \in \mathcal{X}, s \in \mathcal{S}$, $\log f(\alpha)$ is a concave function w.r.t $\alpha$. \\ Since $b \geq k$, we have \begin{displaymath} \log f(\alpha) \geq (1-\alpha) \log f(0) + \alpha \log f(1) \geq -\log b \end{displaymath} So we have $\forall x, s$, $p(y|x, s; \tilde{\theta}) \geq 1/b$. Then \begin{displaymath} p(y|x; \tilde{\theta}) = \mathrm{E}_S \left[ p(y|x, S; \hat{\theta})\right] \geq \frac{1}{b} \end{displaymath} \end{proof} \begin{lemma}\label{lem:prob:up:bound} if $y$ is not the majority class, i.e. $y \neq y^*$, then for parameter $\tilde{\theta} = \{\hat{\lambda}, \alpha\hat{\eta}\}$ \begin{displaymath} p(y|x, s, \tilde{\theta}) \leq e^{-c\alpha\|\hat{\eta}\|_2} \end{displaymath} \end{lemma} \begin{proof} \begin{displaymath} p(y|x, s, \tilde{\theta}) = \frac{e^{\alpha \hat{\eta}^T u_y}}{\sum\limits_{y'\in\mathcal{Y}} e^{\alpha \hat{\eta}^T u_{y'}}} \leq \frac{e^{\alpha \hat{\eta}^T u_y}}{e^{\alpha \hat{\eta}^T u_{y^*}}} \leq e^{-c\alpha\|\hat{\eta}\|_2} \end{displaymath} \end{proof} \begin{lemma}\label{lem:hessian1:bound} For a fixed $x$ and $s$, the absolute value of the entry of the vector under the parameter $\tilde{\theta} = \{\hat{\lambda}, \alpha\hat{\eta}\}$: \begin{displaymath} |p(y|x, s; \tilde{\theta}) (u_y - \mathrm{E}_Y[u_Y])|_i \leq \beta(k-1)e^{-c\alpha\|\hat{\eta}\|_2} \end{displaymath} \end{lemma} \begin{proof} Suppose $y$ is the majority class of $p(y|x, s; \tilde{\theta})$. Then, \begin{displaymath} u_y - \mathrm{E}_y[u_Y] = \left(v_{y'}\right)_{y'=1}^{k} \end{displaymath} where \begin{displaymath} v_y = \left\{ \begin{array}{ll} (1 - p(y|x, s; \tilde{\theta}) \mathbf{h}^{(L-1)} & \textrm{if } y = y^* \\ -p(y|x, s; \tilde{\theta}) \mathbf{h}^{(L-1)} & \textrm{otherwise} \end{array}\right. \end{displaymath} From Lemma~\ref{lem:prob:up:bound}, we have \begin{displaymath} |p(y|x, s; \tilde{\theta}) (u_y - \mathrm{E}_Y[u_Y])|_i \leq |(u_y - \mathrm{E}_Y[u_Y])|_i \leq \beta(k-1)e^{-c\alpha\|\hat{\eta}\|_2} \end{displaymath} Now, we suppose $y$ is not the majority class of $p(y|x, s; \tilde{\theta})$. Then, \begin{displaymath} |p(y|x, s; \tilde{\theta}) (u_y - \mathrm{E}_Y[u_Y])|_i \leq p(y|x, s; \tilde{\theta}) \beta \leq \beta e^{-c\alpha\|\hat{\eta}\|_2} \end{displaymath} Overall, the lemma follows. 
\end{proof} \begin{lemma}\label{lem:hessian2:bound} We denote the matrix \begin{displaymath} \begin{array}{rl} A \stackrel{\Delta}{=} & \mathrm{E}_S\left[ \frac{p(y|x, s; \tilde{\theta})}{p(y|x; \tilde{\theta})} (u_y - \mathrm{E}_Y[u_Y]) (u_y - \mathrm{E}_Y[u_Y])^T\right] \\ & - \mathrm{E}_S \left[ \frac{p(y|x, s; \tilde{\theta})}{p(y|x; \tilde{\theta})} (u_y - \mathrm{E}_Y[u_Y])\right] \mathrm{E}_S \left[ \frac{p(y|x, s; \tilde{\theta})}{p(y|x; \tilde{\theta})} (u_y - \mathrm{E}_Y[u_Y])\right]^T \end{array} \end{displaymath} Then the absolute value of the entry of $A$ under the parameter $\tilde{\theta} = \{\hat{\lambda}, \alpha\hat{\eta}\}$: \begin{displaymath} |A_{ij}| \leq 2b(k-1)\beta^2 e^{-c\alpha\|\hat{\eta}\|_2} \end{displaymath} \end{lemma} \begin{proof} From Lemma~\ref{lem:prob:low:bound}, we have $p(y|x; \tilde{\theta}) \geq 1/b$. Additionally, the absolute value of the entry of $u_y - \mathrm{E}_Y[u_Y]$ is bounded by $\beta$. We have for each $i$ \begin{displaymath} \left| \mathrm{E}_S \left[ \frac{p(y|x, s; \tilde{\theta})}{p(y|x; \tilde{\theta})} (u_y - \mathrm{E}_Y[u_Y])\right]\right|_i \leq \mathrm{E}_S \left[ \frac{p(y|x, s; \tilde{\theta})}{p(y|x; \tilde{\theta})} \beta\right] = \beta \end{displaymath} Then from Lemma~\ref{lem:hessian1:bound} \begin{displaymath} |A_{ij}| \leq 2b(k-1)\beta^2 e^{-c\alpha\|\hat{\eta}\|_2} \end{displaymath} \end{proof} \begin{lemma}\label{lem:hessian3:bound} We denote the matrix \begin{displaymath} B \stackrel{\Delta}{=} \mathrm{E}_S\left[ \frac{p(y|x, s; \tilde{\theta})}{p(y|x; \tilde{\theta})} \left( \mathrm{E}_Y\left[ u_Y u_Y^T\right] - \mathrm{E}_Y[u_Y] \mathrm{E}_Y[u_Y]^T \right)\right] \end{displaymath} Then the absolute value of the entry of $B$ under the parameter $\tilde{\theta} = \{\hat{\lambda}, \alpha\hat{\eta}\}$: \begin{displaymath} |B_{ij}| \leq 2(k-1)\beta^2 e^{-c\alpha\|\hat{\eta}\|_2} \end{displaymath} \end{lemma} \begin{proof} We only need to prove that for fixed $x$ and $s$, for each $i, j$: \begin{displaymath} \left|\mathrm{E}_Y\left[ u_Y u_Y^T\right] - \mathrm{E}_Y[u_Y] \mathrm{E}_Y[u_Y]^T\right|_{ij} \leq 2(k-1)\beta^2 e^{-c\alpha\|\hat{\eta}\|_2} \end{displaymath} Since \begin{displaymath} \left|\mathrm{E}_Y\left[ u_Y u_Y^T\right] - \mathrm{E}_Y[u_Y] \mathrm{E}_Y[u_Y]^T\right|_{ij} = \left|\mathrm{Cov}_Y [(u_Y)_i, (u_Y)_j]\right| \leq \beta^2 \sum\limits_{y=1}^{k} p(y|x, s; \tilde{\theta}) - p(y|x, s; \tilde{\theta})^2 \end{displaymath} Suppose $y$ is the majority class. Then from Lemma~\ref{lem:prob:up:bound}, \begin{displaymath} p(y|x, s; \tilde{\theta}) - p(y|x, s; \tilde{\theta})^2 \leq 1 - p(y|x, s; \tilde{\theta}) \leq (k-1) e^{-c\alpha\|\hat{\eta}\|_2} \end{displaymath} If $y$ is not the majority class. Then, \begin{displaymath} p(y|x, s; \tilde{\theta}) - p(y|x, s; \tilde{\theta})^2 \leq p(y|x, s; \tilde{\theta}) \leq e^{-c\alpha\|\hat{\eta}\|_2} \end{displaymath} So we have \begin{displaymath} \sum\limits_{y=1}^{k} p(y|x, s; \tilde{\theta}) - p(y|x, s; \tilde{\theta})^2 \leq 2(k-1) e^{-c\alpha\|\hat{\eta}\|_2} \end{displaymath} The lemma follows. 
\end{proof} \begin{lemma}\label{lem:eigen:bound} Under the parameter $\tilde{\theta} = \{\hat{\lambda}, \alpha\hat{\eta}\}$, the largest eigenvalue of the matrix \begin{equation}\label{eq:hessian} \frac{1}{n} \sum\limits_{i=1}^{n} \left(A(x_i, y_i) - B(x_i, y_i)\right) \end{equation} is at most \begin{displaymath} 2dk(k-1)(b+1)\beta^2 e^{-c\alpha\|\hat{\eta}\|_2} \end{displaymath} \end{lemma} \begin{proof} From Lemma~\ref{lem:hessian2:bound} and Lemma~\ref{lem:hessian3:bound}, each entry of the matrix in \eqref{eq:hessian} is at most $2(k-1)(b+1)\beta^2 e^{-c\alpha\|\hat{\eta}\|_2}$. Thus, by Gershgorin's theorem, the maximum eigenvalue of the matrix in \eqref{eq:hessian} is at most $2dk(k-1)(b+1)\beta^2 e^{-c\alpha\|\hat{\eta}\|_2}$. \end{proof} Now, we can prove Theorem~\ref{thm:det:bound} by constructing a scaled version of $\hat{\theta}$ that satisfies the expectation-linearization constraint. \paragraph{Proof of Theorem~\ref{thm:det:bound}} \begin{proof} Consider the likelihood evaluated at $\tilde{\theta} = \{\hat{\lambda}, \alpha\hat{\eta} \}$, where $\alpha = \frac{\delta}{4\beta\|\hat{\eta}\|_2}$. If $\alpha > 1$, then $\|\hat{\eta}\|_2 < \frac{\delta}{4\beta}$, and by Lemma~\ref{lem:norm:constrain} the MLE $\hat{\theta}$ already satisfies the expectation-linearization constraint. So we can assume that $0 \leq \alpha \leq 1$, and we know that $\tilde{\theta}$ satisfies $V(D; \tilde{\theta}) \leq \delta$. Then, \begin{displaymath} \Delta_l(\hat{\theta}, \hat{\theta}_\delta) \leq \Delta_l(\hat{\theta}, \tilde{\theta}) = \frac{1}{n} (l(D; \hat{\theta}) - l(D; \tilde{\theta})) = g(\hat{\lambda}, \hat{\eta}) - g(\hat{\lambda}, \alpha\hat{\eta}) \end{displaymath} where $g(\lambda, \eta) = \frac{1}{n} l(D; (\lambda, \eta))$. Taking the second-order Taylor expansion about $\hat{\eta}$, we have \begin{displaymath} g(\hat{\lambda}, \alpha\hat{\eta}) = g(\hat{\lambda}, \hat{\eta}) + \nabla_{\eta}^T g(\hat{\lambda}, \hat{\eta}) (\alpha\hat{\eta} - \hat{\eta}) + (\alpha\hat{\eta} - \hat{\eta})^T \nabla_{\eta}^2 g(\hat{\lambda}, \hat{\eta}) (\alpha\hat{\eta} - \hat{\eta}) \end{displaymath} Since $\hat{\theta}$ is the MLE, the first-order term $\nabla_{\eta}^T g(\hat{\lambda}, \hat{\eta}) (\alpha\hat{\eta} - \hat{\eta}) = 0$. The Hessian in the second-order term is just Eq.~\eqref{eq:hessian}. Thus, from Lemma~\ref{lem:eigen:bound} we have \begin{displaymath} \begin{array}{rcl} g(\hat{\lambda}, \alpha\hat{\eta}) & \leq & g(\hat{\lambda}, \hat{\eta}) - (1-\alpha)^2\|\hat{\eta}\|_{2}^{2} 2dk(k-1)(b+1)\beta^2 e^{-c\alpha\|\hat{\eta}\|_2} \\ & = & g(\hat{\lambda}, \hat{\eta}) - 2dk(k-1)(b+1)\beta^2 \left(\|\hat{\eta}\|_2 -\frac{\delta}{4\beta}\right)^2 e^{-c\delta/4\beta} \\ & = & g(\hat{\lambda}, \hat{\eta}) - c_1 \beta^2 \left(\|\hat{\eta}\|_2 -\frac{\delta}{4\beta}\right)^2 e^{-c_2\delta/4\beta} \end{array} \end{displaymath} by setting $c_1 = 2dk(k-1)(b+1)$ and $c_2 = c$. Then the theorem follows. \end{proof} \subsection{Proof of Theorem 6: Uniform Bound of Model Accuracy}\label{appendix:proof:thm6} In the following, we denote $\tilde{\theta} = \{\hat{\lambda}, \alpha\hat{\eta} \}$. \begin{lemma}\label{lem:convex0} For each $y\in \mathcal{Y}$, if $p(y|x, s;\hat{\theta}) \geq 1/k$, then $\forall \alpha \in [0, 1]$ \begin{displaymath} p(y|x, s; \tilde{\theta}) \geq \frac{1}{k} \end{displaymath} \end{lemma} \begin{proof} This lemma can be regarded as a corollary of Lemma~\ref{lem:prob:low:bound}. \end{proof} \begin{lemma}\label{lem:convex1} For a fixed $x$ and $s$, we denote $e^{\hat{\eta}_y^T \mathbf{h}^{(L-1)}(x, s; \hat{\lambda})} = w_y$.
Then we have \begin{displaymath} p(y|x, s, \tilde{\theta}) = \frac{e^{\alpha\hat{\eta}_y^T \mathbf{h}^{(L-1)}(x, s; \hat{\lambda})}}{\sum\limits_{y'\in \mathcal{Y}} e^{\alpha\hat{\eta}_{y'}^T \mathbf{h}^{(L-1)}(x, s; \hat{\lambda})}} = \frac{(w_y)^\alpha}{\sum\limits_{y'\in\mathcal{Y}} (w_{y'})^\alpha} \end{displaymath} Additionally, we denote $g_s(\alpha) = \sum\limits_{y'\in\mathcal{Y}} p(y'|x, s; \tilde{\theta})\log w_{y'} - \log w_y$. We assume $g_s(0) \geq 0$. Then we have $\forall \alpha \geq 0$ \begin{displaymath} g_s(\alpha) \geq 0 \end{displaymath} \end{lemma} \begin{proof} \begin{displaymath} \frac{\partial{g_s(\alpha)}}{\partial{\alpha}} = \sum\limits_{y'\in \mathcal{Y}}\log w_{y'} \frac{\partial p(y'|x, s; \tilde{\theta})}{\partial \alpha} = \mathrm{Var}_Y\left[\log w_Y|X-x,S=s\right] \geq 0 \end{displaymath} So $g_s(\alpha)$ is non-decreasing. Since $g_s(0) \geq 0$, we have $g_s(\alpha) \geq 0$ when $\alpha \geq 0$. \end{proof} From above lemma, we have for each training instance $(x_i, y_i) \in D$, and $\forall \alpha\in [0,1]$, \begin{equation}\label{eq:msy} \mathrm{E}_Y \left[\log p(Y|x_i, s; \tilde{\theta})\right] \geq \log p(y_i|x_i, s; \tilde{\theta}) \end{equation} For convenience, we define \begin{displaymath} m(s, y) = \log p(y|x, s; \tilde{\theta}) - \mathrm{E}_Y \left[\log p(Y|x, s; \tilde{\theta})\right] \end{displaymath} \begin{lemma}\label{lem:convex2} If $y$ satisfies Lemma~\ref{lem:convex0} and $g_s(\alpha) \geq 0$, then \begin{displaymath} \mathrm{Var}_Y[m(s, Y)] \geq m(s, y)^2 \end{displaymath} \end{lemma} \begin{proof} First we have \begin{displaymath} m(s, y) = \log p(y|x, s; \tilde{\theta}) - \log 1/k - KL\left(p(\cdot|x, s; \tilde{\theta})|\mathrm{Unif}(\mathcal{Y})\right) \leq 0 \end{displaymath} So we have \begin{displaymath} \begin{array}{rcl} \left(\mathrm{Var}_Y \left[ m(s, Y)\right]\right)^{1/2} & = & \sqrt{\mathrm{E}_Y \left[\left(\log p(Y|x, s; \tilde{\theta}) - \mathrm{E}_Y \left[\log p(Y|x, s; \tilde{\theta})\right]\right)^2 \right]} \\ & \geq & \mathrm{E}_Y \left[\left|\log p(Y|x, s; \tilde{\theta}) - \mathrm{E}_Y \left[\log p(Y|x, s; \tilde{\theta})\right]\right| \right] \\ & = & \mathrm{E}_Y \left[\left|KL\left(p(\cdot|x, s; \tilde{\theta})|\mathrm{Unif}(\mathcal{Y})\right) + \log 1/k - \log p(Y|x, s; \tilde{\theta}) \right|\right] \\ & = & \mathrm{E}_Y \left[KL\left(p(\cdot|x, s; \tilde{\theta})|\mathrm{Unif}(\mathcal{Y})\right) + \left|\log 1/k - \log p(Y|x, s; \tilde{\theta}) \right|\right] \\ & \geq & KL\left(p(\cdot|x, s; \tilde{\theta})|\mathrm{Unif}(\mathcal{Y})\right) + \mathrm{E}_Y \left[\log p(Y|x, s; \tilde{\theta}) - \log 1/k\right] \\ & = & 2KL\left(p(\cdot|x, s; \tilde{\theta})|\mathrm{Unif}(\mathcal{Y})\right) \end{array} \end{displaymath} As $KL\left(p(\cdot|x, s; \tilde{\theta})|\mathrm{Unif}(\mathcal{Y})\right) \geq 0$ and $\log p(y|x, s; \tilde{\theta}) \geq \log 1/k$. So we have \begin{displaymath} 2KL\left(p(\cdot|x, s; \tilde{\theta})|\mathrm{Unif}(\mathcal{Y})\right) \geq KL\left(p(\cdot|x, s; \tilde{\theta})|\mathrm{Unif}(\mathcal{Y})\right) + \log 1/k - \log p(y|x, s; \tilde{\theta}) = -m(s, y) \end{displaymath} Then the lemma follows. 
\end{proof} From Lemma~\ref{lem:convex2} and Eq.~\eqref{eq:msy}, we have for each training instance $(x_i, y_i) \in D$, and $\forall \alpha\in [0,1]$, \begin{equation}\label{eq:varsy} \mathrm{Var}_Y[m(s, Y)] \geq m(s, y_i)^2 \end{equation} \begin{lemma}\label{lem:convex3} For each training instance $(x_i, y_i) \in D$, and $\forall \alpha \in [0, 1]$, we have \begin{displaymath} \log p(y_i|x_i; \{\hat{\lambda}, \alpha\hat{\eta}\}) \geq (1-\alpha) \log p(y_i|x_i; \{\hat{\lambda},0\}) + \alpha \log p(y_i|x_i; \{\hat{\lambda}, \hat{\eta}\}) \end{displaymath} \end{lemma} \begin{proof} We define \begin{displaymath} f(\alpha) = \log p(y_i|x_i; \{\hat{\lambda}, \alpha\hat{\eta}\}) - (1-\alpha) \log p(y_i|x_i; \{\hat{\lambda},0\}) - \alpha \log p(y_i|x_i; \{\hat{\lambda}, \hat{\eta}\}) \end{displaymath} Because $f(0) = f(1) = 0$, we only need to prove that $f(\alpha)$ is concave on $[0, 1]$. We have \begin{displaymath} \nabla^2 f(\alpha) = - \mathrm{E}_{S|Y=y_i}\left[\mathrm{Var}_Y \left[ m(S, Y)\right]\right] + \mathrm{Var}_{S|Y=y_i}\left[ m(S, y_i)\right] \end{displaymath} where $S|Y=y_i$ is under the probability distribution $p(s|Y=y_i, x_i; \tilde{\theta}) = \frac{p(y_i|x_i, S; \tilde{\theta})p(s)}{p(y_i|x_i; \tilde{\theta})}$\\ From Eq.~\eqref{eq:varsy}, we have \begin{displaymath} \mathrm{E}_{S|Y=y_i}\left[\mathrm{Var}_Y \left[ m(S, Y)\right]\right] \geq \mathrm{E}_{S|Y=y_i}\left[ m(S, y_i)^2\right] \geq \mathrm{Var}_{S|Y=y_i}\left[ m(S, y_i)\right] \end{displaymath} So we have $\nabla^2 f(\alpha) \leq 0$. The lemma follows. \end{proof} Now, we can prove Theorem~\ref{thm:uniform:bound} by using the same construction of an expectation-linearizing parameter as in Theorem~\ref{thm:det:bound}. \paragraph{Proof of Theorem~\ref{thm:uniform:bound}} \begin{proof} Consider the same parameter $\tilde{\theta} = \{\hat{\lambda}, \alpha\hat{\eta} \}$, where $\alpha = \frac{\delta}{4\beta\|\hat{\eta}\|_2} \leq 1$. we know that $\tilde{\theta}$ satisfies $V(D; \tilde{\theta}) \leq \delta$. Then, \begin{displaymath} \Delta_l(\hat{\theta}, \hat{\theta}_\delta) \leq \Delta_l(\hat{\theta}, \tilde{\theta}) = \frac{1}{n} (l(D; \hat{\theta}) - l(D; \tilde{\theta})) \end{displaymath} From Lemma~\ref{lem:convex3} we have: \begin{displaymath} l(D; \tilde{\theta}) = l(D; \{\hat{\lambda}, \alpha\hat{\eta}\}) \geq (1-\alpha) l(D; \{\hat{\lambda}, 0\}) + \alpha l(D; \{\hat{\lambda}, \hat{\eta}\}) \end{displaymath} So \begin{displaymath} \begin{array}{rcl} \Delta_l(\hat{\theta}, \hat{\theta}_\delta) & \leq & (1-\alpha) \frac{1}{n} \left( l(D; \hat{\theta}) - l(D; \{\hat{\lambda}, 0\})\right) \\ & = & (1 - \alpha) \frac{1}{n} \sum\limits_{i=1}^{n} \log p(y_i|x_i;\hat{\theta}) - \log\mathrm{Unif}(\mathcal{Y}) \\ & \asymp & (1 - \alpha)\mathrm{E}\left[\mathrm{KL}\left(p(\cdot|X; \theta) \| \mathrm{Unif}(\mathcal{Y})\right)\right] \\ & \leq & \left(1 - \frac{\delta}{4\beta\|\hat{\eta}\|_2}\right) \mathrm{E}\left[\mathrm{KL}\left(p(\cdot|X; \theta) \| \mathrm{Unif}(\mathcal{Y})\right) \right] \end{array} \end{displaymath} \end{proof} \section{Detailed Description of Experiments}\label{appendix:experiment} \subsection{Neural Network Architectures}\label{appendix:architec} \paragraph{MNIST} For MNIST, we train 6 different fully-connected (dense) neural networks with 2 or 3 layers (see Table~\ref{tab:results}). For all architectures, we used dropout rate $p=0.5$ for all hidden layers and $p=0.2$ for the input layer. 
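As an illustration, a minimal sketch of one such fully-connected dropout network is given below. The sketch uses PyTorch purely for illustration (the framework choice and the 1024-unit hidden width are assumptions of this sketch, not details taken from the experiments).
\begin{verbatim}
import torch.nn as nn

# Minimal sketch of a 2-hidden-layer dense network with the dropout rates
# described above: p = 0.2 on the input layer and p = 0.5 on hidden layers.
# The 1024-unit width is illustrative only.
mnist_mlp = nn.Sequential(
    nn.Flatten(),          # 28x28 MNIST image -> 784-dimensional vector
    nn.Dropout(p=0.2),     # input-layer dropout
    nn.Linear(784, 1024),
    nn.ReLU(),
    nn.Dropout(p=0.5),     # hidden-layer dropout
    nn.Linear(1024, 1024),
    nn.ReLU(),
    nn.Dropout(p=0.5),
    nn.Linear(1024, 10),   # 10 output classes
)
\end{verbatim}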
\paragraph{CIFAR-10 and CIFAR-100} For the two CIFAR datasets, we used the same architecture as in \citet{srivastava2014dropout} --- three convolutional layers followed by two fully-connected hidden layers. The convolutional layers have 96, 128, and 256 filters, respectively, with a $5\times5$ receptive field applied with a stride of 1. Each convolutional layer is followed by a max-pooling layer that pools $3\times3$ regions at a stride of 2. The fully-connected layers have 2048 units each. All units use the rectified linear activation function. Dropout was applied to all layers with dropout rates $p = (0.1, 0.25, 0.25, 0.5, 0.5, 0.5)$ for the layers going from the input to the convolutional layers to the fully-connected layers. \subsection{Neural Network Training} Neural network training in all the experiments is performed with mini-batch stochastic gradient descent (SGD) with momentum. We choose an initial learning rate $\eta_0$, and the learning rate is updated at each epoch of training as $\eta_t = \eta_0/(1 + \rho t)$, where $\rho$ is the decay rate and $t$ is the number of epochs completed. We run each experiment for 2,000 epochs and choose the parameters achieving the best performance on the validation sets. Table~\ref{tab:hyper-parameter} summarizes the chosen hyper-parameters for all experiments. Most of the hyper-parameters are taken from \citet{srivastava2014dropout}. For some experiments, however, we could not reproduce the performance reported in \citet{srivastava2014dropout} (one possible reason is that we used a different library for our implementation). For these experiments, we tune the hyper-parameters on the validation sets by random search. Due to time constraints it is infeasible to do a random search across the full hyper-parameter space; thus, we reuse as many of the hyper-parameters reported in \citet{srivastava2014dropout} as possible. \subsection{Effect of Expectation-linearization Rate $\lambda$} Table~\ref{tab:result:lambda} gives the detailed results of the experiments on the effect of $\lambda$. For MNIST, it lists the error rates under different $\lambda$ values for the six network architectures. For the two CIFAR datasets, it gives the error rates under different $\lambda$ values, along with the empirical expectation-linearization risk $\hat{\Delta}$. \end{document}
Pruning Convolutional Neural Networks for Resource Efficient Inference
1611.06440
Table 1: Spearman’s rank correlation of criteria vs. oracle for convolutional feature maps of VGG-16 and AlexNet fine-tuned on Birds-200 and Flowers-102 datasets, and AlexNet trained on ImageNet.
[ "[EMPTY]", "[BOLD] AlexNet / Flowers-102 [BOLD] Weight", "[BOLD] AlexNet / Flowers-102 [BOLD] Activation", "[BOLD] AlexNet / Flowers-102 [BOLD] Activation", "[BOLD] AlexNet / Flowers-102 [BOLD] Activation", "[BOLD] AlexNet / Flowers-102 [BOLD] OBD", "[BOLD] AlexNet / Flowers-102 [BOLD] Taylor", "[BOLD] VGG-16 / Birds-200 [BOLD] Weight", "[BOLD] VGG-16 / Birds-200 [BOLD] Activation", "[BOLD] VGG-16 / Birds-200 [BOLD] Activation", "[BOLD] VGG-16 / Birds-200 [BOLD] Activation", "[BOLD] VGG-16 / Birds-200 [BOLD] OBD", "[BOLD] VGG-16 / Birds-200 [BOLD] Taylor", "[BOLD] VGG-16 / Birds-200 [BOLD] Mutual" ]
[ [ "[EMPTY]", "[EMPTY]", "[ITALIC] Mean", "[ITALIC] S.d.", "[ITALIC] APoZ", "[EMPTY]", "[EMPTY]", "[EMPTY]", "[ITALIC] Mean", "[ITALIC] S.d.", "[ITALIC] APoZ", "[EMPTY]", "[EMPTY]", "[BOLD] Info." ], [ "Per layer", "0.17", "0.65", "0.67", "0.54", "0.64", "[BOLD] 0.77", "0.27", "0.56", "0.57", "0.35", "0.59", "[BOLD] 0.73", "0.28" ], [ "All layers", "0.28", "0.51", "0.53", "0.41", "0.68", "0.37", "0.34", "0.35", "0.30", "0.43", "0.65", "0.14", "0.35" ], [ "(w/ ℓ2-norm)", "0.13", "0.63", "0.61", "0.60", "-", "[BOLD] 0.75", "0.33", "0.64", "0.66", "0.51", "-", "[BOLD] 0.73", "0.47" ], [ "[EMPTY]", "[BOLD] AlexNet / Birds-200", "[BOLD] AlexNet / Birds-200", "[BOLD] AlexNet / Birds-200", "[BOLD] AlexNet / Birds-200", "[BOLD] AlexNet / Birds-200", "[BOLD] AlexNet / Birds-200", "[BOLD] VGG-16 / Flowers-102", "[BOLD] VGG-16 / Flowers-102", "[BOLD] VGG-16 / Flowers-102", "[BOLD] VGG-16 / Flowers-102", "[BOLD] VGG-16 / Flowers-102", "[BOLD] VGG-16 / Flowers-102", "[BOLD] VGG-16 / Flowers-102" ], [ "Per layer", "0.36", "0.57", "0.65", "0.42", "0.54", "[BOLD] 0.81", "0.19", "0.51", "0.47", "0.36", "0.21", "[BOLD] 0.6", "[EMPTY]" ], [ "All layers", "0.32", "0.37", "0.51", "0.28", "0.61", "0.37", "0.35", "0.53", "0.45", "0.61", "0.28", "0.02", "[EMPTY]" ], [ "(w/ ℓ2-norm)", "0.23", "0.54", "0.57", "0.49", "-", "[BOLD] 0.78", "0.28", "0.66", "0.65", "0.61", "-", "[BOLD] 0.7", "[EMPTY]" ], [ "[EMPTY]", "[BOLD] AlexNet / ImageNet", "[BOLD] AlexNet / ImageNet", "[BOLD] AlexNet / ImageNet", "[BOLD] AlexNet / ImageNet", "[BOLD] AlexNet / ImageNet", "[BOLD] AlexNet / ImageNet", "[EMPTY]", "[EMPTY]", "[EMPTY]", "[EMPTY]", "[EMPTY]", "[EMPTY]", "[EMPTY]" ], [ "Per layer", "0.57", "0.09", "0.19", "−0.06", "[BOLD] 0.58", "[BOLD] 0.58", "[EMPTY]", "[EMPTY]", "[EMPTY]", "[EMPTY]", "[EMPTY]", "[EMPTY]", "[EMPTY]" ], [ "All layers", "0.67", "0.00", "0.13", "−0.08", "[BOLD] 0.72", "0.11", "[EMPTY]", "[EMPTY]", "[EMPTY]", "[EMPTY]", "[EMPTY]", "[EMPTY]", "[EMPTY]" ], [ "(w/ ℓ2-norm)", "0.44", "0.10", "0.19", "0.19", "-", "0.55", "[EMPTY]", "[EMPTY]", "[EMPTY]", "[EMPTY]", "[EMPTY]", "[EMPTY]", "[EMPTY]" ] ]
/datasets some of which are going to be introduced later. Data-dependent criteria (all except weight magnitude) are computed on training data during the fine-tuning before or between pruning iterations. As a sanity check, we evaluate random ranking and observe 0.0 correlation across all layers. “Per layer” analysis shows ranking within each convolutional layer, while “All layers” describes ranking across layers. While several criteria do not scale well across layers with raw values, a layer-wise ℓ2-normalization significantly improves performance. The Taylor criterion has the highest correlation among the criteria, both within layers and across layers (with ℓ2 normalization). OBD shows the best correlation across layers when no normalization used; it also shows best results for correlation on ImageNet dataset. In Fig. Criteria We initially fine-tune the network for 20 epochs using a learning rate of 0.001, achieving a final test accuracy of 80.1%. Then pruning procedes as previously described for Birds-200, except with only 10 mini-batch updates between pruning iterations. We observe the superior performance of the Taylor and OBD criteria in both number of parameters and GFLOPs. We also test our pruning scheme on the large-scale ImageNet classification task. In the first experiment, we begin with a trained CaffeNet implementation of AlexNet with 79.2% top-5 validation accuracy. Between pruning iterations, we fine-tune with learning rate 10−4, momentum 0.9, weight decay 10−4, batch size 32, and drop-out 50%. Pruning traces are illustrated in Fig.
\documentclass{article} % For LaTeX2e \usepackage[utf8]{inputenc} % allow utf-8 input \usepackage[T1]{fontenc} % use 8-bit T1 fonts % hyperlinks % simple URL typesetting % professional-quality tables % blackboard math symbols % compact symbols for 1/2, etc. % microtypography \newcommand{\stnote}[1]{{\color{blue}[\textbf{Stephen}: #1]}} \newcommand{\pmnote}[1]{{\color{green}[\textbf{Pavlo}: #1]}} \newcommand{\D}{\mathcal{D}} \newcommand{\W}{\mathcal{W}} \newcommand{\C}{\mathcal{C}} \newcommand{\R}{\mathcal{R}} \newcommand{\X}{\mathcal{X}} \newcommand{\Y}{\mathcal{Y}} \newcommand{\z}{{\mathbf{z}}} \newcommand{\x}{{\mathbf{x}}} \newcommand{\w}{{\mathbf{w}}} \newcommand{\ba}{{\mathbf{a}}} \newcommand{\eq}{\!=\!} \newcommand{\ta}[1]{\cellcolor[rgb]{0.95,0.95,.95}#1} \newcommand{\tb}[1]{\cellcolor[rgb]{0.80,0.80,.80}#1} \newcommand{\rot}[1]{\rotatebox[origin=c]{90}{\ #1\ }} \renewcommand{\topfraction}{.85} \renewcommand{\dbltopfraction}{0.85} \renewcommand{\bottomfraction}{.85} \renewcommand{\textfraction}{.1} \renewcommand{\floatpagefraction}{.8} \renewcommand{\dblfloatpagefraction}{0.8} \DeclareMathOperator{\st}{s.\!t.} \title{Pruning Convolutional Neural Networks \\ for Resource Efficient Inference} \author{Pavlo Molchanov, Stephen Tyree, Tero Karras, Timo Aila, Jan Kautz \\ NVIDIA\\ \texttt{\{pmolchanov, styree, tkarras, taila, jkautz\}@nvidia.com} \\ } \newcommand{\fix}{\marginpar{FIX}} \newcommand{\new}{\marginpar{NEW}} \iclrfinalcopy % Uncomment for camera-ready version \begin{document} \maketitle \input{abstract} \input{introduction} \input{related} \input{method} \input{results} \input{conclusions} \bibliographystyle{iclr2017_conference} \clearpage \appendix \input{appendix} \end{document} \section{Introduction} Convolutional neural networks (CNN) are used extensively in computer vision applications, including object classification and localization, pedestrian and car detection, and video classification. Many problems like these focus on specialized domains for which there are only small amounts of carefully curated training data. In these cases, accuracy may be improved by fine-tuning an existing deep network previously trained on a much larger labeled vision dataset, such as images from ImageNet~\citep{russakovsky2015imagenet} or videos from Sports-1M~\citep{karpathy2014large}. While transfer learning of this form supports state of the art accuracy, inference is expensive due to the time, power, and memory demanded by the heavyweight architecture of the fine-tuned network. While modern deep CNNs are composed of a variety of layer types, runtime during prediction is dominated by the evaluation of convolutional layers. With the goal of speeding up inference, we prune entire feature maps so the resulting networks may be run efficiently even on embedded devices. We interleave greedy criteria-based pruning with fine-tuning by backpropagation, a computationally efficient procedure that maintains good generalization in the pruned network. 
\section{Method} \begin{wrapfigure}{R}{0.3\textwidth} \vspace{-0.5em} \centering \captionsetup{justification=centering} \includegraphics[clip, trim={0 0.1cm 0 0},width=0.8\linewidth]{Pruning_scheme.pdf} \caption{Network pruning as a backward filter.} \label{fig:scheme} \vspace{-1em} \end{wrapfigure} The proposed method for pruning consists of the following steps: 1) Fine-tune the network until convergence on the target task; 2) Alternate iterations of pruning and further fine-tuning; 3) Stop pruning after reaching the target trade-off between accuracy and pruning objective, e.g. floating point operations (FLOPs) or memory utilization. The procedure is simple, but its success hinges on employing the right pruning criterion. In this section, we introduce several efficient pruning criteria and related technical considerations. Consider a set of training examples $\D = \big\{\X = \{ \x_0, \x_1, ..., \x_N \}, \Y = \{y_0, y_1, ..., y_N\}\big\}$, where $\x$ and $y$ represent an input and a target output, respectively. The network's parameters\footnote{A ``parameter'' $(w, b) \in \W$ might represent an individual weight, a convolutional kernel, or the entire set of kernels that compute a feature map; our experiments operate at the level of feature maps.} $\mathcal{W} = \{ (\w^1_1, b^1_1), (\w^2_1, b^2_1), ... (\w^{C_\ell}_L,b^{C_\ell}_L)\}$ are optimized to minimize a cost value $\C(\D | \W)$. The most common choice for a cost function $\C(\cdot)$ is a negative log-likelihood function. A cost function is selected independently of pruning and depends only on the task to be solved by the original network. In the case of transfer learning, we adapt a large network initialized with parameters $\W_0$ pretrained on a related but distinct dataset. During pruning, we refine a subset of parameters %, such that $\W'=\W g$, which preserves the accuracy of the adapted network, $\C(\D|\W') \approx \C(\D|\W)$. %, when all other weights are omitted. This corresponds to a combinatorial optimization: \begin{equation} \min_{\W'} \bigg|\C(\D|\W') - \C(\D| \W)\bigg| \quad \st \quad {||\W'||}_0 \leq B, \label{eq:optimization} \end{equation} where the $\ell_0$ norm in $||\W'||_0$ bounds the number of non-zero parameters $B$ in $W'$. Intuitively, if $\W' = \W$ we reach the global minimum of the error function, however $||\W'||_0$ will also have its maximum. Finding a good subset of parameters while maintaining a cost value as close as possible to the original is a combinatorial problem. It will require $2^{|\mathcal{W}|}$ evaluations of the cost function for a selected subset of data. For current networks it would be impossible to compute: for example, VGG-16 has $|\mathcal{W}| = 4224$ convolutional feature maps. While it is impossible to solve this optimization exactly for networks of any reasonable size, in this work we investigate a class of greedy methods. Starting with a full set of parameters $\W$, we iteratively identify and remove the least important parameters, as illustrated in Figure~\ref{fig:scheme}. By removing parameters at each iteration, we ensure the eventual satisfaction of the $\ell_0$ bound on $\W'$. 
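As a concrete illustration of steps 2--3, a minimal sketch of this greedy loop is given below. The three callables (\texttt{saliencies}, \texttt{prune\_one}, \texttt{fine\_tune}) are placeholders introduced for this sketch; they are not names from the paper's Theano implementation.
\begin{verbatim}
# Sketch of the greedy pruning procedure (steps 2-3 of the method).
# `saliencies`, `prune_one` and `fine_tune` are assumed, user-supplied callables.
def prune_greedily(saliencies, prune_one, fine_tune,
                   num_to_prune, updates_between=30):
    """Iteratively remove the least-salient feature map, fine-tuning in between.

    saliencies()      -> dict {feature_map_id: saliency estimate}
    prune_one(fm_id)  -> sets the pruning gate g of that feature map to 0
    fine_tune(n)      -> runs n minibatch SGD updates on the pruned network
    """
    for _ in range(num_to_prune):
        scores = saliencies()                  # re-estimate the criterion each iteration
        weakest = min(scores, key=scores.get)  # least important remaining feature map
        prune_one(weakest)                     # enforce the l0 budget one map at a time
        fine_tune(updates_between)             # recover accuracy before the next ranking
\end{verbatim}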
Since we focus our analysis on pruning feature maps from convolutional layers, let us denote a set of image feature maps by $\z_\ell \in \mathbb{R}^{H_\ell \times W_\ell \times C_\ell}$ with dimensionality $H_\ell\times W_\ell$ and $C_\ell$ individual maps (or channels).\footnote{While our notation is at times specific to 2D convolutions, the methods are applicable to 3D convolutions, as well as fully connected layers.} The feature maps can either be the input to the network, $\z_0$, or the output from a convolutional layer, $\z_\ell$ with $\ell \in [1,2,...,L]$. Individual feature maps are denoted $\z_\ell^{(k)}$ for $k \in [1,2,...,C_\ell]$. A convolutional layer $\ell$ applies the convolution operation ($\ast$) to a set of input feature maps $\z_{\ell-1}$ with kernels parameterized by $\w_{\ell}^{(k)} \in \mathbb{R}^{C_{\ell-1} \times p \times p}$: \begin{equation} \z_\ell^{(k)} = \textbf{g}_\ell^{(k)}\R\big(\z_{\ell-1} \ast \w_{\ell}^{(k)} + b_\ell^{(k)}\big), \label{eq:conv} \end{equation} where $\z_\ell^{(k)} \in \mathbb{R}^{H_\ell \times W_\ell}$ is the result of convolving each of $C_{\ell-1}$ kernels of size $p \times p$ with its respective input feature map and adding bias $b_\ell^{(k)}$. We introduce a pruning gate $\textbf{g}_l \in \{0, 1\}^{C_l}$, an external switch which determines if a particular feature map is included or pruned during feed-forward propagation, such that when $\textbf{g}$ is vectorized: $\W' = \textbf{g}\W$. \subsection{Oracle pruning} Minimizing the difference in accuracy between the full and pruned models depends on the criterion for identifying the ``least important'' parameters, called \textit{saliency}, at each step. The best criterion would be an exact empirical evaluation of each parameter, which we denote the \textit{oracle} criterion, accomplished by ablating each non-zero parameter $w\in\W'$ in turn and recording the cost's difference. We distinguish two ways of using this oracle estimation of importance: 1) \emph{oracle-loss} quantifies importance as the signed change in loss, $\C(\D|\mathcal{W'}) - \C(\D| \W)$, and 2) \emph{oracle-abs} adopts the absolute difference, $|\C(\D|\mathcal{W'}) - \C(\D| \W)|$. While both discourage pruning which increases the loss, the oracle-loss version encourages pruning which may decrease the loss, while oracle-abs penalizes any pruning in proportion to its change in loss, regardless of the direction of change. While the oracle is optimal for this greedy procedure, it is prohibitively costly to compute, requiring $||W'||_0$ evaluations on a training dataset, one evaluation for each remaining non-zero parameter. Since estimation of parameter importance is key to both the accuracy and the efficiency of this pruning approach, we propose and evaluate several criteria in terms of performance and estimation cost. \subsection{Criteria for pruning} There are many heuristic criteria which are much more computationally efficient than the oracle. For the specific case of evaluating the importance of a feature map (and implicitly the set of convolutional kernels from which it is computed), reasonable criteria include: the combined $\ell_2$-norm of the kernel weights, the mean, standard deviation or percentage of the feature map's activation, and mutual information between activations and predictions. We describe these criteria in the following paragraphs and propose a new criterion which is based on the Taylor expansion. 
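Before turning to the individual criteria, the sketch below illustrates the pruning gate $\textbf{g}_l$ introduced above; it uses PyTorch tensors and the NCHW layout purely for illustration and is not taken from the paper's code.
\begin{verbatim}
import torch

# Sketch of the pruning gate g_l from the gated convolution above:
# a 0/1 switch per output feature map.
def apply_gate(feature_maps: torch.Tensor, gate: torch.Tensor) -> torch.Tensor:
    """feature_maps: (batch, C_l, H_l, W_l); gate: (C_l,) with entries in {0, 1}."""
    return feature_maps * gate.view(1, -1, 1, 1)  # pruned maps are zeroed everywhere

# Example: prune feature map k = 2 out of C_l = 4.
z = torch.randn(8, 4, 16, 16)
g = torch.ones(4)
g[2] = 0.0
z_pruned = apply_gate(z, g)
\end{verbatim}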
\paragraph{Minimum weight.} Pruning by magnitude of kernel weights is perhaps the simplest possible criterion, and it does not require any additional computation during the fine-tuning process. In case of pruning according to the norm of a set of weights, the criterion is evaluated as: $\Theta_{MW} : \mathbb{R}^{C_{\ell-1} \times p \times p} \to \mathbb{R}$, with $\Theta_{MW}(\w) = \frac{1}{|\w|} \sum_i w_i^2$, where $|\w|$ is dimensionality of the set of weights after vectorization. The motivation to apply this type of pruning is that a convolutional kernel with low $\ell_2$ norm detects less important features than those with a high norm. This can be aided during training by applying $\ell_1$ or $\ell_2$ regularization, which will push unimportant kernels to have smaller values. \paragraph{Activation.} One of the reasons for the popularity of the ReLU activation is the sparsity in activation that is induced, allowing convolutional layers to act as feature detectors. Therefore it is reasonable to assume that if an activation value (an output feature map) is small then this feature detector is not important for prediction task at hand. We may evaluate this by mean activation, $\Theta_{MA}: \mathbb{R}^{H_l \times W_\ell \times C_\ell} \to \mathbb{R}$, with $\Theta_{MA}(\ba) = \frac{1}{|\ba|}\sum_i a_i$ for activation $\ba=\z_l^{(k)}$, or by the standard deviation of the activation, $\Theta_{MA\_std}(\ba) = \sqrt{\frac{1}{|\ba|}\sum_i (a_i - \mu_\ba)^2}$. \paragraph{Mutual information.} Mutual information (MI) is a measure of how much information is present in one variable about another variable. %It captures linear and non-linear correlations between two variables. We apply MI as a criterion for pruning, $\Theta_{MI} : \mathbb{R}^{H_l\times W_\ell \times C_\ell} \to \mathbb{R}$, with $\Theta_{MI}(\ba) = MI(\ba, y)$, where $y$ is the target of neural network. MI is defined for continuous variables, so to simplify computation, we exchange it with information gain (IG), which is defined for quantized variables $IG(y|x) = H(x) + H(y) - H(x,y)$, where $H(x)$ is the entropy of variable $x$. We accumulate statistics on activations and ground truth for a number of updates, then quantize the values and compute IG. \paragraph{Taylor expansion.} We phrase pruning as an optimization problem, trying to find $\mathcal{W'}$ with bounded number of non-zero elements that minimize $\big| \Delta C(h_i) \big| = |\C(\D|\mathcal{W'}) - \C(\D| \W)|$. With this approach based on the Taylor expansion, we directly approximate change in the loss function from removing a particular parameter. Let $h_i$ be the output produced from parameter $i$. In the case of feature maps, $h = \{ z_0^{(1)}, z_0^{(2)}, ..., z_L^{(C_\ell)}\}$. For notational convenience, we consider the cost function equally dependent on parameters and outputs computed from parameters: $\C(\D| h_i) = \C(\D| (\w,b)_i)$. Assuming independence of parameters, we have: \begin{equation} \big| \Delta \C(h_i) \big| = \big|\C(\D, h_i = 0) - \C(\D,h_i)\big|, \label{eq:delta} \end{equation} where $\C(\D,h_i=0)$ is a cost value if output $h_i$ is pruned, while $\C(\D,h_i)$ is the cost if it is not pruned. While parameters are in reality inter-dependent, we already make an independence assumption at each gradient step during training. To approximate $\Delta \C(h_i)$, we use the first-degree Taylor polynomial. 
For a function $f(x)$, the Taylor expansion at point $x=a$ is \begin{equation} f(x) = \sum_{p=0}^P \frac{f^{(p)}(a)}{p!}(x - a)^p + R_p(x), \end{equation} where $f^{(p)}(a)$ is the $p$-th derivative of $f$ evaluated at point $a$, and $R_p(x)$ is the $p$-th order remainder. Approximating $\C(\D, h_i = 0)$ with a first-order Taylor polynomial near $h_i = 0$, we have: \begin{equation} \C(\D, h_i = 0) \enspace=\enspace \C(\D, h_i) - \frac{\delta \C}{\delta h_i}h_i + R_1(h_i = 0). \label{eq:approx} \end{equation} The remainder $R_1(h_i=0)$ can be calculated through the Lagrange form: \begin{equation} R_1(h_i = 0) = \frac{\delta^2 \C}{\delta (h_i^2 = \xi)}\frac{h_i^2}{2}, \end{equation} where $\xi$ is a real number between $0$ and $h_i$. However, we neglect this first-order remainder, largely due to the significant calculation required, but also in part because the widely-used ReLU activation function encourages a smaller second order term. Finally, by substituting Eq.~(\ref{eq:approx}) into Eq.~(\ref{eq:delta}) and ignoring the remainder, we have $\Theta_{TE} : \mathbb{R}^{H_l\times W_l \times C_l} \to \mathbb{R}^+$, with \begin{align} \Theta_{TE}(h_i) = \big| \Delta \C(h_i) \big| &= \big|\C(\D, h_i) - \frac{\delta \C}{\delta h_i}h_i - \C(\D,h_i)\big| = \bigg|\frac{\delta \C}{\delta h_i}h_i\bigg|. \label{eq:thetach} \end{align} Intuitively, this criterion prunes parameters that have an almost flat gradient of the cost function w.r.t. feature map $h_i$. This approach requires accumulation of the product of the activation and the gradient of the cost function w.r.t. to the activation, which is easily computed from the same computations for back-propagation. $\Theta_{TE}$ is computed for a multi-variate output, such as a feature map, by \begin{equation} \Theta_{TE}(z_l^{(k)}) = \bigg|\frac{1}{M} \sum_m \frac{\delta C}{\delta z_{l,m}^{(k)}}z_{l,m}^{(k)}\bigg|, \end{equation} where $M$ is length of vectorized feature map. For a minibatch with $T>1$ examples, the criterion is computed for each example separately and averaged over $T$. Independently of our work, \cite{figurnov2016perforatedcnns} came up with similar metric based on the Taylor expansion, called \textit{impact}, to evaluate importance of spatial cells in a convolutional layer. It shows that the same metric can be applied to evaluate importance of different groups of parameters. \paragraph{Relation to Optimal Brain Damage.} The Taylor criterion proposed above relies on approximating the change in loss caused by removing a feature map. The core idea is the same as in Optimal Brain Damage (OBD)~\citep{lecun1990optimal}. Here we consider the differences more carefully. The primary difference is the treatment of the first-order term of the Taylor expansion, in our notation $y=\frac{\delta \C}{\delta h}h$ for cost function $\C$ and hidden layer activation $h$. After sufficient training epochs, the gradient term tends to zero: $\frac{\delta \C}{\delta h} \to 0$ and $\mathbb{E}(y) = 0$. At face value $y$ offers little useful information, hence OBD regards the term as zero and focuses on the second-order term. However, the \emph{variance} of $y$ is non-zero and correlates with the stability of the local function w.r.t. activation $h$. By considering the absolute change in the cost\footnote{OBD approximates the signed difference in loss, while our method approximates absolute difference in loss. 
We find in our results that pruning based on absolute difference yields better accuracy.} induced by pruning (as in Eq.~\ref{eq:delta}), we use the absolute value of the first-order term, $|y|$. Under the assumption that samples come from an independent and identical distribution, $\mathbb{E}(|y|)=\sigma\sqrt{2}/\sqrt{\pi}$, where $\sigma$ is the standard deviation of $y$, known as the expected value of the half-normal distribution. So, while $y$ tends to zero, the expectation of $|y|$ is proportional to the variance of $y$, a value which is empirically more informative as a pruning criterion. As an additional benefit, we avoid the computation of the second-order Taylor expansion term, or its simplification, the diagonal of the Hessian, as required by OBD. We found it important to compare the proposed Taylor criterion to OBD. As described in the original papers \citep{lecun1990optimal, LeCun1998}, OBD can be implemented efficiently, similarly to the standard back-propagation algorithm, roughly doubling backward-propagation time and memory usage when used together with standard fine-tuning. An efficient implementation of the original OBD algorithm may require significant changes to automatic-differentiation frameworks like Theano in order to compute only the diagonal of the Hessian instead of the full matrix. Several researchers tried to tackle this problem with approximation techniques~\citep{martens2010deep, martens2012estimating}. In our implementation, we use an efficient way of computing the Hessian-vector product~\citep{Pearlmutter94fastexact} and the matrix-diagonal approximation proposed by \citep{bekas2007estimator}; please refer to the appendix for details. With our current implementation, OBD is 30 times slower than the Taylor technique for saliency estimation, and 3 times slower for iterative pruning; however, with a different implementation it could be only 50\% slower, as mentioned in the original paper. \paragraph{Average Percentage of Zeros (APoZ).} \cite{hu2016network} proposed to explore sparsity in activations for network pruning. The ReLU activation function imposes sparsity during inference, and the average percentage of positive activations at the output can indicate the importance of a neuron. Intuitively, this is a reasonable criterion; however, feature maps in the first layers have similar APoZ regardless of the network's target task, as they learn to be Gabor-like filters. We use APoZ to estimate the saliency of feature maps. \subsection{Normalization} Some criteria return ``raw'' values, whose scale varies with the depth of the parameter's layer in the network. A simple layer-wise $\ell_2$-normalization can achieve adequate rescaling across layers: \[ \hat{\Theta}(\z_l^{(k)})\!=\!\frac{\Theta(\z_l^{(k)})}{\sqrt{\sum_j \big(\Theta(\z_l^{(j)})\big)^2}}. \] \subsection{FLOPs regularized pruning} One of the main reasons to apply pruning is to reduce the number of operations in the network. Feature maps from different layers require different amounts of computation due to the number and sizes of input feature maps and convolution kernels. To take this into account, we introduce FLOPs regularization: \begin{equation} \Theta(\z_l^{(k)}) = \Theta(\z_l^{(k)}) - \lambda \Theta^{flops}_l, \end{equation} where $\lambda$ controls the amount of regularization. For our experiments, we use $\lambda = 10^{-3}$. $\Theta^{flops}$ is computed under the assumption that convolution is implemented as a sliding window (see Appendix). Other regularization conditions may be applied, e.g. storage size, kernel sizes, or memory footprint.
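To make the above concrete, here is a minimal sketch of the Taylor criterion for one feature map, followed by the layer-wise $\ell_2$ normalization and the FLOPs-regularized score. It is written with NumPy for illustration only; the array shapes and function names are assumptions of this sketch, not those of the paper's Theano implementation.
\begin{verbatim}
import numpy as np

def taylor_criterion(activation, gradient):
    """Taylor saliency of one feature map z_l^(k).

    activation, gradient: arrays of shape (T, M), where T is the number of
    minibatch examples and M the number of spatial positions of the map.
    """
    per_example = np.abs((gradient * activation).mean(axis=1))  # |1/M sum dC/dz * z|
    return per_example.mean()                                   # average over T examples

def l2_normalize_layer(raw_scores):
    """Layer-wise l2 rescaling of raw criterion values (Normalization section)."""
    raw_scores = np.asarray(raw_scores, dtype=float)
    return raw_scores / np.sqrt(np.sum(raw_scores ** 2))

def flops_regularized(normalized_scores, layer_flops, lam=1e-3):
    """Subtract lambda * Theta^flops so that costly feature maps are pruned earlier."""
    return normalized_scores - lam * layer_flops
\end{verbatim}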
\begin{abstract} We propose a new formulation for pruning convolutional kernels in neural networks to enable efficient inference. We interleave greedy criteria-based pruning with fine-tuning by backpropagation---a computationally efficient procedure that maintains good generalization in the pruned network. We propose a new criterion based on Taylor expansion that approximates the change in the cost function induced by pruning network parameters. We focus on transfer learning, where large pretrained networks are adapted to specialized tasks. The proposed criterion demonstrates superior performance compared to other criteria, e.g. the norm of kernel weights or feature map activation, for pruning large CNNs after adaptation to fine-grained classification tasks (Birds-200 and Flowers-102), relying only on first-order gradient information. We also show that pruning can lead to more than $10\times$ theoretical reduction in adapted 3D-convolutional filters with a small drop in accuracy in a recurrent gesture classifier. Finally, we show results for the large-scale ImageNet dataset to emphasize the flexibility of our approach. \end{abstract} Neural network pruning was pioneered in the early development of neural networks~\citep{reed1993pruning}. Optimal Brain Damage~\citep{lecun1990optimal} and Optimal Brain Surgeon~\citep{hassibi1993second} leverage a second-order Taylor expansion to select parameters for deletion, using pruning as regularization to improve training and generalization. This method requires computation of the Hessian matrix partially or completely, which adds memory and computation costs to standard fine-tuning. In line with our work, \cite{anwar2015structured} describe structured pruning in convolutional layers at the level of feature maps and kernels, as well as strided sparsity to prune with regularity within kernels. Pruning is accomplished by particle filtering wherein configurations are weighted by misclassification rate. The method demonstrates good results on small CNNs, but larger CNNs are not addressed. \cite{han2015learning} introduce a simpler approach by fine-tuning with a strong $\ell_2$ regularization term and dropping parameters with values below a predefined threshold. Such unstructured pruning is very effective for network compression, and this approach demonstrates good performance for intra-kernel pruning. But compression may not translate directly to faster inference since modern hardware exploits regularities in computation for high throughput. So specialized hardware may be needed for efficient inference of a network with intra-kernel sparsity~\citep{HanLMPPHD16}. This approach also requires long fine-tuning times that may exceed the original network training by a factor of $3$ or larger. Group-sparsity-based regularization of network parameters was proposed to penalize unimportant parameters~\citep{wen2016learning, Zhou2016, joseproximity, lebedev2016fast}. Regularization-based pruning techniques require per-layer sensitivity analysis, which adds extra computations. In contrast, our approach relies on global rescaling of criteria for all layers and does not require sensitivity estimation. Moreover, our approach is faster as we directly prune unimportant parameters instead of waiting for their values to be made sufficiently small by optimization under regularization.
Other approaches include combining parameters with correlated weights \citep{BMVC2015_31}, reducing precision~\citep{gupta2015deep,BinaryECCV2016} or tensor decomposition~\citep{kim2015compression}. These approaches usually require a separate training procedure or significant fine-tuning, but potentially may be combined with our method for additional speedups. \section{Results} We empirically study the pruning criteria and procedure detailed in the previous section for a variety of problems. We focus many experiments on transfer learning problems, a setting where pruning seems to excel. We also present results for pruning large networks on their original tasks for more direct comparison with the existing pruning literature. Experiments are performed within Theano~\citep{TheanoAll}. Training and pruning are performed on the respective training sets for each problem, while results are reported on appropriate holdout sets, unless otherwise indicated. For all experiments we prune \textit{a single feature map} at every pruning iteration, allowing fine-tuning and re-evaluation of the criterion to account for dependency between parameters. \subsection{Characterizing the oracle ranking} We begin by explicitly computing the oracle for a single pruning iteration of a visual transfer learning problem. We fine-tune the VGG-16 network~\citep{simonyan2014very} for classification of bird species using the Caltech-UCSD Birds 200-2011 dataset \citep{wah2011caltech}. The dataset consists of nearly $6000$ training images and $5700$ test images, covering $200$ species. We fine-tune VGG-16 for $60$ epochs with learning rate $0.0001$ to achieve a test accuracy of $72.2\%$ using uncropped images. To compute the oracle, we evaluate the change in loss caused by removing each individual feature map from the fine-tuned VGG-16 network. (See Appendix~\ref{sec:oracle_vgg16} for additional analysis.) We rank feature maps by their contributions to the loss, where rank $1$ indicates the most important feature map---removing it results in the highest increase in loss---and rank $4224$ indicates the least important. Statistics of global ranks are shown in Fig.~\ref{fig:oracle_stat} grouped by convolutional layer. We observe: (1) Median global importance tends to decrease with depth. (2) Layers with max-pooling tend to be more important than those without. (VGG-16 has pooling after layers $2$, $4$, $7$, $10$, and $13$.) However, (3) maximum and minimum ranks show that every layer has some feature maps that are globally important and others that are globally less important. Taken together with the results of subsequent experiments, we opt for encouraging a balanced pruning that distributes selection across all layers. Next, we iteratively prune the network using pre-computed oracle ranking. In this experiment, we do not update the parameters of the network or the oracle ranking between iterations. Training accuracy is illustrated in Fig.~\ref{fig:oracle} over many pruning iterations. Surprisingly, pruning by smallest absolute change in loss (Oracle-abs) yields higher accuracy than pruning by the net effect on loss (Oracle-loss). Even though the oracle indicates that removing some feature maps individually may decrease loss, instability accumulates due the large absolute changes that are induced. These results support pruning by \emph{absolute} difference in cost, as constructed in Eq.~\ref{eq:optimization}. 
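A minimal sketch of the oracle computation described above is given below; \texttt{eval\_loss} is a placeholder callable introduced for this sketch (an assumption), not part of the paper's code.
\begin{verbatim}
# Sketch of the oracle ranking: ablate each feature map in turn and record the
# change in training loss. `eval_loss(gates)` is an assumed helper that runs the
# network with the given 0/1 gates over the training set and returns the cost.
def compute_oracle(eval_loss, num_feature_maps):
    base = eval_loss([1] * num_feature_maps)
    oracle_loss, oracle_abs = {}, {}
    for i in range(num_feature_maps):
        gates = [1] * num_feature_maps
        gates[i] = 0                      # remove only feature map i
        delta = eval_loss(gates) - base   # signed change in the cost
        oracle_loss[i] = delta            # "oracle-loss" variant
        oracle_abs[i] = abs(delta)        # "oracle-abs" variant
    return oracle_loss, oracle_abs
\end{verbatim}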
\subsection{Evaluating proposed criteria versus the oracle} To evaluate computationally efficient criteria as substitutes for the oracle, we compute Spearman's rank correlation, an estimate of how well two predictors provide monotonically related outputs, even if their relationship is not linear. Given the difference between oracle\footnote{We use Oracle-abs because of better performance in previous experiment} and criterion ranks $d_i = rank(\Theta_{oracle}(i)) - rank(\Theta_{criterion}(i))$ for each parameter $i$, the rank correlation is computed: \begin{equation} \mathcal{S} = 1 - \frac{6}{N(N^2 - 1)}\sum_{i=1}^N {d_i}^2, \end{equation} where $N$ is the number of parameters (and the highest rank). This correlation coefficient takes values in $[-1,1]$, where $-1$ implies full negative correlation, $0$ no correlation, and $1$ full positive correlation. \input{spearman_corr_vgg_plus_alexnet_obd.tex} We show Spearman's correlation in Table~~\ref{tab:spearman_all} to compare the oracle-abs ranking to rankings by different criteria on a set of networks/datasets some of which are going to be introduced later. Data-dependent criteria (all except weight magnitude) are computed on training data during the fine-tuning before or between pruning iterations. As a sanity check, we evaluate random ranking and observe $0.0$ correlation across all layers. ``Per layer'' analysis shows ranking \emph{within} each convolutional layer, while ``All layers'' describes ranking across layers. While several criteria do not scale well across layers with raw values, a layer-wise $\ell_2$-normalization significantly improves performance. The Taylor criterion has the highest correlation among the criteria, both within layers and across layers (with $\ell_2$ normalization). OBD shows the best correlation across layers when no normalization used; it also shows best results for correlation on ImageNet dataset. (See Appendix \ref{sec:appendix_normalization} for further analysis.) \subsection{Pruning fine-tuned ImageNet networks} We now evaluate the full iterative pruning procedure on two transfer learning problems. We focus on reducing the number of convolutional feature maps and the total estimated floating point operations (FLOPs). Fine-grained recognition is difficult for relatively small datasets without relying on transfer learning. \cite{branson2014bird} show that training CNN from scratch on the Birds-200 dataset achieves test accuracy of only $10.9\%$. We compare results to training a randomly initialized CNN with half the number of parameters per layer, denoted "from scratch". Fig.~\ref{fig:pruning_ft30} shows pruning of VGG-16 after fine-tuning on the Birds-$200$ dataset (as described previously). At each pruning iteration, we remove a single feature map and then perform $30$ minibatch SGD updates with batch-size $32$, momentum $0.9$, learning rate $10^{-4}$, and weight decay $10^{-4}$. The figure depicts accuracy relative to the pruning rate (left) and estimated GFLOPs (right). The Taylor criterion shows the highest accuracy for nearly the entire range of pruning ratios, and with FLOPs regularization demonstrates the best performance relative to the number of operations. OBD shows slightly worse performance of pruning in terms of parameters, however significantly worse in terms of FLOPs. 
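The estimated GFLOPs reported above follow the sliding-window FLOPs model detailed in the appendix; a small sketch of that estimate is given below (function and variable names are ours).
\begin{verbatim}
def conv_layer_flops(h, w, c_in, c_out, k):
    """FLOPs of a convolutional layer under the sliding-window assumption:
    2*H*W*(C_in*K^2 + 1)*C_out, with output resolution H x W and kernel width K."""
    return 2 * h * w * (c_in * k ** 2 + 1) * c_out

def fc_layer_flops(i, o):
    """FLOPs of a fully connected layer: (2*I - 1)*O."""
    return (2 * i - 1) * o

# Example: the first 3x3 convolution of VGG-16 on a 224x224 RGB input.
gflops = conv_layer_flops(224, 224, 3, 64, 3) / 1e9
\end{verbatim}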
In Fig.~\ref{fig:pruning_flowers_all_all}, we show pruning of the CaffeNet implementation of AlexNet \citep{krizhevsky2012imagenet} after adapting to the Oxford Flowers $102$ dataset \citep{Nilsback08}, with $2040$ training and $6129$ test images from $102$ species of flowers. Criteria correlation with oracle-abs is summarized in Table~\ref{tab:spearman_all}. We initially fine-tune the network for $20$ epochs using a learning rate of $0.001$, achieving a final test accuracy of $80.1\%$. Then pruning proceeds as previously described for Birds-$200$, except with only $10$ mini-batch updates between pruning iterations. We observe the superior performance of the Taylor and OBD criteria in terms of both the number of parameters and GFLOPs. We observe that the Taylor criterion shows the best performance, closely followed by OBD, which has a slightly lower Spearman's rank correlation coefficient. Implementing OBD takes more effort because of the computation of the diagonal of the Hessian, and it is 50\% to 300\% slower than the Taylor criterion, which relies on the first-order gradient only. Fig.~\ref{fig:pruning_flowers_zol2} shows pruning with the Taylor technique and a varying number of fine-tuning updates between pruning iterations. Increasing the number of updates results in higher accuracy, but at the cost of additional runtime of the pruning procedure. During pruning we observe a small drop in accuracy. One of the reasons is the limited amount of fine-tuning performed between pruning iterations. Accuracy of the initial network can be improved with longer fine-tuning and a search for better optimization parameters. For example, the accuracy of the unpruned VGG-16 network on Birds-200 goes up to $75\%$ after an extra 128k updates, and AlexNet on Flowers-102 goes up to $82.9\%$ after 130k updates. It should be noted that with further fine-tuning of the pruned networks we can achieve higher accuracy as well; therefore, the one-to-one comparison of accuracies is only approximate. \subsection{Pruning a recurrent 3D-CNN network for hand gesture recognition} \cite{Molchanov_2016_CVPR} learn to recognize $25$ dynamic hand gestures in streaming video with a large recurrent neural network. The network is constructed by adding recurrent connections to a 3D-CNN pretrained on the Sports-1M video dataset \citep{karpathy2014large} and fine-tuning on a gesture dataset. The full network achieves an accuracy of $80.7\%$ when trained on the depth modality, but a single inference requires an estimated $37.8$ GFLOPs, too much for deployment on an embedded GPU. After several iterations of pruning with the Taylor criterion with learning rate $0.0003$, momentum $0.9$, FLOPs regularization $10^{-3}$, we reduce inference to $3.0$ GFLOPs, as shown in Fig.~\ref{fig:pruning_nvg_depth}. While pruning increases classification error by nearly $6\%$, additional fine-tuning restores much of the lost accuracy, yielding a final pruned network with a $12.6\times$ reduction in GFLOPs and only a $2.5\%$ loss in accuracy. \subsection{Pruning networks for ImageNet} \begin{wrapfigure}{R}{0.5\textwidth} \centering \captionsetup{justification=centering} \includegraphics[width=0.99\linewidth]{imagenet_vgg16_flops_vs_test_error} \caption{Pruning of the VGG-16 network on ImageNet, with additional fine-tuning after pruning to $11.5$ and $8$ GFLOPs.} \label{fig:pruning_vggonimagenet} \end{wrapfigure} We also test our pruning scheme on the large-scale ImageNet classification task. In the first experiment, we begin with a trained CaffeNet implementation of AlexNet with $79.2\%$ top-5 validation accuracy.
Between pruning iterations, we fine-tune with learning rate $10^{-4}$, momentum $0.9$, weight decay $10^{-4}$, batch size $32$, and dropout $50\%$. Using a subset of $5000$ training images, we compute oracle-abs and Spearman's rank correlation with the criteria, as shown in Table~\ref{tab:spearman_all}. Pruning traces are illustrated in Fig.~\ref{fig:pruning_alexnetonimagenet}. We observe: 1) Taylor performs better than random or minimum weight pruning when $100$ updates are used between pruning iterations. When results are displayed w.r.t. FLOPs, the difference with random pruning is only $0\%\!-\!4\%$, but the difference is higher, $1\%\!-\!10\%$, when plotted with the number of feature maps pruned. 2) Increasing the number of updates from $100$ to $1000$ improves the performance of pruning significantly for both the Taylor criterion and random pruning. For a second experiment, we prune a trained VGG-16 network with the same parameters as before, except enabling FLOPs regularization. We stop pruning at two points, $11.5$ and $8.0$ GFLOPs, and fine-tune both models for an additional five epochs with learning rate $10^{-4}$. %$\lambda = 0.0001$. Fine-tuning after pruning significantly improves results: the network pruned to $11.5$ GFLOPs improves from $83\%$ to $87\%$ top-5 validation accuracy, and the network pruned to $8.0$ GFLOPs improves from $77.8\%$ to $84.5\%$. \subsection{Speed-up measurements} During pruning, we measured the reduction in computation in terms of FLOPs, which is a common practice \citep{han2015learning, Lavin15, lavin2015fast}. Improvements in FLOPs result in monotonically decreasing inference time of the networks because entire feature maps are removed from the layers. However, the time consumed by inference depends on the particular implementation of the convolution operator, the parallelization algorithm, hardware, scheduling, memory transfer rate, etc. Therefore, we measure the improvement in inference time for selected networks to see the real speed-up compared to unpruned networks in Table~\ref{tab:speed_up}. We observe significant speed-ups with the proposed pruning scheme. \input{speedup_measurements_pytorch.tex} \section{Appendix} \label{sec:appendix} \subsection{FLOPs computation} \label{sec:appendix_flops} To compute the number of floating-point operations (FLOPs), we assume convolution is implemented as a sliding window and that the nonlinearity function is computed for free. For convolutional kernels we have: \begin{equation} \mbox{FLOPs} = 2HW(C_{in}K^2+1)C_{out}, \end{equation} where $H$, $W$ and $C_{in}$ are height, width and number of channels of the input feature map, $K$ is the kernel width (assumed to be symmetric), and $C_{out}$ is the number of output channels. For fully connected layers we compute FLOPs as: \begin{equation} \mbox{FLOPs} = (2I - 1)O, \end{equation} where $I$ is the input dimensionality and $O$ is the output dimensionality. We apply FLOPs regularization during pruning to prune neurons with higher FLOPs first. FLOPs per convolutional neuron in every layer: \begin{align*} \mbox{VGG16:}\ \Theta^{flops}&=[3.1, 57.8, 14.1, 28.9, 7.0, 14.5, 14.5, 3.5, 7.2, 7.2, 1.8, 1.8, 1.8, 1.8] \\ \mbox{AlexNet:}\ \Theta^{flops}&=[2.3, 1.7, 0.8, 0.6, 0.6] \\ \mbox{R3DCNN:}\ \Theta^{flops}&=[5.6, 86.9, 21.7, 43.4, 5.4, 10.8, 1.4, 1.4] \end{align*} \subsection{Normalization across layers} \label{sec:appendix_normalization} Scaling a criterion across layers is very important for pruning.
If the criterion is not properly scaled, then a hand-tuned multiplier would need to be selected for each layer. Statistics of feature map ranking by different criteria are shown in Fig.~\ref{fig:spearman_reg}. Without normalization (Fig.~\ref{fig:spfig2}--\ref{fig:spfig4}), the weight magnitude criterion tends to rank feature maps from the first layers more important than last layers; the activation criterion ranks middle layers more important; and Taylor ranks first layers higher. After $\ell_2$ normalization (Fig.~\ref{fig:spfig6}--\ref{fig:spfig8}), all criteria have a shape more similar to the oracle, where each layer has some feature maps which are highly important and others which are unimportant. \subsection{Oracle computation for VGG-16 on Birds-200} \label{sec:oracle_vgg16} We compute the change in the loss caused by removing individual feature maps from the VGG-16 network, after fine-tuning on the Birds-200 dataset. Results are illustrated in Fig.~\ref{fig:oracle_change1}-\ref{fig:oracle_change2} for each feature map in layers $1$ and $13$, respectively. To compute the oracle estimate for a feature map, we remove the feature map and compute the network prediction for each image in the training set using the central crop with no data augmentation or dropout. We draw the following conclusions: \begin{itemize} \item The contribution of feature maps range from positive (above the red line) to slightly negative (below the red line), implying the existence of some feature maps which decrease the training cost when removed. \item There are many feature maps with little contribution to the network output, indicated by almost zero change in loss when removed. \item Both layers contain a small number of feature maps which induce a significant increase in the loss when removed. \end{itemize} \input{spearman_corr_birds2.tex} Table~\ref{tab:spearman_birds2} contains a layer-by-layer listing of Spearman's rank correlation of several criteria with the ranking of oracle-abs. In this more detailed comparison, we see the Taylor criterion shows higher correlation for all individual layers. For several methods including Taylor, the worst correlations are observed for the middle of the network, layers $5$-$10$. We also evaluate several techniques for normalization of the raw criteria values for comparison across layers. The table shows the best performance is obtained by $\ell_2$ normalization, hence we select it for our method. \subsection{Comparison with weight regularization} \cite{han2015learning} find that fine-tuning with high $\ell_1$ or $\ell_2$ regularization causes unimportant connections to be suppressed. Connections with energy lower than some threshold can be removed on the assumption that they do not contribute much to subsequent layers. The same work also finds that thresholds must be set separately for each layer depending on its sensitivity to pruning. The procedure to evaluate sensitivity is time-consuming as it requires pruning layers independently during evaluation. The idea of pruning with high regularization can be extended to removing the kernels for an entire feature map if the $\ell_2$ norm of those kernels is below a predefined threshold. We compare our approach with this regularization-based pruning for the task of pruning the last convolutional layer of VGG-16 fine-tuned for Birds-200. By considering only a single layer, we avoid the need to compute layerwise sensitivity. 
Parameters for optimization during fine-tuning are the same as in the other experiments with the Birds-200 dataset. For the regularization technique, the pruning threshold is set to $\sigma = 10^{-5}$ while we vary the regularization coefficient $\gamma$ of the $\ell_2$ norm on each feature map kernel.\footnote{In our implementation, the regularization coefficient is multiplied by the learning rate, which is equal to $10^{-4}$.} We prune only kernel weights, while keeping the bias to maintain the same expected output. A comparison between pruning based on regularization and our greedy scheme is illustrated in Fig.~\ref{fig:comparisonwithregul}. We observe that our approach has higher test accuracy for the same number of remaining unpruned feature maps when pruning $85\%$ or more of the feature maps. We also observe that with high regularization all weights tend to zero, not only the unimportant ones, in contrast to what \cite{han2015learning} observe for ImageNet networks. The intuition is that regularization pushes all weights down and can therefore affect connections that are important for transfer learning, whereas our iterative procedure removes only unimportant parameters and leaves the others untouched. \subsection{Combination of criteria} One possibility for improving saliency estimation is to combine several criteria. A straightforward combination is the Taylor criterion and the mean activation of the neuron. We compute the joint criterion as $\Theta_{joint}(\mathbf{z}_l^{(k)}) = (1-\lambda)\hat{\Theta}_{Taylor}(\mathbf{z}_l^{(k)}) + \lambda\hat{\Theta}_{Activation}(\mathbf{z}_l^{(k)})$ and perform a grid search over the parameter $\lambda$ in Fig.~\ref{fig:criteria_combination_fig}. The highest correlation value for each dataset is marked with a vertical bar annotated with $\lambda$ and the gain. We observe that the gain from linearly combining criteria is negligibly small (see the $\Delta$'s in the figure). \subsection{Optimal Brain Damage implementation} OBD computes the saliency of a parameter as the product of its squared magnitude and the corresponding element on the diagonal of the Hessian. For many deep learning frameworks, an efficient implementation of the diagonal evaluation is not straightforward and approximation techniques must be applied. Our implementation of the Hessian diagonal computation was inspired by the work of~\cite{dauphin2015equilibrated}, where the technique proposed by \cite{bekas2007estimator} was used to evaluate SGD preconditioned with the Jacobi preconditioner. It was shown that the diagonal of the Hessian can be approximated as: \begin{equation} \mbox{diag}(\textbf{H})=\mathbb{E}[\textbf{v}\odot\textbf{Hv}]=\mathbb{E}[\textbf{v}\odot\nabla({\nabla{\C}\cdot\textbf{v}})], \end{equation} where $\odot$ is the element-wise product, $\textbf{v}$ are random vectors with entries $\pm1$, and $\nabla$ is the gradient operator. To compute saliency with OBD, we randomly draw \textbf{v} and compute the diagonal over 10 iterations per minibatch for 1000 minibatches. We found that this number of minibatches is required to compute a close approximation of the Hessian's diagonal, which we verified empirically. Computing saliency this way is computationally expensive for iterative pruning, and we use a slightly different but more efficient procedure. Before the first pruning iteration, saliency is initialized from values computed off-line with 1000 minibatches and 10 iterations, as described above.
Then, at every minibatch we compute the OBD criteria with only one iteration and apply an exponential moving averaging with a coefficient of $0.99$. We verified that this computes a close approximation to the Hessian's diagonal. \subsection{Correlation of Taylor criterion with gradient and activation} The Taylor criterion is composed of both an activation term and a gradient term. In Figure \ref{fig:corr_gradient_activation}, we depict the correlation between the Taylor criterion and each constituent part. We consider expected absolute value of the gradient instead of the mean, because otherwise it tends to zero. The plots are computed from pruning criteria for an unpruned VGG network fine-tuned for the Birds-200 dataset. (Values are shown after layer-wise normalization). Figure \ref{fig:corr_gradient_activation}(a-b) depict the Taylor criterion in the y-axis for all neurons w.r.t. the gradient and activation components, respectively. The bottom $10\%$ of neurons (lowest Taylor criterion, most likely to be pruned) are depicted in red, while the top $10\%$ are shown in green. Considering all neurons, both gradient and activation components demonstrate a linear trend with the Taylor criterion. However, for the bottom $10\%$ of neurons, as shown in Figure \ref{fig:corr_gradient_activation}(c-d), the activation criterion shows much stronger correlation, with lower activations indicating lower Taylor scores. \section{Conclusions} We propose a new scheme for iteratively pruning deep convolutional neural networks. We find: 1) CNNs may be successfully pruned by iteratively removing the least important parameters---feature maps in this case---according to heuristic selection criteria; 2) a Taylor expansion-based criterion demonstrates significant improvement over other criteria; 3) per-layer normalization of the criterion is important to obtain global scaling.
Pruning Convolutional Neural Networks for Resource Efficient Inference
1611.06440
Table 3: Spearman’s rank correlation of criteria vs oracle-abs in VGG-16 fine-tuned on Birds 200.
[ "[EMPTY]", "[BOLD] MI", "[BOLD] Weight", "[BOLD] Activation Mean", "[BOLD] Activation S.d.", "[BOLD] Activation APoZ", "[BOLD] OBD", "[BOLD] Taylor" ]
[ [ "[BOLD] Per layer", "[BOLD] Per layer", "[BOLD] Per layer", "[BOLD] Per layer", "[BOLD] Per layer", "[BOLD] Per layer", "[BOLD] Per layer", "[BOLD] Per layer" ], [ "Layer 1", "0.41", "0.40", "0.65", "0.78", "0.36", "0.54", "[BOLD] 0.95" ], [ "Layer 2", "0.23", "0.57", "0.56", "0.59", "0.33", "0.78", "[BOLD] 0.90" ], [ "Layer 3", "0.14", "0.55", "0.48", "0.45", "0.51", "0.66", "[BOLD] 0.74" ], [ "Layer 4", "0.26", "0.23", "0.58", "0.42", "0.10", "0.36", "[BOLD] 0.80" ], [ "Layer 5", "0.17", "0.28", "0.49", "0.52", "0.15", "0.54", "[BOLD] 0.69" ], [ "Layer 6", "0.21", "0.18", "0.41", "0.48", "0.16", "0.49", "[BOLD] 0.63" ], [ "Layer 7", "0.12", "0.19", "0.54", "0.49", "0.38", "0.55", "[BOLD] 0.71" ], [ "Layer 8", "0.18", "0.23", "0.43", "0.42", "0.30", "0.50", "[BOLD] 0.54" ], [ "Layer 9", "0.21", "0.18", "0.50", "0.55", "0.35", "0.53", "[BOLD] 0.61" ], [ "Layer 10", "0.26", "0.15", "0.59", "0.60", "0.45", "0.61", "[BOLD] 0.66" ], [ "Layer 11", "0.41", "0.12", "0.61", "0.65", "0.45", "0.64", "[BOLD] 0.72" ], [ "Layer 12", "0.47", "0.15", "0.60", "0.66", "0.39", "0.66", "[BOLD] 0.72" ], [ "Layer 13", "0.61", "0.21", "[BOLD] 0.77", "0.76", "0.65", "0.76", "[BOLD] 0.77" ], [ "Mean", "0.28", "0.27", "0.56", "0.57", "0.35", "0.59", "[BOLD] 0.73" ], [ "[BOLD] All layers", "[BOLD] All layers", "[BOLD] All layers", "[BOLD] All layers", "[BOLD] All layers", "[BOLD] All layers", "[BOLD] All layers", "[BOLD] All layers" ], [ "No normalization", "0.35", "0.34", "0.35", "0.30", "0.43", "0.65", "0.14" ], [ "ℓ1 normalization", "0.47", "0.37", "0.63", "0.63", "0.52", "0.65", "0.71" ], [ "ℓ2 normalization", "0.47", "0.33", "0.64", "0.66", "0.51", "0.60", "[BOLD] 0.73" ], [ "Min-max normalization", "0.27", "0.17", "0.52", "0.57", "0.42", "0.54", "0.67" ] ]
In this more detailed comparison, we see the Taylor criterion shows higher correlation for all individual layers. For several methods including Taylor, the worst correlations are observed for the middle of the network, layers 5-10. We also evaluate several techniques for normalization of the raw criteria values for comparison across layers. The table shows the best performance is obtained by ℓ2 normalization, hence we select it for our method.
\documentclass{article} % For LaTeX2e \usepackage[utf8]{inputenc} % allow utf-8 input \usepackage[T1]{fontenc} % use 8-bit T1 fonts % hyperlinks % simple URL typesetting % professional-quality tables % blackboard math symbols % compact symbols for 1/2, etc. % microtypography \newcommand{\stnote}[1]{{\color{blue}[\textbf{Stephen}: #1]}} \newcommand{\pmnote}[1]{{\color{green}[\textbf{Pavlo}: #1]}} \newcommand{\D}{\mathcal{D}} \newcommand{\W}{\mathcal{W}} \newcommand{\C}{\mathcal{C}} \newcommand{\R}{\mathcal{R}} \newcommand{\X}{\mathcal{X}} \newcommand{\Y}{\mathcal{Y}} \newcommand{\z}{{\mathbf{z}}} \newcommand{\x}{{\mathbf{x}}} \newcommand{\w}{{\mathbf{w}}} \newcommand{\ba}{{\mathbf{a}}} \newcommand{\eq}{\!=\!} \newcommand{\ta}[1]{\cellcolor[rgb]{0.95,0.95,.95}#1} \newcommand{\tb}[1]{\cellcolor[rgb]{0.80,0.80,.80}#1} \newcommand{\rot}[1]{\rotatebox[origin=c]{90}{\ #1\ }} \renewcommand{\topfraction}{.85} \renewcommand{\dbltopfraction}{0.85} \renewcommand{\bottomfraction}{.85} \renewcommand{\textfraction}{.1} \renewcommand{\floatpagefraction}{.8} \renewcommand{\dblfloatpagefraction}{0.8} \DeclareMathOperator{\st}{s.\!t.} \title{Pruning Convolutional Neural Networks \\ for Resource Efficient Inference} \author{Pavlo Molchanov, Stephen Tyree, Tero Karras, Timo Aila, Jan Kautz \\ NVIDIA\\ \texttt{\{pmolchanov, styree, tkarras, taila, jkautz\}@nvidia.com} \\ } \newcommand{\fix}{\marginpar{FIX}} \newcommand{\new}{\marginpar{NEW}} \iclrfinalcopy % Uncomment for camera-ready version \begin{document} \maketitle \input{abstract} \input{introduction} \input{related} \input{method} \input{results} \input{conclusions} \bibliographystyle{iclr2017_conference} \clearpage \appendix \input{appendix} \end{document} \section{Introduction} Convolutional neural networks (CNN) are used extensively in computer vision applications, including object classification and localization, pedestrian and car detection, and video classification. Many problems like these focus on specialized domains for which there are only small amounts of carefully curated training data. In these cases, accuracy may be improved by fine-tuning an existing deep network previously trained on a much larger labeled vision dataset, such as images from ImageNet~\citep{russakovsky2015imagenet} or videos from Sports-1M~\citep{karpathy2014large}. While transfer learning of this form supports state of the art accuracy, inference is expensive due to the time, power, and memory demanded by the heavyweight architecture of the fine-tuned network. While modern deep CNNs are composed of a variety of layer types, runtime during prediction is dominated by the evaluation of convolutional layers. With the goal of speeding up inference, we prune entire feature maps so the resulting networks may be run efficiently even on embedded devices. We interleave greedy criteria-based pruning with fine-tuning by backpropagation, a computationally efficient procedure that maintains good generalization in the pruned network. 
\section{Method} \begin{wrapfigure}{R}{0.3\textwidth} \vspace{-0.5em} \centering \captionsetup{justification=centering} \includegraphics[clip, trim={0 0.1cm 0 0},width=0.8\linewidth]{Pruning_scheme.pdf} \caption{Network pruning as a backward filter.} \label{fig:scheme} \vspace{-1em} \end{wrapfigure} The proposed method for pruning consists of the following steps: 1) Fine-tune the network until convergence on the target task; 2) Alternate iterations of pruning and further fine-tuning; 3) Stop pruning after reaching the target trade-off between accuracy and pruning objective, e.g. floating point operations (FLOPs) or memory utilization. The procedure is simple, but its success hinges on employing the right pruning criterion. In this section, we introduce several efficient pruning criteria and related technical considerations. Consider a set of training examples $\D = \big\{\X = \{ \x_0, \x_1, ..., \x_N \}, \Y = \{y_0, y_1, ..., y_N\}\big\}$, where $\x$ and $y$ represent an input and a target output, respectively. The network's parameters\footnote{A ``parameter'' $(w, b) \in \W$ might represent an individual weight, a convolutional kernel, or the entire set of kernels that compute a feature map; our experiments operate at the level of feature maps.} $\mathcal{W} = \{ (\w^1_1, b^1_1), (\w^2_1, b^2_1), ... (\w^{C_\ell}_L,b^{C_\ell}_L)\}$ are optimized to minimize a cost value $\C(\D | \W)$. The most common choice for a cost function $\C(\cdot)$ is a negative log-likelihood function. A cost function is selected independently of pruning and depends only on the task to be solved by the original network. In the case of transfer learning, we adapt a large network initialized with parameters $\W_0$ pretrained on a related but distinct dataset. During pruning, we refine a subset of parameters %, such that $\W'=\W g$, which preserves the accuracy of the adapted network, $\C(\D|\W') \approx \C(\D|\W)$. %, when all other weights are omitted. This corresponds to a combinatorial optimization: \begin{equation} \min_{\W'} \bigg|\C(\D|\W') - \C(\D| \W)\bigg| \quad \st \quad {||\W'||}_0 \leq B, \label{eq:optimization} \end{equation} where the $\ell_0$ norm in $||\W'||_0$ bounds the number of non-zero parameters $B$ in $W'$. Intuitively, if $\W' = \W$ we reach the global minimum of the error function, however $||\W'||_0$ will also have its maximum. Finding a good subset of parameters while maintaining a cost value as close as possible to the original is a combinatorial problem. It will require $2^{|\mathcal{W}|}$ evaluations of the cost function for a selected subset of data. For current networks it would be impossible to compute: for example, VGG-16 has $|\mathcal{W}| = 4224$ convolutional feature maps. While it is impossible to solve this optimization exactly for networks of any reasonable size, in this work we investigate a class of greedy methods. Starting with a full set of parameters $\W$, we iteratively identify and remove the least important parameters, as illustrated in Figure~\ref{fig:scheme}. By removing parameters at each iteration, we ensure the eventual satisfaction of the $\ell_0$ bound on $\W'$. 
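The greedy procedure can be summarized in a short Python sketch; this is an editorial illustration, not the implementation used in this paper, and the callables (saliency evaluation, feature-map removal, fine-tuning, and FLOPs estimation) are hypothetical placeholders for the operations described above.
\begin{verbatim}
def greedy_prune(saliency_fn, prune_fn, fine_tune_fn, flops_fn,
                 target_flops, updates_per_iteration=30):
    """Iteratively remove the least important feature map, then fine-tune.

    saliency_fn()   -> dict mapping (layer, map_index) to a saliency score
    prune_fn(key)   -> removes that feature map from the network
    fine_tune_fn(n) -> runs n minibatch SGD updates
    flops_fn()      -> current estimated FLOPs of the network
    """
    while flops_fn() > target_flops:
        saliency = saliency_fn()                   # re-estimated after fine-tuning
        weakest = min(saliency, key=saliency.get)  # single least important map
        prune_fn(weakest)                          # greedy removal
        fine_tune_fn(updates_per_iteration)        # recover before the next step
\end{verbatim}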
Since we focus our analysis on pruning feature maps from convolutional layers, let us denote a set of image feature maps by $\z_\ell \in \mathbb{R}^{H_\ell \times W_\ell \times C_\ell}$ with dimensionality $H_\ell\times W_\ell$ and $C_\ell$ individual maps (or channels).\footnote{While our notation is at times specific to 2D convolutions, the methods are applicable to 3D convolutions, as well as fully connected layers.} The feature maps can either be the input to the network, $\z_0$, or the output from a convolutional layer, $\z_\ell$ with $\ell \in [1,2,...,L]$. Individual feature maps are denoted $\z_\ell^{(k)}$ for $k \in [1,2,...,C_\ell]$. A convolutional layer $\ell$ applies the convolution operation ($\ast$) to a set of input feature maps $\z_{\ell-1}$ with kernels parameterized by $\w_{\ell}^{(k)} \in \mathbb{R}^{C_{\ell-1} \times p \times p}$: \begin{equation} \z_\ell^{(k)} = \textbf{g}_\ell^{(k)}\R\big(\z_{\ell-1} \ast \w_{\ell}^{(k)} + b_\ell^{(k)}\big), \label{eq:conv} \end{equation} where $\z_\ell^{(k)} \in \mathbb{R}^{H_\ell \times W_\ell}$ is the result of convolving each of $C_{\ell-1}$ kernels of size $p \times p$ with its respective input feature map and adding bias $b_\ell^{(k)}$. We introduce a pruning gate $\textbf{g}_l \in \{0, 1\}^{C_l}$, an external switch which determines if a particular feature map is included or pruned during feed-forward propagation, such that when $\textbf{g}$ is vectorized: $\W' = \textbf{g}\W$. \subsection{Oracle pruning} Minimizing the difference in accuracy between the full and pruned models depends on the criterion for identifying the ``least important'' parameters, called \textit{saliency}, at each step. The best criterion would be an exact empirical evaluation of each parameter, which we denote the \textit{oracle} criterion, accomplished by ablating each non-zero parameter $w\in\W'$ in turn and recording the cost's difference. We distinguish two ways of using this oracle estimation of importance: 1) \emph{oracle-loss} quantifies importance as the signed change in loss, $\C(\D|\mathcal{W'}) - \C(\D| \W)$, and 2) \emph{oracle-abs} adopts the absolute difference, $|\C(\D|\mathcal{W'}) - \C(\D| \W)|$. While both discourage pruning which increases the loss, the oracle-loss version encourages pruning which may decrease the loss, while oracle-abs penalizes any pruning in proportion to its change in loss, regardless of the direction of change. While the oracle is optimal for this greedy procedure, it is prohibitively costly to compute, requiring $||W'||_0$ evaluations on a training dataset, one evaluation for each remaining non-zero parameter. Since estimation of parameter importance is key to both the accuracy and the efficiency of this pruning approach, we propose and evaluate several criteria in terms of performance and estimation cost. \subsection{Criteria for pruning} There are many heuristic criteria which are much more computationally efficient than the oracle. For the specific case of evaluating the importance of a feature map (and implicitly the set of convolutional kernels from which it is computed), reasonable criteria include: the combined $\ell_2$-norm of the kernel weights, the mean, standard deviation or percentage of the feature map's activation, and mutual information between activations and predictions. We describe these criteria in the following paragraphs and propose a new criterion which is based on the Taylor expansion. 
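Before turning to the criteria, note that the pruning gate $\textbf{g}_l$ introduced above can be emulated in a framework-agnostic way by zeroing the corresponding channel of a layer's output; the NumPy sketch below is illustrative only.
\begin{verbatim}
import numpy as np

def apply_pruning_gate(feature_maps, gate):
    """Zero out pruned feature maps.

    feature_maps: array of shape (C, H, W), the output z_l of layer l
    gate:         binary vector of length C (1 = keep, 0 = prune)
    """
    return feature_maps * gate[:, None, None]  # broadcast over spatial dims

# toy usage: prune channel 1 of a 3-channel output
z = np.random.rand(3, 4, 4)
g = np.array([1.0, 0.0, 1.0])
assert np.allclose(apply_pruning_gate(z, g)[1], 0.0)
\end{verbatim}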
\paragraph{Minimum weight.} Pruning by magnitude of kernel weights is perhaps the simplest possible criterion, and it does not require any additional computation during the fine-tuning process. In case of pruning according to the norm of a set of weights, the criterion is evaluated as: $\Theta_{MW} : \mathbb{R}^{C_{\ell-1} \times p \times p} \to \mathbb{R}$, with $\Theta_{MW}(\w) = \frac{1}{|\w|} \sum_i w_i^2$, where $|\w|$ is dimensionality of the set of weights after vectorization. The motivation to apply this type of pruning is that a convolutional kernel with low $\ell_2$ norm detects less important features than those with a high norm. This can be aided during training by applying $\ell_1$ or $\ell_2$ regularization, which will push unimportant kernels to have smaller values. \paragraph{Activation.} One of the reasons for the popularity of the ReLU activation is the sparsity in activation that is induced, allowing convolutional layers to act as feature detectors. Therefore it is reasonable to assume that if an activation value (an output feature map) is small then this feature detector is not important for prediction task at hand. We may evaluate this by mean activation, $\Theta_{MA}: \mathbb{R}^{H_l \times W_\ell \times C_\ell} \to \mathbb{R}$, with $\Theta_{MA}(\ba) = \frac{1}{|\ba|}\sum_i a_i$ for activation $\ba=\z_l^{(k)}$, or by the standard deviation of the activation, $\Theta_{MA\_std}(\ba) = \sqrt{\frac{1}{|\ba|}\sum_i (a_i - \mu_\ba)^2}$. \paragraph{Mutual information.} Mutual information (MI) is a measure of how much information is present in one variable about another variable. %It captures linear and non-linear correlations between two variables. We apply MI as a criterion for pruning, $\Theta_{MI} : \mathbb{R}^{H_l\times W_\ell \times C_\ell} \to \mathbb{R}$, with $\Theta_{MI}(\ba) = MI(\ba, y)$, where $y$ is the target of neural network. MI is defined for continuous variables, so to simplify computation, we exchange it with information gain (IG), which is defined for quantized variables $IG(y|x) = H(x) + H(y) - H(x,y)$, where $H(x)$ is the entropy of variable $x$. We accumulate statistics on activations and ground truth for a number of updates, then quantize the values and compute IG. \paragraph{Taylor expansion.} We phrase pruning as an optimization problem, trying to find $\mathcal{W'}$ with bounded number of non-zero elements that minimize $\big| \Delta C(h_i) \big| = |\C(\D|\mathcal{W'}) - \C(\D| \W)|$. With this approach based on the Taylor expansion, we directly approximate change in the loss function from removing a particular parameter. Let $h_i$ be the output produced from parameter $i$. In the case of feature maps, $h = \{ z_0^{(1)}, z_0^{(2)}, ..., z_L^{(C_\ell)}\}$. For notational convenience, we consider the cost function equally dependent on parameters and outputs computed from parameters: $\C(\D| h_i) = \C(\D| (\w,b)_i)$. Assuming independence of parameters, we have: \begin{equation} \big| \Delta \C(h_i) \big| = \big|\C(\D, h_i = 0) - \C(\D,h_i)\big|, \label{eq:delta} \end{equation} where $\C(\D,h_i=0)$ is a cost value if output $h_i$ is pruned, while $\C(\D,h_i)$ is the cost if it is not pruned. While parameters are in reality inter-dependent, we already make an independence assumption at each gradient step during training. To approximate $\Delta \C(h_i)$, we use the first-degree Taylor polynomial. 
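Before completing the Taylor-based derivation below, note that the simpler criteria above can be written down directly; this NumPy sketch is illustrative only and follows the definitions of $\Theta_{MW}$, $\Theta_{MA}$, and $\Theta_{MA\_std}$.
\begin{verbatim}
import numpy as np

def minimum_weight(kernels):
    """Theta_MW: mean squared kernel weight for one feature map's kernels (C_in, p, p)."""
    w = kernels.ravel()
    return float(np.mean(w ** 2))

def mean_activation(act):
    """Theta_MA: mean value of one feature map's activation."""
    return float(np.mean(act))

def activation_std(act):
    """Theta_MA_std: standard deviation of one feature map's activation."""
    return float(np.std(act))

# toy usage on random data standing in for kernels and a ReLU activation map
print(minimum_weight(np.random.randn(3, 3, 3)))
print(mean_activation(np.abs(np.random.randn(16, 16))))
print(activation_std(np.abs(np.random.randn(16, 16))))
\end{verbatim}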
For a function $f(x)$, the Taylor expansion at point $x=a$ is \begin{equation} f(x) = \sum_{p=0}^P \frac{f^{(p)}(a)}{p!}(x - a)^p + R_p(x), \end{equation} where $f^{(p)}(a)$ is the $p$-th derivative of $f$ evaluated at point $a$, and $R_p(x)$ is the $p$-th order remainder. Approximating $\C(\D, h_i = 0)$ with a first-order Taylor polynomial near $h_i = 0$, we have: \begin{equation} \C(\D, h_i = 0) \enspace=\enspace \C(\D, h_i) - \frac{\delta \C}{\delta h_i}h_i + R_1(h_i = 0). \label{eq:approx} \end{equation} The remainder $R_1(h_i=0)$ can be calculated through the Lagrange form: \begin{equation} R_1(h_i = 0) = \frac{\delta^2 \C}{\delta (h_i^2 = \xi)}\frac{h_i^2}{2}, \end{equation} where $\xi$ is a real number between $0$ and $h_i$. However, we neglect this first-order remainder, largely due to the significant calculation required, but also in part because the widely-used ReLU activation function encourages a smaller second order term. Finally, by substituting Eq.~(\ref{eq:approx}) into Eq.~(\ref{eq:delta}) and ignoring the remainder, we have $\Theta_{TE} : \mathbb{R}^{H_l\times W_l \times C_l} \to \mathbb{R}^+$, with \begin{align} \Theta_{TE}(h_i) = \big| \Delta \C(h_i) \big| &= \big|\C(\D, h_i) - \frac{\delta \C}{\delta h_i}h_i - \C(\D,h_i)\big| = \bigg|\frac{\delta \C}{\delta h_i}h_i\bigg|. \label{eq:thetach} \end{align} Intuitively, this criterion prunes parameters that have an almost flat gradient of the cost function w.r.t. feature map $h_i$. This approach requires accumulation of the product of the activation and the gradient of the cost function w.r.t. to the activation, which is easily computed from the same computations for back-propagation. $\Theta_{TE}$ is computed for a multi-variate output, such as a feature map, by \begin{equation} \Theta_{TE}(z_l^{(k)}) = \bigg|\frac{1}{M} \sum_m \frac{\delta C}{\delta z_{l,m}^{(k)}}z_{l,m}^{(k)}\bigg|, \end{equation} where $M$ is length of vectorized feature map. For a minibatch with $T>1$ examples, the criterion is computed for each example separately and averaged over $T$. Independently of our work, \cite{figurnov2016perforatedcnns} came up with similar metric based on the Taylor expansion, called \textit{impact}, to evaluate importance of spatial cells in a convolutional layer. It shows that the same metric can be applied to evaluate importance of different groups of parameters. \paragraph{Relation to Optimal Brain Damage.} The Taylor criterion proposed above relies on approximating the change in loss caused by removing a feature map. The core idea is the same as in Optimal Brain Damage (OBD)~\citep{lecun1990optimal}. Here we consider the differences more carefully. The primary difference is the treatment of the first-order term of the Taylor expansion, in our notation $y=\frac{\delta \C}{\delta h}h$ for cost function $\C$ and hidden layer activation $h$. After sufficient training epochs, the gradient term tends to zero: $\frac{\delta \C}{\delta h} \to 0$ and $\mathbb{E}(y) = 0$. At face value $y$ offers little useful information, hence OBD regards the term as zero and focuses on the second-order term. However, the \emph{variance} of $y$ is non-zero and correlates with the stability of the local function w.r.t. activation $h$. By considering the absolute change in the cost\footnote{OBD approximates the signed difference in loss, while our method approximates absolute difference in loss. 
We find in our results that pruning based on absolute difference yields better accuracy.} induced by pruning (as in Eq.~\ref{eq:delta}), we use the absolute value of the first-order term, $|y|$. Under the assumption that samples are drawn independently from an identical distribution, $\mathbb{E}(|y|)=\sigma\sqrt{2}/\sqrt{\pi}$, where $\sigma$ is the standard deviation of $y$; this is the expected value of the half-normal distribution. So, while $y$ tends to zero, the expectation of $|y|$ is proportional to the standard deviation of $y$, a value which is empirically more informative as a pruning criterion. As an additional benefit, we avoid the computation of the second-order Taylor expansion term, or its simplification, the diagonal of the Hessian, as required in OBD. We found it important to compare the proposed Taylor criterion to OBD. As described in the original papers \citep{lecun1990optimal, LeCun1998}, OBD can be implemented efficiently, similarly to the standard back-propagation algorithm, doubling backward-propagation time and memory usage when used together with standard fine-tuning. An efficient implementation of the original OBD algorithm might require significant changes to automatic-differentiation frameworks such as Theano in order to compute only the diagonal of the Hessian instead of the full matrix. Several researchers have tried to tackle this problem with approximation techniques~\citep{martens2010deep, martens2012estimating}. In our implementation, we use an efficient way of computing the Hessian-vector product~\citep{Pearlmutter94fastexact} and the matrix diagonal approximation proposed by \cite{bekas2007estimator}; please refer to the appendix for details. With our current implementation, OBD is 30 times slower than the Taylor technique for saliency estimation and 3 times slower for iterative pruning; however, with a different implementation it could be only 50\% slower, as mentioned in the original paper. \paragraph{Average Percentage of Zeros (APoZ).} \cite{hu2016network} proposed to explore sparsity in activations for network pruning. The ReLU activation function imposes sparsity during inference, and the average percentage of positive activations at the output can indicate the importance of a neuron. Intuitively, this is a reasonable criterion; however, feature maps in the first layers have similar APoZ regardless of the network's target task, as they learn to be Gabor-like filters. We use APoZ to estimate the saliency of feature maps. \subsection{Normalization} Some criteria return ``raw'' values, whose scale varies with the depth of the parameter's layer in the network. A simple layer-wise $\ell_2$-normalization can achieve adequate rescaling across layers: \[ \hat{\Theta}(\z_l^{(k)})\!=\!\frac{\Theta(\z_l^{(k)})}{\sqrt{\sum_j \big(\Theta(\z_l^{(j)})\big)^2}}. \] \subsection{FLOPs regularized pruning} One of the main reasons to apply pruning is to reduce the number of operations in the network. Feature maps from different layers require different amounts of computation due to the number and sizes of input feature maps and convolution kernels. To take this into account we introduce FLOPs regularization: \begin{equation} \Theta(\z_l^{(k)}) = \Theta(\z_l^{(k)}) - \lambda \Theta^{flops}_l, \end{equation} where $\lambda$ controls the amount of regularization. For our experiments, we use $\lambda = 10^{-3}$. $\Theta^{flops}$ is computed under the assumption that convolution is implemented as a sliding window (see Appendix). Other regularization conditions may be applied, e.g. storage size, kernel sizes, or memory footprint.
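To make the saliency pipeline of this section concrete, the following PyTorch sketch combines the Taylor criterion, the layer-wise $\ell_2$ rescaling, and the FLOPs adjustment defined above. It is an illustrative re-expression of the formulas only (the experiments reported later use Theano), and the per-layer FLOPs value is an arbitrary example.
\begin{verbatim}
import torch
import torch.nn as nn

def taylor_criterion(act, grad):
    """|mean over spatial positions of (dC/dz * z)| per feature map,
    averaged over the T examples of a minibatch.
    act, grad: tensors of shape (T, C, H, W); returns C saliency values."""
    per_example = (act * grad).mean(dim=(2, 3)).abs()  # abs taken per example
    return per_example.mean(dim=0)

def l2_normalize_layer(theta):
    """Layer-wise rescaling: theta_k / sqrt(sum_j theta_j^2)."""
    return theta / theta.pow(2).sum().sqrt()

def flops_regularize(theta_hat, layer_flops, lam=1e-3):
    """Subtract lambda * Theta^flops so costlier layers are pruned earlier."""
    return theta_hat - lam * layer_flops

# toy usage with one convolutional layer and a stand-in cost
conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)
x = torch.randn(4, 3, 16, 16)
z = conv(x).clamp(min=0)  # ReLU feature maps
z.retain_grad()           # keep dC/dz for the criterion
cost = (z ** 2).mean()    # stand-in for the training cost C
cost.backward()
theta = taylor_criterion(z.detach(), z.grad)
print(flops_regularize(l2_normalize_layer(theta), layer_flops=3.1))
\end{verbatim}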
\begin{abstract} We propose a new formulation for pruning convolutional kernels in neural networks to enable efficient inference. We interleave greedy criteria-based pruning with fine-tuning by backpropagation---a computationally efficient procedure that maintains good generalization in the pruned network. We propose a new criterion based on Taylor expansion that approximates the change in the cost function induced by pruning network parameters. We focus on transfer learning, where large pretrained networks are adapted to specialized tasks. The proposed criterion demonstrates superior performance compared to other criteria, e.g. the norm of kernel weights or feature map activation, for pruning large CNNs after adaptation to fine-grained classification tasks (Birds-200 and Flowers-102), relying only on first-order gradient information. We also show that pruning can lead to more than $10\times$ theoretical reduction in adapted 3D-convolutional filters with a small drop in accuracy in a recurrent gesture classifier. Finally, we show results for the large-scale ImageNet dataset to emphasize the flexibility of our approach. \end{abstract} Neural network pruning was pioneered in the early development of neural networks~\citep{reed1993pruning}. Optimal Brain Damage~\citep{lecun1990optimal} and Optimal Brain Surgeon~\citep{hassibi1993second} leverage a second-order Taylor expansion to select parameters for deletion, using pruning as regularization to improve training and generalization. This method requires computation of the Hessian matrix partially or completely, which adds memory and computation costs to standard fine-tuning. In line with our work, \cite{anwar2015structured} describe structured pruning in convolutional layers at the level of feature maps and kernels, as well as strided sparsity to prune with regularity within kernels. Pruning is accomplished by particle filtering wherein configurations are weighted by misclassification rate. The method demonstrates good results on small CNNs, but larger CNNs are not addressed. \cite{han2015learning} introduce a simpler approach by fine-tuning with a strong $\ell_2$ regularization term and dropping parameters with values below a predefined threshold. Such unstructured pruning is very effective for network compression, and this approach demonstrates good performance for intra-kernel pruning. However, compression may not translate directly into faster inference, since modern hardware exploits regularities in computation for high throughput, so specialized hardware may be needed for efficient inference of a network with intra-kernel sparsity~\citep{HanLMPPHD16}. This approach also requires long fine-tuning times that may exceed the original network training by a factor of $3$ or larger. Group-sparsity-based regularization of network parameters was proposed to penalize unimportant parameters~\citep{wen2016learning, Zhou2016, joseproximity, lebedev2016fast}. Regularization-based pruning techniques require per-layer sensitivity analysis, which adds extra computation. In contrast, our approach relies on global rescaling of criteria for all layers and does not require sensitivity estimation. Moreover, our approach is faster as we directly prune unimportant parameters instead of waiting for their values to be made sufficiently small by optimization under regularization.
Other approaches include combining parameters with correlated weights \citep{BMVC2015_31}, reducing precision~\citep{gupta2015deep,BinaryECCV2016}, or tensor decomposition~\citep{kim2015compression}. These approaches usually require a separate training procedure or significant fine-tuning, but potentially may be combined with our method for additional speedups. \section{Results} We empirically study the pruning criteria and procedure detailed in the previous section for a variety of problems. We focus many experiments on transfer learning problems, a setting where pruning seems to excel. We also present results for pruning large networks on their original tasks for more direct comparison with the existing pruning literature. Experiments are performed within Theano~\citep{TheanoAll}. Training and pruning are performed on the respective training sets for each problem, while results are reported on appropriate holdout sets, unless otherwise indicated. For all experiments we prune \textit{a single feature map} at every pruning iteration, allowing fine-tuning and re-evaluation of the criterion to account for dependency between parameters. \subsection{Characterizing the oracle ranking} We begin by explicitly computing the oracle for a single pruning iteration of a visual transfer learning problem. We fine-tune the VGG-16 network~\citep{simonyan2014very} for classification of bird species using the Caltech-UCSD Birds 200-2011 dataset \citep{wah2011caltech}. The dataset consists of nearly $6000$ training images and $5700$ test images, covering $200$ species. We fine-tune VGG-16 for $60$ epochs with learning rate $0.0001$ to achieve a test accuracy of $72.2\%$ using uncropped images. To compute the oracle, we evaluate the change in loss caused by removing each individual feature map from the fine-tuned VGG-16 network. (See Appendix~\ref{sec:oracle_vgg16} for additional analysis.) We rank feature maps by their contributions to the loss, where rank $1$ indicates the most important feature map---removing it results in the highest increase in loss---and rank $4224$ indicates the least important. Statistics of global ranks are shown in Fig.~\ref{fig:oracle_stat} grouped by convolutional layer. We observe: (1) Median global importance tends to decrease with depth. (2) Layers with max-pooling tend to be more important than those without. (VGG-16 has pooling after layers $2$, $4$, $7$, $10$, and $13$.) However, (3) maximum and minimum ranks show that every layer has some feature maps that are globally important and others that are globally less important. Taken together with the results of subsequent experiments, we opt for encouraging a balanced pruning that distributes selection across all layers. Next, we iteratively prune the network using the pre-computed oracle ranking. In this experiment, we do not update the parameters of the network or the oracle ranking between iterations. Training accuracy is illustrated in Fig.~\ref{fig:oracle} over many pruning iterations. Surprisingly, pruning by smallest absolute change in loss (Oracle-abs) yields higher accuracy than pruning by the net effect on loss (Oracle-loss). Even though the oracle indicates that removing some feature maps individually may decrease loss, instability accumulates due to the large absolute changes that are induced. These results support pruning by \emph{absolute} difference in cost, as constructed in Eq.~\ref{eq:optimization}.
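The oracle ranking described above can be expressed as a short loop; the sketch below is illustrative only, and the ablation and loss-evaluation callables are hypothetical placeholders, since the exact bookkeeping depends on the framework.
\begin{verbatim}
def oracle_abs_ranking(feature_map_keys, ablate_fn, restore_fn, loss_fn, base_loss):
    """Rank feature maps by |C(D|W') - C(D|W)| (oracle-abs).

    feature_map_keys: iterable of (layer, map_index) identifiers
    ablate_fn / restore_fn: temporarily remove / restore one feature map
    loss_fn(): training loss of the current network
    base_loss: loss of the unpruned network, C(D|W)
    """
    scores = {}
    for key in feature_map_keys:
        ablate_fn(key)
        scores[key] = abs(loss_fn() - base_loss)  # oracle-abs saliency
        restore_fn(key)
    # rank 1 = most important: largest increase in loss when removed
    return sorted(scores, key=scores.get, reverse=True)
\end{verbatim}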
\subsection{Evaluating proposed criteria versus the oracle} To evaluate computationally efficient criteria as substitutes for the oracle, we compute Spearman's rank correlation, an estimate of how well two predictors provide monotonically related outputs, even if their relationship is not linear. Given the difference between oracle\footnote{We use oracle-abs because of its better performance in the previous experiment.} and criterion ranks $d_i = rank(\Theta_{oracle}(i)) - rank(\Theta_{criterion}(i))$ for each parameter $i$, the rank correlation is computed as: \begin{equation} \mathcal{S} = 1 - \frac{6}{N(N^2 - 1)}\sum_{i=1}^N {d_i}^2, \end{equation} where $N$ is the number of parameters (and the highest rank). This correlation coefficient takes values in $[-1,1]$, where $-1$ implies full negative correlation, $0$ no correlation, and $1$ full positive correlation. \input{spearman_corr_vgg_plus_alexnet_obd.tex} We show Spearman's correlation in Table~\ref{tab:spearman_all} to compare the oracle-abs ranking to rankings by different criteria on a set of networks and datasets, some of which are introduced later. Data-dependent criteria (all except weight magnitude) are computed on training data during the fine-tuning before or between pruning iterations. As a sanity check, we evaluate random ranking and observe $0.0$ correlation across all layers. ``Per layer'' analysis shows ranking \emph{within} each convolutional layer, while ``All layers'' describes ranking across layers. While several criteria do not scale well across layers with raw values, a layer-wise $\ell_2$-normalization significantly improves performance. The Taylor criterion has the highest correlation among the criteria, both within layers and across layers (with $\ell_2$ normalization). OBD shows the best correlation across layers when no normalization is used; it also shows the best correlation on the ImageNet dataset. (See Appendix \ref{sec:appendix_normalization} for further analysis.) \subsection{Pruning fine-tuned ImageNet networks} We now evaluate the full iterative pruning procedure on two transfer learning problems. We focus on reducing the number of convolutional feature maps and the total estimated floating point operations (FLOPs). Fine-grained recognition is difficult for relatively small datasets without relying on transfer learning. \cite{branson2014bird} show that training a CNN from scratch on the Birds-200 dataset achieves a test accuracy of only $10.9\%$. We compare results to training a randomly initialized CNN with half the number of parameters per layer, denoted ``from scratch''. Fig.~\ref{fig:pruning_ft30} shows pruning of VGG-16 after fine-tuning on the Birds-$200$ dataset (as described previously). At each pruning iteration, we remove a single feature map and then perform $30$ minibatch SGD updates with batch-size $32$, momentum $0.9$, learning rate $10^{-4}$, and weight decay $10^{-4}$. The figure depicts accuracy relative to the pruning rate (left) and estimated GFLOPs (right). The Taylor criterion shows the highest accuracy for nearly the entire range of pruning ratios, and with FLOPs regularization demonstrates the best performance relative to the number of operations. OBD performs slightly worse in terms of the number of parameters pruned and significantly worse in terms of FLOPs.
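For reference, the rank-correlation statistic defined earlier in this section can be computed directly from two score vectors; the sketch below uses toy values and the tie-free form of the formula.
\begin{verbatim}
import numpy as np

def spearman(oracle_scores, criterion_scores):
    """S = 1 - 6 * sum(d_i^2) / (N * (N^2 - 1)), d_i = rank difference per parameter."""
    oracle_rank = np.argsort(np.argsort(oracle_scores))
    criterion_rank = np.argsort(np.argsort(criterion_scores))
    d = (oracle_rank - criterion_rank).astype(float)
    n = len(oracle_scores)
    return 1.0 - 6.0 * np.sum(d ** 2) / (n * (n ** 2 - 1))

# toy usage: identically ordered scores give S = 1, reversed scores give S = -1
print(spearman(np.array([0.1, 0.5, 0.9]), np.array([1.0, 2.0, 3.0])))   # 1.0
print(spearman(np.array([0.1, 0.5, 0.9]), np.array([3.0, 2.0, 1.0])))   # -1.0
\end{verbatim}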
In Fig.~\ref{fig:pruning_flowers_all_all}, we show pruning of the CaffeNet implementation of AlexNet \citep{krizhevsky2012imagenet} after adapting to the Oxford Flowers $102$ dataset \citep{Nilsback08}, with $2040$ training and $6129$ test images from $102$ species of flowers. Criteria correlation with oracle-abs is summarized in Table~\ref{tab:spearman_all}. We initially fine-tune the network for $20$ epochs using a learning rate of $0.001$, achieving a final test accuracy of $80.1\%$. Pruning then proceeds as previously described for Birds-$200$, except with only $10$ mini-batch updates between pruning iterations. We observe the superior performance of the Taylor and OBD criteria in terms of both the number of parameters and GFLOPs: the Taylor criterion performs best, closely followed by OBD, which has a slightly lower Spearman's rank correlation coefficient. Implementing OBD takes more effort because it requires computing the diagonal of the Hessian, and it is 50\% to 300\% slower than the Taylor criterion, which relies on first-order gradients only. Fig.~\ref{fig:pruning_flowers_zol2} shows pruning with the Taylor technique and a varying number of fine-tuning updates between pruning iterations. Increasing the number of updates results in higher accuracy, but at the cost of additional runtime of the pruning procedure. During pruning we observe a small drop in accuracy; one reason is the limited amount of fine-tuning between pruning iterations. The accuracy of the initial network can also be improved with longer fine-tuning and a search for better optimization parameters: for example, the accuracy of the unpruned VGG-16 network on Birds-200 rises to $75\%$ after an extra 128k updates, and AlexNet on Flowers-102 rises to $82.9\%$ after 130k updates. It should be noted that further fine-tuning of the pruned networks would also yield higher accuracy, so the one-to-one comparison of accuracies is only approximate. \subsection{Pruning a recurrent 3D-CNN network for hand gesture recognition} \cite{Molchanov_2016_CVPR} learn to recognize $25$ dynamic hand gestures in streaming video with a large recurrent neural network. The network is constructed by adding recurrent connections to a 3D-CNN pretrained on the Sports-1M video dataset \citep{karpathy2014large} and fine-tuning on a gesture dataset. The full network achieves an accuracy of $80.7\%$ when trained on the depth modality, but a single inference requires an estimated $37.8$ GFLOPs, too much for deployment on an embedded GPU. After several iterations of pruning with the Taylor criterion with learning rate $0.0003$, momentum $0.9$, and FLOPs regularization $10^{-3}$, we reduce inference to $3.0$ GFLOPs, as shown in Fig.~\ref{fig:pruning_nvg_depth}. While pruning increases classification error by nearly $6\%$, additional fine-tuning restores much of the lost accuracy, yielding a final pruned network with a $12.6\times$ reduction in GFLOPs and only a $2.5\%$ loss in accuracy. \subsection{Pruning networks for ImageNet} \begin{wrapfigure}{R}{0.5\textwidth} \centering \captionsetup{justification=centering} \includegraphics[width=0.99\linewidth]{imagenet_vgg16_flops_vs_test_error} \label{fig:pruning_vggonimagenet} \caption{Pruning of the VGG-16 network on ImageNet, with additional fine-tuning after pruning to $11.5$ and $8$ GFLOPs.} \end{wrapfigure} We also test our pruning scheme on the large-scale ImageNet classification task. In the first experiment, we begin with a trained CaffeNet implementation of AlexNet with $79.2\%$ top-5 validation accuracy.
Between pruning iterations, we fine-tune with learning rate $10^{-4}$, momentum $0.9$, weight decay $10^{-4}$, batch size $32$, and drop-out $50\%$. Using a subset of $5000$ training images, we compute oracle-abs and Spearman's rank correlation with the criteria, as shown in Table~\ref{tab:spearman_all}. Pruning traces are illustrated in Fig.~\ref{fig:pruning_alexnetonimagenet}. We observe: 1) Taylor performs better than random or minimum weight pruning when $100$ updates are used between pruning iterations. When results are displayed w.r.t. FLOPs, the difference with random pruning is only $0\%\!-\!4\%$, but the difference is higher, $1\%\!-\!10\%$, when plotted against the number of feature maps pruned. 2) Increasing the number of updates from $100$ to $1000$ significantly improves pruning performance for both the Taylor criterion and random pruning. For a second experiment, we prune a trained VGG-16 network with the same parameters as before, except that FLOPs regularization is enabled. We stop pruning at two points, $11.5$ and $8.0$ GFLOPs, and fine-tune both models for an additional five epochs with learning rate $10^{-4}$. %$\lambda = 0.0001$. Fine-tuning after pruning significantly improves results: the network pruned to $11.5$ GFLOPs improves from $83\%$ to $87\%$ top-5 validation accuracy, and the network pruned to $8.0$ GFLOPs improves from $77.8\%$ to $84.5\%$. \subsection{Speed-up measurements} During pruning we measured the reduction in computation in terms of FLOPs, which is common practice \citep{han2015learning, Lavin15, lavin2015fast}. Reductions in FLOPs result in monotonically decreasing inference time because entire feature maps are removed from each layer. However, the time consumed by inference depends on the particular implementation of the convolution operator, the parallelization algorithm, hardware, scheduling, memory transfer rate, etc. Therefore we measure the improvement in inference time for selected networks, compared to their unpruned counterparts, in Table~\ref{tab:speed_up}. We observe significant speed-ups with the proposed pruning scheme. \input{speedup_measurements_pytorch.tex} \section{Appendix} \label{sec:appendix} \subsection{FLOPs computation} \label{sec:appendix_flops} To compute the number of floating-point operations (FLOPs), we assume convolution is implemented as a sliding window and that the nonlinearity function is computed for free. For convolutional kernels we have: \begin{equation} \mbox{FLOPs} = 2HW(C_{in}K^2+1)C_{out}, \end{equation} where $H$, $W$ and $C_{in}$ are height, width and number of channels of the input feature map, $K$ is the kernel width (assumed to be symmetric), and $C_{out}$ is the number of output channels. For fully connected layers we compute FLOPs as: \begin{equation} \mbox{FLOPs} = (2I - 1)O, \end{equation} where $I$ is the input dimensionality and $O$ is the output dimensionality. We apply FLOPs regularization during pruning to prune neurons with higher FLOPs first. FLOPs per convolutional neuron in every layer: \begin{align*} \mbox{VGG16:}\ \Theta^{flops}&=[3.1, 57.8, 14.1, 28.9, 7.0, 14.5, 14.5, 3.5, 7.2, 7.2, 1.8, 1.8, 1.8, 1.8] \\ \mbox{AlexNet:}\ \Theta^{flops}&=[2.3, 1.7, 0.8, 0.6, 0.6] \\ \mbox{R3DCNN:}\ \Theta^{flops}&=[5.6, 86.9, 21.7, 43.4, 5.4, 10.8, 1.4, 1.4] \end{align*} \subsection{Normalization across layers} \label{sec:appendix_normalization} Scaling a criterion across layers is very important for pruning.
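Referring back to the FLOPs formulas above, a small helper (a sketch under the same sliding-window assumption, with an illustrative VGG-style first layer as input) makes the computation concrete; the discussion of normalization continues below.
\begin{verbatim}
def conv_flops(h, w, c_in, k, c_out):
    """FLOPs = 2 * H * W * (C_in * K^2 + 1) * C_out for a sliding-window convolution."""
    return 2 * h * w * (c_in * k ** 2 + 1) * c_out

def fc_flops(i, o):
    """FLOPs = (2 * I - 1) * O for a fully connected layer."""
    return (2 * i - 1) * o

# toy usage: a 3x3 convolution, 3 -> 64 channels, on a 224x224 input
print(conv_flops(224, 224, 3, 3, 64) / 1e9, "GFLOPs")  # roughly 0.18 GFLOPs
\end{verbatim}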
If the criterion is not properly scaled, then a hand-tuned multiplier would need to be selected for each layer. Statistics of feature map ranking by different criteria are shown in Fig.~\ref{fig:spearman_reg}. Without normalization (Fig.~\ref{fig:spfig2}--\ref{fig:spfig4}), the weight magnitude criterion tends to rank feature maps from the first layers more important than last layers; the activation criterion ranks middle layers more important; and Taylor ranks first layers higher. After $\ell_2$ normalization (Fig.~\ref{fig:spfig6}--\ref{fig:spfig8}), all criteria have a shape more similar to the oracle, where each layer has some feature maps which are highly important and others which are unimportant. \subsection{Oracle computation for VGG-16 on Birds-200} \label{sec:oracle_vgg16} We compute the change in the loss caused by removing individual feature maps from the VGG-16 network, after fine-tuning on the Birds-200 dataset. Results are illustrated in Fig.~\ref{fig:oracle_change1}-\ref{fig:oracle_change2} for each feature map in layers $1$ and $13$, respectively. To compute the oracle estimate for a feature map, we remove the feature map and compute the network prediction for each image in the training set using the central crop with no data augmentation or dropout. We draw the following conclusions: \begin{itemize} \item The contribution of feature maps range from positive (above the red line) to slightly negative (below the red line), implying the existence of some feature maps which decrease the training cost when removed. \item There are many feature maps with little contribution to the network output, indicated by almost zero change in loss when removed. \item Both layers contain a small number of feature maps which induce a significant increase in the loss when removed. \end{itemize} \input{spearman_corr_birds2.tex} Table~\ref{tab:spearman_birds2} contains a layer-by-layer listing of Spearman's rank correlation of several criteria with the ranking of oracle-abs. In this more detailed comparison, we see the Taylor criterion shows higher correlation for all individual layers. For several methods including Taylor, the worst correlations are observed for the middle of the network, layers $5$-$10$. We also evaluate several techniques for normalization of the raw criteria values for comparison across layers. The table shows the best performance is obtained by $\ell_2$ normalization, hence we select it for our method. \subsection{Comparison with weight regularization} \cite{han2015learning} find that fine-tuning with high $\ell_1$ or $\ell_2$ regularization causes unimportant connections to be suppressed. Connections with energy lower than some threshold can be removed on the assumption that they do not contribute much to subsequent layers. The same work also finds that thresholds must be set separately for each layer depending on its sensitivity to pruning. The procedure to evaluate sensitivity is time-consuming as it requires pruning layers independently during evaluation. The idea of pruning with high regularization can be extended to removing the kernels for an entire feature map if the $\ell_2$ norm of those kernels is below a predefined threshold. We compare our approach with this regularization-based pruning for the task of pruning the last convolutional layer of VGG-16 fine-tuned for Birds-200. By considering only a single layer, we avoid the need to compute layerwise sensitivity. 
Parameters for optimization during fine-tuning are the same as in the other experiments with the Birds-200 dataset. For the regularization technique, the pruning threshold is set to $\sigma = 10^{-5}$ while we vary the regularization coefficient $\gamma$ of the $\ell_2$ norm on each feature map kernel.\footnote{In our implementation, the regularization coefficient is multiplied by the learning rate, which is equal to $10^{-4}$.} We prune only kernel weights, while keeping the bias to maintain the same expected output. A comparison between pruning based on regularization and our greedy scheme is illustrated in Fig.~\ref{fig:comparisonwithregul}. We observe that our approach has higher test accuracy for the same number of remaining unpruned feature maps when pruning $85\%$ or more of the feature maps. We also observe that with high regularization all weights tend to zero, not only the unimportant ones, in contrast to what \cite{han2015learning} observe for ImageNet networks. The intuition is that regularization pushes all weights down and can therefore affect connections that are important for transfer learning, whereas our iterative procedure removes only unimportant parameters and leaves the others untouched. \subsection{Combination of criteria} One possibility for improving saliency estimation is to combine several criteria. A straightforward combination is the Taylor criterion and the mean activation of the neuron. We compute the joint criterion as $\Theta_{joint}(\mathbf{z}_l^{(k)}) = (1-\lambda)\hat{\Theta}_{Taylor}(\mathbf{z}_l^{(k)}) + \lambda\hat{\Theta}_{Activation}(\mathbf{z}_l^{(k)})$ and perform a grid search over the parameter $\lambda$ in Fig.~\ref{fig:criteria_combination_fig}. The highest correlation value for each dataset is marked with a vertical bar annotated with $\lambda$ and the gain. We observe that the gain from linearly combining criteria is negligibly small (see the $\Delta$'s in the figure). \subsection{Optimal Brain Damage implementation} OBD computes the saliency of a parameter as the product of its squared magnitude and the corresponding element on the diagonal of the Hessian. For many deep learning frameworks, an efficient implementation of the diagonal evaluation is not straightforward and approximation techniques must be applied. Our implementation of the Hessian diagonal computation was inspired by the work of~\cite{dauphin2015equilibrated}, where the technique proposed by \cite{bekas2007estimator} was used to evaluate SGD preconditioned with the Jacobi preconditioner. It was shown that the diagonal of the Hessian can be approximated as: \begin{equation} \mbox{diag}(\textbf{H})=\mathbb{E}[\textbf{v}\odot\textbf{Hv}]=\mathbb{E}[\textbf{v}\odot\nabla({\nabla{\C}\cdot\textbf{v}})], \end{equation} where $\odot$ is the element-wise product, $\textbf{v}$ are random vectors with entries $\pm1$, and $\nabla$ is the gradient operator. To compute saliency with OBD, we randomly draw \textbf{v} and compute the diagonal over 10 iterations per minibatch for 1000 minibatches. We found that this number of minibatches is required to compute a close approximation of the Hessian's diagonal, which we verified empirically. Computing saliency this way is computationally expensive for iterative pruning, and we use a slightly different but more efficient procedure. Before the first pruning iteration, saliency is initialized from values computed off-line with 1000 minibatches and 10 iterations, as described above.
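The stochastic estimator above can be sketched in a few lines of PyTorch; this is an illustrative re-expression only (not the Theano implementation used for the reported numbers), and the toy quadratic loss is chosen so that the exact diagonal is known. The per-minibatch running-average refinement used during pruning is described next.
\begin{verbatim}
import torch

def hessian_diagonal_estimate(loss, params, num_samples=10):
    """Estimate diag(H) via E[v * grad(grad(C) . v)] with Rademacher vectors v."""
    grads = torch.autograd.grad(loss, params, create_graph=True)
    estimates = [torch.zeros_like(p) for p in params]
    for _ in range(num_samples):
        vs = [(torch.randint(0, 2, p.shape) * 2 - 1).to(p.dtype) for p in params]
        gv = sum((g * v).sum() for g, v in zip(grads, vs))        # grad(C) . v
        hvs = torch.autograd.grad(gv, params, retain_graph=True)  # H v
        for est, v, hv in zip(estimates, vs, hvs):
            est += v * hv / num_samples                           # v * Hv, averaged
    return estimates

# toy usage: for C = sum(a * w^2) the exact Hessian diagonal is 2 * a
a = torch.tensor([1.0, 3.0, 5.0])
w = torch.nn.Parameter(torch.randn(3))
loss = (a * w ** 2).sum()
print(hessian_diagonal_estimate(loss, [w])[0])  # close to tensor([2., 6., 10.])
\end{verbatim}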
Then, at every minibatch we compute the OBD criteria with only one iteration and apply an exponential moving averaging with a coefficient of $0.99$. We verified that this computes a close approximation to the Hessian's diagonal. \subsection{Correlation of Taylor criterion with gradient and activation} The Taylor criterion is composed of both an activation term and a gradient term. In Figure \ref{fig:corr_gradient_activation}, we depict the correlation between the Taylor criterion and each constituent part. We consider expected absolute value of the gradient instead of the mean, because otherwise it tends to zero. The plots are computed from pruning criteria for an unpruned VGG network fine-tuned for the Birds-200 dataset. (Values are shown after layer-wise normalization). Figure \ref{fig:corr_gradient_activation}(a-b) depict the Taylor criterion in the y-axis for all neurons w.r.t. the gradient and activation components, respectively. The bottom $10\%$ of neurons (lowest Taylor criterion, most likely to be pruned) are depicted in red, while the top $10\%$ are shown in green. Considering all neurons, both gradient and activation components demonstrate a linear trend with the Taylor criterion. However, for the bottom $10\%$ of neurons, as shown in Figure \ref{fig:corr_gradient_activation}(c-d), the activation criterion shows much stronger correlation, with lower activations indicating lower Taylor scores. \section{Conclusions} We propose a new scheme for iteratively pruning deep convolutional neural networks. We find: 1) CNNs may be successfully pruned by iteratively removing the least important parameters---feature maps in this case---according to heuristic selection criteria; 2) a Taylor expansion-based criterion demonstrates significant improvement over other criteria; 3) per-layer normalization of the criterion is important to obtain global scaling.
Extrapolation and learning equations
1610.02995
Table 1: Numeric results on pendulum dataset. Reported are the mean and standard deviation of the root mean squares error (RMS) (√E, Eq. (1)) on different test sets for 10 random initializations.
[ "[EMPTY]", "interpolation", "extrapol. (near)", "extrapol. (far)" ]
[ [ "EQL", "0.0102±0.0000", "0.012±0.002", "0.016±0.007" ], [ "MLP", "0.0138±0.0002", "0.150±0.012", "0.364±0.036" ], [ "SVR", "0.0105", "0.041", "0.18" ] ]
Numeric results are reported in Tab. 1. As expected, all models are able to interpolate well, with a test error on the order of the noise level (σ=0.01). For extrapolation, however, the performance differs between the approaches. For the MLP the prediction quality decreases quickly when leaving the training domain. SVR remains a bit better in the near extrapolation but also fails catastrophically on the far extrapolation data. EQL, on the other hand, extrapolates well, both near and far away from the training domain.
\section{Introduction}\label{sec:intro} The quality of a model is typically measured by its ability to generalize from a training set to previously unseen data from the same distribution. In regression tasks generalization essentially boils down to interpolation if the training data is sufficiently dense. As long as models are selected correctly, \ie in a way to not overfit the data, the regression problem is well understood and can -- at least conceptually -- be considered solved. % However, when working with data from real-world devices, \eg controlling a robotic arm, interpolation might not be sufficient. It could happen that future data lies outside of the training domain, \eg when the arm is temporarily operated outside of its specifications. For the sake of robustness and safety it is desirable in such a case to have a regression model that continues to make good predictions, or at least does not fail catastrophically. This setting, which we call \emph{extrapolation generalization}, is the topic of the present paper. We are particularly interested in regression tasks for systems that can be described by real-valued analytic expression, \eg mechanical systems such as a pendulum or a robotic arm. These are typically governed by a highly nonlinear function but it is nevertheless possible, in principle, to infer their behavior on an extrapolation domain from their behavior elsewhere. We make two main contributions: 1) a new type of network that can learn analytical expressions and is able to extrapolate to unseen domains and 2) a model selection strategy tailored to the extrapolation setting. The following section describes the setting of regression and extrapolation. Afterwards we introduce our method and discuss the architecture, its training, and its relation to prior art. %\sec{sec:method}) We present our results in the Section \emph{Experimental evaluation} %\sec{sec:results} and close with conclusions. \subsection{Related work}% In the field of machine learning, regression is often treated as a black box process of identifying a suitable real-valued function from a hypothesis set, \eg a reproducing kernel Hilbert space for Gaussian Processes Regression (GPR)~\cite{williams2006gaussian} or Support Vector Regression (SVR)~\cite{smola2004tutorial}, or a multi-layer network of suitable expressive power~\cite{specht1991general}. The goal is to find a prediction function that leads to a small expected error on future data, not necessarily to gain insight into the mechanism of how the output values derive from the inputs. The goal of finding an interpretable function is rather common in the natural sciences, such as biology, where high noise levels and strong inter-system variability often make it important to rely on external prior knowledge, and finding a ``biologically plausible'' model is often preferable over finding one that makes the highest prediction accuracy. As a consequence, model classes are often highly constrained, \eg allowing only for sparse linear models. The task of learning a true, nonlinear, functional dependence from observing a physical system, has received little attention in the machine learning literature so far, but forms the basis of the field of \emph{system identification}. There, typically the functional form of the system is known and only the parameters have to be identified. Another approach is to model the time evolution with autoregressive models or higher order convolution integrals (Volterra series) but learning analytic formulas is not common. 
\emph{Causal learning} is an area of recent research that aims at identifying a causal relation between multiple observables, % phenomena, which are typically the result of a physical process. Classically, this tasks reduces to finding a minimal graphical model based only on tests of conditional independence~\cite{Pearl2000}. Although very successful in some fields, this classical approach only provides a factorization of the problem, separating causes and effects, but it leaves the exact functional dependency unexplained. %\Geo[]{actually they explain it in terms of the conditional probability.}. Recent extensions of causal learning can take a functional view, but typically do not constrain the regression functions to physically plausible ones, but rather constrain the noise distributions~\cite{PetersMJS2014}. % one expects to observe The topic of learning a regression function with emphasis on \emph{extrapolation} performance has not been studied much in the literature so far. Existing work on time series prediction deals with extrapolation in the temporal domain, \ie predict the next value(s)~\cite{wiener1949extrapolation}. By our nomenclature, this is typically rather an interpolation task, when the prediction is based on the behaviour of the series at earlier time steps but with similar value distribution~\cite{muller1997predicting,gyorfi2013nonparametric}. Extrapolating in the data domain implies that the data distribution at prediction time will differ from the data distribution at training time. This is traditionally called the \emph{domain adaptation} setting. In particular, since we assume a common labeling function, our setting would fall under the \emph{covariate shift} setting~\cite{quionero2009dataset}. Unfortunately, this connection is not particularly useful for our problem. As domain adaptation typically does not make additional assumptions about how the data distribution may change, existing methods need access to some unlabeled data from the test distribution already at training time~\cite{ben2010theory}. In %,ben2010impossibility our setting this is not possible to obtain. On the technical level, \method{} networks are an instance of general feed-forward networks for function approximation~\cite{bishop1995neural}. In contrast to recent trends towards \emph{deep learning}~\cite{bengio2009learning,bengio2013representation}, our goal is not to learn any data representation, but to learn a function which compactly represents the input-output relation and generalizes between different regions of the data space, like a physical formula. Structurally, \method{} networks resemble \emph{sum-product networks (SPNs)}~\cite{PoonDomingos2011:sum-product-networks} and \emph{Pi-Sigma networks (PSNs)}~\cite{ShinGhosh1991:pi-sigma}, in the sense that both are based on directed acyclic graphs with computational units that allows for summation and multiplication. Otherwise, SPNs are different as they act as efficient alternative to probabilistic graphical models for representing probability distributions, whereas \method{} networks are meant for the classical task of function approximation. In PSNs each output needs to be passed through multiplicative units, whereas in \method{} multiplication is optional. Finding equations for observations is also known as symbolic regression where a search is performed in a certain function space, typically done with evolutionary computation. 
With these techniques it is possible to discover physical laws such as invariants and conserved quantities~\cite{SchmidtLipson2009:learnnaturallaws}. Unfortunately, the computational complexity/search time explodes for larger expressions and high-dimensional problems. We attempt to circumvent this by modeling it as a gradient based optimization problem. Related to symbolic regression is finding mathematical identities for instance to find computationally more efficient expressions. In \cite{ZarembaFergus2014:LearnMathIdentities} this was done using machine learning to overcome the potentially exponential search space. \section{Experimental evaluation}\label{sec:results} We demonstrate the ability of \method{} to learn physically inspired models with good extrapolation quality by experiments on synthetic and real data. For this, we implemented the network training and evaluation procedure in \emph{python} based on the \emph{theano} framework~\cite{2016arXiv160502688short}. We will make the code for training and evaluation public after acceptance of the manuscript. %Todo \paragraph{Pendulum.} We first present the results of learning the equations of motion for a very simple physical system: a pendulum. The state space of a pendulum is $X=\Real\times\Real$ where the first value is the angle of the pole in radians and the second value is the angular velocity. In the physics literature, these are usually denoted as $(\theta,\omega)$, but for our purposes, we call them $(x_1,x_2)$ in order to keep the notation consistent between experiments. The pendulum's dynamic behavior is governed by the following two ordinary differential equations: \begin{equation} \dot x_1 = x_2 \qquad\qquad\text{and}\qquad\qquad \dot x_2 = -g \sin(x_1)\,,\label{eqn:pend} % y_2 := \end{equation} where $g=9.81$ is the gravitation constant. We divide each equation by $g$ in order to balance the output scales and form a regression problem with two output values, $y_1=\frac{1}{g}x_2$ and $y_2=-\sin(x_1)$. As training data, we sample 1000 points uniformly in the hypercube {\small $[-h,h] \times [-h,h]$} for $h=2$. Note that this domain contains more than half of a sine period, so it should be sufficient to identify the analytic expression. The target values are disturbed by Gaussian noise with standard derivation $\sigma=0.01$. We also define three test sets, each with 1000 points. The \emph{interpolation test set} is sampled from the same data distribution as the training set. The \emph{extrapolation (near) test set} contains data sampled uniformly from the data domain {\small $[-\frac32 h,\frac32 h] \times [-\frac32 h,\frac32 h]\setminus [-h,h] \times [-h,h]$}, which is relatively near the training region and the \emph{extrapolation (far) test set} extends the region to further outside: {\small $[-2h,2h] \times [-2h,2h]\setminus [-h,h] \times [-h,h]$}. We train a 2-layer \method{} and perform model selection among the hyper-parameters: the regularization strength {\small $\lambda\in10^{\{-7,-6.3,-6,-5.3,-5,-4.3,-4,-3.3,-3\}}$} and the number of nodes {\small $\frac 1 4 u=v\in\{1,3,5\}$}. All weights are randomly initialized from a normal distribution with {\small $\sigma = \sqrt{1/(k'+d)}$}. The unit selection $\typ{}$ is set such that all unit types are equally often. To ensure convergence we chose $T=10000$ epochs. 
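As a concrete illustration of this data-generation protocol, the following NumPy sketch reproduces the training and extrapolation sets described above; the rejection-sampling step for the ring-shaped extrapolation regions and all names are our own choices, not the authors' code.
\begin{verbatim}
import numpy as np

g, h, sigma, n = 9.81, 2.0, 0.01, 1000
rng = np.random.default_rng(0)

def sample_box(half_width, exclude=None, n=n):
    # Uniform samples in [-half_width, half_width]^2, rejecting the inner box
    # [-exclude, exclude]^2 when an exclusion half-width is given.
    xs = []
    while len(xs) < n:
        x = rng.uniform(-half_width, half_width, size=2)
        if exclude is None or np.max(np.abs(x)) > exclude:
            xs.append(x)
    return np.array(xs)

def targets(x):
    return np.stack([x[:, 1] / g, -np.sin(x[:, 0])], axis=1)   # y1 = x2/g, y2 = -sin(x1)

x_train = sample_box(h)
y_train = targets(x_train) + sigma * rng.standard_normal((n, 2))   # noisy targets
x_near = sample_box(1.5 * h, exclude=h)    # extrapolation (near) test inputs
x_far = sample_box(2.0 * h, exclude=h)     # extrapolation (far) test inputs
\end{verbatim}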
We compare our algorithm to a standard multilayer perceptron (MLP) with $\tanh$ activation functions and possible hyperparameters: $\lambda$ as for \method, number of layers {\small $L\in\{2,3\}$}, and number of neurons {\small $k\in\{5,10,20\}$}. A second baseline is given by epsilon support vector regression (SVR)~\cite{basak2007:SVR} with two hyperparameters {\small $C\in10^{\{-3,-2,-1,0,1,2,3,3.5\}}$} and {\small $\epsilon \in 10^{\{-3,-2,-1,0\}}$} using a radial basis function kernel with width {\small $\gamma\in \{0.05,0.1,0.2,0.5,1.0\}$}. Numeric results are reported in \tab{tab:pend:results}. As expected, all models are able to interpolate well, with a test error on the order of the noise level ($\sigma=0.01$). For extrapolation, however, the performance differs between the approaches. For the MLP the prediction quality decreases quickly when leaving the training domain. SVR remains a bit better in the near extrapolation but also fails catastrophically on the far extrapolation data. \method, on the other hand, extrapolates well, both near and far away from the training domain. The reasons can be seen in Figure~\ref{fig:pend}: while the MLP and SVR simply learn a function that interpolates the training values, \method{} finds the correct functional expression and therefore predicts the correct values for any input data. \paragraph{Double pendulum kinematics.} The second system we consider is a real double pendulum, for which the forward kinematics should be learned. For that we use recorded trajectories of a real double pendulum \cite{SchmidtLipson2009:learnnaturallaws}. The task here is to learn the position of the tips of the double pendulum segments from the given joint angles ($x_1,x_2$). These positions were not measured, so we compute them by the following formula: $y_1=\cos(x_1), y_2=\cos(x_1)+\cos(x_1+x_2), y_3=\sin(x_1), y_4=\sin(x_1)+\sin(x_1+x_2)$, where $(y_1,y_3)$ and $(y_2,y_4)$ correspond to the x-y-coordinates of the first and second end-point, respectively. The dataset contains two short trajectories. The first covers only part of the domain (input as well as output) and consists of 819 samples, of which 10\% were used as a validation set (randomly sampled), see \fig{fig:dpk}(a). The second trajectory corresponds to a behavior with several spins of both pendulum segments, such that a much larger domain is covered. Nevertheless, the angle values are confined to $[-\pi,\pi]$. We use this trajectory as the extrapolation test set. The trajectory and the outputs of our method are shown in \fig{fig:dpk}(b). The prediction for unseen domains is perfect, which is also illustrated in a systematic sweep, see \fig{fig:dpk}(c). The performance of the MLP is off already near the training domain. SVR is a bit better, but still does not give usable predictions for the test data; see also the root mean square error in \fig{fig:dpk}(d). Model selection is performed to determine $\lambda$ as above, $u=v\in\{3,5\}$ (MLP: $k\in\{5,10,20\}$), and the number of layers $L\in\{2,3\}$. \paragraph{Robotic arms.} A more complicated task is to learn the forward kinematics of multi-segment robotic arms. We consider planar arms with 3, 4, and 5 joints, where each segment is 0.5 units long. For training, the arm is controlled by sinusoidal joint target angles with amplitude in $[-\nicefrac{\pi}{2},\nicefrac{\pi}{2}]$, each joint with a different frequency. The numbers of data points are 3000, 6000, and 18000 for the 3, 4, and 5 segment arms, respectively, with added noise as above.
For testing extrapolation performance the amplitude $[-\pi,\pi]$ was used. Note that the extrapolation space is much larger than the training space. The task is to predict the coordinates of the end-effector of the arms (\emph{kin-3-end}, \emph{kin-4-end}) and the coordinates of all segment positions \emph{kin-5-all}. The numerical results, see \tab{tab:kin}, shows that our method is able to extrapolate in these cases. Model selection as above with $u=v\in\{10,20\}$, (MLP: $k\in\{10,50\}$) and layer number $L\in\{2,3,4\}$. To illustrate the dependence on the amount of noise and the number of available training points we provide a quantification in Appendix A2. In short, increasing noise can be compensated by increasing amount of data to keep the performance. % linear relationship? \paragraph{Learning complex formula.} In order to find out whether \method{} can also learn more complicated formulas, we consider three examples with four-dimensional input and one-dimensional output: \begin{align} y &= \nicefrac{1}{3} \left(\sin(\pi x_1) + \sin\left(2 \pi x_2 + \nicefrac{\pi}{8}\right)+x_2 - x_3 x_4 \right)&\text{F-1}\label{eqn:syn1}\\ y &= \nicefrac{1}{3} \left(\sin(\pi x_1) + x_2 \cos(2\pi x_1 + \nicefrac{\pi}{4}) + x_3-x_4^2\right) &\text{F-2}\label{eqn:syn2}\\ y &= \nicefrac{1}{3} \left( (1+x_2) \sin(\pi x_1) + x_2 x_3 x_4\right) &\text{F-3}\label{eqn:syn3} \end{align} The first equation requires only one hidden layer to be represented. The second equation and third equation should requires two hidden layers. In particular, F-2 contains a product of $x_2$ and $\cos$ and F-3 contains a product of three terms, and we use it to test if our restriction to only pairwise product units causes problems for more complex target functions. We follow the same procedure as in the pendulum case for building training and test sets, though with $h=1$ as input data range. We use 10000 points for training set and validation set (90\%-10\% split) and 5000 points for each of the test sets. Model selection for \method{} is performed as above using the number of layers {\small $L\in{2,3,4}$}. The number of units is set to $\frac{1}{4}u=v=10$. For the MLP, we select $L$ and $\lambda$ from the same set as above as well as {\small $k\in\{10,30\}$}. \Tab{tab:syn:results} shows the numerical results. Again, all methods are able to interpolate, but only \method{} achieves good extrapolation results, except for equation F-3. There it settles in 9 out of 10 cases into a local minimum and finds only an approximating equation that deviates outside the training domain. Interestingly, if we restrict the base functions to not contain cosine, the algorithm finds the right formula. Note, the sparsity of the correct formula is lower than those of the approximation, so it should be selected if found. Figure~\fig{fig:syn} illustrates the performance and the learned networks visually. It shows one of the model-selected instances for each case. For F-1 the correct formula was identified, so correct predictions can be made even far outside the training region (much further than illustrated). For F-2 the network provided us with a surprise, because it yields good extrapolation performance with only one hidden layer! How can it implement $x_2\cos(a x_1+b)$? Apparently it uses $1.21 \cos(a x_1 + \pi + b + 0.41 x_2) + \sin(a x_1 + b + 0.41 x_2)$ which is a good approximation for $x_2 \in [-2,2]$. The sparsity of this solution is $5$ whereas the true solution needs at least $6$, which explains its selection. 
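For reference, the three target functions F-1 to F-3 can be written out directly; the sketch below is a plain NumPy transcription of the equations above (only the function names are ours).
\begin{verbatim}
import numpy as np

def f1(x):  # x has shape (n, 4); columns are x_1, x_2, x_3, x_4
    return (np.sin(np.pi * x[:, 0]) + np.sin(2 * np.pi * x[:, 1] + np.pi / 8)
            + x[:, 1] - x[:, 2] * x[:, 3]) / 3

def f2(x):
    return (np.sin(np.pi * x[:, 0]) + x[:, 1] * np.cos(2 * np.pi * x[:, 0] + np.pi / 4)
            + x[:, 2] - x[:, 3] ** 2) / 3

def f3(x):
    return ((1 + x[:, 1]) * np.sin(np.pi * x[:, 0]) + x[:, 1] * x[:, 2] * x[:, 3]) / 3
\end{verbatim}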
For F-3 the suboptimal local minima uses some strange way of approximating $(1+x_2)\sin(x_1)$ using $(x_1 + x_1 x_2)\cos(\beta x_1)$, which deviates fast, however the true solution would be sparser but was not found. Only if we remove cosine from the base functions we get always the correct formula, see \fig{fig:syn}(c). \paragraph{X-Ray transition energies.} As a further example we consider data measured in atomic physics. When shooting electron beams onto atoms one can excite them and they consequently emit x-ray radiation with characteristic peak energies. For each element/isotope these energies are different as they correspond to the potential difference between the electron shells, such that one can identify elements in a probe this way. The data is taken from~\cite{Deslattes2003:XrayTransEnergies}, where we consider one specific transition, called the $K\,\alpha_2$ line, because it was measured for all elements. The true relationship between atomic number $Z$ and transition energies is complicated, as it involves many body interactions and no closed-form solution exists. Nevertheless we can find out which relationships our system proposes. It is known that the main relationship is $K\,\alpha_2 \propto Z^2$ according to Moseley's law. Further correction terms for elements with larger $Z$ are potentially of higher order. We have data for elements with $10\le Z \le 100$, which is split into training/validation sets in the range $[10,91]$ (70/10 data points) and extrapolation test set in the interval $[92,100]$ (14 data points because of isotops). Since we have so little data we evaluate the performance for 10 independent training/validation splits. The data is scaled to lie in $[0,1]$, \ie $x= Z/100$ and $y=K\alpha_2/100000$. Model selection is here based on validation error only. The selection for sparsity and validation error only yields the $Z^2$ relationship. Mini-batch size is 2 here and $T=50000$ was used. \Fig{fig:xray} presents the data, the predictions, the learned formulas and the numerical results. \method{} and SVR achieve similar performance and MLP is significantly worse. However, \method{} also yields interpretable formulas, see \fig{fig:xray}(e) that can be used to gain insights into the potential relationship. \subsection{Poor extrapolation out of model class --- cart-pendulum system} Let us now go beyond our assumptions and consider cases where the true target function is not an element of the hypothesis set. Consider a pendulum attached to a cart that can move horizontally along a rail but that is attached to a spring damper system, see \fig{fig:cp}(a). The system is parametrized by 4 unknowns: the position of the cart, the velocity of the cart, the angle of the pendulum and the angular velocity of the pendulum. We combine these into a four-dimensional vector $x=(x_1,\dots,x_4)$. We set up a regression problem with four outputs from the corresponding system of ordinary differential equations where $y_1 = \dot x_1 = x_3$, $y_2 = \dot x_2 = x_4$ and \begin{align} y_3&= \frac{-x_1-0.01 x_3+x_4^2 \sin\left(x_2\right)+0.1 x_4 \cos \left(x_2\right)+9.81 \sin \left(x_2\right) \cos \left(x_2\right)}{\sin ^2\left(x_2\right)+1}\label{eqn:cp}, \\ y_4&= \frac{-0.2 x_4 - 19.62 \sin \left(x_2\right) + x_1 \cos \left(x_2\right) + 0.01 x_3 \cos \left(x_2\right) - x_4^2 \sin \left(x_2\right)\cos \left(x_2\right)} {\sin^2\left(x_2\right)+1}.\nonumber \end{align} The formulas contain divisions which are not included in our architecture due to their singularities. 
To incorporate them in a principled manner is left for future work. Thus, the cart-pendulum dynamics is outside the hypothesis class. In this case we {\bf cannot} expect great extrapolation performance and this is confirmed by the experiments. In \fig{fig:cp}(b,c) the extrapolation performance is illustrated by slicing through the input space. The near extrapolation performance is still acceptable for both \method{} and MLP, but as soon as the training region is left further even the best instances differ considerably from the true values, see also the numeric results in \tab{tab:cp:results}. The SVR is performing poorly also for near extrapolation range. Inspecting the learned expressions we find that the sigmoid functions are rarely used. \section{Regression and extrapolation}\label{sec:setting} We consider a multivariate regression problem with a training set $\{(x_1,y_1),\dots,(x_N,y_N)\}$ with $x \in \Real^n$, $y\in \Real^m$. % sampled from a data distribution $p(x,y)$. Because our main interest lies on extrapolation in the context of learning the dynamics of physical systems we assume the data originates from an unknown analytical function (or system of functions), $\phi:\Real^n\to\Real^m$ with additive zero-mean noise, $\xi$, \ie $y=\phi(x)+\xi$ and $\mathbb{E}\xi=0$. The function $\phi$ may, for instance, reflect a system of ordinary differential equations that govern the movements of a robot arm or the like. The general task is to learn a function $\psi:\Real^n\to\Real^m$ that approximates the true functional relation as well as possible in the squared loss sense, \ie achieves minimal expected error $\mathbb{E}\|\psi(x) - \phi(x)\|^2$. In practice, we only have particular examples of the function values available and measure the quality of predicting in terms of the empirical error on training or test data $D$, %=[(x_i,y_i)]$, \begin{align} E(D)&=\frac{1}{N}\sum^{N}_{i=1}\|\psi(x_i) - y_i\|^2\,. \label{eqn:error} \end{align} If training and test data are sampled from the same distribution then we speak about an \emph{interpolation} problem. In the \emph{extrapolation} setting the training data is assumed to cover only a limited range of the data domain. In the example of the robot arm, for instance, the training may be restricted to a certain joint angle range or maximal velocity. For testing we want to make predictions about the unseen domains, \eg for higher velocities. As usual, we split the data that is available at training time into a part for model training %, $\D^\train$, and a part for validation or model selection. %, $\D^\val$. \section{Learning a network for function extrapolation}\label{sec:method}%Learning physical equations The main model we propose is a multi-layered feed-forward network with computational units specifically designed for the extrapolation regression tasks. For an $L$-layer network, there are $L-1$ hidden layers, each consisting of a linear mapping followed by non-linear transformations. For simplicity of notation, we explain the network as if each hidden layer had the same structure ($k'$ inputs, $k$ outputs). In practice, each layer can be designed independently of the others, of course, as long as input/output dimensions match. The linear mapping at level $l$ maps the $k'$-dimensional input $y^{\lm}$ to the $d$-dimensional intermediate representation $z$ given by \begin{align} z^\l &= W^\l y^\lm + b^\l, \end{align} where $y^\lm$ is the output of the previous layer, with the convention $y^{(0)}=x$. 
The weight matrix $W^\l\in \Real^{d \times k'}$ and the bias vector $b^\l\in\Real^{d}$ are free parameters that are learned during training. The non-linear transformation contains $u$ \emph{unary units}, $f_i:\Real\to\Real$, for $i=1,\dots,u$, and $v$ \emph{binary units}, $g_j:\Real\times\Real\to\Real$ for $j=1,\dots,v$. Their outputs are concatenated to form the layer output \begin{align} y^\l &:= \Big(f_1(z^\l_1),f_2(z^\l_2),\dots,f_{u}(z^\l_{u}),\nonumber\\ & \qquad g_{1}(z^\l_{u+1},z^\l_{u+2}),\dots,g_{v}(z^\l_{u+2v-1},z^\l_{u+2v}) \Big)\,. \end{align} In total, the nonlinear stage has $k = u + v$ outputs and $d = u + 2 v$ inputs. The unary units, $f_1,\dots,f_u$ receive the respective component, $z_1,\dots,z_u$ as inputs, and each unit may be one of the following base functions as specified in a fixed type parameter $\typ_i\in\{0,1,2,3\}$ \begin{align} f_i(z_i) &:= \begin{cases} z_i & \text{ if } \typ_i=0,\\ \sin(z_i) & \text{ if } \typ_i=1,\\ \cos(z_i) & \text{ if } \typ_i=2,\\ \sigm(z_i) & \text{ if } \typ_i=3, \end{cases}&\text{ for } i=1,\dots,u, \end{align} where $\sigm(z)=\frac{1}{1+e^{-z}}$ is the standard sigmoid function. The binary units, $g_1,\dots,g_v$ receive the remaining component, $z_{u+1},\dots,z_{u+2v}$, as input in pairs of two. They are \emph{multiplication units} that compute the product of their two input values: \begin{align} g_j(z_{u+2j-1}, z_{u+2j}) &:= z_{u+2j-1} \cdot z_{u+2j}&\text{ for }j=1,\dots,v. \end{align} Finally, the $L$-th and last layer computes the regression values by a linear read-out \begin{align} y^{\layer{L}} &:= W^{\layer{L}} y^{\layer{L-1}} + b^{\layer{L}}. \end{align} The architecture is depicted in \fig{fig:network}. We call the new architecture Equation Learner (\method{}) and denote the function it defines by $\psi$. \subsection{Discussion of the architecture} The proposed network architecture differs in two main aspects from typical feed-forward networks: the existence of multiplication units and the possibility of \emph{sine} and \emph{cosine} as nonlinearities for the unary units. Both design choices are motivated by our objective of learning a system of equations that govern a physical system and can extrapolate to new parts of the input space. \emph{Sigmoid} nonlinearities are the canonical choice of \emph{activation function} for \emph{artificial neural networks} (ANN) and proved to be successful. In fact, we include sigmoids in our architecture, making it a super class of ANNs. However, they were typically disabled by the training procedure corresponding to their absence in the considered physical equations. Other, predominantly local nonlinearities, in particular \emph{radial basis functions}~\cite{broomhead1988radial} we do not include, since one cannot expect them to extrapolate at all. Further nonlinearities, such as \emph{(square) roots} and \emph{logarithms}, could in principle be useful for learning physical equations, but they pose problems because their domains of definition is restricted to positive inputs. We leave the task of incorporating them in a principled way to future work. The ability to multiply two values is a second crucial component of our network architecture. Again, it is inspired by the typical form of physical equations, where multiplication of components is arguably second common basic operation after addition (which the linear layers can perform). 
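Putting the layer definition above together, a single hidden-layer forward pass can be sketched as follows. This is a NumPy illustration under our own naming (the published implementation is Theano-based); the unit types $I_i$ are passed in as a list, and $W$, $b$ are assumed given.
\begin{verbatim}
import numpy as np

def sigm(z):
    return 1.0 / (1.0 + np.exp(-z))

UNARY = [lambda z: z, np.sin, np.cos, sigm]   # unit types I_i = 0, 1, 2, 3

def eql_layer(y_prev, W, b, unit_types):
    # y_prev: (k',) input; W: (d, k'); b: (d,), with d = u + 2v.
    z = W @ y_prev + b                        # linear stage
    u = len(unit_types)
    f_out = np.array([UNARY[t](z[i]) for i, t in enumerate(unit_types)])
    g_out = z[u::2] * z[u + 1::2]             # v pairwise multiplication units
    return np.concatenate([f_out, g_out])     # k = u + v outputs
\end{verbatim}
Stacking $L-1$ such layers and finishing with the plain affine read-out $W^{\layer{L}} y^{\layer{L-1}} + b^{\layer{L}}$ gives the full $\psi$.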
Multiplication was introduced into neural networks long ago as product-units~\cite{DurbinRumelhart1989:ProductUnits} and Pi-Sigma-unit~\cite{ShinGhosh1991:pi-sigma}. The product-units have large fan-in that compute products over all their inputs, potentiated by the respective weights. The result is typically the behavior of a high order polynomial, which are powerful function approximators, but rarely occur in physical equations. Polynomials are also known to require careful fine-tuning in order not to overfit, which makes them a risky choice for the purpose of extrapolation. The Pi-Sigma units are multiplication units with a fixed number of factors and our multiplication units are a special for 2 factors. We find that multiplying just two values at a time is well adjusted to the task we aim at, as it allows to control the maximal degree of the learned polynomial by the depth of the network. Finally, each layer of the network contains unary units that act as \emph{identity} maps, which in particular gives the network the option to learn functions with smaller number of nonlinearities than the total network depths. \subsection{Network training} The \method{} is fully differentiable in its free parameters $\theta=\{W^{(1)},\dots,W^{(L)},b^{(1)},\dots,b^{(L)}\}$, which allows us to train it in an end-to-end fashion using back-propagation. We adopt a Lasso-like objective~\cite{tibshirani1996regression}, \begin{align} L(D)&=\frac{1}{N}\sum^{|D|}_{i=1}\|\psi(x_i) - y_i\|^2 + \lambda \sum_{l=1}^L\big|W^\l\big|_1\,,\label{eqn:loss} \end{align} that is, a linear combination of $L_2$ loss and $L_1$ regularization, and apply a stochastic gradient descent algorithm with mini-batches and Adam~\cite{KingmaBa2015:Adam} for calculating the updates: \begin{align} \theta_{t+1} &= \theta_{t} + \text{Adam}\left(\frac{\partial L(D_{(t)})}{\partial \theta}, \alpha\right), \end{align} where $D_{(t)}$ denotes the current mini-batch and $\alpha$ is the stepsize parameter. The choice of Adam is not critical and standard stochastic gradient descent also works. In all numerical experiments we use $\alpha=0.001$ and a mini-batch size of 20. The role of the $L_1$ regularization is to encourage networks with sparse connections, matching the intuition that a typical formula describing a physical system contains only a small number of terms, each operating only on a few variables. However, in a non-convex setting where local minima are likely to occur, this type of regularization can have an undesirable side-effect: during the course of the optimization the weights hardly ever change their sign. The reason is that the regularization leads to a constant rate of weight decay whereas the counteracting derivative with respect to the square loss is proportional to the backpropagated error signal and the input to the unit. The latter contributions are often smaller along paths with small weights, such that many weights go to zero and stay there. Additionally, any non-zero regularization term causes the learned weights to reflect a trade-off between minimizing the loss and the regularizer. Although, this can lead to improved generalization, it also results in a systematic underestimation of the function values. Therefore, we follow a hybrid regularization strategy: at the beginning of the training procedure ($t<t_1$) we use no regularization ($\lambda=0$), such that parameters can vary freely and reach reasonable starting points. 
Afterwards, we switch on the regularization by setting $\lambda$ to a nonzero value, which has the effect that a sparse network structure emerges. Finally, for the last steps of the training ($t>t_2$) we disable $L_1$ regularization ($\lambda=0$) but enforce the same $L_0$ norm of the weights. This is achieved by keeping all weights $w\in W^{1\dots L}$ that are close to 0 at 0, \ie if $|w|<0.001$ then $w=0$ during the remaining epochs. This ensures that the learned model finds not only a function of the right parametric form, but also fits the observed values as closely as possible. We observed that the exact choice of breakpoints $t_1$ and $t_2$ is not critical. In practice, we use $t_1 = \frac{1}{4} T$ and $t_2=\frac{19}{20} T$, where $T$ is total number of update steps. $T$ was selected large enough to ensure convergence. Note, that convergence to a sparse structure is important here, so early stopping will be disadvantageous. \subsection{Model selection for extrapolation}\label{sec:modelsel} \method{} networks have a number of hyper-parameters, \eg the number of layers, the number of units and the regularization constant. Unfortunately, standard techniques for model selection, such as evaluation on a hold-out set or cross-validation, will not be optimal for our purpose, since they rely on interpolation quality. In order to extrapolate the network has to find the ``right'' formula. But how can we tell? Using Occams razor principle: the simplest formula is most likely the right one. Intuitively, if we have the choice between $cos(x)$ and its truncated power series approximation $1-x^2/2 + x^4/24$, the first one is preferred. We use the number of active hidden units in the network as a proxy for the complexity of the formula, see Appendix A1 %\sec{sec:modelsel:app} for details. One could also think of differentiating between the unit types. In any case, this argumentation is only correct if the model explains the data well, \ie it has a low validation error. So we have a dual objective to minimize, which we solve by ranking the instances \wrt validation error and sparsity and select the one with the smallest $L_2$ norm (in rank-space), see \eqn{eqn:model:sel}. Furthermore, the optimization process may only find a local optimum of the training objective, which depends on the initialization of the parameters. We use independent runs to quantify expected performance deviations. \documentclass[a4paper]{article} % For LaTeX2e \usepackage[margin=2.5cm,top=2cm]{geometry} \usepackage[square,sort]{natbib} \bibliographystyle{abbrvnat} \renewcommand{\cite}[1]{\citep{#1}} \usepackage[utf8]{inputenc} % allow utf-8 input \usepackage[T1]{fontenc} % use 8-bit T1 fonts % professional-quality tables % compact symbols for 1/2, etc. % microtypography % hyperlinks % simple URL typesetting \graphicspath{{../graphics/}} \newcommand{\Fig}[1]{Figure~\ref{#1}} \newcommand{\fig}[1]{Fig.~\ref{#1}} \newcommand{\Tab}[1]{Table~\ref{#1}} \newcommand{\tab}[1]{Tab.~\ref{#1}} \newcommand{\Eqn}[1]{Equation \eqref{#1}} \newcommand{\eqn}[1]{Eq.~\eqref{#1}} % Eq. (1.1) \newcommand{\eqnp}[1]{(Eq.~\ref{#1})} % (Eq. 
1.1) \renewcommand{\sec}[1]{Section~\ref{#1}} % Section 1 \newcommand{\ie}{i.\,e.~} \newcommand{\eg}{e.\,g.~} \newcommand{\wrt}{w.\,r.\,t.~} \newcommand{\Real}{\ensuremath{\mathbb R}} % Real numbers \newcommand{\Unit}{\ensuremath{\mathbb I}} % Unit Matrix \newcommand{\T}{\ensuremath{\top}} % Transpose \newcommand{\sigm}{\ensuremath{\text{sigm}}} % sigmoid \newcommand\Tstrut{\rule{0pt}{2.6ex}} \DeclareMathOperator*{\argmin}{arg\,min} \newcommand{\method}{EQL} %{\textcolor{green}{EQL}}%{EQL,ABFNet} \newcommand{\typ}{\ensuremath{I}} % unit type \newcommand{\D}{\ensuremath{\mathbf{D}}} % dataset \newcommand{\train}{\ensuremath{\text{Tr}}} % training \newcommand{\test}{\ensuremath{\text{T}}} % training \newcommand{\val}{\ensuremath{\text{V}}} % training \newcommand{\extra}{\ensuremath{\text{X}}} % training \newcommand{\restr}{\ensuremath{\text{XTr}}} % training \newcommand{\restrval}{\ensuremath{\text{XV}}} % training \newcommand{\layer}[1]{{(#1)}} % layer l \renewcommand{\l}{{\layer{l}}} % layer l \newcommand{\lm}{{\layer{l-1}}} % layer l-1 \usepackage[disable]{todonotes} \setlength{\marginparwidth}{3cm} \newcommand{\Geo}[2][inline]{\todo[#1,color=yellow!60,size=\scriptsize]{#2}} \newcommand{\Chl}[2][inline]{\todo[#1,color=green!70,size=\scriptsize]{#2}} \pdfinfo{ /Title (Extrapolation and learning equations) /Author (Georg Martius and Christoph H. Lampert)} \setcounter{secnumdepth}{0} \begin{document} \title{Extrapolation and learning equations} \author{% Georg Martius \& Christoph H. Lampert\\ IST Austria\\ Am Campus 1, 3400 Klosterneuburg, Austria\\ \texttt{\{gmartius,chl\}@ist.ac.at} } \maketitle \begin{abstract} In classical machine learning, regression is treated as a black box process of identifying a suitable function from a hypothesis set without attempting to gain insight into the mechanism connecting inputs and outputs. In the natural sciences, however, finding an interpretable function for a phenomenon is the prime goal as it allows to understand and generalize results. This paper proposes a novel type of function learning network, called equation learner (EQL), that can learn analytical expressions and is able to extrapolate to unseen domains. It is implemented as an end-to-end differentiable feed-forward network and allows for efficient gradient based training. Due to sparsity regularization concise interpretable expressions can be obtained. Often the true underlying source expression is identified. \end{abstract} \input{intro.tex} \input{methods.tex} \input{relatedwork.tex} \input{results.tex} \vspace*{-.2em} \section{Conclusions}\vspace*{-.2em} We presented a new network architecture called \method{} that can learn analytic expressions that typically occur in equations governing physical, in particular mechanical, systems. The network is fully differentiable, which allows end-to-end training using backpropagation. By sequencing $L_1$ regularization and fixing $L_0$ norm we achieve sparse representations with unbiased estimation of factors within the learned equations. We also introduce a model selection procedure specifically designed to select for good extrapolation quality by a multiobjective criterion based on validation error and sparsity. The proposed method is able to learn functional relations and extrapolate them to unseen parts of the data space, as we demonstrate by experiments on synthetic as well as real data. 
The approach learns concise functional forms that may provide insights into the relationships within the data, as we show on physical measurements of x-ray transition energies. The optimization problem is nontrivial and has many local minima. We have shown cases where the algorithm is not reliably finding the right equation but instead finds an approximation only, in which case extrapolation may be poor. If the origin of the data is not in the hypothesis class, \ie the underlying expression cannot be represented by the network, good extrapolation performance cannot be achieved. Thus it is important to increase the model class by incorporating more base functions which we will address in future work alongside the application to larger examples. We expect good scaling capabilities to larger systems due to the gradient based optimization. Apart from the extrapolation we also expect improved interpolation results in high-dimensional spaces, where data is less dense. \subsubsection*{Acknowledgments} GM received funding from the People Programme (Marie Curie Actions) of the European Union's Seventh Framework Programme (FP7/2007-2013) under REA grant agreement no.~[291734]. \begin{thebibliography}{24} \providecommand{\natexlab}[1]{#1} \providecommand{\url}[1]{\texttt{#1}} \expandafter\ifx\csname urlstyle\endcsname\relax \providecommand{\doi}[1]{doi: #1}\else \providecommand{\doi}{doi: \begingroup \urlstyle{rm}\Url}\fi \bibitem[Basak et~al.(2007)Basak, Pal, and Patranabis]{basak2007:SVR} D.~Basak, S.~Pal, and D.~C. Patranabis. \newblock Support vector regression. \newblock \emph{Neural Information Processing-Letters and Reviews}, 11\penalty0 (10):\penalty0 203--224, 2007. \bibitem[Ben-David et~al.(2010)Ben-David, Blitzer, Crammer, Kulesza, Pereira, and Vaughan]{ben2010theory} S.~Ben-David, J.~Blitzer, K.~Crammer, A.~Kulesza, F.~Pereira, and J.~W. Vaughan. \newblock A theory of learning from different domains. \newblock \emph{Machine Learning}, 79\penalty0 (1-2):\penalty0 151--175, 2010. \bibitem[Bengio(2009)]{bengio2009learning} Y.~Bengio. \newblock Learning deep architectures for {AI}. \newblock \emph{Foundations and Trends in Machine Learning}, 2\penalty0 (1):\penalty0 1--127, 2009. \bibitem[Bengio et~al.(2013)Bengio, Courville, and Vincent]{bengio2013representation} Y.~Bengio, A.~Courville, and P.~Vincent. \newblock Representation learning: A review and new perspectives. \newblock \emph{IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)}, 35\penalty0 (8):\penalty0 1798--1828, 2013. \bibitem[Bishop(1995)]{bishop1995neural} C.~M. Bishop. \newblock \emph{Neural networks for pattern recognition}. \newblock Oxford University Press, 1995. \bibitem[Broomhead and Lowe(1988)]{broomhead1988radial} D.~S. Broomhead and D.~Lowe. \newblock Radial basis functions, multi-variable functional interpolation and adaptive networks. \newblock Technical report, DTIC Document, 1988. \bibitem[Deslattes et~al.(2003)Deslattes, Kessler~Jr, Indelicato, De~Billy, Lindroth, and Anton]{Deslattes2003:XrayTransEnergies} R.~D. Deslattes, E.~G. Kessler~Jr, P.~Indelicato, L.~De~Billy, E.~Lindroth, and J.~Anton. \newblock X-ray transition energies: new approach to a comprehensive evaluation. \newblock \emph{Reviews of Modern Physics}, 75\penalty0 (1):\penalty0 35, 2003. \bibitem[Durbin and Rumelhart(1989)]{DurbinRumelhart1989:ProductUnits} R.~Durbin and D.~E. Rumelhart. \newblock Product units: A computationally powerful and biologically plausible extension to backpropagation networks. 
\newblock \emph{Neural Computation}, 1\penalty0 (1):\penalty0 133--142, Mar. 1989. \newblock ISSN 0899-7667. \newblock \doi{10.1162/neco.1989.1.1.133}. \newblock URL \url{http://dx.doi.org/10.1162/neco.1989.1.1.133}. \bibitem[Gy{\"o}rfi et~al.(2013)Gy{\"o}rfi, H{\"a}rdle, Sarda, and Vieu]{gyorfi2013nonparametric} L.~Gy{\"o}rfi, W.~H{\"a}rdle, P.~Sarda, and P.~Vieu. \newblock \emph{Nonparametric curve estimation from time series}, volume~60. \newblock Springer, 2013. \bibitem[Kingma and Ba(2015)]{KingmaBa2015:Adam} D.~Kingma and J.~Ba. \newblock Adam: A method for stochastic optimization. \newblock In \emph{in Proceedings of ICLR}, 2015. \bibitem[M{\"u}ller et~al.(1997)M{\"u}ller, Smola, R{\"a}tsch, Sch{\"o}lkopf, Kohlmorgen, and Vapnik]{muller1997predicting} K.-R. M{\"u}ller, A.~J. Smola, G.~R{\"a}tsch, B.~Sch{\"o}lkopf, J.~Kohlmorgen, and V.~Vapnik. \newblock Predicting time series with support vector machines. \newblock In \emph{Artificial Neural Networks (ICANN)}, pages 999--1004. Springer, 1997. \bibitem[Pearl(2000)]{Pearl2000} J.~Pearl. \newblock \emph{Causality}. \newblock Cambridge {U}niversity {P}ress, 2000. \bibitem[Peters et~al.(2014)Peters, Mooij, Janzing, and Sch{\"o}lkopf]{PetersMJS2014} J.~Peters, J.~Mooij, D.~Janzing, and B.~Sch{\"o}lkopf. \newblock Causal discovery with continuous additive noise models. \newblock \emph{Journal of Machine Learning Research (JMLR)}, 15:\penalty0 2009--2053, 2014. \bibitem[Poon and Domingos(2012)]{PoonDomingos2011:sum-product-networks} H.~Poon and P.~M. Domingos. \newblock Sum-product networks: {A} new deep architecture, 2012. \bibitem[Quionero-Candela et~al.(2009)Quionero-Candela, Sugiyama, Schwaighofer, and Lawrence]{quionero2009dataset} J.~Quionero-Candela, M.~Sugiyama, A.~Schwaighofer, and N.~D. Lawrence. \newblock \emph{Dataset shift in machine learning}. \newblock The MIT Press, 2009. \bibitem[Schmidt and Lipson(2009)]{SchmidtLipson2009:learnnaturallaws} M.~Schmidt and H.~Lipson. \newblock Distilling free-form natural laws from experimental data. \newblock \emph{Science}, 324\penalty0 (5923):\penalty0 81--85, 2009. \newblock ISSN 0036-8075. \newblock \doi{10.1126/science.1165893}. \newblock URL \url{http://science.sciencemag.org/content/324/5923/81}. \bibitem[Shin and Ghosh(1991)]{ShinGhosh1991:pi-sigma} Y.~Shin and J.~Ghosh. \newblock The pi-sigma network : An efficient higher-order neural network for pattern classification and function approximation. \newblock In \emph{in Proceedings of the International Joint Conference on Neural Networks}, pages 13--18, 1991. \bibitem[Smola and Sch{\"o}lkopf(2004)]{smola2004tutorial} A.~J. Smola and B.~Sch{\"o}lkopf. \newblock A tutorial on support vector regression. \newblock \emph{Statistics and computing}, 14\penalty0 (3):\penalty0 199--222, 2004. \bibitem[Specht(1991)]{specht1991general} D.~F. Specht. \newblock A general regression neural network. \newblock \emph{IEEE Transactions on Neural Networks (TNN)}, 2\penalty0 (6):\penalty0 568--576, 1991. \bibitem[{Theano Development Team}(2016)]{2016arXiv160502688short} {Theano Development Team}. \newblock {Theano: A {Python} framework for fast computation of mathematical expressions}. \newblock \emph{arXiv e-prints}, abs/1605.02688, May 2016. \newblock URL \url{http://arxiv.org/abs/1605.02688}. \bibitem[Tibshirani(1996)]{tibshirani1996regression} R.~Tibshirani. \newblock Regression shrinkage and selection via the lasso. \newblock \emph{Journal of the Royal Statistical Society. Series B (Methodological)}, pages 267--288, 1996. 
\bibitem[Wiener(1949)]{wiener1949extrapolation} N.~Wiener. \newblock \emph{Extrapolation, interpolation, and smoothing of stationary time series}, volume~2. \newblock The MIT Press, 1949. \bibitem[Williams and Rasmussen(2006)]{williams2006gaussian} C.~K.~I. Williams and C.~E. Rasmussen. \newblock \emph{Gaussian processes for machine learning}. \newblock The MIT Press, 2006. \bibitem[Zaremba et~al.(2014)Zaremba, Kurach, and Fergus]{ZarembaFergus2014:LearnMathIdentities} W.~Zaremba, K.~Kurach, and R.~Fergus. \newblock Learning to discover efficient mathematical identities. \newblock In Z.~Ghahramani, M.~Welling, C.~Cortes, N.~Lawrence, and K.~Weinberger, editors, \emph{Advances in Neural Information Processing Systems 27}, pages 1278--1286. Curran Associates, Inc., 2014. \end{thebibliography} \input{appendix.tex} \end{document} \appendix \section{Appendix} \section{A1: Model selection details}\label{sec:modelsel:app} \subsection{Quantifying sparsity} We actually want a measure of complexity of the formula, however, since it is not clear what is the right choice of a measure, we use the sparsity instead, by counting the number of active/used hidden units denoted by $s$. For a given network $phi$ we get \begin{align} s(\phi) = \sum_{l=1}^L\sum_{i=1}^k\Theta( |W^\l_{i,\cdot}| * |W^{\layer{l+1}}_{\cdot,i}| - 0.01)\,,\label{eqn:s} \end{align} where $\Theta$ is the heavyside function and 0.01 is an arbitrary threshold. For the multiplication units the norm of the incoming weights for both inputs are added (omitted to avoid clutter in the formula). \subsection{Selection criteria} As stated in the main text, we strive to choose the model that is both simple and has good performance in terms of the validation set. Since both quantities have different scales, we proposed to choose them based on their ranking. Let $r^v(\phi)$ and $r^s(\phi)$ be the ranks of the network $\phi$ \wrt the validation error and sparsity $s(\phi)$respectively, then the network with minimal squared rank norm is selected: \begin{align} \argmin_\phi\left[ r^v(\phi)^2 + r^s(\phi)^2\right] \label{eqn:model:sel} \end{align} In \fig{fig:model:sel} the extrapolation performance of all considered networks for the kin2D-3 dataset is visualized in dependence of validation error and the sparsity. It becomes evident that the best performing networks are both sparse and have a low validation error. \section{A2: Dependence on noise and number of data points}\label{sec:dep:noise-pts} In order to understand how the method depends on the amount of noise and the number of datapoints we scan through the two parameters and present the empirical results in \fig{fig:dep:noise-pts}. In general the method is robust to noise and as expected, more noise can be compensated by more data.
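The selection rule of \eqn{eqn:model:sel} is simple enough to state in a few lines; the sketch below (NumPy, with our own variable names) picks the instance with the smallest squared rank norm, given the validation errors and sparsity values of all trained networks.
\begin{verbatim}
import numpy as np

def select_instance(val_errors, sparsities):
    # 1-based ranks w.r.t. validation error and sparsity s(phi).
    r_v = np.argsort(np.argsort(np.asarray(val_errors, float))) + 1
    r_s = np.argsort(np.argsort(np.asarray(sparsities, float))) + 1
    return int(np.argmin(r_v ** 2 + r_s ** 2))   # index of the selected network

# Example: select_instance([0.011, 0.010, 0.013], [12, 20, 9]) returns 0,
# trading a slightly higher validation error for a sparser formula.
\end{verbatim}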
Extrapolation and learning equations
1610.02995
Figure 3: Double pendulum kinematics. (a) training trajectory (in y-space). (b) extrapolation test trajectory (in y-space) with output of a learned EQL instance. (c) slices of output y4 for inputs x1=x2=x for the true system, one of EQL, MLP, and SVR instances. (d) numeric results, see Tab. 1 for details. Note, that predicting 0 would yield a mean error of 0.84.
[ "[EMPTY]", "EQL", "MLP", "SVR" ]
[ [ "extrapolation error", "0.0003±0.00003", "0.58±0.03", "0.25" ] ]
The second system we consider is a real double pendulum, for which the forward kinematics should be learned. The task here is to learn the position of the tips of the double pendulum segments from the given joint angles (x1,x2). These positions were not measured, so we compute them by the following formula: y1=cos(x1), y2=cos(x1)+cos(x1+x2), y3=sin(x1), y4=sin(x1)+sin(x1+x2), where (y1,y3) and (y2,y4) correspond to the x-y-coordinates of the first and second end-point, respectively. The dataset contains two short trajectories. The first covers only part of the domain (input as well as output) and consists of 819 samples, of which 10% were used as a validation set (randomly sampled), see Fig. 3(a). The second trajectory corresponds to a behavior with several spins of both pendulum segments, such that a much larger domain is covered. Nevertheless, the angle values are confined to [−π,π]. We use this trajectory as the extrapolation test set. The trajectory and the outputs of our method are shown in Fig. 3(b). The prediction for unseen domains is perfect, which is also illustrated in a systematic sweep, see Fig. 3(c). The performance of the MLP is off already near the training domain. SVR is a bit better, but still does not give usable predictions for the test data; see also the root mean square error in Fig. 3(d).
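The label construction above can be transcribed directly into a small helper; this is a NumPy sketch and only the function name is our own.
\begin{verbatim}
import numpy as np

def double_pendulum_tips(x1, x2):
    # (y1, y3) and (y2, y4) are the x-y coordinates of the first and second
    # segment end-points, computed from the joint angles x1, x2.
    y1 = np.cos(x1)
    y2 = np.cos(x1) + np.cos(x1 + x2)
    y3 = np.sin(x1)
    y4 = np.sin(x1) + np.sin(x1 + x2)
    return y1, y2, y3, y4
\end{verbatim}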
\section{Introduction}\label{sec:intro} The quality of a model is typically measured by its ability to generalize from a training set to previously unseen data from the same distribution. In regression tasks generalization essentially boils down to interpolation if the training data is sufficiently dense. As long as models are selected correctly, \ie in a way to not overfit the data, the regression problem is well understood and can -- at least conceptually -- be considered solved. % However, when working with data from real-world devices, \eg controlling a robotic arm, interpolation might not be sufficient. It could happen that future data lies outside of the training domain, \eg when the arm is temporarily operated outside of its specifications. For the sake of robustness and safety it is desirable in such a case to have a regression model that continues to make good predictions, or at least does not fail catastrophically. This setting, which we call \emph{extrapolation generalization}, is the topic of the present paper. We are particularly interested in regression tasks for systems that can be described by real-valued analytic expression, \eg mechanical systems such as a pendulum or a robotic arm. These are typically governed by a highly nonlinear function but it is nevertheless possible, in principle, to infer their behavior on an extrapolation domain from their behavior elsewhere. We make two main contributions: 1) a new type of network that can learn analytical expressions and is able to extrapolate to unseen domains and 2) a model selection strategy tailored to the extrapolation setting. The following section describes the setting of regression and extrapolation. Afterwards we introduce our method and discuss the architecture, its training, and its relation to prior art. %\sec{sec:method}) We present our results in the Section \emph{Experimental evaluation} %\sec{sec:results} and close with conclusions. \subsection{Related work}% In the field of machine learning, regression is often treated as a black box process of identifying a suitable real-valued function from a hypothesis set, \eg a reproducing kernel Hilbert space for Gaussian Processes Regression (GPR)~\cite{williams2006gaussian} or Support Vector Regression (SVR)~\cite{smola2004tutorial}, or a multi-layer network of suitable expressive power~\cite{specht1991general}. The goal is to find a prediction function that leads to a small expected error on future data, not necessarily to gain insight into the mechanism of how the output values derive from the inputs. The goal of finding an interpretable function is rather common in the natural sciences, such as biology, where high noise levels and strong inter-system variability often make it important to rely on external prior knowledge, and finding a ``biologically plausible'' model is often preferable over finding one that makes the highest prediction accuracy. As a consequence, model classes are often highly constrained, \eg allowing only for sparse linear models. The task of learning a true, nonlinear, functional dependence from observing a physical system, has received little attention in the machine learning literature so far, but forms the basis of the field of \emph{system identification}. There, typically the functional form of the system is known and only the parameters have to be identified. Another approach is to model the time evolution with autoregressive models or higher order convolution integrals (Volterra series) but learning analytic formulas is not common. 
\emph{Causal learning} is an area of recent research that aims at identifying a causal relation between multiple observables, % phenomena, which are typically the result of a physical process. Classically, this tasks reduces to finding a minimal graphical model based only on tests of conditional independence~\cite{Pearl2000}. Although very successful in some fields, this classical approach only provides a factorization of the problem, separating causes and effects, but it leaves the exact functional dependency unexplained. %\Geo[]{actually they explain it in terms of the conditional probability.}. Recent extensions of causal learning can take a functional view, but typically do not constrain the regression functions to physically plausible ones, but rather constrain the noise distributions~\cite{PetersMJS2014}. % one expects to observe The topic of learning a regression function with emphasis on \emph{extrapolation} performance has not been studied much in the literature so far. Existing work on time series prediction deals with extrapolation in the temporal domain, \ie predict the next value(s)~\cite{wiener1949extrapolation}. By our nomenclature, this is typically rather an interpolation task, when the prediction is based on the behaviour of the series at earlier time steps but with similar value distribution~\cite{muller1997predicting,gyorfi2013nonparametric}. Extrapolating in the data domain implies that the data distribution at prediction time will differ from the data distribution at training time. This is traditionally called the \emph{domain adaptation} setting. In particular, since we assume a common labeling function, our setting would fall under the \emph{covariate shift} setting~\cite{quionero2009dataset}. Unfortunately, this connection is not particularly useful for our problem. As domain adaptation typically does not make additional assumptions about how the data distribution may change, existing methods need access to some unlabeled data from the test distribution already at training time~\cite{ben2010theory}. In %,ben2010impossibility our setting this is not possible to obtain. On the technical level, \method{} networks are an instance of general feed-forward networks for function approximation~\cite{bishop1995neural}. In contrast to recent trends towards \emph{deep learning}~\cite{bengio2009learning,bengio2013representation}, our goal is not to learn any data representation, but to learn a function which compactly represents the input-output relation and generalizes between different regions of the data space, like a physical formula. Structurally, \method{} networks resemble \emph{sum-product networks (SPNs)}~\cite{PoonDomingos2011:sum-product-networks} and \emph{Pi-Sigma networks (PSNs)}~\cite{ShinGhosh1991:pi-sigma}, in the sense that both are based on directed acyclic graphs with computational units that allows for summation and multiplication. Otherwise, SPNs are different as they act as efficient alternative to probabilistic graphical models for representing probability distributions, whereas \method{} networks are meant for the classical task of function approximation. In PSNs each output needs to be passed through multiplicative units, whereas in \method{} multiplication is optional. Finding equations for observations is also known as symbolic regression where a search is performed in a certain function space, typically done with evolutionary computation. 
With these techniques it is possible to discover physical laws such as invariants and conserved quantities~\cite{SchmidtLipson2009:learnnaturallaws}. Unfortunately, the computational complexity/search time explodes for larger expressions and high-dimensional problems. We attempt to circumvent this by modeling the task as a gradient based optimization problem. Related to symbolic regression is finding mathematical identities, for instance to find computationally more efficient expressions. In \cite{ZarembaFergus2014:LearnMathIdentities} this was done using machine learning to overcome the potentially exponential search space. \section{Experimental evaluation}\label{sec:results} We demonstrate the ability of \method{} to learn physically inspired models with good extrapolation quality by experiments on synthetic and real data. For this, we implemented the network training and evaluation procedure in \emph{python} based on the \emph{theano} framework~\cite{2016arXiv160502688short}. We will make the code for training and evaluation public after acceptance of the manuscript. \paragraph{Pendulum.} We first present the results of learning the equations of motion for a very simple physical system: a pendulum. The state space of a pendulum is $X=\Real\times\Real$ where the first value is the angle of the pole in radians and the second value is the angular velocity. In the physics literature, these are usually denoted as $(\theta,\omega)$, but for our purposes, we call them $(x_1,x_2)$ in order to keep the notation consistent between experiments. The pendulum's dynamic behavior is governed by the following two ordinary differential equations: \begin{equation} \dot x_1 = x_2 \qquad\qquad\text{and}\qquad\qquad \dot x_2 = -g \sin(x_1)\,,\label{eqn:pend} \end{equation} where $g=9.81$ is the gravitational constant. We divide each equation by $g$ in order to balance the output scales and form a regression problem with two output values, $y_1=\frac{1}{g}x_2$ and $y_2=-\sin(x_1)$. As training data, we sample 1000 points uniformly in the hypercube {\small $[-h,h] \times [-h,h]$} for $h=2$. Note that this domain contains more than half of a sine period, so it should be sufficient to identify the analytic expression. The target values are disturbed by Gaussian noise with standard deviation $\sigma=0.01$. We also define three test sets, each with 1000 points. The \emph{interpolation test set} is sampled from the same data distribution as the training set. The \emph{extrapolation (near) test set} contains data sampled uniformly from the data domain {\small $[-\frac32 h,\frac32 h] \times [-\frac32 h,\frac32 h]\setminus [-h,h] \times [-h,h]$}, which is relatively near the training region, and the \emph{extrapolation (far) test set} extends the region further outside: {\small $[-2h,2h] \times [-2h,2h]\setminus [-h,h] \times [-h,h]$}. We train a 2-layer \method{} and perform model selection among the hyper-parameters: the regularization strength {\small $\lambda\in10^{\{-7,-6.3,-6,-5.3,-5,-4.3,-4,-3.3,-3\}}$} and the number of nodes {\small $\frac 1 4 u=v\in\{1,3,5\}$}. All weights are randomly initialized from a normal distribution with {\small $\sigma = \sqrt{1/(k'+d)}$}. The unit selection $\typ{}$ is set such that all unit types occur equally often. To ensure convergence we chose $T=10000$ epochs.
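For concreteness, the data generation for this experiment can be summarized in a few lines of NumPy. The following is a minimal sketch with variable names of our own choosing, not the theano-based code used for the reported results:
\begin{verbatim}
import numpy as np

rng = np.random.RandomState(0)
g, h, sigma = 9.81, 2.0, 0.01

def pendulum_targets(x):
    # y1 = x2/g and y2 = -sin(x1), i.e. the pendulum equations divided by g
    return np.stack([x[:, 1] / g, -np.sin(x[:, 0])], axis=1)

def sample_box(n, half_width):
    return rng.uniform(-half_width, half_width, size=(n, 2))

def sample_ring(n, h_in, h_out):
    # uniform on [-h_out,h_out]^2 minus [-h_in,h_in]^2 via rejection sampling
    pts = []
    while len(pts) < n:
        p = rng.uniform(-h_out, h_out, size=2)
        if np.max(np.abs(p)) > h_in:
            pts.append(p)
    return np.array(pts)

x_train = sample_box(1000, h)
y_train = pendulum_targets(x_train) + sigma * rng.randn(1000, 2)
x_interp = sample_box(1000, h)            # interpolation test set
x_near = sample_ring(1000, h, 1.5 * h)    # extrapolation (near)
x_far = sample_ring(1000, h, 2.0 * h)     # extrapolation (far)
\end{verbatim}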
We compare our algorithm to a standard multilayer perceptron (MLP) with $\tanh$ activation functions and possible hyperparameters: $\lambda$ as for \method, number of layers {\small $L\in\{2,3\}$}, and number of neurons {\small $k\in\{5,10,20\}$}. A second baseline is given by epsilon support vector regression (SVR)~\cite{basak2007:SVR} with two hyperparameters {\small $C\in10^{\{-3,-2,-1,0,1,2,3,3.5\}}$} and {\small $\epsilon \in 10^{\{-3,-2,-1,0\}}$}, using a radial basis function kernel with width {\small $\gamma\in \{0.05,0.1,0.2,0.5,1.0\}$}. Numeric results are reported in \tab{tab:pend:results}. As expected, all models are able to interpolate well, with a test error on the order of the noise level ($\sigma=0.01$). For extrapolation, however, the performance differs between the approaches. For the MLP the prediction quality decreases quickly when leaving the training domain. SVR remains a bit better in the near extrapolation but also fails catastrophically on the far extrapolation data. \method, on the other hand, extrapolates well, both near and far away from the training domain. The reasons can be seen in Figure~\ref{fig:pend}: while the MLP and SVR simply learn a function that interpolates the training values, \method{} finds the correct functional expression and therefore predicts the correct values for any input data. \paragraph{Double pendulum kinematics.} The second system we consider is a real double pendulum for which the forward kinematics should be learned. For that we use recorded trajectories of a real double pendulum \cite{SchmidtLipson2009:learnnaturallaws}. The task here is to learn the position of the tips of the double pendulum segments from the given joint angles ($x_1,x_2$). These positions were not measured, so we compute them with the following formulas: $y_1=\cos(x_1), y_2=\cos(x_1)+\cos(x_1+x_2), y_3=\sin(x_1), y_4=\sin(x_1)+\sin(x_1+x_2)$, where $(y_1,y_3)$ and $(y_2,y_4)$ correspond to the x-y coordinates of the first and second end-point respectively. The dataset contains two short trajectories. The first covers only part of the domain (input as well as output) and consists of 819 samples, of which 10\% are used as validation set (randomly sampled), see \fig{fig:dpk}(a). The second trajectory corresponds to a behavior with several spins of both pendulum segments such that a much larger domain is covered. Nevertheless the angle values are confined to $[-\pi,\pi]$. We use this trajectory as extrapolation test set. The trajectory and the outputs of our method are shown in \fig{fig:dpk}(b). The prediction for unseen domains is perfect, which is also illustrated in a systematic sweep, see \fig{fig:dpk}(c). The performance of the MLP is already off near the training domain. SVR is a bit better, but still does not give usable predictions for the test data, see also the root mean squared error in \fig{fig:dpk}(d). Model selection is performed to determine $\lambda$ as above, $u=v\in\{3,5\}$, (MLP: $k\in\{5,10,20\}$) and layer number $L\in\{2,3\}$.
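Since the target mapping of this experiment is fully specified by the formulas above, it can be transcribed directly into NumPy; the short sketch below (our own, for illustration only) computes the tip coordinates from the joint angles:
\begin{verbatim}
import numpy as np

def double_pendulum_tips(x1, x2):
    """End-point coordinates of both segments from joint angles (x1, x2)."""
    y1 = np.cos(x1)                    # x-coordinate of first tip
    y2 = np.cos(x1) + np.cos(x1 + x2)  # x-coordinate of second tip
    y3 = np.sin(x1)                    # y-coordinate of first tip
    y4 = np.sin(x1) + np.sin(x1 + x2)  # y-coordinate of second tip
    return np.stack([y1, y2, y3, y4], axis=-1)
\end{verbatim}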
\paragraph{Robotic arms.} A more complicated task is to learn the forward kinematics of multi-segment robotic arms. We consider planar arms with 3, 4, and 5 joints, where each segment is 0.5 units long. For training the arm is controlled by sinusoidal joint target angles with amplitude in $[-\nicefrac{\pi}{2},\nicefrac{\pi}{2}]$, each joint with a different frequency. The numbers of data points are 3000, 6000, and 18000 for the 3, 4, and 5 segment arms respectively, with added noise as above. For testing extrapolation performance the amplitude range $[-\pi,\pi]$ was used. Note that the extrapolation space is much larger than the training space. The task is to predict the coordinates of the end-effector of the arms (\emph{kin-3-end}, \emph{kin-4-end}) and the coordinates of all segment positions (\emph{kin-5-all}). The numerical results, see \tab{tab:kin}, show that our method is able to extrapolate in these cases. Model selection is performed as above with $u=v\in\{10,20\}$, (MLP: $k\in\{10,50\}$) and layer number $L\in\{2,3,4\}$. To illustrate the dependence on the amount of noise and the number of available training points we provide a quantification in Appendix A2. In short, increasing noise can be compensated by an increasing amount of data to maintain the performance. \paragraph{Learning complex formula.} In order to find out whether \method{} can also learn more complicated formulas, we consider three examples with four-dimensional input and one-dimensional output: \begin{align} y &= \nicefrac{1}{3} \left(\sin(\pi x_1) + \sin\left(2 \pi x_2 + \nicefrac{\pi}{8}\right)+x_2 - x_3 x_4 \right)&\text{F-1}\label{eqn:syn1}\\ y &= \nicefrac{1}{3} \left(\sin(\pi x_1) + x_2 \cos(2\pi x_1 + \nicefrac{\pi}{4}) + x_3-x_4^2\right) &\text{F-2}\label{eqn:syn2}\\ y &= \nicefrac{1}{3} \left( (1+x_2) \sin(\pi x_1) + x_2 x_3 x_4\right) &\text{F-3}\label{eqn:syn3} \end{align} The first equation requires only one hidden layer to be represented. The second and third equations should require two hidden layers. In particular, F-2 contains a product of $x_2$ and $\cos$, and F-3 contains a product of three terms, and we use it to test whether our restriction to only pairwise product units causes problems for more complex target functions. We follow the same procedure as in the pendulum case for building training and test sets, though with $h=1$ as input data range. We use 10000 points for the training and validation sets (90\%-10\% split) and 5000 points for each of the test sets. Model selection for \method{} is performed as above using the number of layers {\small $L\in\{2,3,4\}$}. The number of units is set to $\frac{1}{4}u=v=10$. For the MLP, we select $L$ and $\lambda$ from the same set as above as well as {\small $k\in\{10,30\}$}. \Tab{tab:syn:results} shows the numerical results. Again, all methods are able to interpolate, but only \method{} achieves good extrapolation results, except for equation F-3. There it settles in 9 out of 10 cases into a local minimum and finds only an approximating equation that deviates outside the training domain. Interestingly, if we restrict the base functions to not contain cosine, the algorithm finds the right formula. Note that the sparsity of the correct formula is lower than that of the approximation, so it would be selected if found. \Fig{fig:syn} illustrates the performance and the learned networks visually. It shows one of the model-selected instances for each case. For F-1 the correct formula was identified, so correct predictions can be made even far outside the training region (much further than illustrated). For F-2 the network provided us with a surprise, because it yields good extrapolation performance with only one hidden layer! How can it implement $x_2\cos(a x_1+b)$? Apparently it uses $1.21 \cos(a x_1 + \pi + b + 0.41 x_2) + \sin(a x_1 + b + 0.41 x_2)$, which is a good approximation for $x_2 \in [-2,2]$. The sparsity of this solution is $5$ whereas the true solution needs at least $6$, which explains its selection.
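For completeness, the three target functions and the corresponding data generation can be transcribed directly; the following NumPy sketch (our own, for illustration, not the experimental code) produces training data for F-1, and F-2/F-3 are handled analogously:
\begin{verbatim}
import numpy as np

def f1(x):
    return (np.sin(np.pi * x[:, 0]) + np.sin(2 * np.pi * x[:, 1] + np.pi / 8)
            + x[:, 1] - x[:, 2] * x[:, 3]) / 3.0

def f2(x):
    return (np.sin(np.pi * x[:, 0]) + x[:, 1] * np.cos(2 * np.pi * x[:, 0] + np.pi / 4)
            + x[:, 2] - x[:, 3] ** 2) / 3.0

def f3(x):
    return ((1 + x[:, 1]) * np.sin(np.pi * x[:, 0]) + x[:, 1] * x[:, 2] * x[:, 3]) / 3.0

rng = np.random.RandomState(0)
h, sigma = 1.0, 0.01
x = rng.uniform(-h, h, size=(10000, 4))
y = f1(x) + sigma * rng.randn(10000)   # analogously for f2 and f3
\end{verbatim}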
For F-3 the suboptimal local minimum approximates $(1+x_2)\sin(x_1)$ by $(x_1 + x_1 x_2)\cos(\beta x_1)$, which deviates quickly outside the training domain; the true solution would be sparser, but it was not found. Only if we remove cosine from the base functions do we always obtain the correct formula, see \fig{fig:syn}(c). \paragraph{X-Ray transition energies.} As a further example we consider data measured in atomic physics. When shooting electron beams onto atoms one can excite them and they consequently emit x-ray radiation with characteristic peak energies. For each element/isotope these energies are different, as they correspond to the potential difference between the electron shells, such that one can identify elements in a sample this way. The data is taken from~\cite{Deslattes2003:XrayTransEnergies}, where we consider one specific transition, called the $K\,\alpha_2$ line, because it was measured for all elements. The true relationship between atomic number $Z$ and transition energies is complicated, as it involves many-body interactions and no closed-form solution exists. Nevertheless we can find out which relationships our system proposes. It is known that the main relationship is $K\,\alpha_2 \propto Z^2$ according to Moseley's law. Further correction terms for elements with larger $Z$ are potentially of higher order. We have data for elements with $10\le Z \le 100$, which is split into training/validation sets in the range $[10,91]$ (70/10 data points) and an extrapolation test set in the interval $[92,100]$ (14 data points because of isotopes). Since we have so little data we evaluate the performance for 10 independent training/validation splits. The data is scaled to lie in $[0,1]$, \ie $x= Z/100$ and $y=K\alpha_2/100000$. Model selection is here based on validation error only; selecting for sparsity as well yields only the $Z^2$ relationship. The mini-batch size is 2 here and $T=50000$ was used. \Fig{fig:xray} presents the data, the predictions, the learned formulas and the numerical results. \method{} and SVR achieve similar performance and MLP is significantly worse. However, \method{} also yields interpretable formulas, see \fig{fig:xray}(e), which can be used to gain insights into the potential relationship.
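To illustrate the scaling and the extrapolation split used here, the following self-contained sketch fits a simple quadratic baseline in the spirit of Moseley's law; note that it uses synthetic stand-in values generated from the textbook approximation $E \approx 10.2\,\mathrm{eV}\,(Z-1)^2$ rather than the measured data of \cite{Deslattes2003:XrayTransEnergies}, and it is not part of our method:
\begin{verbatim}
import numpy as np

# Synthetic stand-in for the measured K-alpha2 energies (Moseley approximation);
# the actual experiment uses the tabulated data of Deslattes et al.
Z = np.arange(10, 101)
E = 10.2 * (Z - 1) ** 2                          # energies in eV (approximate)

x, y = Z / 100.0, E / 100000.0                   # scaling used in the experiment
train = Z <= 91                                  # training/validation range [10, 91]
coeffs = np.polyfit(x[train], y[train], deg=2)   # quadratic (Z^2-type) baseline
y_extrapolated = np.polyval(coeffs, x[~train])   # predictions for Z in [92, 100]
\end{verbatim}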
\subsection{Poor extrapolation out of model class --- cart-pendulum system} Let us now go beyond our assumptions and consider cases where the true target function is not an element of the hypothesis set. Consider a pendulum attached to a cart that can move horizontally along a rail but that is attached to a spring-damper system, see \fig{fig:cp}(a). The system is parametrized by 4 unknowns: the position of the cart, the velocity of the cart, the angle of the pendulum and the angular velocity of the pendulum. We combine these into a four-dimensional vector $x=(x_1,\dots,x_4)$. We set up a regression problem with four outputs from the corresponding system of ordinary differential equations where $y_1 = \dot x_1 = x_3$, $y_2 = \dot x_2 = x_4$ and \begin{align} y_3&= \frac{-x_1-0.01 x_3+x_4^2 \sin\left(x_2\right)+0.1 x_4 \cos \left(x_2\right)+9.81 \sin \left(x_2\right) \cos \left(x_2\right)}{\sin ^2\left(x_2\right)+1}\label{eqn:cp}, \\ y_4&= \frac{-0.2 x_4 - 19.62 \sin \left(x_2\right) + x_1 \cos \left(x_2\right) + 0.01 x_3 \cos \left(x_2\right) - x_4^2 \sin \left(x_2\right)\cos \left(x_2\right)} {\sin^2\left(x_2\right)+1}.\nonumber \end{align} The formulas contain divisions, which are not included in our architecture due to their singularities. Incorporating them in a principled manner is left for future work. Thus, the cart-pendulum dynamics is outside the hypothesis class. In this case we {\bf cannot} expect great extrapolation performance, and this is confirmed by the experiments. In \fig{fig:cp}(b,c) the extrapolation performance is illustrated by slicing through the input space. The near extrapolation performance is still acceptable for both \method{} and MLP, but as soon as the training region is left further behind, even the best instances differ considerably from the true values; see also the numeric results in \tab{tab:cp:results}. The SVR performs poorly even in the near extrapolation range. Inspecting the learned expressions we find that the sigmoid functions are rarely used. \section{Regression and extrapolation}\label{sec:setting} We consider a multivariate regression problem with a training set $\{(x_1,y_1),\dots,(x_N,y_N)\}$ with $x \in \Real^n$, $y\in \Real^m$. Because our main interest lies in extrapolation in the context of learning the dynamics of physical systems, we assume the data originates from an unknown analytical function (or system of functions), $\phi:\Real^n\to\Real^m$, with additive zero-mean noise, $\xi$, \ie $y=\phi(x)+\xi$ and $\mathbb{E}\xi=0$. The function $\phi$ may, for instance, reflect a system of ordinary differential equations that govern the movements of a robot arm or the like. The general task is to learn a function $\psi:\Real^n\to\Real^m$ that approximates the true functional relation as well as possible in the squared loss sense, \ie achieves minimal expected error $\mathbb{E}\|\psi(x) - \phi(x)\|^2$. In practice, we only have particular examples of the function values available and measure the prediction quality in terms of the empirical error on training or test data $D$, \begin{align} E(D)&=\frac{1}{N}\sum^{N}_{i=1}\|\psi(x_i) - y_i\|^2\,. \label{eqn:error} \end{align} If training and test data are sampled from the same distribution then we speak of an \emph{interpolation} problem. In the \emph{extrapolation} setting the training data is assumed to cover only a limited range of the data domain. In the example of the robot arm, for instance, the training may be restricted to a certain joint angle range or maximal velocity. For testing we want to make predictions about the unseen domains, \eg for higher velocities. As usual, we split the data that is available at training time into a part for model training and a part for validation or model selection. \section{Learning a network for function extrapolation}\label{sec:method} The main model we propose is a multi-layered feed-forward network with computational units specifically designed for extrapolation regression tasks. For an $L$-layer network, there are $L-1$ hidden layers, each consisting of a linear mapping followed by non-linear transformations. For simplicity of notation, we explain the network as if each hidden layer had the same structure ($k'$ inputs, $k$ outputs). In practice, each layer can be designed independently of the others, of course, as long as input/output dimensions match. The linear mapping at level $l$ maps the $k'$-dimensional input $y^{\lm}$ to the $d$-dimensional intermediate representation $z$ given by \begin{align} z^\l &= W^\l y^\lm + b^\l, \end{align} where $y^\lm$ is the output of the previous layer, with the convention $y^{(0)}=x$.
The weight matrix $W^\l\in \Real^{d \times k'}$ and the bias vector $b^\l\in\Real^{d}$ are free parameters that are learned during training. The non-linear transformation contains $u$ \emph{unary units}, $f_i:\Real\to\Real$, for $i=1,\dots,u$, and $v$ \emph{binary units}, $g_j:\Real\times\Real\to\Real$ for $j=1,\dots,v$. Their outputs are concatenated to form the layer output \begin{align} y^\l &:= \Big(f_1(z^\l_1),f_2(z^\l_2),\dots,f_{u}(z^\l_{u}),\nonumber\\ & \qquad g_{1}(z^\l_{u+1},z^\l_{u+2}),\dots,g_{v}(z^\l_{u+2v-1},z^\l_{u+2v}) \Big)\,. \end{align} In total, the nonlinear stage has $k = u + v$ outputs and $d = u + 2 v$ inputs. The unary units $f_1,\dots,f_u$ receive the respective components $z_1,\dots,z_u$ as inputs, and each unit may be one of the following base functions, as specified by a fixed type parameter $\typ_i\in\{0,1,2,3\}$: \begin{align} f_i(z_i) &:= \begin{cases} z_i & \text{ if } \typ_i=0,\\ \sin(z_i) & \text{ if } \typ_i=1,\\ \cos(z_i) & \text{ if } \typ_i=2,\\ \sigm(z_i) & \text{ if } \typ_i=3, \end{cases}&\text{ for } i=1,\dots,u, \end{align} where $\sigm(z)=\frac{1}{1+e^{-z}}$ is the standard sigmoid function. The binary units $g_1,\dots,g_v$ receive the remaining components $z_{u+1},\dots,z_{u+2v}$ as input in pairs. They are \emph{multiplication units} that compute the product of their two input values: \begin{align} g_j(z_{u+2j-1}, z_{u+2j}) &:= z_{u+2j-1} \cdot z_{u+2j}&\text{ for }j=1,\dots,v. \end{align} Finally, the $L$-th and last layer computes the regression values by a linear read-out \begin{align} y^{\layer{L}} &:= W^{\layer{L}} y^{\layer{L-1}} + b^{\layer{L}}. \end{align} The architecture is depicted in \fig{fig:network}. We call the new architecture Equation Learner (\method{}) and denote the function it defines by $\psi$. \subsection{Discussion of the architecture} The proposed network architecture differs in two main aspects from typical feed-forward networks: the existence of multiplication units and the possibility of \emph{sine} and \emph{cosine} as nonlinearities for the unary units. Both design choices are motivated by our objective of learning a system of equations that govern a physical system and can extrapolate to new parts of the input space. \emph{Sigmoid} nonlinearities are the canonical choice of \emph{activation function} for \emph{artificial neural networks} (ANNs) and have proved to be successful. In fact, we include sigmoids in our architecture, making it a superclass of ANNs. However, in our experiments they were typically disabled by the training procedure, corresponding to their absence from the considered physical equations. We do not include other, predominantly local nonlinearities, in particular \emph{radial basis functions}~\cite{broomhead1988radial}, since one cannot expect them to extrapolate at all. Further nonlinearities, such as \emph{(square) roots} and \emph{logarithms}, could in principle be useful for learning physical equations, but they pose problems because their domains of definition are restricted to positive inputs. We leave the task of incorporating them in a principled way to future work. The ability to multiply two values is a second crucial component of our network architecture. Again, it is inspired by the typical form of physical equations, where multiplication of components is arguably the second most common basic operation after addition (which the linear layers can perform).
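To make the construction concrete, the forward pass of a single \method{} hidden layer can be written in a few lines of NumPy. This is a minimal sketch for illustration only (names are ours; the actual experiments use the theano implementation together with the training procedure of the next subsection):
\begin{verbatim}
import numpy as np

def sigm(z):
    return 1.0 / (1.0 + np.exp(-z))

def eql_layer(y_prev, W, b, types, v):
    """One hidden layer: linear map, then u unary units and v multiplication units.

    y_prev: (k',) input, W: (d, k'), b: (d,) with d = u + 2*v,
    types : length-u list with entries in {0,1,2,3} selecting id/sin/cos/sigm.
    """
    z = W.dot(y_prev) + b
    u = len(types)
    base = [lambda t: t, np.sin, np.cos, sigm]
    unary = np.array([base[types[i]](z[i]) for i in range(u)])
    binary = z[u:u + 2 * v:2] * z[u + 1:u + 2 * v:2]  # products of consecutive pairs
    return np.concatenate([unary, binary])            # k = u + v outputs

def eql_forward(x, hidden_params, W_out, b_out):
    """Stack L-1 hidden layers followed by the linear read-out."""
    y = x
    for W, b, types, v in hidden_params:
        y = eql_layer(y, W, b, types, v)
    return W_out.dot(y) + b_out
\end{verbatim}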
Multiplication was introduced into neural networks long ago in the form of product units~\cite{DurbinRumelhart1989:ProductUnits} and Pi-Sigma units~\cite{ShinGhosh1991:pi-sigma}. Product units have a large fan-in and compute products over all their inputs, each raised to the power of the respective weight. The result typically behaves like a high-order polynomial, which is a powerful function approximator but rarely occurs in physical equations. Polynomials are also known to require careful fine-tuning in order not to overfit, which makes them a risky choice for the purpose of extrapolation. The Pi-Sigma units are multiplication units with a fixed number of factors, and our multiplication units are the special case of 2 factors. We find that multiplying just two values at a time is well adjusted to the task we aim at, as it allows us to control the maximal degree of the learned polynomial by the depth of the network. Finally, each layer of the network contains unary units that act as \emph{identity} maps, which in particular gives the network the option to learn functions with a smaller number of nonlinearities than the total network depth. \subsection{Network training} The \method{} is fully differentiable in its free parameters $\theta=\{W^{(1)},\dots,W^{(L)},b^{(1)},\dots,b^{(L)}\}$, which allows us to train it in an end-to-end fashion using back-propagation. We adopt a Lasso-like objective~\cite{tibshirani1996regression}, \begin{align} L(D)&=\frac{1}{|D|}\sum^{|D|}_{i=1}\|\psi(x_i) - y_i\|^2 + \lambda \sum_{l=1}^L\big|W^\l\big|_1\,,\label{eqn:loss} \end{align} that is, a linear combination of $L_2$ loss and $L_1$ regularization, and apply a stochastic gradient descent algorithm with mini-batches and Adam~\cite{KingmaBa2015:Adam} for calculating the updates: \begin{align} \theta_{t+1} &= \theta_{t} + \text{Adam}\left(\frac{\partial L(D_{(t)})}{\partial \theta}, \alpha\right), \end{align} where $D_{(t)}$ denotes the current mini-batch and $\alpha$ is the stepsize parameter. The choice of Adam is not critical and standard stochastic gradient descent also works. In all numerical experiments we use $\alpha=0.001$ and a mini-batch size of 20. The role of the $L_1$ regularization is to encourage networks with sparse connections, matching the intuition that a typical formula describing a physical system contains only a small number of terms, each operating only on a few variables. However, in a non-convex setting where local minima are likely to occur, this type of regularization can have an undesirable side-effect: during the course of the optimization the weights hardly ever change their sign. The reason is that the regularization leads to a constant rate of weight decay, whereas the counteracting derivative with respect to the square loss is proportional to the backpropagated error signal and the input to the unit. The latter contributions are often smaller along paths with small weights, such that many weights go to zero and stay there. Additionally, any non-zero regularization term causes the learned weights to reflect a trade-off between minimizing the loss and the regularizer. Although this can lead to improved generalization, it also results in a systematic underestimation of the function values.
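The training objective itself is straightforward to write down; the following NumPy fragment (a sketch for illustration only, assuming predictions have already been computed) evaluates \eqn{eqn:loss} for a mini-batch, with the $L_1$ penalty taken over the weight matrices only:
\begin{verbatim}
import numpy as np

def eql_loss(y_pred, y_true, weight_matrices, lam):
    """Lasso-like objective: mean squared error plus lambda * sum of L1 norms of W^(l)."""
    mse = np.mean(np.sum((y_pred - y_true) ** 2, axis=1))  # (1/|D|) sum ||psi(x)-y||^2
    l1 = sum(np.abs(W).sum() for W in weight_matrices)     # biases are not penalized
    return mse + lam * l1
\end{verbatim}
In the experiments this objective is minimized with Adam using $\alpha=0.001$ and mini-batches of size 20, as stated above.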
Therefore, we follow a hybrid regularization strategy: at the beginning of the training procedure ($t<t_1$) we use no regularization ($\lambda=0$), such that parameters can vary freely and reach reasonable starting points. Afterwards, we switch on the regularization by setting $\lambda$ to a nonzero value, which has the effect that a sparse network structure emerges. Finally, for the last steps of the training ($t>t_2$) we disable $L_1$ regularization ($\lambda=0$) but enforce the same $L_0$ norm of the weights. This is achieved by keeping all weights $w\in W^{1\dots L}$ that are close to $0$ fixed at $0$, \ie if $|w|<0.001$ then we set $w=0$ for the remaining epochs. This ensures that the learned model not only finds a function of the right parametric form, but also fits the observed values as closely as possible. We observed that the exact choice of breakpoints $t_1$ and $t_2$ is not critical. In practice, we use $t_1 = \frac{1}{4} T$ and $t_2=\frac{19}{20} T$, where $T$ is the total number of update steps. $T$ was selected large enough to ensure convergence. Note that convergence to a sparse structure is important here, so early stopping would be disadvantageous. \subsection{Model selection for extrapolation}\label{sec:modelsel} \method{} networks have a number of hyper-parameters, \eg the number of layers, the number of units and the regularization constant. Unfortunately, standard techniques for model selection, such as evaluation on a hold-out set or cross-validation, will not be optimal for our purpose, since they rely on interpolation quality. In order to extrapolate, the network has to find the ``right'' formula. But how can we tell? We use Occam's razor: the simplest formula is most likely the right one. Intuitively, if we have the choice between $\cos(x)$ and its truncated power series approximation $1-x^2/2 + x^4/24$, the first one is preferred. We use the number of active hidden units in the network as a proxy for the complexity of the formula, see Appendix A1 for details. One could also think of differentiating between the unit types. In any case, this argumentation is only correct if the model explains the data well, \ie it has a low validation error. So we have a dual objective to minimize, which we solve by ranking the instances \wrt validation error and sparsity and selecting the one with the smallest $L_2$ norm (in rank space), see \eqn{eqn:model:sel}. Furthermore, the optimization process may only find a local optimum of the training objective, which depends on the initialization of the parameters. We use independent runs to quantify expected performance deviations. \documentclass[a4paper]{article} % For LaTeX2e \usepackage[margin=2.5cm,top=2cm]{geometry} \usepackage[square,sort]{natbib} \bibliographystyle{abbrvnat} \renewcommand{\cite}[1]{\citep{#1}} \usepackage[utf8]{inputenc} % allow utf-8 input \usepackage[T1]{fontenc} % use 8-bit T1 fonts % professional-quality tables % compact symbols for 1/2, etc. % microtypography % hyperlinks % simple URL typesetting \graphicspath{{../graphics/}} \newcommand{\Fig}[1]{Figure~\ref{#1}} \newcommand{\fig}[1]{Fig.~\ref{#1}} \newcommand{\Tab}[1]{Table~\ref{#1}} \newcommand{\tab}[1]{Tab.~\ref{#1}} \newcommand{\Eqn}[1]{Equation \eqref{#1}} \newcommand{\eqn}[1]{Eq.~\eqref{#1}} % Eq. (1.1) \newcommand{\eqnp}[1]{(Eq.~\ref{#1})} % (Eq.
1.1) \renewcommand{\sec}[1]{Section~\ref{#1}} % Section 1 \newcommand{\ie}{i.\,e.~} \newcommand{\eg}{e.\,g.~} \newcommand{\wrt}{w.\,r.\,t.~} \newcommand{\Real}{\ensuremath{\mathbb R}} % Real numbers \newcommand{\Unit}{\ensuremath{\mathbb I}} % Unit Matrix \newcommand{\T}{\ensuremath{\top}} % Transpose \newcommand{\sigm}{\ensuremath{\text{sigm}}} % sigmoid \newcommand\Tstrut{\rule{0pt}{2.6ex}} \DeclareMathOperator*{\argmin}{arg\,min} \newcommand{\method}{EQL} %{\textcolor{green}{EQL}}%{EQL,ABFNet} \newcommand{\typ}{\ensuremath{I}} % unit type \newcommand{\D}{\ensuremath{\mathbf{D}}} % dataset \newcommand{\train}{\ensuremath{\text{Tr}}} % training \newcommand{\test}{\ensuremath{\text{T}}} % training \newcommand{\val}{\ensuremath{\text{V}}} % training \newcommand{\extra}{\ensuremath{\text{X}}} % training \newcommand{\restr}{\ensuremath{\text{XTr}}} % training \newcommand{\restrval}{\ensuremath{\text{XV}}} % training \newcommand{\layer}[1]{{(#1)}} % layer l \renewcommand{\l}{{\layer{l}}} % layer l \newcommand{\lm}{{\layer{l-1}}} % layer l-1 \usepackage[disable]{todonotes} \setlength{\marginparwidth}{3cm} \newcommand{\Geo}[2][inline]{\todo[#1,color=yellow!60,size=\scriptsize]{#2}} \newcommand{\Chl}[2][inline]{\todo[#1,color=green!70,size=\scriptsize]{#2}} \pdfinfo{ /Title (Extrapolation and learning equations) /Author (Georg Martius and Christoph H. Lampert)} \setcounter{secnumdepth}{0} \begin{document} \title{Extrapolation and learning equations} \author{% Georg Martius \& Christoph H. Lampert\\ IST Austria\\ Am Campus 1, 3400 Klosterneuburg, Austria\\ \texttt{\{gmartius,chl\}@ist.ac.at} } \maketitle \begin{abstract} In classical machine learning, regression is treated as a black box process of identifying a suitable function from a hypothesis set without attempting to gain insight into the mechanism connecting inputs and outputs. In the natural sciences, however, finding an interpretable function for a phenomenon is the prime goal as it allows to understand and generalize results. This paper proposes a novel type of function learning network, called equation learner (EQL), that can learn analytical expressions and is able to extrapolate to unseen domains. It is implemented as an end-to-end differentiable feed-forward network and allows for efficient gradient based training. Due to sparsity regularization concise interpretable expressions can be obtained. Often the true underlying source expression is identified. \end{abstract} \input{intro.tex} \input{methods.tex} \input{relatedwork.tex} \input{results.tex} \vspace*{-.2em} \section{Conclusions}\vspace*{-.2em} We presented a new network architecture called \method{} that can learn analytic expressions that typically occur in equations governing physical, in particular mechanical, systems. The network is fully differentiable, which allows end-to-end training using backpropagation. By sequencing $L_1$ regularization and fixing $L_0$ norm we achieve sparse representations with unbiased estimation of factors within the learned equations. We also introduce a model selection procedure specifically designed to select for good extrapolation quality by a multiobjective criterion based on validation error and sparsity. The proposed method is able to learn functional relations and extrapolate them to unseen parts of the data space, as we demonstrate by experiments on synthetic as well as real data. 
The approach learns concise functional forms that may provide insights into the relationships within the data, as we show on physical measurements of x-ray transition energies. The optimization problem is nontrivial and has many local minima. We have shown cases where the algorithm is not reliably finding the right equation but instead finds an approximation only, in which case extrapolation may be poor. If the origin of the data is not in the hypothesis class, \ie the underlying expression cannot be represented by the network, good extrapolation performance cannot be achieved. Thus it is important to increase the model class by incorporating more base functions which we will address in future work alongside the application to larger examples. We expect good scaling capabilities to larger systems due to the gradient based optimization. Apart from the extrapolation we also expect improved interpolation results in high-dimensional spaces, where data is less dense. \subsubsection*{Acknowledgments} GM received funding from the People Programme (Marie Curie Actions) of the European Union's Seventh Framework Programme (FP7/2007-2013) under REA grant agreement no.~[291734]. \begin{thebibliography}{24} \providecommand{\natexlab}[1]{#1} \providecommand{\url}[1]{\texttt{#1}} \expandafter\ifx\csname urlstyle\endcsname\relax \providecommand{\doi}[1]{doi: #1}\else \providecommand{\doi}{doi: \begingroup \urlstyle{rm}\Url}\fi \bibitem[Basak et~al.(2007)Basak, Pal, and Patranabis]{basak2007:SVR} D.~Basak, S.~Pal, and D.~C. Patranabis. \newblock Support vector regression. \newblock \emph{Neural Information Processing-Letters and Reviews}, 11\penalty0 (10):\penalty0 203--224, 2007. \bibitem[Ben-David et~al.(2010)Ben-David, Blitzer, Crammer, Kulesza, Pereira, and Vaughan]{ben2010theory} S.~Ben-David, J.~Blitzer, K.~Crammer, A.~Kulesza, F.~Pereira, and J.~W. Vaughan. \newblock A theory of learning from different domains. \newblock \emph{Machine Learning}, 79\penalty0 (1-2):\penalty0 151--175, 2010. \bibitem[Bengio(2009)]{bengio2009learning} Y.~Bengio. \newblock Learning deep architectures for {AI}. \newblock \emph{Foundations and Trends in Machine Learning}, 2\penalty0 (1):\penalty0 1--127, 2009. \bibitem[Bengio et~al.(2013)Bengio, Courville, and Vincent]{bengio2013representation} Y.~Bengio, A.~Courville, and P.~Vincent. \newblock Representation learning: A review and new perspectives. \newblock \emph{IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)}, 35\penalty0 (8):\penalty0 1798--1828, 2013. \bibitem[Bishop(1995)]{bishop1995neural} C.~M. Bishop. \newblock \emph{Neural networks for pattern recognition}. \newblock Oxford University Press, 1995. \bibitem[Broomhead and Lowe(1988)]{broomhead1988radial} D.~S. Broomhead and D.~Lowe. \newblock Radial basis functions, multi-variable functional interpolation and adaptive networks. \newblock Technical report, DTIC Document, 1988. \bibitem[Deslattes et~al.(2003)Deslattes, Kessler~Jr, Indelicato, De~Billy, Lindroth, and Anton]{Deslattes2003:XrayTransEnergies} R.~D. Deslattes, E.~G. Kessler~Jr, P.~Indelicato, L.~De~Billy, E.~Lindroth, and J.~Anton. \newblock X-ray transition energies: new approach to a comprehensive evaluation. \newblock \emph{Reviews of Modern Physics}, 75\penalty0 (1):\penalty0 35, 2003. \bibitem[Durbin and Rumelhart(1989)]{DurbinRumelhart1989:ProductUnits} R.~Durbin and D.~E. Rumelhart. \newblock Product units: A computationally powerful and biologically plausible extension to backpropagation networks. 
\newblock \emph{Neural Computation}, 1\penalty0 (1):\penalty0 133--142, Mar. 1989. \newblock ISSN 0899-7667. \newblock \doi{10.1162/neco.1989.1.1.133}. \newblock URL \url{http://dx.doi.org/10.1162/neco.1989.1.1.133}. \bibitem[Gy{\"o}rfi et~al.(2013)Gy{\"o}rfi, H{\"a}rdle, Sarda, and Vieu]{gyorfi2013nonparametric} L.~Gy{\"o}rfi, W.~H{\"a}rdle, P.~Sarda, and P.~Vieu. \newblock \emph{Nonparametric curve estimation from time series}, volume~60. \newblock Springer, 2013. \bibitem[Kingma and Ba(2015)]{KingmaBa2015:Adam} D.~Kingma and J.~Ba. \newblock Adam: A method for stochastic optimization. \newblock In \emph{in Proceedings of ICLR}, 2015. \bibitem[M{\"u}ller et~al.(1997)M{\"u}ller, Smola, R{\"a}tsch, Sch{\"o}lkopf, Kohlmorgen, and Vapnik]{muller1997predicting} K.-R. M{\"u}ller, A.~J. Smola, G.~R{\"a}tsch, B.~Sch{\"o}lkopf, J.~Kohlmorgen, and V.~Vapnik. \newblock Predicting time series with support vector machines. \newblock In \emph{Artificial Neural Networks (ICANN)}, pages 999--1004. Springer, 1997. \bibitem[Pearl(2000)]{Pearl2000} J.~Pearl. \newblock \emph{Causality}. \newblock Cambridge {U}niversity {P}ress, 2000. \bibitem[Peters et~al.(2014)Peters, Mooij, Janzing, and Sch{\"o}lkopf]{PetersMJS2014} J.~Peters, J.~Mooij, D.~Janzing, and B.~Sch{\"o}lkopf. \newblock Causal discovery with continuous additive noise models. \newblock \emph{Journal of Machine Learning Research (JMLR)}, 15:\penalty0 2009--2053, 2014. \bibitem[Poon and Domingos(2012)]{PoonDomingos2011:sum-product-networks} H.~Poon and P.~M. Domingos. \newblock Sum-product networks: {A} new deep architecture, 2012. \bibitem[Quionero-Candela et~al.(2009)Quionero-Candela, Sugiyama, Schwaighofer, and Lawrence]{quionero2009dataset} J.~Quionero-Candela, M.~Sugiyama, A.~Schwaighofer, and N.~D. Lawrence. \newblock \emph{Dataset shift in machine learning}. \newblock The MIT Press, 2009. \bibitem[Schmidt and Lipson(2009)]{SchmidtLipson2009:learnnaturallaws} M.~Schmidt and H.~Lipson. \newblock Distilling free-form natural laws from experimental data. \newblock \emph{Science}, 324\penalty0 (5923):\penalty0 81--85, 2009. \newblock ISSN 0036-8075. \newblock \doi{10.1126/science.1165893}. \newblock URL \url{http://science.sciencemag.org/content/324/5923/81}. \bibitem[Shin and Ghosh(1991)]{ShinGhosh1991:pi-sigma} Y.~Shin and J.~Ghosh. \newblock The pi-sigma network : An efficient higher-order neural network for pattern classification and function approximation. \newblock In \emph{in Proceedings of the International Joint Conference on Neural Networks}, pages 13--18, 1991. \bibitem[Smola and Sch{\"o}lkopf(2004)]{smola2004tutorial} A.~J. Smola and B.~Sch{\"o}lkopf. \newblock A tutorial on support vector regression. \newblock \emph{Statistics and computing}, 14\penalty0 (3):\penalty0 199--222, 2004. \bibitem[Specht(1991)]{specht1991general} D.~F. Specht. \newblock A general regression neural network. \newblock \emph{IEEE Transactions on Neural Networks (TNN)}, 2\penalty0 (6):\penalty0 568--576, 1991. \bibitem[{Theano Development Team}(2016)]{2016arXiv160502688short} {Theano Development Team}. \newblock {Theano: A {Python} framework for fast computation of mathematical expressions}. \newblock \emph{arXiv e-prints}, abs/1605.02688, May 2016. \newblock URL \url{http://arxiv.org/abs/1605.02688}. \bibitem[Tibshirani(1996)]{tibshirani1996regression} R.~Tibshirani. \newblock Regression shrinkage and selection via the lasso. \newblock \emph{Journal of the Royal Statistical Society. Series B (Methodological)}, pages 267--288, 1996. 
\bibitem[Wiener(1949)]{wiener1949extrapolation} N.~Wiener. \newblock \emph{Extrapolation, interpolation, and smoothing of stationary time series}, volume~2. \newblock The MIT Press, 1949. \bibitem[Williams and Rasmussen(2006)]{williams2006gaussian} C.~K.~I. Williams and C.~E. Rasmussen. \newblock \emph{Gaussian processes for machine learning}. \newblock The MIT Press, 2006. \bibitem[Zaremba et~al.(2014)Zaremba, Kurach, and Fergus]{ZarembaFergus2014:LearnMathIdentities} W.~Zaremba, K.~Kurach, and R.~Fergus. \newblock Learning to discover efficient mathematical identities. \newblock In Z.~Ghahramani, M.~Welling, C.~Cortes, N.~Lawrence, and K.~Weinberger, editors, \emph{Advances in Neural Information Processing Systems 27}, pages 1278--1286. Curran Associates, Inc., 2014. \end{thebibliography} \input{appendix.tex} \end{document} \appendix \section{Appendix} \section{A1: Model selection details}\label{sec:modelsel:app} \subsection{Quantifying sparsity} We actually want a measure of the complexity of the formula; however, since it is not clear what the right choice of measure is, we use sparsity instead, counting the number of active/used hidden units, denoted by $s$. For a given network $\phi$ we get \begin{align} s(\phi) = \sum_{l=1}^{L-1}\sum_{i=1}^k\Theta( |W^\l_{i,\cdot}| * |W^{\layer{l+1}}_{\cdot,i}| - 0.01)\,,\label{eqn:s} \end{align} where $\Theta$ is the Heaviside function and 0.01 is an arbitrary threshold. For the multiplication units the norms of the incoming weights for both inputs are added (omitted to avoid clutter in the formula). \subsection{Selection criteria} As stated in the main text, we strive to choose the model that is both simple and has good performance in terms of the validation set. Since the two quantities have different scales, we propose to compare them based on their rankings. Let $r^v(\phi)$ and $r^s(\phi)$ be the ranks of the network $\phi$ \wrt the validation error and the sparsity $s(\phi)$, respectively; then the network with minimal squared rank norm is selected: \begin{align} \argmin_\phi\left[ r^v(\phi)^2 + r^s(\phi)^2\right] \label{eqn:model:sel} \end{align} In \fig{fig:model:sel} the extrapolation performance of all considered networks for the kin2D-3 dataset is visualized as a function of validation error and sparsity. It becomes evident that the best performing networks are both sparse and have a low validation error. \section{A2: Dependence on noise and number of data points}\label{sec:dep:noise-pts} In order to understand how the method depends on the amount of noise and the number of data points we scan through the two parameters and present the empirical results in \fig{fig:dep:noise-pts}. In general the method is robust to noise and, as expected, more noise can be compensated for by more data.
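A direct NumPy transcription of this selection procedure could look as follows; it is a sketch under the stated conventions (threshold 0.01, pooling the incoming weight norms of both inputs of a multiplication unit) and does not treat ties in the ranking specially:
\begin{verbatim}
import numpy as np

def unit_in_norms(W, u, v):
    """L1 norm of incoming weights per hidden unit; product units pool both input rows."""
    row = np.abs(W).sum(axis=1)                    # one entry per row, d = u + 2*v rows
    return np.concatenate([row[:u], row[u:u + 2 * v:2] + row[u + 1:u + 2 * v:2]])

def sparsity(weights, units, thresh=0.01):
    """s(phi): number of active hidden units, cf. Appendix A1."""
    s = 0
    for W_in, (u, v), W_out in zip(weights[:-1], units, weights[1:]):
        in_norm = unit_in_norms(W_in, u, v)        # |W^(l)_{i,.}| per unit i
        out_norm = np.abs(W_out).sum(axis=0)       # |W^(l+1)_{.,i}|
        s += int(np.sum(in_norm * out_norm > thresh))
    return s

def select_model(val_errors, sparsities):
    """Instance with the smallest L2 norm of (validation-error rank, sparsity rank)."""
    rv = np.argsort(np.argsort(np.asarray(val_errors)))
    rs = np.argsort(np.argsort(np.asarray(sparsities)))
    return int(np.argmin(rv ** 2 + rs ** 2))
\end{verbatim}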
Extrapolation and learning equations
1610.02995
Table 2: Extrapolation performance for kinematic of robotic arms. See Tab. 1 for details. Standard deviations for 5 random initializations. Interpolation error for all methods is around 0.012±0.02
[ "[EMPTY]", "kin-3-end", "kin-4-end", "kin-5-all" ]
[ [ "EQL", "0.028±0.019", "0.018±0.004", "0.036±0.035" ], [ "MLP", "0.369±0.041", "0.415±0.020", "0.346±0.013" ], [ "SVR", "0.235", "0.590", "0.260" ] ]
Robotic arms. A more complicated task is to learn the forward kinematics of multi-segment robotic arms. We consider planar arms with 3, 4, and 5 joints, where each segment is 0.5 units long. For training the arm is controlled by sinusoidal joint target angles with amplitude in [−\nicefracπ2,\nicefracπ2], each joint with a different frequency. The number of data points are: 3000, 6000, and 18000 for the 3, 4, and 5 segment arms respectively, with added noise as above. For testing extrapolation performance the amplitude [−π,π] was used. Note that the extrapolation space is much larger than the training space. The task is to predict the coordinates of the end-effector of the arms (kin-3-end, kin-4-end) and the coordinates of all segment positions kin-5-all. The numerical results, see Tab. Model selection as above with u=v∈{10,20}, (MLP: k∈{10,50}) and layer number L∈{2,3,4}. To illustrate the dependence on the amount of noise and the number of available training points we provide a quantification in Appendix A2. In short, increasing noise can be compensated by increasing amount of data to keep the performance.
\section{Introduction}\label{sec:intro} The quality of a model is typically measured by its ability to generalize from a training set to previously unseen data from the same distribution. In regression tasks generalization essentially boils down to interpolation if the training data is sufficiently dense. As long as models are selected correctly, \ie in a way to not overfit the data, the regression problem is well understood and can -- at least conceptually -- be considered solved. % However, when working with data from real-world devices, \eg controlling a robotic arm, interpolation might not be sufficient. It could happen that future data lies outside of the training domain, \eg when the arm is temporarily operated outside of its specifications. For the sake of robustness and safety it is desirable in such a case to have a regression model that continues to make good predictions, or at least does not fail catastrophically. This setting, which we call \emph{extrapolation generalization}, is the topic of the present paper. We are particularly interested in regression tasks for systems that can be described by real-valued analytic expression, \eg mechanical systems such as a pendulum or a robotic arm. These are typically governed by a highly nonlinear function but it is nevertheless possible, in principle, to infer their behavior on an extrapolation domain from their behavior elsewhere. We make two main contributions: 1) a new type of network that can learn analytical expressions and is able to extrapolate to unseen domains and 2) a model selection strategy tailored to the extrapolation setting. The following section describes the setting of regression and extrapolation. Afterwards we introduce our method and discuss the architecture, its training, and its relation to prior art. %\sec{sec:method}) We present our results in the Section \emph{Experimental evaluation} %\sec{sec:results} and close with conclusions. \subsection{Related work}% In the field of machine learning, regression is often treated as a black box process of identifying a suitable real-valued function from a hypothesis set, \eg a reproducing kernel Hilbert space for Gaussian Processes Regression (GPR)~\cite{williams2006gaussian} or Support Vector Regression (SVR)~\cite{smola2004tutorial}, or a multi-layer network of suitable expressive power~\cite{specht1991general}. The goal is to find a prediction function that leads to a small expected error on future data, not necessarily to gain insight into the mechanism of how the output values derive from the inputs. The goal of finding an interpretable function is rather common in the natural sciences, such as biology, where high noise levels and strong inter-system variability often make it important to rely on external prior knowledge, and finding a ``biologically plausible'' model is often preferable over finding one that makes the highest prediction accuracy. As a consequence, model classes are often highly constrained, \eg allowing only for sparse linear models. The task of learning a true, nonlinear, functional dependence from observing a physical system, has received little attention in the machine learning literature so far, but forms the basis of the field of \emph{system identification}. There, typically the functional form of the system is known and only the parameters have to be identified. Another approach is to model the time evolution with autoregressive models or higher order convolution integrals (Volterra series) but learning analytic formulas is not common. 
\emph{Causal learning} is an area of recent research that aims at identifying a causal relation between multiple observables, % phenomena, which are typically the result of a physical process. Classically, this tasks reduces to finding a minimal graphical model based only on tests of conditional independence~\cite{Pearl2000}. Although very successful in some fields, this classical approach only provides a factorization of the problem, separating causes and effects, but it leaves the exact functional dependency unexplained. %\Geo[]{actually they explain it in terms of the conditional probability.}. Recent extensions of causal learning can take a functional view, but typically do not constrain the regression functions to physically plausible ones, but rather constrain the noise distributions~\cite{PetersMJS2014}. % one expects to observe The topic of learning a regression function with emphasis on \emph{extrapolation} performance has not been studied much in the literature so far. Existing work on time series prediction deals with extrapolation in the temporal domain, \ie predict the next value(s)~\cite{wiener1949extrapolation}. By our nomenclature, this is typically rather an interpolation task, when the prediction is based on the behaviour of the series at earlier time steps but with similar value distribution~\cite{muller1997predicting,gyorfi2013nonparametric}. Extrapolating in the data domain implies that the data distribution at prediction time will differ from the data distribution at training time. This is traditionally called the \emph{domain adaptation} setting. In particular, since we assume a common labeling function, our setting would fall under the \emph{covariate shift} setting~\cite{quionero2009dataset}. Unfortunately, this connection is not particularly useful for our problem. As domain adaptation typically does not make additional assumptions about how the data distribution may change, existing methods need access to some unlabeled data from the test distribution already at training time~\cite{ben2010theory}. In %,ben2010impossibility our setting this is not possible to obtain. On the technical level, \method{} networks are an instance of general feed-forward networks for function approximation~\cite{bishop1995neural}. In contrast to recent trends towards \emph{deep learning}~\cite{bengio2009learning,bengio2013representation}, our goal is not to learn any data representation, but to learn a function which compactly represents the input-output relation and generalizes between different regions of the data space, like a physical formula. Structurally, \method{} networks resemble \emph{sum-product networks (SPNs)}~\cite{PoonDomingos2011:sum-product-networks} and \emph{Pi-Sigma networks (PSNs)}~\cite{ShinGhosh1991:pi-sigma}, in the sense that both are based on directed acyclic graphs with computational units that allows for summation and multiplication. Otherwise, SPNs are different as they act as efficient alternative to probabilistic graphical models for representing probability distributions, whereas \method{} networks are meant for the classical task of function approximation. In PSNs each output needs to be passed through multiplicative units, whereas in \method{} multiplication is optional. Finding equations for observations is also known as symbolic regression where a search is performed in a certain function space, typically done with evolutionary computation. 
With these techniques it is possible to discover physical laws such as invariants and conserved quantities~\cite{SchmidtLipson2009:learnnaturallaws}. Unfortunately, the computational complexity/search time explodes for larger expressions and high-dimensional problems. We attempt to circumvent this by modeling it as a gradient based optimization problem. Related to symbolic regression is finding mathematical identities for instance to find computationally more efficient expressions. In \cite{ZarembaFergus2014:LearnMathIdentities} this was done using machine learning to overcome the potentially exponential search space. \section{Experimental evaluation}\label{sec:results} We demonstrate the ability of \method{} to learn physically inspired models with good extrapolation quality by experiments on synthetic and real data. For this, we implemented the network training and evaluation procedure in \emph{python} based on the \emph{theano} framework~\cite{2016arXiv160502688short}. We will make the code for training and evaluation public after acceptance of the manuscript. %Todo \paragraph{Pendulum.} We first present the results of learning the equations of motion for a very simple physical system: a pendulum. The state space of a pendulum is $X=\Real\times\Real$ where the first value is the angle of the pole in radians and the second value is the angular velocity. In the physics literature, these are usually denoted as $(\theta,\omega)$, but for our purposes, we call them $(x_1,x_2)$ in order to keep the notation consistent between experiments. The pendulum's dynamic behavior is governed by the following two ordinary differential equations: \begin{equation} \dot x_1 = x_2 \qquad\qquad\text{and}\qquad\qquad \dot x_2 = -g \sin(x_1)\,,\label{eqn:pend} % y_2 := \end{equation} where $g=9.81$ is the gravitation constant. We divide each equation by $g$ in order to balance the output scales and form a regression problem with two output values, $y_1=\frac{1}{g}x_2$ and $y_2=-\sin(x_1)$. As training data, we sample 1000 points uniformly in the hypercube {\small $[-h,h] \times [-h,h]$} for $h=2$. Note that this domain contains more than half of a sine period, so it should be sufficient to identify the analytic expression. The target values are disturbed by Gaussian noise with standard derivation $\sigma=0.01$. We also define three test sets, each with 1000 points. The \emph{interpolation test set} is sampled from the same data distribution as the training set. The \emph{extrapolation (near) test set} contains data sampled uniformly from the data domain {\small $[-\frac32 h,\frac32 h] \times [-\frac32 h,\frac32 h]\setminus [-h,h] \times [-h,h]$}, which is relatively near the training region and the \emph{extrapolation (far) test set} extends the region to further outside: {\small $[-2h,2h] \times [-2h,2h]\setminus [-h,h] \times [-h,h]$}. We train a 2-layer \method{} and perform model selection among the hyper-parameters: the regularization strength {\small $\lambda\in10^{\{-7,-6.3,-6,-5.3,-5,-4.3,-4,-3.3,-3\}}$} and the number of nodes {\small $\frac 1 4 u=v\in\{1,3,5\}$}. All weights are randomly initialized from a normal distribution with {\small $\sigma = \sqrt{1/(k'+d)}$}. The unit selection $\typ{}$ is set such that all unit types are equally often. To ensure convergence we chose $T=10000$ epochs. 
We compare our algorithm to a standard multilayer perceptron (MLP) with $\tanh$ activation functions and possible hyperparameters: $\lambda$ as for \method, number of layers {\small $L\in\{2,3\}$}, and number of neurons {\small $k\in\{5,10,20\}$}. A second baseline is given by epsilon support vector regression (SVR)~\cite{basak2007:SVR} with two hyperparameters {\small $C\in10^{\{-3,-2,-1,0,1,2,3,3.5\}}$} and {\small $\epsilon \in 10^{\{-3,-2,-1,0\}}$} using radial basis function kernel with width {\small $\gamma\in \{0.05,0.1,0.2,0.5,1.0\}$}. Numeric results are reported in \tab{tab:pend:results}. As expected all models are able to interpolate well with a test error on the order of the noise level ($\sigma=0.01$). For extrapolation however, the performance differ between the approaches. For MLP the prediction quality decreases quickly when leaving the training domain. SVR remains a bit better in the near extrapolation but also fails catastrophically on the far extrapolation data. \method, on the other hand, extrapolates well, both near and far away from the training domain. The reasons can be seen in Figure~\ref{fig:pend}: while the MLP and SVR simply learns a function that interpolates the training values, \method{} finds the correct functional expression and therefore predicts the correct values for any input data. \paragraph{Double pendulum kinematics.} The second system we consider real double pendulum where the forward kinematics should be learned. For that we use recorded trajectories of a real double pendulum \cite{SchmidtLipson2009:learnnaturallaws}. The task here is to learn the position of the tips of the double pendulum segments from the given joint angles ($x_1,x_2$). These positions where not measured such that we supply them by the following formula: $y_1=\cos(x_1), y_2=\cos(x_1)+\cos(x_1+x_2), y_3=\sin(x_1), y_4=\sin(x_1)+\sin(x_1+x_2)$ where $(y_1,y_3)$ and $(y_2,y_4)$ correspond to x-y-coordinates of the first and second end-point respectively. The dataset contains two short trajectories. The first covers only part of the domain (input as well as output) and consists of 819 samples where 10\% was used as validation set (randomly sampled), see \fig{fig:dpk}(a). The second trajectory corresponds to a behavior with several spins of both pendulum segments such that a much larger domain is covered. Nevertheless the angle values are confined to $[-\pi,\pi]$. We use this trajectory as extrapolation test set. The trajectory and the outputs of our method are shown in \fig{fig:dpk}(b). The prediction for unseen domains is perfect, which is also illustrated in a systematic sweep, see \fig{fig:dpk}(c). The performance of MLP is off already near the training domain. SVR is a bit better, but still does not give usable predictions for the test data, see also the root means square error in \fig{fig:dpk}(d). Model selection is performed to determine $\lambda$ as above, $u=v\in\{3,5\}$, (MLP: $k\in\{5,10,20\}$) and layer number $L\in\{2,3\}$. \paragraph{Robotic arms.} A more complicated task is to learn the forward kinematics of multi-segment robotic arms. We consider planar arms with 3, 4, and 5 joints, where each segment is 0.5 units long. For training the arm is controlled by sinusoidal joint target angles with amplitude in $[-\nicefrac{\pi}{2},\nicefrac{\pi}{2}]$, each joint with a different frequency. The number of data points are: 3000, 6000, and 18000 for the 3, 4, and 5 segment arms respectively, with added noise as above. 
For testing extrapolation performance the amplitude $[-\pi,\pi]$ was used. Note that the extrapolation space is much larger than the training space. The task is to predict the coordinates of the end-effector of the arms (\emph{kin-3-end}, \emph{kin-4-end}) and the coordinates of all segment positions \emph{kin-5-all}. The numerical results, see \tab{tab:kin}, shows that our method is able to extrapolate in these cases. Model selection as above with $u=v\in\{10,20\}$, (MLP: $k\in\{10,50\}$) and layer number $L\in\{2,3,4\}$. To illustrate the dependence on the amount of noise and the number of available training points we provide a quantification in Appendix A2. In short, increasing noise can be compensated by increasing amount of data to keep the performance. % linear relationship? \paragraph{Learning complex formula.} In order to find out whether \method{} can also learn more complicated formulas, we consider three examples with four-dimensional input and one-dimensional output: \begin{align} y &= \nicefrac{1}{3} \left(\sin(\pi x_1) + \sin\left(2 \pi x_2 + \nicefrac{\pi}{8}\right)+x_2 - x_3 x_4 \right)&\text{F-1}\label{eqn:syn1}\\ y &= \nicefrac{1}{3} \left(\sin(\pi x_1) + x_2 \cos(2\pi x_1 + \nicefrac{\pi}{4}) + x_3-x_4^2\right) &\text{F-2}\label{eqn:syn2}\\ y &= \nicefrac{1}{3} \left( (1+x_2) \sin(\pi x_1) + x_2 x_3 x_4\right) &\text{F-3}\label{eqn:syn3} \end{align} The first equation requires only one hidden layer to be represented. The second equation and third equation should requires two hidden layers. In particular, F-2 contains a product of $x_2$ and $\cos$ and F-3 contains a product of three terms, and we use it to test if our restriction to only pairwise product units causes problems for more complex target functions. We follow the same procedure as in the pendulum case for building training and test sets, though with $h=1$ as input data range. We use 10000 points for training set and validation set (90\%-10\% split) and 5000 points for each of the test sets. Model selection for \method{} is performed as above using the number of layers {\small $L\in{2,3,4}$}. The number of units is set to $\frac{1}{4}u=v=10$. For the MLP, we select $L$ and $\lambda$ from the same set as above as well as {\small $k\in\{10,30\}$}. \Tab{tab:syn:results} shows the numerical results. Again, all methods are able to interpolate, but only \method{} achieves good extrapolation results, except for equation F-3. There it settles in 9 out of 10 cases into a local minimum and finds only an approximating equation that deviates outside the training domain. Interestingly, if we restrict the base functions to not contain cosine, the algorithm finds the right formula. Note, the sparsity of the correct formula is lower than those of the approximation, so it should be selected if found. Figure~\fig{fig:syn} illustrates the performance and the learned networks visually. It shows one of the model-selected instances for each case. For F-1 the correct formula was identified, so correct predictions can be made even far outside the training region (much further than illustrated). For F-2 the network provided us with a surprise, because it yields good extrapolation performance with only one hidden layer! How can it implement $x_2\cos(a x_1+b)$? Apparently it uses $1.21 \cos(a x_1 + \pi + b + 0.41 x_2) + \sin(a x_1 + b + 0.41 x_2)$ which is a good approximation for $x_2 \in [-2,2]$. The sparsity of this solution is $5$ whereas the true solution needs at least $6$, which explains its selection. 
\Tab{tab:syn:results} shows the numerical results. Again, all methods are able to interpolate, but only \method{} achieves good extrapolation results, except for equation F-3. There it settles into a local minimum in 9 out of 10 cases and finds only an approximating equation that deviates outside the training domain. Interestingly, if we restrict the base functions to not contain cosine, the algorithm finds the right formula. Note that the sparsity of the correct formula is lower than that of the approximation, so it should be selected if found. \Fig{fig:syn} illustrates the performance and the learned networks visually. It shows one of the model-selected instances for each case. For F-1 the correct formula was identified, so correct predictions can be made even far outside the training region (much further than illustrated). For F-2 the network provided us with a surprise, because it yields good extrapolation performance with only one hidden layer! How can it implement $x_2\cos(a x_1+b)$? Apparently it uses $1.21 \cos(a x_1 + \pi + b + 0.41 x_2) + \sin(a x_1 + b + 0.41 x_2)$, which is a good approximation for $x_2 \in [-2,2]$. The sparsity of this solution is $5$ whereas the true solution needs at least $6$, which explains its selection. For F-3 the suboptimal local minimum approximates $(1+x_2)\sin(x_1)$ in an unexpected way, using $(x_1 + x_1 x_2)\cos(\beta x_1)$, which deviates quickly outside the training domain; the true solution would be sparser but was not found. Only if we remove cosine from the base functions do we always obtain the correct formula, see \fig{fig:syn}(c).

\paragraph{X-Ray transition energies.}
As a further example we consider data measured in atomic physics. When shooting electron beams onto atoms, one can excite them, and they consequently emit x-ray radiation with characteristic peak energies. For each element/isotope these energies are different, as they correspond to the potential differences between the electron shells, so that one can identify the elements in a sample this way. The data is taken from~\cite{Deslattes2003:XrayTransEnergies}, where we consider one specific transition, called the $K\,\alpha_2$ line, because it was measured for all elements. The true relationship between atomic number $Z$ and transition energies is complicated, as it involves many-body interactions and no closed-form solution exists. Nevertheless we can find out which relationships our system proposes. It is known that the main relationship is $K\,\alpha_2 \propto Z^2$ according to Moseley's law. Further correction terms for elements with larger $Z$ are potentially of higher order. We have data for elements with $10\le Z \le 100$, which is split into training/validation sets in the range $[10,91]$ (70/10 data points) and an extrapolation test set in the interval $[92,100]$ (14 data points because of isotopes). Since we have so little data, we evaluate the performance on 10 independent training/validation splits. The data is scaled to lie in $[0,1]$, \ie $x= Z/100$ and $y=K\alpha_2/100000$. Model selection is based here on validation error only; selecting jointly for sparsity and validation error yields only the $Z^2$ relationship. The mini-batch size is 2 here and $T=50000$ was used. \Fig{fig:xray} presents the data, the predictions, the learned formulas and the numerical results. \method{} and SVR achieve similar performance and the MLP is significantly worse. However, \method{} also yields interpretable formulas, see \fig{fig:xray}(e), which can be used to gain insights into the potential relationship.

\subsection{Poor extrapolation out of model class --- cart-pendulum system}
Let us now go beyond our assumptions and consider cases where the true target function is not an element of the hypothesis set. Consider a pendulum attached to a cart that can move horizontally along a rail but is attached to a spring-damper system, see \fig{fig:cp}(a). The system is parametrized by four state variables: the position of the cart, the angle of the pendulum, the velocity of the cart, and the angular velocity of the pendulum. We combine these into a four-dimensional vector $x=(x_1,\dots,x_4)$. We set up a regression problem with four outputs from the corresponding system of ordinary differential equations where $y_1 = \dot x_1 = x_3$, $y_2 = \dot x_2 = x_4$ and
\begin{align}
y_3&= \frac{-x_1-0.01 x_3+x_4^2 \sin\left(x_2\right)+0.1 x_4 \cos \left(x_2\right)+9.81 \sin \left(x_2\right) \cos \left(x_2\right)}{\sin ^2\left(x_2\right)+1}\label{eqn:cp}, \\
y_4&= \frac{-0.2 x_4 - 19.62 \sin \left(x_2\right) + x_1 \cos \left(x_2\right) + 0.01 x_3 \cos \left(x_2\right) - x_4^2 \sin \left(x_2\right)\cos \left(x_2\right)} {\sin^2\left(x_2\right)+1}.\nonumber
\end{align}
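For reference, these regression targets can be evaluated directly from the state vector. The following NumPy helper is a sketch for this exposition (not the original code); the state ordering $(x_1,x_2,x_3,x_4)$ = (cart position, pendulum angle, cart velocity, angular velocity) is inferred from $y_1=\dot x_1 = x_3$ and $y_2=\dot x_2 = x_4$:
\begin{verbatim}
import numpy as np

def cart_pendulum_targets(x):
    """Regression targets (y1..y4) of the cart-pendulum system.

    x: array of shape (n, 4) with columns (x1, x2, x3, x4).
    """
    x1, x2, x3, x4 = x[:, 0], x[:, 1], x[:, 2], x[:, 3]
    s, c = np.sin(x2), np.cos(x2)
    denom = s**2 + 1.0
    y1 = x3
    y2 = x4
    y3 = (-x1 - 0.01 * x3 + x4**2 * s + 0.1 * x4 * c + 9.81 * s * c) / denom
    y4 = (-0.2 * x4 - 19.62 * s + x1 * c + 0.01 * x3 * c - x4**2 * s * c) / denom
    return np.stack([y1, y2, y3, y4], axis=1)
\end{verbatim}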
The formulas contain divisions, which are not included in our architecture due to their singularities. Incorporating them in a principled manner is left for future work. Thus, the cart-pendulum dynamics is outside the hypothesis class. In this case we {\bf cannot} expect great extrapolation performance, and this is confirmed by the experiments. In \fig{fig:cp}(b,c) the extrapolation performance is illustrated by slicing through the input space. The near extrapolation performance is still acceptable for both \method{} and the MLP, but further away from the training region even the best instances differ considerably from the true values, see also the numeric results in \tab{tab:cp:results}. SVR performs poorly even in the near extrapolation range. Inspecting the learned expressions, we find that the sigmoid functions are rarely used.

\section{Regression and extrapolation}\label{sec:setting}
We consider a multivariate regression problem with a training set $\{(x_1,y_1),\dots,(x_N,y_N)\}$ with $x \in \Real^n$, $y\in \Real^m$. Because our main interest lies in extrapolation in the context of learning the dynamics of physical systems, we assume the data originates from an unknown analytical function (or system of functions), $\phi:\Real^n\to\Real^m$, with additive zero-mean noise, $\xi$, \ie $y=\phi(x)+\xi$ and $\mathbb{E}\xi=0$. The function $\phi$ may, for instance, reflect a system of ordinary differential equations that govern the movements of a robot arm or the like. The general task is to learn a function $\psi:\Real^n\to\Real^m$ that approximates the true functional relation as well as possible in the squared loss sense, \ie achieves minimal expected error $\mathbb{E}\|\psi(x) - \phi(x)\|^2$. In practice, we only have particular examples of the function values available and measure the prediction quality in terms of the empirical error on training or test data $D$,
\begin{align}
E(D)&=\frac{1}{N}\sum^{N}_{i=1}\|\psi(x_i) - y_i\|^2\,. \label{eqn:error}
\end{align}
If training and test data are sampled from the same distribution, we speak of an \emph{interpolation} problem. In the \emph{extrapolation} setting the training data is assumed to cover only a limited range of the data domain. In the example of the robot arm, for instance, the training may be restricted to a certain joint angle range or maximal velocity. For testing we want to make predictions about the unseen domains, \eg for higher velocities. As usual, we split the data that is available at training time into a part for model training and a part for validation or model selection.

\section{Learning a network for function extrapolation}\label{sec:method}
The main model we propose is a multi-layered feed-forward network with computational units specifically designed for extrapolation regression tasks. For an $L$-layer network, there are $L-1$ hidden layers, each consisting of a linear mapping followed by non-linear transformations. For simplicity of notation, we explain the network as if each hidden layer had the same structure ($k'$ inputs, $k$ outputs). In practice, each layer can be designed independently of the others, of course, as long as input/output dimensions match. The linear mapping at level $l$ maps the $k'$-dimensional input $y^{\lm}$ to the $d$-dimensional intermediate representation $z$ given by
\begin{align}
z^\l &= W^\l y^\lm + b^\l,
\end{align}
where $y^\lm$ is the output of the previous layer, with the convention $y^{(0)}=x$.
The weight matrix $W^\l\in \Real^{d \times k'}$ and the bias vector $b^\l\in\Real^{d}$ are free parameters that are learned during training. The non-linear transformation contains $u$ \emph{unary units}, $f_i:\Real\to\Real$, for $i=1,\dots,u$, and $v$ \emph{binary units}, $g_j:\Real\times\Real\to\Real$, for $j=1,\dots,v$. Their outputs are concatenated to form the layer output
\begin{align}
y^\l &:= \Big(f_1(z^\l_1),f_2(z^\l_2),\dots,f_{u}(z^\l_{u}),\nonumber\\
& \qquad g_{1}(z^\l_{u+1},z^\l_{u+2}),\dots,g_{v}(z^\l_{u+2v-1},z^\l_{u+2v}) \Big)\,.
\end{align}
In total, the nonlinear stage has $k = u + v$ outputs and $d = u + 2 v$ inputs. The unary units $f_1,\dots,f_u$ receive the respective components $z_1,\dots,z_u$ as inputs, and each unit may be one of the following base functions, as specified by a fixed type parameter $\typ_i\in\{0,1,2,3\}$:
\begin{align}
f_i(z_i) &:= \begin{cases} z_i & \text{ if } \typ_i=0,\\ \sin(z_i) & \text{ if } \typ_i=1,\\ \cos(z_i) & \text{ if } \typ_i=2,\\ \sigm(z_i) & \text{ if } \typ_i=3, \end{cases}&\text{ for } i=1,\dots,u,
\end{align}
where $\sigm(z)=\frac{1}{1+e^{-z}}$ is the standard sigmoid function. The binary units $g_1,\dots,g_v$ receive the remaining components $z_{u+1},\dots,z_{u+2v}$ as input in pairs of two. They are \emph{multiplication units} that compute the product of their two input values:
\begin{align}
g_j(z_{u+2j-1}, z_{u+2j}) &:= z_{u+2j-1} \cdot z_{u+2j}&\text{ for }j=1,\dots,v.
\end{align}
Finally, the $L$-th and last layer computes the regression values by a linear read-out
\begin{align}
y^{\layer{L}} &:= W^{\layer{L}} y^{\layer{L-1}} + b^{\layer{L}}.
\end{align}
The architecture is depicted in \fig{fig:network}. We call the new architecture Equation Learner (\method{}) and denote the function it defines by $\psi$.
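To make the computation concrete, the following plain NumPy sketch implements one hidden layer and the forward pass as defined above. It is illustrative only; the actual implementation is written in theano, and the function and variable names here are ours:
\begin{verbatim}
import numpy as np

def sigm(z):
    return 1.0 / (1.0 + np.exp(-z))

# one unary base function per type parameter I_i in {0, 1, 2, 3}
UNARY = {0: lambda z: z, 1: np.sin, 2: np.cos, 3: sigm}

def eql_hidden_layer(y_prev, W, b, unit_types, v):
    """One EQL hidden layer: linear map, then u unary and v multiplication units.

    y_prev:     layer input of shape (k',)
    W, b:       weights of shape (d, k') and (d,), with d = u + 2*v
    unit_types: sequence of length u with entries in {0, 1, 2, 3}
    """
    z = W @ y_prev + b                               # z in R^d
    u = len(unit_types)
    unary = [UNARY[t](z[i]) for i, t in enumerate(unit_types)]
    binary = [z[u + 2*j] * z[u + 2*j + 1] for j in range(v)]
    return np.array(unary + binary)                  # output in R^(u+v)

def eql_forward(x, hidden_layers, W_out, b_out):
    """hidden_layers: list of (W, b, unit_types, v); the last layer is linear."""
    y = x
    for W, b, unit_types, v in hidden_layers:
        y = eql_hidden_layer(y, W, b, unit_types, v)
    return W_out @ y + b_out
\end{verbatim}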
\subsection{Discussion of the architecture}
The proposed network architecture differs in two main aspects from typical feed-forward networks: the existence of multiplication units and the possibility of \emph{sine} and \emph{cosine} as nonlinearities for the unary units. Both design choices are motivated by our objective of learning a system of equations that govern a physical system and can extrapolate to new parts of the input space.

\emph{Sigmoid} nonlinearities are the canonical choice of \emph{activation function} for \emph{artificial neural networks} (ANN) and have proved to be successful. In fact, we include sigmoids in our architecture, making it a superclass of ANNs. However, they were typically disabled by the training procedure, corresponding to their absence in the considered physical equations. We do not include other, predominantly local nonlinearities, in particular \emph{radial basis functions}~\cite{broomhead1988radial}, since one cannot expect them to extrapolate at all. Further nonlinearities, such as \emph{(square) roots} and \emph{logarithms}, could in principle be useful for learning physical equations, but they pose problems because their domains of definition are restricted to positive inputs. We leave the task of incorporating them in a principled way to future work.

The ability to multiply two values is a second crucial component of our network architecture. Again, it is inspired by the typical form of physical equations, where multiplication of components is arguably the second most common basic operation after addition (which the linear layers can perform). Multiplication was introduced into neural networks long ago as product units~\cite{DurbinRumelhart1989:ProductUnits} and Pi-Sigma units~\cite{ShinGhosh1991:pi-sigma}. Product units have a large fan-in and compute the product of all their inputs, each raised to the power of the respective weight. The result typically behaves like a high-order polynomial, which is a powerful function approximator but rarely occurs in physical equations. Polynomials are also known to require careful fine-tuning in order not to overfit, which makes them a risky choice for the purpose of extrapolation. Pi-Sigma units are multiplication units with a fixed number of factors, and our multiplication units are a special case with two factors. We find that multiplying just two values at a time is well suited to the task we aim at, as it allows us to control the maximal degree of the learned polynomial via the depth of the network. Finally, each layer of the network contains unary units that act as \emph{identity} maps, which in particular gives the network the option to learn functions with a smaller number of nonlinearities than the total network depth.

\subsection{Network training}
The \method{} is fully differentiable in its free parameters $\theta=\{W^{(1)},\dots,W^{(L)},b^{(1)},\dots,b^{(L)}\}$, which allows us to train it in an end-to-end fashion using back-propagation. We adopt a Lasso-like objective~\cite{tibshirani1996regression},
\begin{align}
L(D)&=\frac{1}{|D|}\sum^{|D|}_{i=1}\|\psi(x_i) - y_i\|^2 + \lambda \sum_{l=1}^L\big|W^\l\big|_1\,,\label{eqn:loss}
\end{align}
that is, a linear combination of $L_2$ loss and $L_1$ regularization, and apply a stochastic gradient descent algorithm with mini-batches and Adam~\cite{KingmaBa2015:Adam} for calculating the updates:
\begin{align}
\theta_{t+1} &= \theta_{t} + \text{Adam}\left(\frac{\partial L(D_{(t)})}{\partial \theta}, \alpha\right),
\end{align}
where $D_{(t)}$ denotes the current mini-batch and $\alpha$ is the stepsize parameter. The choice of Adam is not critical and standard stochastic gradient descent also works. In all numerical experiments we use $\alpha=0.001$ and a mini-batch size of 20.

The role of the $L_1$ regularization is to encourage networks with sparse connections, matching the intuition that a typical formula describing a physical system contains only a small number of terms, each operating only on a few variables. However, in a non-convex setting where local minima are likely to occur, this type of regularization can have an undesirable side-effect: during the course of the optimization the weights hardly ever change their sign. The reason is that the regularization leads to a constant rate of weight decay, whereas the counteracting derivative with respect to the squared loss is proportional to the backpropagated error signal and the input to the unit. The latter contributions are often smaller along paths with small weights, such that many weights go to zero and stay there. Additionally, any non-zero regularization term causes the learned weights to reflect a trade-off between minimizing the loss and the regularizer. Although this can lead to improved generalization, it also results in a systematic underestimation of the function values. Therefore, we follow a hybrid regularization strategy: at the beginning of the training procedure ($t<t_1$) we use no regularization ($\lambda=0$), such that parameters can vary freely and reach reasonable starting points. Afterwards, we switch on the regularization by setting $\lambda$ to a nonzero value, which has the effect that a sparse network structure emerges. Finally, for the last steps of the training ($t>t_2$) we disable $L_1$ regularization ($\lambda=0$) but keep the $L_0$ norm of the weights fixed. This is achieved by keeping all weights $w\in W^{1\dots L}$ that are close to 0 at 0, \ie if $|w|<0.001$ then we set $w=0$ for the remaining epochs. This ensures that the learned model finds not only a function of the right parametric form, but also fits the observed values as closely as possible. We observed that the exact choice of breakpoints $t_1$ and $t_2$ is not critical. In practice, we use $t_1 = \frac{1}{4} T$ and $t_2=\frac{19}{20} T$, where $T$ is the total number of update steps. $T$ was selected large enough to ensure convergence. Note that convergence to a sparse structure is important here, so early stopping would be disadvantageous.
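The phased schedule can be summarized in a few lines. The following NumPy sketch only illustrates the logic described above (the helper names are ours; in the full procedure the gradient step is taken by Adam, and weights zeroed after $t_2$ remain masked at zero for all remaining updates):
\begin{verbatim}
import numpy as np

def regularization_strength(t, T, lam):
    """Phased L1 strength: off before t1, on between t1 and t2, off afterwards."""
    t1, t2 = T // 4, (19 * T) // 20
    return 0.0 if (t < t1 or t > t2) else lam

def clamp_small_weights(weights, threshold=1e-3):
    """After t2: set near-zero weights to exactly zero (fixing the L0 norm)."""
    return [np.where(np.abs(W) < threshold, 0.0, W) for W in weights]

def objective(pred, y, weights, lam):
    """Lasso-like objective: squared error plus L1 penalty on all weight matrices.

    pred, y: arrays of shape (n, m); weights: list of weight matrices.
    """
    sq = np.mean(np.sum((pred - y) ** 2, axis=1))
    l1 = sum(np.abs(W).sum() for W in weights)
    return sq + lam * l1
\end{verbatim}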
\subsection{Model selection for extrapolation}\label{sec:modelsel}
\method{} networks have a number of hyper-parameters, \eg the number of layers, the number of units and the regularization constant. Unfortunately, standard techniques for model selection, such as evaluation on a hold-out set or cross-validation, will not be optimal for our purpose, since they rely on interpolation quality. In order to extrapolate, the network has to find the ``right'' formula. But how can we tell? We use Occam's razor as the guiding principle: the simplest formula is most likely the right one. Intuitively, if we have the choice between $\cos(x)$ and its truncated power series approximation $1-x^2/2 + x^4/24$, the first one is preferred. We use the number of active hidden units in the network as a proxy for the complexity of the formula, see Appendix A1 for details. One could also think of differentiating between the unit types. In any case, this argumentation is only correct if the model explains the data well, \ie it has a low validation error. So we have a dual objective to minimize, which we solve by ranking the instances \wrt validation error and sparsity and selecting the one with the smallest $L_2$ norm (in rank space), see \eqn{eqn:model:sel}.

Furthermore, the optimization process may only find a local optimum of the training objective, which depends on the initialization of the parameters. We use independent runs to quantify expected performance deviations.

\documentclass[a4paper]{article} % For LaTeX2e
\usepackage[margin=2.5cm,top=2cm]{geometry}
\usepackage[square,sort]{natbib}
\bibliographystyle{abbrvnat}
\renewcommand{\cite}[1]{\citep{#1}}
\usepackage[utf8]{inputenc} % allow utf-8 input
\usepackage[T1]{fontenc} % use 8-bit T1 fonts
\graphicspath{{../graphics/}}
\newcommand{\Fig}[1]{Figure~\ref{#1}}
\newcommand{\fig}[1]{Fig.~\ref{#1}}
\newcommand{\Tab}[1]{Table~\ref{#1}}
\newcommand{\tab}[1]{Tab.~\ref{#1}}
\newcommand{\Eqn}[1]{Equation \eqref{#1}}
\newcommand{\eqn}[1]{Eq.~\eqref{#1}} % Eq. (1.1)
\newcommand{\eqnp}[1]{(Eq.~\ref{#1})} % (Eq. 1.1)
\renewcommand{\sec}[1]{Section~\ref{#1}} % Section 1
\newcommand{\ie}{i.\,e.~}
\newcommand{\eg}{e.\,g.~}
\newcommand{\wrt}{w.\,r.\,t.~}
\newcommand{\Real}{\ensuremath{\mathbb R}} % Real numbers
\newcommand{\Unit}{\ensuremath{\mathbb I}} % Unit Matrix
\newcommand{\T}{\ensuremath{\top}} % Transpose
\newcommand{\sigm}{\ensuremath{\text{sigm}}} % sigmoid
\newcommand\Tstrut{\rule{0pt}{2.6ex}}
\DeclareMathOperator*{\argmin}{arg\,min}
\newcommand{\method}{EQL}
\newcommand{\typ}{\ensuremath{I}} % unit type
\newcommand{\D}{\ensuremath{\mathbf{D}}} % dataset
\newcommand{\train}{\ensuremath{\text{Tr}}} % training
\newcommand{\test}{\ensuremath{\text{T}}} % test
\newcommand{\val}{\ensuremath{\text{V}}} % validation
\newcommand{\extra}{\ensuremath{\text{X}}} % extrapolation
\newcommand{\restr}{\ensuremath{\text{XTr}}} % extrapolation training
\newcommand{\restrval}{\ensuremath{\text{XV}}} % extrapolation validation
\newcommand{\layer}[1]{{(#1)}} % layer l
\renewcommand{\l}{{\layer{l}}} % layer l
\newcommand{\lm}{{\layer{l-1}}} % layer l-1
\usepackage[disable]{todonotes}
\setlength{\marginparwidth}{3cm}
\newcommand{\Geo}[2][inline]{\todo[#1,color=yellow!60,size=\scriptsize]{#2}}
\newcommand{\Chl}[2][inline]{\todo[#1,color=green!70,size=\scriptsize]{#2}}
\pdfinfo{
/Title (Extrapolation and learning equations)
/Author (Georg Martius and Christoph H. Lampert)}
\setcounter{secnumdepth}{0}
\begin{document}
\title{Extrapolation and learning equations}
\author{%
Georg Martius \& Christoph H. Lampert\\
IST Austria\\
Am Campus 1, 3400 Klosterneuburg, Austria\\
\texttt{\{gmartius,chl\}@ist.ac.at}
}
\maketitle
\begin{abstract}
In classical machine learning, regression is treated as a black box process of identifying a suitable function from a hypothesis set without attempting to gain insight into the mechanism connecting inputs and outputs. In the natural sciences, however, finding an interpretable function for a phenomenon is the prime goal, as it allows one to understand and generalize results. This paper proposes a novel type of function learning network, called equation learner (EQL), that can learn analytical expressions and is able to extrapolate to unseen domains. It is implemented as an end-to-end differentiable feed-forward network and allows for efficient gradient-based training. Due to sparsity regularization, concise interpretable expressions can be obtained. Often the true underlying source expression is identified.
\end{abstract}
\input{intro.tex}
\input{methods.tex}
\input{relatedwork.tex}
\input{results.tex}
\vspace*{-.2em}
\section{Conclusions}\vspace*{-.2em}
We presented a new network architecture called \method{} that can learn analytic expressions that typically occur in equations governing physical, in particular mechanical, systems. The network is fully differentiable, which allows end-to-end training using backpropagation. By sequencing $L_1$ regularization and fixing the $L_0$ norm, we achieve sparse representations with unbiased estimation of the factors within the learned equations. We also introduce a model selection procedure specifically designed to select for good extrapolation quality by a multiobjective criterion based on validation error and sparsity. The proposed method is able to learn functional relations and extrapolate them to unseen parts of the data space, as we demonstrate by experiments on synthetic as well as real data.
The approach learns concise functional forms that may provide insights into the relationships within the data, as we show on physical measurements of x-ray transition energies.

The optimization problem is nontrivial and has many local minima. We have shown cases where the algorithm does not reliably find the right equation but instead finds only an approximation, in which case extrapolation may be poor. If the origin of the data is not in the hypothesis class, \ie the underlying expression cannot be represented by the network, good extrapolation performance cannot be achieved. Thus it is important to increase the model class by incorporating more base functions, which we will address in future work alongside the application to larger examples. We expect good scaling capabilities to larger systems due to the gradient-based optimization. Apart from extrapolation, we also expect improved interpolation results in high-dimensional spaces, where data is less dense.

\subsubsection*{Acknowledgments}
GM received funding from the People Programme (Marie Curie Actions) of the European Union's Seventh Framework Programme (FP7/2007-2013) under REA grant agreement no.~[291734].

\begin{thebibliography}{24}
\providecommand{\natexlab}[1]{#1}
\providecommand{\url}[1]{\texttt{#1}}
\expandafter\ifx\csname urlstyle\endcsname\relax
\providecommand{\doi}[1]{doi: #1}\else
\providecommand{\doi}{doi: \begingroup \urlstyle{rm}\Url}\fi

\bibitem[Basak et~al.(2007)Basak, Pal, and Patranabis]{basak2007:SVR}
D.~Basak, S.~Pal, and D.~C. Patranabis.
\newblock Support vector regression.
\newblock \emph{Neural Information Processing-Letters and Reviews}, 11\penalty0 (10):\penalty0 203--224, 2007.

\bibitem[Ben-David et~al.(2010)Ben-David, Blitzer, Crammer, Kulesza, Pereira, and Vaughan]{ben2010theory}
S.~Ben-David, J.~Blitzer, K.~Crammer, A.~Kulesza, F.~Pereira, and J.~W. Vaughan.
\newblock A theory of learning from different domains.
\newblock \emph{Machine Learning}, 79\penalty0 (1-2):\penalty0 151--175, 2010.

\bibitem[Bengio(2009)]{bengio2009learning}
Y.~Bengio.
\newblock Learning deep architectures for {AI}.
\newblock \emph{Foundations and Trends in Machine Learning}, 2\penalty0 (1):\penalty0 1--127, 2009.

\bibitem[Bengio et~al.(2013)Bengio, Courville, and Vincent]{bengio2013representation}
Y.~Bengio, A.~Courville, and P.~Vincent.
\newblock Representation learning: A review and new perspectives.
\newblock \emph{IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)}, 35\penalty0 (8):\penalty0 1798--1828, 2013.

\bibitem[Bishop(1995)]{bishop1995neural}
C.~M. Bishop.
\newblock \emph{Neural networks for pattern recognition}.
\newblock Oxford University Press, 1995.

\bibitem[Broomhead and Lowe(1988)]{broomhead1988radial}
D.~S. Broomhead and D.~Lowe.
\newblock Radial basis functions, multi-variable functional interpolation and adaptive networks.
\newblock Technical report, DTIC Document, 1988.

\bibitem[Deslattes et~al.(2003)Deslattes, Kessler~Jr, Indelicato, De~Billy, Lindroth, and Anton]{Deslattes2003:XrayTransEnergies}
R.~D. Deslattes, E.~G. Kessler~Jr, P.~Indelicato, L.~De~Billy, E.~Lindroth, and J.~Anton.
\newblock X-ray transition energies: new approach to a comprehensive evaluation.
\newblock \emph{Reviews of Modern Physics}, 75\penalty0 (1):\penalty0 35, 2003.

\bibitem[Durbin and Rumelhart(1989)]{DurbinRumelhart1989:ProductUnits}
R.~Durbin and D.~E. Rumelhart.
\newblock Product units: A computationally powerful and biologically plausible extension to backpropagation networks.
\newblock \emph{Neural Computation}, 1\penalty0 (1):\penalty0 133--142, Mar. 1989.
\newblock ISSN 0899-7667.
\newblock \doi{10.1162/neco.1989.1.1.133}.
\newblock URL \url{http://dx.doi.org/10.1162/neco.1989.1.1.133}.

\bibitem[Gy{\"o}rfi et~al.(2013)Gy{\"o}rfi, H{\"a}rdle, Sarda, and Vieu]{gyorfi2013nonparametric}
L.~Gy{\"o}rfi, W.~H{\"a}rdle, P.~Sarda, and P.~Vieu.
\newblock \emph{Nonparametric curve estimation from time series}, volume~60.
\newblock Springer, 2013.

\bibitem[Kingma and Ba(2015)]{KingmaBa2015:Adam}
D.~Kingma and J.~Ba.
\newblock Adam: A method for stochastic optimization.
\newblock In \emph{Proceedings of ICLR}, 2015.

\bibitem[M{\"u}ller et~al.(1997)M{\"u}ller, Smola, R{\"a}tsch, Sch{\"o}lkopf, Kohlmorgen, and Vapnik]{muller1997predicting}
K.-R. M{\"u}ller, A.~J. Smola, G.~R{\"a}tsch, B.~Sch{\"o}lkopf, J.~Kohlmorgen, and V.~Vapnik.
\newblock Predicting time series with support vector machines.
\newblock In \emph{Artificial Neural Networks (ICANN)}, pages 999--1004. Springer, 1997.

\bibitem[Pearl(2000)]{Pearl2000}
J.~Pearl.
\newblock \emph{Causality}.
\newblock Cambridge {U}niversity {P}ress, 2000.

\bibitem[Peters et~al.(2014)Peters, Mooij, Janzing, and Sch{\"o}lkopf]{PetersMJS2014}
J.~Peters, J.~Mooij, D.~Janzing, and B.~Sch{\"o}lkopf.
\newblock Causal discovery with continuous additive noise models.
\newblock \emph{Journal of Machine Learning Research (JMLR)}, 15:\penalty0 2009--2053, 2014.

\bibitem[Poon and Domingos(2012)]{PoonDomingos2011:sum-product-networks}
H.~Poon and P.~M. Domingos.
\newblock Sum-product networks: {A} new deep architecture, 2012.

\bibitem[Quionero-Candela et~al.(2009)Quionero-Candela, Sugiyama, Schwaighofer, and Lawrence]{quionero2009dataset}
J.~Quionero-Candela, M.~Sugiyama, A.~Schwaighofer, and N.~D. Lawrence.
\newblock \emph{Dataset shift in machine learning}.
\newblock The MIT Press, 2009.

\bibitem[Schmidt and Lipson(2009)]{SchmidtLipson2009:learnnaturallaws}
M.~Schmidt and H.~Lipson.
\newblock Distilling free-form natural laws from experimental data.
\newblock \emph{Science}, 324\penalty0 (5923):\penalty0 81--85, 2009.
\newblock ISSN 0036-8075.
\newblock \doi{10.1126/science.1165893}.
\newblock URL \url{http://science.sciencemag.org/content/324/5923/81}.

\bibitem[Shin and Ghosh(1991)]{ShinGhosh1991:pi-sigma}
Y.~Shin and J.~Ghosh.
\newblock The pi-sigma network: An efficient higher-order neural network for pattern classification and function approximation.
\newblock In \emph{Proceedings of the International Joint Conference on Neural Networks}, pages 13--18, 1991.

\bibitem[Smola and Sch{\"o}lkopf(2004)]{smola2004tutorial}
A.~J. Smola and B.~Sch{\"o}lkopf.
\newblock A tutorial on support vector regression.
\newblock \emph{Statistics and Computing}, 14\penalty0 (3):\penalty0 199--222, 2004.

\bibitem[Specht(1991)]{specht1991general}
D.~F. Specht.
\newblock A general regression neural network.
\newblock \emph{IEEE Transactions on Neural Networks (TNN)}, 2\penalty0 (6):\penalty0 568--576, 1991.

\bibitem[{Theano Development Team}(2016)]{2016arXiv160502688short}
{Theano Development Team}.
\newblock {Theano: A {Python} framework for fast computation of mathematical expressions}.
\newblock \emph{arXiv e-prints}, abs/1605.02688, May 2016.
\newblock URL \url{http://arxiv.org/abs/1605.02688}.

\bibitem[Tibshirani(1996)]{tibshirani1996regression}
R.~Tibshirani.
\newblock Regression shrinkage and selection via the lasso.
\newblock \emph{Journal of the Royal Statistical Society. Series B (Methodological)}, pages 267--288, 1996.
\bibitem[Wiener(1949)]{wiener1949extrapolation}
N.~Wiener.
\newblock \emph{Extrapolation, interpolation, and smoothing of stationary time series}, volume~2.
\newblock The MIT Press, 1949.

\bibitem[Williams and Rasmussen(2006)]{williams2006gaussian}
C.~K.~I. Williams and C.~E. Rasmussen.
\newblock \emph{Gaussian processes for machine learning}.
\newblock The MIT Press, 2006.

\bibitem[Zaremba et~al.(2014)Zaremba, Kurach, and Fergus]{ZarembaFergus2014:LearnMathIdentities}
W.~Zaremba, K.~Kurach, and R.~Fergus.
\newblock Learning to discover efficient mathematical identities.
\newblock In Z.~Ghahramani, M.~Welling, C.~Cortes, N.~Lawrence, and K.~Weinberger, editors, \emph{Advances in Neural Information Processing Systems 27}, pages 1278--1286. Curran Associates, Inc., 2014.

\end{thebibliography}

\input{appendix.tex}
\end{document}

\appendix
\section{Appendix}
\section{A1: Model selection details}\label{sec:modelsel:app}
\subsection{Quantifying sparsity}
Ideally, we would like a measure of the complexity of the formula; however, since it is not clear what the right choice of measure is, we use sparsity instead, counting the number of active (used) hidden units, denoted by $s$. For a given network $\phi$ we get
\begin{align}
s(\phi) = \sum_{l=1}^L\sum_{i=1}^k\Theta( |W^\l_{i,\cdot}| \cdot |W^{\layer{l+1}}_{\cdot,i}| - 0.01)\,,\label{eqn:s}
\end{align}
where $\Theta$ is the Heaviside step function and $0.01$ is an arbitrary threshold. For the multiplication units, the norms of the incoming weights for both inputs are added (omitted to avoid clutter in the formula).

\subsection{Selection criteria}
As stated in the main text, we strive to choose the model that is both simple and has good performance in terms of the validation set. Since both quantities have different scales, we propose to combine them based on their ranking. Let $r^v(\phi)$ and $r^s(\phi)$ be the ranks of the network $\phi$ \wrt the validation error and the sparsity $s(\phi)$, respectively; then the network with minimal squared rank norm is selected:
\begin{align}
\argmin_\phi\left[ r^v(\phi)^2 + r^s(\phi)^2\right] \label{eqn:model:sel}
\end{align}
A small code sketch at the end of this appendix illustrates the criterion. In \fig{fig:model:sel} the extrapolation performance of all considered networks for the kin2D-3 dataset is visualized in dependence on validation error and sparsity. It becomes evident that the best performing networks are both sparse and have a low validation error.

\section{A2: Dependence on noise and number of data points}\label{sec:dep:noise-pts}
In order to understand how the method depends on the amount of noise and the number of data points, we scan over the two parameters and present the empirical results in \fig{fig:dep:noise-pts}. In general, the method is robust to noise and, as expected, more noise can be compensated by more data.
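To make the selection criterion of \eqn{eqn:model:sel} concrete, the following NumPy sketch ranks a set of trained instances by validation error and sparsity and picks the one with the smallest rank norm. It is illustrative only, and the numbers in the example are hypothetical:
\begin{verbatim}
import numpy as np

def rank(values):
    """Rank of each entry (0 = smallest); ties broken by order of appearance."""
    order = np.argsort(values, kind="stable")
    r = np.empty(len(values), dtype=int)
    r[order] = np.arange(len(values))
    return r

def select_instance(val_errors, sparsities):
    """Pick the trained instance minimizing r_v^2 + r_s^2 in rank space."""
    rv = rank(np.asarray(val_errors, dtype=float))
    rs = rank(np.asarray(sparsities, dtype=float))
    return int(np.argmin(rv**2 + rs**2))

# example: four trained networks with (validation error, number of active units)
best = select_instance([0.012, 0.010, 0.011, 0.014], [9, 14, 7, 6])
# -> index 2: a good compromise between low validation error and sparsity
\end{verbatim}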
Extrapolation and learning equations
1610.02995
Table 3: Interpolation and extrapolation performance for formula learning. See Tab. 1 for details.
[ "dataset", "method", "interpolation", "extrapol. (near)", "extrapol. (far)" ]
[ [ "F-1", "EQL", "0.010±0.000", "0.015±0.005", "0.026±0.015" ], [ "[EMPTY]", "MLP", "0.011±0.000", "0.32±0.12", "0.920±0.420" ], [ "[EMPTY]", "SVR", "0.011", "0.28", "1.2" ], [ "F-2", "EQL", "0.01±0.00", "0.013±0.004", "0.026±0.019" ], [ "[EMPTY]", "MLP", "0.01±0.00", "0.2±0.014", "0.49±0.043" ], [ "[EMPTY]", "SVR", "0.011", "0.3", "0.94" ], [ "F-3", "EQL", "0.01±0.000", "0.047±0.012", "0.35±0.11" ], [ "[EMPTY]", "EQL (no cos)", "0.01±0.000", "0.01±0.000", "0.011±0.001" ], [ "[EMPTY]", "MLP", "0.01±0.000", "0.084±0.007", "0.4±0.021" ], [ "[EMPTY]", "SVR", "0.01", "0.071", "0.39" ] ]
Again, all methods are able to interpolate, but only EQL achieves good extrapolation results, except for equation F-3. There it settles into a local minimum in 9 out of 10 cases and finds only an approximating equation that deviates outside the training domain. Interestingly, if we restrict the base functions to not contain cosine, the algorithm finds the right formula. Note that the sparsity of the correct formula is lower than that of the approximation, so it should be selected if found. The figure shows one of the model-selected instances for each case. For F-1 the correct formula was identified, so correct predictions can be made even far outside the training region (much further than illustrated). For F-2 the network provided us with a surprise, because it yields good extrapolation performance with only one hidden layer! How can it implement x_2 cos(a x_1 + b)? Apparently it uses 1.21 cos(a x_1 + π + b + 0.41 x_2) + sin(a x_1 + b + 0.41 x_2), which is a good approximation for x_2 ∈ [−2, 2]. The sparsity of this solution is 5 whereas the true solution needs at least 6, which explains its selection. For F-3 the suboptimal local minimum approximates (1 + x_2) sin(x_1) in an unexpected way, using (x_1 + x_1 x_2) cos(β x_1), which deviates quickly outside the training domain; the true solution would be sparser but was not found. Only if we remove cosine from the base functions do we always obtain the correct formula.
We compare our algorithm to a standard multilayer perceptron (MLP) with $\tanh$ activation functions and possible hyperparameters: $\lambda$ as for \method, number of layers {\small $L\in\{2,3\}$}, and number of neurons {\small $k\in\{5,10,20\}$}. A second baseline is given by epsilon support vector regression (SVR)~\cite{basak2007:SVR} with two hyperparameters {\small $C\in10^{\{-3,-2,-1,0,1,2,3,3.5\}}$} and {\small $\epsilon \in 10^{\{-3,-2,-1,0\}}$} using radial basis function kernel with width {\small $\gamma\in \{0.05,0.1,0.2,0.5,1.0\}$}. Numeric results are reported in \tab{tab:pend:results}. As expected all models are able to interpolate well with a test error on the order of the noise level ($\sigma=0.01$). For extrapolation however, the performance differ between the approaches. For MLP the prediction quality decreases quickly when leaving the training domain. SVR remains a bit better in the near extrapolation but also fails catastrophically on the far extrapolation data. \method, on the other hand, extrapolates well, both near and far away from the training domain. The reasons can be seen in Figure~\ref{fig:pend}: while the MLP and SVR simply learns a function that interpolates the training values, \method{} finds the correct functional expression and therefore predicts the correct values for any input data. \paragraph{Double pendulum kinematics.} The second system we consider real double pendulum where the forward kinematics should be learned. For that we use recorded trajectories of a real double pendulum \cite{SchmidtLipson2009:learnnaturallaws}. The task here is to learn the position of the tips of the double pendulum segments from the given joint angles ($x_1,x_2$). These positions where not measured such that we supply them by the following formula: $y_1=\cos(x_1), y_2=\cos(x_1)+\cos(x_1+x_2), y_3=\sin(x_1), y_4=\sin(x_1)+\sin(x_1+x_2)$ where $(y_1,y_3)$ and $(y_2,y_4)$ correspond to x-y-coordinates of the first and second end-point respectively. The dataset contains two short trajectories. The first covers only part of the domain (input as well as output) and consists of 819 samples where 10\% was used as validation set (randomly sampled), see \fig{fig:dpk}(a). The second trajectory corresponds to a behavior with several spins of both pendulum segments such that a much larger domain is covered. Nevertheless the angle values are confined to $[-\pi,\pi]$. We use this trajectory as extrapolation test set. The trajectory and the outputs of our method are shown in \fig{fig:dpk}(b). The prediction for unseen domains is perfect, which is also illustrated in a systematic sweep, see \fig{fig:dpk}(c). The performance of MLP is off already near the training domain. SVR is a bit better, but still does not give usable predictions for the test data, see also the root means square error in \fig{fig:dpk}(d). Model selection is performed to determine $\lambda$ as above, $u=v\in\{3,5\}$, (MLP: $k\in\{5,10,20\}$) and layer number $L\in\{2,3\}$. \paragraph{Robotic arms.} A more complicated task is to learn the forward kinematics of multi-segment robotic arms. We consider planar arms with 3, 4, and 5 joints, where each segment is 0.5 units long. For training the arm is controlled by sinusoidal joint target angles with amplitude in $[-\nicefrac{\pi}{2},\nicefrac{\pi}{2}]$, each joint with a different frequency. The number of data points are: 3000, 6000, and 18000 for the 3, 4, and 5 segment arms respectively, with added noise as above. 
For testing extrapolation performance the amplitude $[-\pi,\pi]$ was used. Note that the extrapolation space is much larger than the training space. The task is to predict the coordinates of the end-effector of the arms (\emph{kin-3-end}, \emph{kin-4-end}) and the coordinates of all segment positions \emph{kin-5-all}. The numerical results, see \tab{tab:kin}, shows that our method is able to extrapolate in these cases. Model selection as above with $u=v\in\{10,20\}$, (MLP: $k\in\{10,50\}$) and layer number $L\in\{2,3,4\}$. To illustrate the dependence on the amount of noise and the number of available training points we provide a quantification in Appendix A2. In short, increasing noise can be compensated by increasing amount of data to keep the performance. % linear relationship? \paragraph{Learning complex formula.} In order to find out whether \method{} can also learn more complicated formulas, we consider three examples with four-dimensional input and one-dimensional output: \begin{align} y &= \nicefrac{1}{3} \left(\sin(\pi x_1) + \sin\left(2 \pi x_2 + \nicefrac{\pi}{8}\right)+x_2 - x_3 x_4 \right)&\text{F-1}\label{eqn:syn1}\\ y &= \nicefrac{1}{3} \left(\sin(\pi x_1) + x_2 \cos(2\pi x_1 + \nicefrac{\pi}{4}) + x_3-x_4^2\right) &\text{F-2}\label{eqn:syn2}\\ y &= \nicefrac{1}{3} \left( (1+x_2) \sin(\pi x_1) + x_2 x_3 x_4\right) &\text{F-3}\label{eqn:syn3} \end{align} The first equation requires only one hidden layer to be represented. The second equation and third equation should requires two hidden layers. In particular, F-2 contains a product of $x_2$ and $\cos$ and F-3 contains a product of three terms, and we use it to test if our restriction to only pairwise product units causes problems for more complex target functions. We follow the same procedure as in the pendulum case for building training and test sets, though with $h=1$ as input data range. We use 10000 points for training set and validation set (90\%-10\% split) and 5000 points for each of the test sets. Model selection for \method{} is performed as above using the number of layers {\small $L\in{2,3,4}$}. The number of units is set to $\frac{1}{4}u=v=10$. For the MLP, we select $L$ and $\lambda$ from the same set as above as well as {\small $k\in\{10,30\}$}. \Tab{tab:syn:results} shows the numerical results. Again, all methods are able to interpolate, but only \method{} achieves good extrapolation results, except for equation F-3. There it settles in 9 out of 10 cases into a local minimum and finds only an approximating equation that deviates outside the training domain. Interestingly, if we restrict the base functions to not contain cosine, the algorithm finds the right formula. Note, the sparsity of the correct formula is lower than those of the approximation, so it should be selected if found. Figure~\fig{fig:syn} illustrates the performance and the learned networks visually. It shows one of the model-selected instances for each case. For F-1 the correct formula was identified, so correct predictions can be made even far outside the training region (much further than illustrated). For F-2 the network provided us with a surprise, because it yields good extrapolation performance with only one hidden layer! How can it implement $x_2\cos(a x_1+b)$? Apparently it uses $1.21 \cos(a x_1 + \pi + b + 0.41 x_2) + \sin(a x_1 + b + 0.41 x_2)$ which is a good approximation for $x_2 \in [-2,2]$. The sparsity of this solution is $5$ whereas the true solution needs at least $6$, which explains its selection. 
For F-3 the suboptimal local minima uses some strange way of approximating $(1+x_2)\sin(x_1)$ using $(x_1 + x_1 x_2)\cos(\beta x_1)$, which deviates fast, however the true solution would be sparser but was not found. Only if we remove cosine from the base functions we get always the correct formula, see \fig{fig:syn}(c). \paragraph{X-Ray transition energies.} As a further example we consider data measured in atomic physics. When shooting electron beams onto atoms one can excite them and they consequently emit x-ray radiation with characteristic peak energies. For each element/isotope these energies are different as they correspond to the potential difference between the electron shells, such that one can identify elements in a probe this way. The data is taken from~\cite{Deslattes2003:XrayTransEnergies}, where we consider one specific transition, called the $K\,\alpha_2$ line, because it was measured for all elements. The true relationship between atomic number $Z$ and transition energies is complicated, as it involves many body interactions and no closed-form solution exists. Nevertheless we can find out which relationships our system proposes. It is known that the main relationship is $K\,\alpha_2 \propto Z^2$ according to Moseley's law. Further correction terms for elements with larger $Z$ are potentially of higher order. We have data for elements with $10\le Z \le 100$, which is split into training/validation sets in the range $[10,91]$ (70/10 data points) and extrapolation test set in the interval $[92,100]$ (14 data points because of isotops). Since we have so little data we evaluate the performance for 10 independent training/validation splits. The data is scaled to lie in $[0,1]$, \ie $x= Z/100$ and $y=K\alpha_2/100000$. Model selection is here based on validation error only. The selection for sparsity and validation error only yields the $Z^2$ relationship. Mini-batch size is 2 here and $T=50000$ was used. \Fig{fig:xray} presents the data, the predictions, the learned formulas and the numerical results. \method{} and SVR achieve similar performance and MLP is significantly worse. However, \method{} also yields interpretable formulas, see \fig{fig:xray}(e) that can be used to gain insights into the potential relationship. \subsection{Poor extrapolation out of model class --- cart-pendulum system} Let us now go beyond our assumptions and consider cases where the true target function is not an element of the hypothesis set. Consider a pendulum attached to a cart that can move horizontally along a rail but that is attached to a spring damper system, see \fig{fig:cp}(a). The system is parametrized by 4 unknowns: the position of the cart, the velocity of the cart, the angle of the pendulum and the angular velocity of the pendulum. We combine these into a four-dimensional vector $x=(x_1,\dots,x_4)$. We set up a regression problem with four outputs from the corresponding system of ordinary differential equations where $y_1 = \dot x_1 = x_3$, $y_2 = \dot x_2 = x_4$ and \begin{align} y_3&= \frac{-x_1-0.01 x_3+x_4^2 \sin\left(x_2\right)+0.1 x_4 \cos \left(x_2\right)+9.81 \sin \left(x_2\right) \cos \left(x_2\right)}{\sin ^2\left(x_2\right)+1}\label{eqn:cp}, \\ y_4&= \frac{-0.2 x_4 - 19.62 \sin \left(x_2\right) + x_1 \cos \left(x_2\right) + 0.01 x_3 \cos \left(x_2\right) - x_4^2 \sin \left(x_2\right)\cos \left(x_2\right)} {\sin^2\left(x_2\right)+1}.\nonumber \end{align} The formulas contain divisions which are not included in our architecture due to their singularities. 
To incorporate them in a principled manner is left for future work. Thus, the cart-pendulum dynamics is outside the hypothesis class. In this case we {\bf cannot} expect great extrapolation performance and this is confirmed by the experiments. In \fig{fig:cp}(b,c) the extrapolation performance is illustrated by slicing through the input space. The near extrapolation performance is still acceptable for both \method{} and MLP, but as soon as the training region is left further even the best instances differ considerably from the true values, see also the numeric results in \tab{tab:cp:results}. The SVR is performing poorly also for near extrapolation range. Inspecting the learned expressions we find that the sigmoid functions are rarely used. \section{Regression and extrapolation}\label{sec:setting} We consider a multivariate regression problem with a training set $\{(x_1,y_1),\dots,(x_N,y_N)\}$ with $x \in \Real^n$, $y\in \Real^m$. % sampled from a data distribution $p(x,y)$. Because our main interest lies on extrapolation in the context of learning the dynamics of physical systems we assume the data originates from an unknown analytical function (or system of functions), $\phi:\Real^n\to\Real^m$ with additive zero-mean noise, $\xi$, \ie $y=\phi(x)+\xi$ and $\mathbb{E}\xi=0$. The function $\phi$ may, for instance, reflect a system of ordinary differential equations that govern the movements of a robot arm or the like. The general task is to learn a function $\psi:\Real^n\to\Real^m$ that approximates the true functional relation as well as possible in the squared loss sense, \ie achieves minimal expected error $\mathbb{E}\|\psi(x) - \phi(x)\|^2$. In practice, we only have particular examples of the function values available and measure the quality of predicting in terms of the empirical error on training or test data $D$, %=[(x_i,y_i)]$, \begin{align} E(D)&=\frac{1}{N}\sum^{N}_{i=1}\|\psi(x_i) - y_i\|^2\,. \label{eqn:error} \end{align} If training and test data are sampled from the same distribution then we speak about an \emph{interpolation} problem. In the \emph{extrapolation} setting the training data is assumed to cover only a limited range of the data domain. In the example of the robot arm, for instance, the training may be restricted to a certain joint angle range or maximal velocity. For testing we want to make predictions about the unseen domains, \eg for higher velocities. As usual, we split the data that is available at training time into a part for model training %, $\D^\train$, and a part for validation or model selection. %, $\D^\val$. \section{Learning a network for function extrapolation}\label{sec:method}%Learning physical equations The main model we propose is a multi-layered feed-forward network with computational units specifically designed for the extrapolation regression tasks. For an $L$-layer network, there are $L-1$ hidden layers, each consisting of a linear mapping followed by non-linear transformations. For simplicity of notation, we explain the network as if each hidden layer had the same structure ($k'$ inputs, $k$ outputs). In practice, each layer can be designed independently of the others, of course, as long as input/output dimensions match. The linear mapping at level $l$ maps the $k'$-dimensional input $y^{\lm}$ to the $d$-dimensional intermediate representation $z$ given by \begin{align} z^\l &= W^\l y^\lm + b^\l, \end{align} where $y^\lm$ is the output of the previous layer, with the convention $y^{(0)}=x$. 
The weight matrix $W^\l\in \Real^{d \times k'}$ and the bias vector $b^\l\in\Real^{d}$ are free parameters that are learned during training. The non-linear transformation contains $u$ \emph{unary units}, $f_i:\Real\to\Real$, for $i=1,\dots,u$, and $v$ \emph{binary units}, $g_j:\Real\times\Real\to\Real$ for $j=1,\dots,v$. Their outputs are concatenated to form the layer output \begin{align} y^\l &:= \Big(f_1(z^\l_1),f_2(z^\l_2),\dots,f_{u}(z^\l_{u}),\nonumber\\ & \qquad g_{1}(z^\l_{u+1},z^\l_{u+2}),\dots,g_{v}(z^\l_{u+2v-1},z^\l_{u+2v}) \Big)\,. \end{align} In total, the nonlinear stage has $k = u + v$ outputs and $d = u + 2 v$ inputs. The unary units, $f_1,\dots,f_u$ receive the respective component, $z_1,\dots,z_u$ as inputs, and each unit may be one of the following base functions as specified in a fixed type parameter $\typ_i\in\{0,1,2,3\}$ \begin{align} f_i(z_i) &:= \begin{cases} z_i & \text{ if } \typ_i=0,\\ \sin(z_i) & \text{ if } \typ_i=1,\\ \cos(z_i) & \text{ if } \typ_i=2,\\ \sigm(z_i) & \text{ if } \typ_i=3, \end{cases}&\text{ for } i=1,\dots,u, \end{align} where $\sigm(z)=\frac{1}{1+e^{-z}}$ is the standard sigmoid function. The binary units, $g_1,\dots,g_v$ receive the remaining component, $z_{u+1},\dots,z_{u+2v}$, as input in pairs of two. They are \emph{multiplication units} that compute the product of their two input values: \begin{align} g_j(z_{u+2j-1}, z_{u+2j}) &:= z_{u+2j-1} \cdot z_{u+2j}&\text{ for }j=1,\dots,v. \end{align} Finally, the $L$-th and last layer computes the regression values by a linear read-out \begin{align} y^{\layer{L}} &:= W^{\layer{L}} y^{\layer{L-1}} + b^{\layer{L}}. \end{align} The architecture is depicted in \fig{fig:network}. We call the new architecture Equation Learner (\method{}) and denote the function it defines by $\psi$. \subsection{Discussion of the architecture} The proposed network architecture differs in two main aspects from typical feed-forward networks: the existence of multiplication units and the possibility of \emph{sine} and \emph{cosine} as nonlinearities for the unary units. Both design choices are motivated by our objective of learning a system of equations that govern a physical system and can extrapolate to new parts of the input space. \emph{Sigmoid} nonlinearities are the canonical choice of \emph{activation function} for \emph{artificial neural networks} (ANN) and proved to be successful. In fact, we include sigmoids in our architecture, making it a super class of ANNs. However, they were typically disabled by the training procedure corresponding to their absence in the considered physical equations. Other, predominantly local nonlinearities, in particular \emph{radial basis functions}~\cite{broomhead1988radial} we do not include, since one cannot expect them to extrapolate at all. Further nonlinearities, such as \emph{(square) roots} and \emph{logarithms}, could in principle be useful for learning physical equations, but they pose problems because their domains of definition is restricted to positive inputs. We leave the task of incorporating them in a principled way to future work. The ability to multiply two values is a second crucial component of our network architecture. Again, it is inspired by the typical form of physical equations, where multiplication of components is arguably second common basic operation after addition (which the linear layers can perform). 
Multiplication was introduced into neural networks long ago in the form of product units~\cite{DurbinRumelhart1989:ProductUnits} and Pi-Sigma units~\cite{ShinGhosh1991:pi-sigma}. Product units have a large fan-in and compute products over all their inputs, potentiated by the respective weights. The result is typically the behavior of a high-order polynomial; such polynomials are powerful function approximators but rarely occur in physical equations. Polynomials are also known to require careful fine-tuning in order not to overfit, which makes them a risky choice for the purpose of extrapolation. The Pi-Sigma units are multiplication units with a fixed number of factors, and our multiplication units are the special case with 2 factors. We find that multiplying just two values at a time is well adjusted to the task we aim at, as it allows us to control the maximal degree of the learned polynomial by the depth of the network. Finally, each layer of the network contains unary units that act as \emph{identity} maps, which in particular gives the network the option to learn functions with a smaller number of nonlinearities than the total network depth. \subsection{Network training} The \method{} is fully differentiable in its free parameters $\theta=\{W^{(1)},\dots,W^{(L)},b^{(1)},\dots,b^{(L)}\}$, which allows us to train it in an end-to-end fashion using back-propagation. We adopt a Lasso-like objective~\cite{tibshirani1996regression}, \begin{align} L(D)&=\frac{1}{N}\sum^{|D|}_{i=1}\|\psi(x_i) - y_i\|^2 + \lambda \sum_{l=1}^L\big|W^\l\big|_1\,,\label{eqn:loss} \end{align} that is, a linear combination of $L_2$ loss and $L_1$ regularization, and apply a stochastic gradient descent algorithm with mini-batches and Adam~\cite{KingmaBa2015:Adam} for calculating the updates: \begin{align} \theta_{t+1} &= \theta_{t} + \text{Adam}\left(\frac{\partial L(D_{(t)})}{\partial \theta}, \alpha\right), \end{align} where $D_{(t)}$ denotes the current mini-batch and $\alpha$ is the stepsize parameter. The choice of Adam is not critical and standard stochastic gradient descent also works. In all numerical experiments we use $\alpha=0.001$ and a mini-batch size of 20. The role of the $L_1$ regularization is to encourage networks with sparse connections, matching the intuition that a typical formula describing a physical system contains only a small number of terms, each operating only on a few variables. However, in a non-convex setting where local minima are likely to occur, this type of regularization can have an undesirable side-effect: during the course of the optimization the weights hardly ever change their sign. The reason is that the regularization leads to a constant rate of weight decay, whereas the counteracting derivative with respect to the square loss is proportional to the backpropagated error signal and the input to the unit. The latter contributions are often smaller along paths with small weights, such that many weights go to zero and stay there. Additionally, any non-zero regularization term causes the learned weights to reflect a trade-off between minimizing the loss and the regularizer. Although this can lead to improved generalization, it also results in a systematic underestimation of the function values. Therefore, we follow a hybrid regularization strategy: at the beginning of the training procedure ($t<t_1$) we use no regularization ($\lambda=0$), so that parameters can vary freely and reach reasonable starting points.
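As a minimal illustration of the objective in \eqn{eqn:loss} and of the hybrid regularization strategy described in this subsection, consider the following sketch. It is not the authors' Theano code: the \texttt{predict} argument stands for any forward-pass function (such as the sketch given earlier), the gradient computation and the Adam update are left to whichever framework is used, and the breakpoints follow the values $t_1=\frac{1}{4}T$ and $t_2=\frac{19}{20}T$ stated below.
\begin{verbatim}
import numpy as np

def l1_penalty(weights):
    # |W^(1)|_1 + ... + |W^(L)|_1
    return sum(np.abs(W).sum() for W in weights)

def eql_objective(batch_x, batch_y, predict, weights, lam):
    """Mini-batch objective: squared loss plus lam times the L1 penalty."""
    sq_err = sum(np.sum((predict(x) - y) ** 2) for x, y in zip(batch_x, batch_y))
    return sq_err / len(batch_x) + lam * l1_penalty(weights)

def lambda_schedule(t, T, lam):
    """Hybrid schedule: no penalty for t < t1, L1 penalty for t1 <= t <= t2,
    and no penalty afterwards (small weights are then clamped to zero)."""
    t1, t2 = T // 4, int(19 * T / 20)
    return lam if t1 <= t <= t2 else 0.0
\end{verbatim}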
Afterwards, we switch on the regularization by setting $\lambda$ to a nonzero value, which has the effect that a sparse network structure emerges. Finally, for the last steps of the training ($t>t_2$) we disable $L_1$ regularization ($\lambda=0$) but enforce the same $L_0$ norm of the weights. This is achieved by keeping all weights $w\in W^{1\dots L}$ that are close to 0 at 0, \ie if $|w|<0.001$ then $w=0$ during the remaining epochs. This ensures that the learned model finds not only a function of the right parametric form, but also fits the observed values as closely as possible. We observed that the exact choice of breakpoints $t_1$ and $t_2$ is not critical. In practice, we use $t_1 = \frac{1}{4} T$ and $t_2=\frac{19}{20} T$, where $T$ is total number of update steps. $T$ was selected large enough to ensure convergence. Note, that convergence to a sparse structure is important here, so early stopping will be disadvantageous. \subsection{Model selection for extrapolation}\label{sec:modelsel} \method{} networks have a number of hyper-parameters, \eg the number of layers, the number of units and the regularization constant. Unfortunately, standard techniques for model selection, such as evaluation on a hold-out set or cross-validation, will not be optimal for our purpose, since they rely on interpolation quality. In order to extrapolate the network has to find the ``right'' formula. But how can we tell? Using Occams razor principle: the simplest formula is most likely the right one. Intuitively, if we have the choice between $cos(x)$ and its truncated power series approximation $1-x^2/2 + x^4/24$, the first one is preferred. We use the number of active hidden units in the network as a proxy for the complexity of the formula, see Appendix A1 %\sec{sec:modelsel:app} for details. One could also think of differentiating between the unit types. In any case, this argumentation is only correct if the model explains the data well, \ie it has a low validation error. So we have a dual objective to minimize, which we solve by ranking the instances \wrt validation error and sparsity and select the one with the smallest $L_2$ norm (in rank-space), see \eqn{eqn:model:sel}. Furthermore, the optimization process may only find a local optimum of the training objective, which depends on the initialization of the parameters. We use independent runs to quantify expected performance deviations. \documentclass[a4paper]{article} % For LaTeX2e \usepackage[margin=2.5cm,top=2cm]{geometry} \usepackage[square,sort]{natbib} \bibliographystyle{abbrvnat} \renewcommand{\cite}[1]{\citep{#1}} \usepackage[utf8]{inputenc} % allow utf-8 input \usepackage[T1]{fontenc} % use 8-bit T1 fonts % professional-quality tables % compact symbols for 1/2, etc. % microtypography % hyperlinks % simple URL typesetting \graphicspath{{../graphics/}} \newcommand{\Fig}[1]{Figure~\ref{#1}} \newcommand{\fig}[1]{Fig.~\ref{#1}} \newcommand{\Tab}[1]{Table~\ref{#1}} \newcommand{\tab}[1]{Tab.~\ref{#1}} \newcommand{\Eqn}[1]{Equation \eqref{#1}} \newcommand{\eqn}[1]{Eq.~\eqref{#1}} % Eq. (1.1) \newcommand{\eqnp}[1]{(Eq.~\ref{#1})} % (Eq. 
1.1) \renewcommand{\sec}[1]{Section~\ref{#1}} % Section 1 \newcommand{\ie}{i.\,e.~} \newcommand{\eg}{e.\,g.~} \newcommand{\wrt}{w.\,r.\,t.~} \newcommand{\Real}{\ensuremath{\mathbb R}} % Real numbers \newcommand{\Unit}{\ensuremath{\mathbb I}} % Unit Matrix \newcommand{\T}{\ensuremath{\top}} % Transpose \newcommand{\sigm}{\ensuremath{\text{sigm}}} % sigmoid \newcommand\Tstrut{\rule{0pt}{2.6ex}} \DeclareMathOperator*{\argmin}{arg\,min} \newcommand{\method}{EQL} %{\textcolor{green}{EQL}}%{EQL,ABFNet} \newcommand{\typ}{\ensuremath{I}} % unit type \newcommand{\D}{\ensuremath{\mathbf{D}}} % dataset \newcommand{\train}{\ensuremath{\text{Tr}}} % training \newcommand{\test}{\ensuremath{\text{T}}} % training \newcommand{\val}{\ensuremath{\text{V}}} % training \newcommand{\extra}{\ensuremath{\text{X}}} % training \newcommand{\restr}{\ensuremath{\text{XTr}}} % training \newcommand{\restrval}{\ensuremath{\text{XV}}} % training \newcommand{\layer}[1]{{(#1)}} % layer l \renewcommand{\l}{{\layer{l}}} % layer l \newcommand{\lm}{{\layer{l-1}}} % layer l-1 \usepackage[disable]{todonotes} \setlength{\marginparwidth}{3cm} \newcommand{\Geo}[2][inline]{\todo[#1,color=yellow!60,size=\scriptsize]{#2}} \newcommand{\Chl}[2][inline]{\todo[#1,color=green!70,size=\scriptsize]{#2}} \pdfinfo{ /Title (Extrapolation and learning equations) /Author (Georg Martius and Christoph H. Lampert)} \setcounter{secnumdepth}{0} \begin{document} \title{Extrapolation and learning equations} \author{% Georg Martius \& Christoph H. Lampert\\ IST Austria\\ Am Campus 1, 3400 Klosterneuburg, Austria\\ \texttt{\{gmartius,chl\}@ist.ac.at} } \maketitle \begin{abstract} In classical machine learning, regression is treated as a black box process of identifying a suitable function from a hypothesis set without attempting to gain insight into the mechanism connecting inputs and outputs. In the natural sciences, however, finding an interpretable function for a phenomenon is the prime goal as it allows to understand and generalize results. This paper proposes a novel type of function learning network, called equation learner (EQL), that can learn analytical expressions and is able to extrapolate to unseen domains. It is implemented as an end-to-end differentiable feed-forward network and allows for efficient gradient based training. Due to sparsity regularization concise interpretable expressions can be obtained. Often the true underlying source expression is identified. \end{abstract} \input{intro.tex} \input{methods.tex} \input{relatedwork.tex} \input{results.tex} \vspace*{-.2em} \section{Conclusions}\vspace*{-.2em} We presented a new network architecture called \method{} that can learn analytic expressions that typically occur in equations governing physical, in particular mechanical, systems. The network is fully differentiable, which allows end-to-end training using backpropagation. By sequencing $L_1$ regularization and fixing $L_0$ norm we achieve sparse representations with unbiased estimation of factors within the learned equations. We also introduce a model selection procedure specifically designed to select for good extrapolation quality by a multiobjective criterion based on validation error and sparsity. The proposed method is able to learn functional relations and extrapolate them to unseen parts of the data space, as we demonstrate by experiments on synthetic as well as real data. 
The approach learns concise functional forms that may provide insights into the relationships within the data, as we show on physical measurements of x-ray transition energies. The optimization problem is nontrivial and has many local minima. We have shown cases where the algorithm is not reliably finding the right equation but instead finds an approximation only, in which case extrapolation may be poor. If the origin of the data is not in the hypothesis class, \ie the underlying expression cannot be represented by the network, good extrapolation performance cannot be achieved. Thus it is important to increase the model class by incorporating more base functions which we will address in future work alongside the application to larger examples. We expect good scaling capabilities to larger systems due to the gradient based optimization. Apart from the extrapolation we also expect improved interpolation results in high-dimensional spaces, where data is less dense. \subsubsection*{Acknowledgments} GM received funding from the People Programme (Marie Curie Actions) of the European Union's Seventh Framework Programme (FP7/2007-2013) under REA grant agreement no.~[291734]. \begin{thebibliography}{24} \providecommand{\natexlab}[1]{#1} \providecommand{\url}[1]{\texttt{#1}} \expandafter\ifx\csname urlstyle\endcsname\relax \providecommand{\doi}[1]{doi: #1}\else \providecommand{\doi}{doi: \begingroup \urlstyle{rm}\Url}\fi \bibitem[Basak et~al.(2007)Basak, Pal, and Patranabis]{basak2007:SVR} D.~Basak, S.~Pal, and D.~C. Patranabis. \newblock Support vector regression. \newblock \emph{Neural Information Processing-Letters and Reviews}, 11\penalty0 (10):\penalty0 203--224, 2007. \bibitem[Ben-David et~al.(2010)Ben-David, Blitzer, Crammer, Kulesza, Pereira, and Vaughan]{ben2010theory} S.~Ben-David, J.~Blitzer, K.~Crammer, A.~Kulesza, F.~Pereira, and J.~W. Vaughan. \newblock A theory of learning from different domains. \newblock \emph{Machine Learning}, 79\penalty0 (1-2):\penalty0 151--175, 2010. \bibitem[Bengio(2009)]{bengio2009learning} Y.~Bengio. \newblock Learning deep architectures for {AI}. \newblock \emph{Foundations and Trends in Machine Learning}, 2\penalty0 (1):\penalty0 1--127, 2009. \bibitem[Bengio et~al.(2013)Bengio, Courville, and Vincent]{bengio2013representation} Y.~Bengio, A.~Courville, and P.~Vincent. \newblock Representation learning: A review and new perspectives. \newblock \emph{IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)}, 35\penalty0 (8):\penalty0 1798--1828, 2013. \bibitem[Bishop(1995)]{bishop1995neural} C.~M. Bishop. \newblock \emph{Neural networks for pattern recognition}. \newblock Oxford University Press, 1995. \bibitem[Broomhead and Lowe(1988)]{broomhead1988radial} D.~S. Broomhead and D.~Lowe. \newblock Radial basis functions, multi-variable functional interpolation and adaptive networks. \newblock Technical report, DTIC Document, 1988. \bibitem[Deslattes et~al.(2003)Deslattes, Kessler~Jr, Indelicato, De~Billy, Lindroth, and Anton]{Deslattes2003:XrayTransEnergies} R.~D. Deslattes, E.~G. Kessler~Jr, P.~Indelicato, L.~De~Billy, E.~Lindroth, and J.~Anton. \newblock X-ray transition energies: new approach to a comprehensive evaluation. \newblock \emph{Reviews of Modern Physics}, 75\penalty0 (1):\penalty0 35, 2003. \bibitem[Durbin and Rumelhart(1989)]{DurbinRumelhart1989:ProductUnits} R.~Durbin and D.~E. Rumelhart. \newblock Product units: A computationally powerful and biologically plausible extension to backpropagation networks. 
\newblock \emph{Neural Computation}, 1\penalty0 (1):\penalty0 133--142, Mar. 1989. \newblock ISSN 0899-7667. \newblock \doi{10.1162/neco.1989.1.1.133}. \newblock URL \url{http://dx.doi.org/10.1162/neco.1989.1.1.133}. \bibitem[Gy{\"o}rfi et~al.(2013)Gy{\"o}rfi, H{\"a}rdle, Sarda, and Vieu]{gyorfi2013nonparametric} L.~Gy{\"o}rfi, W.~H{\"a}rdle, P.~Sarda, and P.~Vieu. \newblock \emph{Nonparametric curve estimation from time series}, volume~60. \newblock Springer, 2013. \bibitem[Kingma and Ba(2015)]{KingmaBa2015:Adam} D.~Kingma and J.~Ba. \newblock Adam: A method for stochastic optimization. \newblock In \emph{in Proceedings of ICLR}, 2015. \bibitem[M{\"u}ller et~al.(1997)M{\"u}ller, Smola, R{\"a}tsch, Sch{\"o}lkopf, Kohlmorgen, and Vapnik]{muller1997predicting} K.-R. M{\"u}ller, A.~J. Smola, G.~R{\"a}tsch, B.~Sch{\"o}lkopf, J.~Kohlmorgen, and V.~Vapnik. \newblock Predicting time series with support vector machines. \newblock In \emph{Artificial Neural Networks (ICANN)}, pages 999--1004. Springer, 1997. \bibitem[Pearl(2000)]{Pearl2000} J.~Pearl. \newblock \emph{Causality}. \newblock Cambridge {U}niversity {P}ress, 2000. \bibitem[Peters et~al.(2014)Peters, Mooij, Janzing, and Sch{\"o}lkopf]{PetersMJS2014} J.~Peters, J.~Mooij, D.~Janzing, and B.~Sch{\"o}lkopf. \newblock Causal discovery with continuous additive noise models. \newblock \emph{Journal of Machine Learning Research (JMLR)}, 15:\penalty0 2009--2053, 2014. \bibitem[Poon and Domingos(2012)]{PoonDomingos2011:sum-product-networks} H.~Poon and P.~M. Domingos. \newblock Sum-product networks: {A} new deep architecture, 2012. \bibitem[Quionero-Candela et~al.(2009)Quionero-Candela, Sugiyama, Schwaighofer, and Lawrence]{quionero2009dataset} J.~Quionero-Candela, M.~Sugiyama, A.~Schwaighofer, and N.~D. Lawrence. \newblock \emph{Dataset shift in machine learning}. \newblock The MIT Press, 2009. \bibitem[Schmidt and Lipson(2009)]{SchmidtLipson2009:learnnaturallaws} M.~Schmidt and H.~Lipson. \newblock Distilling free-form natural laws from experimental data. \newblock \emph{Science}, 324\penalty0 (5923):\penalty0 81--85, 2009. \newblock ISSN 0036-8075. \newblock \doi{10.1126/science.1165893}. \newblock URL \url{http://science.sciencemag.org/content/324/5923/81}. \bibitem[Shin and Ghosh(1991)]{ShinGhosh1991:pi-sigma} Y.~Shin and J.~Ghosh. \newblock The pi-sigma network : An efficient higher-order neural network for pattern classification and function approximation. \newblock In \emph{in Proceedings of the International Joint Conference on Neural Networks}, pages 13--18, 1991. \bibitem[Smola and Sch{\"o}lkopf(2004)]{smola2004tutorial} A.~J. Smola and B.~Sch{\"o}lkopf. \newblock A tutorial on support vector regression. \newblock \emph{Statistics and computing}, 14\penalty0 (3):\penalty0 199--222, 2004. \bibitem[Specht(1991)]{specht1991general} D.~F. Specht. \newblock A general regression neural network. \newblock \emph{IEEE Transactions on Neural Networks (TNN)}, 2\penalty0 (6):\penalty0 568--576, 1991. \bibitem[{Theano Development Team}(2016)]{2016arXiv160502688short} {Theano Development Team}. \newblock {Theano: A {Python} framework for fast computation of mathematical expressions}. \newblock \emph{arXiv e-prints}, abs/1605.02688, May 2016. \newblock URL \url{http://arxiv.org/abs/1605.02688}. \bibitem[Tibshirani(1996)]{tibshirani1996regression} R.~Tibshirani. \newblock Regression shrinkage and selection via the lasso. \newblock \emph{Journal of the Royal Statistical Society. Series B (Methodological)}, pages 267--288, 1996. 
\bibitem[Wiener(1949)]{wiener1949extrapolation} N.~Wiener. \newblock \emph{Extrapolation, interpolation, and smoothing of stationary time series}, volume~2. \newblock The MIT Press, 1949. \bibitem[Williams and Rasmussen(2006)]{williams2006gaussian} C.~K.~I. Williams and C.~E. Rasmussen. \newblock \emph{Gaussian processes for machine learning}. \newblock The MIT Press, 2006. \bibitem[Zaremba et~al.(2014)Zaremba, Kurach, and Fergus]{ZarembaFergus2014:LearnMathIdentities} W.~Zaremba, K.~Kurach, and R.~Fergus. \newblock Learning to discover efficient mathematical identities. \newblock In Z.~Ghahramani, M.~Welling, C.~Cortes, N.~Lawrence, and K.~Weinberger, editors, \emph{Advances in Neural Information Processing Systems 27}, pages 1278--1286. Curran Associates, Inc., 2014. \end{thebibliography} \input{appendix.tex} \end{document} \appendix \section{Appendix} \section{A1: Model selection details}\label{sec:modelsel:app} \subsection{Quantifying sparsity} We actually want a measure of the complexity of the formula. However, since it is not clear what the right choice of measure is, we use sparsity instead and count the number of active/used hidden units, denoted by $s$. For a given network $\phi$ we get \begin{align} s(\phi) = \sum_{l=1}^L\sum_{i=1}^k\Theta( |W^\l_{i,\cdot}| * |W^{\layer{l+1}}_{\cdot,i}| - 0.01)\,,\label{eqn:s} \end{align} where $\Theta$ is the Heaviside function and 0.01 is an arbitrary threshold. For the multiplication units the norms of the incoming weights for both inputs are added (omitted to avoid clutter in the formula). \subsection{Selection criteria} As stated in the main text, we strive to choose the model that is both simple and has good performance in terms of the validation set. Since both quantities have different scales, we proposed to combine them based on their rankings. Let $r^v(\phi)$ and $r^s(\phi)$ be the ranks of the network $\phi$ \wrt the validation error and the sparsity $s(\phi)$, respectively; then the network with minimal squared rank norm is selected: \begin{align} \argmin_\phi\left[ r^v(\phi)^2 + r^s(\phi)^2\right] \label{eqn:model:sel} \end{align} In \fig{fig:model:sel} the extrapolation performance of all considered networks for the kin2D-3 dataset is visualized as a function of validation error and sparsity. It becomes evident that the best performing networks are both sparse and have a low validation error. \section{A2: Dependence on noise and number of data points}\label{sec:dep:noise-pts} In order to understand how the method depends on the amount of noise and the number of data points, we scan through the two parameters and present the empirical results in \fig{fig:dep:noise-pts}. In general, the method is robust to noise and, as expected, more noise can be compensated for by more data.
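For concreteness, here is a small sketch of the sparsity count of \eqn{eqn:s} and of the rank-based selection rule of \eqn{eqn:model:sel}. It treats all hidden units as unary, as in the displayed formula, assumes that validation errors and sparsities of the candidate networks have already been computed, and is an illustration rather than the authors' code.
\begin{verbatim}
import numpy as np

def sparsity(weights, thresh=0.01):
    """Count active hidden units: a unit counts as active if the product of the L1 norms
    of its incoming and outgoing weights exceeds the threshold (unary units only)."""
    s = 0
    for W_in, W_out in zip(weights[:-1], weights[1:]):
        for i in range(W_out.shape[1]):        # outputs of layer l feeding layer l+1
            incoming = np.abs(W_in[i, :]).sum()
            outgoing = np.abs(W_out[:, i]).sum()
            s += int(incoming * outgoing > thresh)
    return s

def select_model(val_errors, sparsities):
    """Rank candidates w.r.t. validation error and sparsity and return the index of the
    candidate with the smallest L2 norm in rank space."""
    rv = np.argsort(np.argsort(val_errors))
    rs = np.argsort(np.argsort(sparsities))
    return int(np.argmin(rv ** 2 + rs ** 2))
\end{verbatim}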
Extrapolation and learning equations
1610.02995
Figure 5: X-Ray transition energies. (a) Measured data and predicted values by EQL and (b) visualized prediction error for all methods for one train/validation splitting. (c) EQL solutions during model selection in validation error – sparsity space, see Appendix A1 for details. (d) numeric results. Reported are RMS errors with standard deviation for 10 independent train/validation splits. In real units the error is in 100 keV and is well below the difference between neighboring high-Z elements. (e) learned formulas for different sparsities s (lowest dot for each s in (c)).
[ "[EMPTY]", "interpolation", "extrapolation" ]
[ [ "EQL", "0.00042", "0.0061±0.0038" ], [ "MLP", "0.002", "0.0180±0.0024" ], [ "SVR", "0.00067", "0.0057±0.0014" ] ]
As a further example we consider data measured in atomic physics. When shooting electron beams onto atoms one can excite them and they consequently emit x-ray radiation with characteristic peak energies. For each element/isotope these energies are different as they correspond to the potential difference between the electron shells, such that one can identify elements in a probe this way. The true relationship between atomic number Z and transition energies is complicated, as it involves many body interactions and no closed-form solution exists. Nevertheless we can find out which relationships our system proposes. It is known that the main relationship is Kα2∝Z2 according to Moseley’s law. Further correction terms for elements with larger Z are potentially of higher order. We have data for elements with 10≤Z≤100, which is split into training/validation sets in the range [10,91] (70/10 data points) and extrapolation test set in the interval [92,100] (14 data points because of isotops). Since we have so little data we evaluate the performance for 10 independent training/validation splits. The data is scaled to lie in [0,1], i. e. x=Z/100 and y=Kα2/100000. Model selection is here based on validation error only. The selection for sparsity and validation error only yields the Z2 relationship. Mini-batch size is 2 here and T=50000 was used. EQL and SVR achieve similar performance and MLP is significantly worse. However, EQL also yields interpretable formulas, see Fig.
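A short sketch of the data scaling and splitting protocol described above (an illustration only): the arrays Z and K_alpha2 are placeholders for the measured values from Deslattes et al., which are not reproduced here, and the split sizes follow the 70/10/14 description above.
\begin{verbatim}
import numpy as np

def xray_splits(Z, K_alpha2, n_splits=10, seed=0):
    """Scale x = Z/100, y = K_alpha2/100000 and build random training/validation splits
    within Z <= 91, with the fixed extrapolation test set Z >= 92."""
    Z = np.asarray(Z)
    x, y = Z / 100.0, np.asarray(K_alpha2) / 100000.0
    test = np.where(Z >= 92)[0]                 # 14 extrapolation points (isotopes included)
    pool = np.where(Z <= 91)[0]                 # 80 points for training/validation
    rng = np.random.RandomState(seed)
    splits = []
    for _ in range(n_splits):
        idx = rng.permutation(pool)
        splits.append((idx[10:], idx[:10], test))   # 70 training, 10 validation points
    return x, y, splits
\end{verbatim}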
\section{Introduction}\label{sec:intro} The quality of a model is typically measured by its ability to generalize from a training set to previously unseen data from the same distribution. In regression tasks generalization essentially boils down to interpolation if the training data is sufficiently dense. As long as models are selected correctly, \ie in a way to not overfit the data, the regression problem is well understood and can -- at least conceptually -- be considered solved. % However, when working with data from real-world devices, \eg controlling a robotic arm, interpolation might not be sufficient. It could happen that future data lies outside of the training domain, \eg when the arm is temporarily operated outside of its specifications. For the sake of robustness and safety it is desirable in such a case to have a regression model that continues to make good predictions, or at least does not fail catastrophically. This setting, which we call \emph{extrapolation generalization}, is the topic of the present paper. We are particularly interested in regression tasks for systems that can be described by real-valued analytic expression, \eg mechanical systems such as a pendulum or a robotic arm. These are typically governed by a highly nonlinear function but it is nevertheless possible, in principle, to infer their behavior on an extrapolation domain from their behavior elsewhere. We make two main contributions: 1) a new type of network that can learn analytical expressions and is able to extrapolate to unseen domains and 2) a model selection strategy tailored to the extrapolation setting. The following section describes the setting of regression and extrapolation. Afterwards we introduce our method and discuss the architecture, its training, and its relation to prior art. %\sec{sec:method}) We present our results in the Section \emph{Experimental evaluation} %\sec{sec:results} and close with conclusions. \subsection{Related work}% In the field of machine learning, regression is often treated as a black box process of identifying a suitable real-valued function from a hypothesis set, \eg a reproducing kernel Hilbert space for Gaussian Processes Regression (GPR)~\cite{williams2006gaussian} or Support Vector Regression (SVR)~\cite{smola2004tutorial}, or a multi-layer network of suitable expressive power~\cite{specht1991general}. The goal is to find a prediction function that leads to a small expected error on future data, not necessarily to gain insight into the mechanism of how the output values derive from the inputs. The goal of finding an interpretable function is rather common in the natural sciences, such as biology, where high noise levels and strong inter-system variability often make it important to rely on external prior knowledge, and finding a ``biologically plausible'' model is often preferable over finding one that makes the highest prediction accuracy. As a consequence, model classes are often highly constrained, \eg allowing only for sparse linear models. The task of learning a true, nonlinear, functional dependence from observing a physical system, has received little attention in the machine learning literature so far, but forms the basis of the field of \emph{system identification}. There, typically the functional form of the system is known and only the parameters have to be identified. Another approach is to model the time evolution with autoregressive models or higher order convolution integrals (Volterra series) but learning analytic formulas is not common. 
\emph{Causal learning} is an area of recent research that aims at identifying a causal relation between multiple observables, which are typically the result of a physical process. Classically, this task reduces to finding a minimal graphical model based only on tests of conditional independence~\cite{Pearl2000}. Although very successful in some fields, this classical approach only provides a factorization of the problem, separating causes and effects, but it leaves the exact functional dependency unexplained. Recent extensions of causal learning can take a functional view, but they typically do not constrain the regression functions to physically plausible ones; rather, they constrain the noise distributions~\cite{PetersMJS2014}. The topic of learning a regression function with emphasis on \emph{extrapolation} performance has not been studied much in the literature so far. Existing work on time series prediction deals with extrapolation in the temporal domain, \ie predicting the next value(s)~\cite{wiener1949extrapolation}. By our nomenclature, this is typically rather an interpolation task, since the prediction is based on the behaviour of the series at earlier time steps but with a similar value distribution~\cite{muller1997predicting,gyorfi2013nonparametric}. Extrapolating in the data domain implies that the data distribution at prediction time will differ from the data distribution at training time. This is traditionally called the \emph{domain adaptation} setting. In particular, since we assume a common labeling function, our setting would fall under the \emph{covariate shift} setting~\cite{quionero2009dataset}. Unfortunately, this connection is not particularly useful for our problem. As domain adaptation typically does not make additional assumptions about how the data distribution may change, existing methods need access to some unlabeled data from the test distribution already at training time~\cite{ben2010theory}. In our setting this is not possible to obtain. On the technical level, \method{} networks are an instance of general feed-forward networks for function approximation~\cite{bishop1995neural}. In contrast to recent trends towards \emph{deep learning}~\cite{bengio2009learning,bengio2013representation}, our goal is not to learn any data representation, but to learn a function which compactly represents the input-output relation and generalizes between different regions of the data space, like a physical formula. Structurally, \method{} networks resemble \emph{sum-product networks (SPNs)}~\cite{PoonDomingos2011:sum-product-networks} and \emph{Pi-Sigma networks (PSNs)}~\cite{ShinGhosh1991:pi-sigma}, in the sense that both are based on directed acyclic graphs with computational units that allow for summation and multiplication. Otherwise, SPNs are different as they act as an efficient alternative to probabilistic graphical models for representing probability distributions, whereas \method{} networks are meant for the classical task of function approximation. In PSNs each output needs to be passed through multiplicative units, whereas in \method{} multiplication is optional. Finding equations for observations is also known as symbolic regression, where a search is performed in a certain function space, typically with evolutionary computation.
With these techniques it is possible to discover physical laws such as invariants and conserved quantities~\cite{SchmidtLipson2009:learnnaturallaws}. Unfortunately, the computational complexity/search time explodes for larger expressions and high-dimensional problems. We attempt to circumvent this by modeling the problem as a gradient-based optimization problem. Related to symbolic regression is the search for mathematical identities, for instance to find computationally more efficient expressions. In \cite{ZarembaFergus2014:LearnMathIdentities} this was done using machine learning to overcome the potentially exponential search space. \section{Experimental evaluation}\label{sec:results} We demonstrate the ability of \method{} to learn physically inspired models with good extrapolation quality by experiments on synthetic and real data. For this, we implemented the network training and evaluation procedure in \emph{python} based on the \emph{theano} framework~\cite{2016arXiv160502688short}. We will make the code for training and evaluation public after acceptance of the manuscript. \paragraph{Pendulum.} We first present the results of learning the equations of motion for a very simple physical system: a pendulum. The state space of a pendulum is $X=\Real\times\Real$, where the first value is the angle of the pole in radians and the second value is the angular velocity. In the physics literature, these are usually denoted as $(\theta,\omega)$, but for our purposes, we call them $(x_1,x_2)$ in order to keep the notation consistent between experiments. The pendulum's dynamic behavior is governed by the following two ordinary differential equations: \begin{equation} \dot x_1 = x_2 \qquad\qquad\text{and}\qquad\qquad \dot x_2 = -g \sin(x_1)\,,\label{eqn:pend} \end{equation} where $g=9.81$ is the gravitational acceleration. We divide each equation by $g$ in order to balance the output scales and form a regression problem with two output values, $y_1=\frac{1}{g}x_2$ and $y_2=-\sin(x_1)$. As training data, we sample 1000 points uniformly in the hypercube {\small $[-h,h] \times [-h,h]$} for $h=2$. Note that this domain contains more than half of a sine period, so it should be sufficient to identify the analytic expression. The target values are disturbed by Gaussian noise with standard deviation $\sigma=0.01$. We also define three test sets, each with 1000 points. The \emph{interpolation test set} is sampled from the same data distribution as the training set. The \emph{extrapolation (near) test set} contains data sampled uniformly from the data domain {\small $[-\frac32 h,\frac32 h] \times [-\frac32 h,\frac32 h]\setminus [-h,h] \times [-h,h]$}, which is relatively near the training region, and the \emph{extrapolation (far) test set} extends the region further outside: {\small $[-2h,2h] \times [-2h,2h]\setminus [-h,h] \times [-h,h]$}. We train a 2-layer \method{} and perform model selection among the hyper-parameters: the regularization strength {\small $\lambda\in10^{\{-7,-6.3,-6,-5.3,-5,-4.3,-4,-3.3,-3\}}$} and the number of nodes {\small $\frac 1 4 u=v\in\{1,3,5\}$}. All weights are randomly initialized from a normal distribution with {\small $\sigma = \sqrt{1/(k'+d)}$}. The unit selection $\typ{}$ is set such that all unit types occur equally often. To ensure convergence we chose $T=10000$ epochs.
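As a concrete illustration of this data-generation protocol (a sketch under the sampling assumptions stated above, not the authors' code), the following produces the scaled pendulum targets together with the training, interpolation and extrapolation sets:
\begin{verbatim}
import numpy as np

def pendulum_targets(x):
    # scaled pendulum outputs: y1 = x2 / g, y2 = -sin(x1), with g = 9.81
    return np.stack([x[:, 1] / 9.81, -np.sin(x[:, 0])], axis=1)

def sample_box(n, half_width, exclude=None, rng=None):
    """Sample n points uniformly from [-half_width, half_width]^2; if exclude=h is given,
    reject points that fall inside the inner box [-h, h]^2 (extrapolation test sets)."""
    rng = np.random.RandomState(0) if rng is None else rng
    pts = []
    while len(pts) < n:
        p = rng.uniform(-half_width, half_width, size=2)
        if exclude is None or np.max(np.abs(p)) > exclude:
            pts.append(p)
    return np.array(pts)

h, sigma, rng = 2.0, 0.01, np.random.RandomState(0)
x_train = sample_box(1000, h, rng=rng)
y_train = pendulum_targets(x_train) + sigma * rng.randn(1000, 2)  # Gaussian noise on targets
x_interp = sample_box(1000, h, rng=rng)                           # interpolation test set
x_near = sample_box(1000, 1.5 * h, exclude=h, rng=rng)            # extrapolation (near)
x_far = sample_box(1000, 2.0 * h, exclude=h, rng=rng)             # extrapolation (far)
\end{verbatim}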
We compare our algorithm to a standard multilayer perceptron (MLP) with $\tanh$ activation functions and possible hyperparameters: $\lambda$ as for \method, number of layers {\small $L\in\{2,3\}$}, and number of neurons {\small $k\in\{5,10,20\}$}. A second baseline is given by epsilon support vector regression (SVR)~\cite{basak2007:SVR} with two hyperparameters {\small $C\in10^{\{-3,-2,-1,0,1,2,3,3.5\}}$} and {\small $\epsilon \in 10^{\{-3,-2,-1,0\}}$}, using a radial basis function kernel with width {\small $\gamma\in \{0.05,0.1,0.2,0.5,1.0\}$}. Numeric results are reported in \tab{tab:pend:results}. As expected, all models are able to interpolate well, with a test error on the order of the noise level ($\sigma=0.01$). For extrapolation, however, the performance differs between the approaches. For MLP the prediction quality decreases quickly when leaving the training domain. SVR remains a bit better in the near extrapolation but also fails catastrophically on the far extrapolation data. \method, on the other hand, extrapolates well, both near and far away from the training domain. The reasons can be seen in Figure~\ref{fig:pend}: while the MLP and SVR simply learn a function that interpolates the training values, \method{} finds the correct functional expression and therefore predicts the correct values for any input data. \paragraph{Double pendulum kinematics.} The second system we consider is a real double pendulum for which the forward kinematics should be learned. For that we use recorded trajectories of a real double pendulum \cite{SchmidtLipson2009:learnnaturallaws}. The task here is to learn the position of the tips of the double pendulum segments from the given joint angles ($x_1,x_2$). These positions were not measured, so we compute them with the following formulas: $y_1=\cos(x_1), y_2=\cos(x_1)+\cos(x_1+x_2), y_3=\sin(x_1), y_4=\sin(x_1)+\sin(x_1+x_2)$, where $(y_1,y_3)$ and $(y_2,y_4)$ correspond to the x-y coordinates of the first and second end-point, respectively. The dataset contains two short trajectories. The first covers only part of the domain (input as well as output) and consists of 819 samples, of which 10\% were used as validation set (randomly sampled), see \fig{fig:dpk}(a). The second trajectory corresponds to a behavior with several spins of both pendulum segments, such that a much larger domain is covered. Nevertheless the angle values are confined to $[-\pi,\pi]$. We use this trajectory as extrapolation test set. The trajectory and the outputs of our method are shown in \fig{fig:dpk}(b). The prediction for unseen domains is perfect, which is also illustrated in a systematic sweep, see \fig{fig:dpk}(c). The performance of MLP is off already near the training domain. SVR is a bit better, but still does not give usable predictions for the test data, see also the root mean square error in \fig{fig:dpk}(d). Model selection is performed to determine $\lambda$ as above, $u=v\in\{3,5\}$ (MLP: $k\in\{5,10,20\}$) and layer number $L\in\{2,3\}$. \paragraph{Robotic arms.} A more complicated task is to learn the forward kinematics of multi-segment robotic arms. We consider planar arms with 3, 4, and 5 joints, where each segment is 0.5 units long; a small sketch of these forward-kinematics targets is given below. For training, the arm is controlled by sinusoidal joint target angles with amplitude in $[-\nicefrac{\pi}{2},\nicefrac{\pi}{2}]$, each joint with a different frequency. The numbers of data points are 3000, 6000, and 18000 for the 3, 4, and 5 segment arms, respectively, with added noise as above.
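The following sketch (ours, not the authors' code) computes the double-pendulum tip coordinates given above as well as the planar forward-kinematics targets for the multi-joint arms; the accumulation of relative joint angles mirrors the double-pendulum formulas, and the helper names are our own.
\begin{verbatim}
import numpy as np

def double_pendulum_tips(x1, x2):
    # (y1, y3) and (y2, y4): x-y coordinates of the first and second end-point
    y1, y3 = np.cos(x1), np.sin(x1)
    y2, y4 = y1 + np.cos(x1 + x2), y3 + np.sin(x1 + x2)
    return y1, y2, y3, y4

def planar_arm_positions(angles, segment_length=0.5):
    """Forward kinematics of a planar arm: joint angles (n_samples, n_joints) ->
    x-y positions of every segment end-point; the last column is the end-effector."""
    cum = np.cumsum(angles, axis=1)            # absolute orientation of each segment
    xs = segment_length * np.cumsum(np.cos(cum), axis=1)
    ys = segment_length * np.cumsum(np.sin(cum), axis=1)
    return xs, ys
\end{verbatim}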
For testing extrapolation performance, the amplitude range $[-\pi,\pi]$ was used. Note that the extrapolation space is much larger than the training space. The task is to predict the coordinates of the end-effector of the arms (\emph{kin-3-end}, \emph{kin-4-end}) and the coordinates of all segment positions (\emph{kin-5-all}). The numerical results, see \tab{tab:kin}, show that our method is able to extrapolate in these cases. Model selection is performed as above with $u=v\in\{10,20\}$ (MLP: $k\in\{10,50\}$) and layer number $L\in\{2,3,4\}$. To illustrate the dependence on the amount of noise and the number of available training points, we provide a quantification in Appendix A2. In short, increasing noise can be compensated for by an increasing amount of data in order to maintain the performance. \paragraph{Learning complex formulas.} In order to find out whether \method{} can also learn more complicated formulas, we consider three examples with four-dimensional input and one-dimensional output: \begin{align} y &= \nicefrac{1}{3} \left(\sin(\pi x_1) + \sin\left(2 \pi x_2 + \nicefrac{\pi}{8}\right)+x_2 - x_3 x_4 \right)&\text{F-1}\label{eqn:syn1}\\ y &= \nicefrac{1}{3} \left(\sin(\pi x_1) + x_2 \cos(2\pi x_1 + \nicefrac{\pi}{4}) + x_3-x_4^2\right) &\text{F-2}\label{eqn:syn2}\\ y &= \nicefrac{1}{3} \left( (1+x_2) \sin(\pi x_1) + x_2 x_3 x_4\right) &\text{F-3}\label{eqn:syn3} \end{align} The first equation requires only one hidden layer to be represented. The second and third equations require two hidden layers. In particular, F-2 contains a product of $x_2$ and $\cos$, and F-3 contains a product of three terms, which we use to test whether our restriction to pairwise product units causes problems for more complex target functions. We follow the same procedure as in the pendulum case for building training and test sets, though with $h=1$ as input data range. We use 10000 points for the training and validation sets (90\%-10\% split) and 5000 points for each of the test sets; a sketch of these target functions and of the data generation is given below. Model selection for \method{} is performed as above using the number of layers {\small $L\in\{2,3,4\}$}. The number of units is set to $\frac{1}{4}u=v=10$. For the MLP, we select $L$ and $\lambda$ from the same set as above as well as {\small $k\in\{10,30\}$}. \Tab{tab:syn:results} shows the numerical results. Again, all methods are able to interpolate, but only \method{} achieves good extrapolation results, except for equation F-3. There, in 9 out of 10 cases, it settles into a local minimum and finds only an approximating equation that deviates outside the training domain. Interestingly, if we restrict the base functions to not contain cosine, the algorithm finds the right formula. Note that the sparsity of the correct formula is lower than that of the approximation, so it should be selected if found. \Fig{fig:syn} illustrates the performance and the learned networks visually. It shows one of the model-selected instances for each case. For F-1 the correct formula was identified, so correct predictions can be made even far outside the training region (much further than illustrated). For F-2 the network provided us with a surprise, because it yields good extrapolation performance with only one hidden layer! How can it implement $x_2\cos(a x_1+b)$? Apparently it uses $1.21 \cos(a x_1 + \pi + b + 0.41 x_2) + \sin(a x_1 + b + 0.41 x_2)$, which is a good approximation for $x_2 \in [-2,2]$. The sparsity of this solution is $5$, whereas the true solution needs at least $6$, which explains its selection.
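For reference, a small sketch of the three target functions F-1 to F-3 and of the data generation with $h=1$; the sampling and noise conventions are carried over from the pendulum example, and the code is an illustration, not the authors' implementation.
\begin{verbatim}
import numpy as np

def f1(x):  # F-1
    x1, x2, x3, x4 = x.T
    return (np.sin(np.pi * x1) + np.sin(2 * np.pi * x2 + np.pi / 8) + x2 - x3 * x4) / 3.0

def f2(x):  # F-2
    x1, x2, x3, x4 = x.T
    return (np.sin(np.pi * x1) + x2 * np.cos(2 * np.pi * x1 + np.pi / 4) + x3 - x4 ** 2) / 3.0

def f3(x):  # F-3
    x1, x2, x3, x4 = x.T
    return ((1 + x2) * np.sin(np.pi * x1) + x2 * x3 * x4) / 3.0

h, sigma, rng = 1.0, 0.01, np.random.RandomState(0)
x = rng.uniform(-h, h, size=(10000, 4))       # training + validation inputs (90%/10% split)
y = f1(x) + sigma * rng.randn(10000)          # noisy targets, here for F-1
\end{verbatim}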
For F-3 the suboptimal local minimum approximates $(1+x_2)\sin(x_1)$ by $(x_1 + x_1 x_2)\cos(\beta x_1)$, which deviates quickly outside the training domain; the true solution would be sparser, but it was not found. Only if we remove cosine from the base functions do we always obtain the correct formula, see \fig{fig:syn}(c). \paragraph{X-Ray transition energies.} As a further example we consider data measured in atomic physics. When shooting electron beams onto atoms one can excite them, and they consequently emit x-ray radiation with characteristic peak energies. For each element/isotope these energies are different, as they correspond to the potential difference between the electron shells, so that one can identify the elements in a sample this way. The data is taken from~\cite{Deslattes2003:XrayTransEnergies}, where we consider one specific transition, called the $K\,\alpha_2$ line, because it was measured for all elements. The true relationship between atomic number $Z$ and transition energies is complicated, as it involves many-body interactions and no closed-form solution exists. Nevertheless we can find out which relationships our system proposes. It is known that the main relationship is $K\,\alpha_2 \propto Z^2$ according to Moseley's law. Further correction terms for elements with larger $Z$ are potentially of higher order. We have data for elements with $10\le Z \le 100$, which is split into training/validation sets in the range $[10,91]$ (70/10 data points) and an extrapolation test set in the interval $[92,100]$ (14 data points because of isotopes). Since we have so little data, we evaluate the performance for 10 independent training/validation splits. The data is scaled to lie in $[0,1]$, \ie $x= Z/100$ and $y=K\alpha_2/100000$. Model selection is here based on validation error only. Selecting for sparsity and validation error yields only the $Z^2$ relationship. The mini-batch size is 2 here and $T=50000$ was used. \Fig{fig:xray} presents the data, the predictions, the learned formulas and the numerical results. \method{} and SVR achieve similar performance and MLP is significantly worse. However, \method{} also yields interpretable formulas, see \fig{fig:xray}(e), that can be used to gain insights into the potential relationship. \subsection{Poor extrapolation out of model class --- cart-pendulum system} Let us now go beyond our assumptions and consider cases where the true target function is not an element of the hypothesis set. Consider a pendulum attached to a cart that can move horizontally along a rail but that is attached to a spring-damper system, see \fig{fig:cp}(a). The system is parametrized by 4 unknowns: the position of the cart, the velocity of the cart, the angle of the pendulum and the angular velocity of the pendulum. We combine these into a four-dimensional vector $x=(x_1,\dots,x_4)$. We set up a regression problem with four outputs from the corresponding system of ordinary differential equations, where $y_1 = \dot x_1 = x_3$, $y_2 = \dot x_2 = x_4$ and \begin{align} y_3&= \frac{-x_1-0.01 x_3+x_4^2 \sin\left(x_2\right)+0.1 x_4 \cos \left(x_2\right)+9.81 \sin \left(x_2\right) \cos \left(x_2\right)}{\sin ^2\left(x_2\right)+1}\label{eqn:cp}, \\ y_4&= \frac{-0.2 x_4 - 19.62 \sin \left(x_2\right) + x_1 \cos \left(x_2\right) + 0.01 x_3 \cos \left(x_2\right) - x_4^2 \sin \left(x_2\right)\cos \left(x_2\right)} {\sin^2\left(x_2\right)+1}.\nonumber \end{align} The formulas contain divisions, which are not included in our architecture due to their singularities.
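The following sketch directly implements the four cart-pendulum regression targets of \eqn{eqn:cp}; it is an illustration only, and the helper name is our own.
\begin{verbatim}
import numpy as np

def cart_pendulum_targets(x):
    """Regression targets y1..y4 for a state vector x = (x1, x2, x3, x4) as in Eq. (cp)."""
    x1, x2, x3, x4 = x
    s, c = np.sin(x2), np.cos(x2)
    denom = s ** 2 + 1.0
    y1, y2 = x3, x4
    y3 = (-x1 - 0.01 * x3 + x4 ** 2 * s + 0.1 * x4 * c + 9.81 * s * c) / denom
    y4 = (-0.2 * x4 - 19.62 * s + x1 * c + 0.01 * x3 * c - x4 ** 2 * s * c) / denom
    return np.array([y1, y2, y3, y4])
\end{verbatim}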
Incorporating divisions in a principled manner is left for future work. Thus, the cart-pendulum dynamics is outside the hypothesis class. In this case we {\bf cannot} expect great extrapolation performance and this is confirmed by the experiments. In \fig{fig:cp}(b,c) the extrapolation performance is illustrated by slicing through the input space. The near-extrapolation performance is still acceptable for both \method{} and MLP, but as soon as the input moves further away from the training region, even the best instances differ considerably from the true values, see also the numeric results in \tab{tab:cp:results}. The SVR performs poorly even in the near-extrapolation range. Inspecting the learned expressions, we find that the sigmoid functions are rarely used. \section{Regression and extrapolation}\label{sec:setting} We consider a multivariate regression problem with a training set $\{(x_1,y_1),\dots,(x_N,y_N)\}$ with $x \in \Real^n$, $y\in \Real^m$. Because our main interest lies in extrapolation in the context of learning the dynamics of physical systems, we assume the data originates from an unknown analytical function (or system of functions), $\phi:\Real^n\to\Real^m$, with additive zero-mean noise, $\xi$, \ie $y=\phi(x)+\xi$ and $\mathbb{E}\xi=0$. The function $\phi$ may, for instance, reflect a system of ordinary differential equations that govern the movements of a robot arm or the like. The general task is to learn a function $\psi:\Real^n\to\Real^m$ that approximates the true functional relation as well as possible in the squared loss sense, \ie achieves minimal expected error $\mathbb{E}\|\psi(x) - \phi(x)\|^2$. In practice, we only have particular examples of the function values available and measure the quality of prediction in terms of the empirical error on training or test data $D$, \begin{align} E(D)&=\frac{1}{N}\sum^{N}_{i=1}\|\psi(x_i) - y_i\|^2\,. \label{eqn:error} \end{align} If training and test data are sampled from the same distribution, then we speak of an \emph{interpolation} problem. In the \emph{extrapolation} setting the training data is assumed to cover only a limited range of the data domain. In the example of the robot arm, for instance, the training may be restricted to a certain joint angle range or maximal velocity. For testing we want to make predictions about the unseen domains, \eg for higher velocities. As usual, we split the data that is available at training time into a part for model training and a part for validation or model selection. \section{Learning a network for function extrapolation}\label{sec:method} The main model we propose is a multi-layered feed-forward network with computational units specifically designed for extrapolation regression tasks. For an $L$-layer network, there are $L-1$ hidden layers, each consisting of a linear mapping followed by non-linear transformations. For simplicity of notation, we explain the network as if each hidden layer had the same structure ($k'$ inputs, $k$ outputs). In practice, each layer can be designed independently of the others, of course, as long as input/output dimensions match. The linear mapping at level $l$ maps the $k'$-dimensional input $y^{\lm}$ to the $d$-dimensional intermediate representation $z^\l$ given by \begin{align} z^\l &= W^\l y^\lm + b^\l, \end{align} where $y^\lm$ is the output of the previous layer, with the convention $y^{(0)}=x$.
The weight matrix $W^\l\in \Real^{d \times k'}$ and the bias vector $b^\l\in\Real^{d}$ are free parameters that are learned during training. The non-linear transformation contains $u$ \emph{unary units}, $f_i:\Real\to\Real$, for $i=1,\dots,u$, and $v$ \emph{binary units}, $g_j:\Real\times\Real\to\Real$ for $j=1,\dots,v$. Their outputs are concatenated to form the layer output \begin{align} y^\l &:= \Big(f_1(z^\l_1),f_2(z^\l_2),\dots,f_{u}(z^\l_{u}),\nonumber\\ & \qquad g_{1}(z^\l_{u+1},z^\l_{u+2}),\dots,g_{v}(z^\l_{u+2v-1},z^\l_{u+2v}) \Big)\,. \end{align} In total, the nonlinear stage has $k = u + v$ outputs and $d = u + 2 v$ inputs. The unary units, $f_1,\dots,f_u$, receive the respective components, $z_1,\dots,z_u$, as inputs, and each unit may be one of the following base functions, as specified by a fixed type parameter $\typ_i\in\{0,1,2,3\}$: \begin{align} f_i(z_i) &:= \begin{cases} z_i & \text{ if } \typ_i=0,\\ \sin(z_i) & \text{ if } \typ_i=1,\\ \cos(z_i) & \text{ if } \typ_i=2,\\ \sigm(z_i) & \text{ if } \typ_i=3, \end{cases}&\text{ for } i=1,\dots,u, \end{align} where $\sigm(z)=\frac{1}{1+e^{-z}}$ is the standard sigmoid function. The binary units, $g_1,\dots,g_v$, receive the remaining components, $z_{u+1},\dots,z_{u+2v}$, as input in pairs of two. They are \emph{multiplication units} that compute the product of their two input values: \begin{align} g_j(z_{u+2j-1}, z_{u+2j}) &:= z_{u+2j-1} \cdot z_{u+2j}&\text{ for }j=1,\dots,v. \end{align} Finally, the $L$-th and last layer computes the regression values by a linear read-out \begin{align} y^{\layer{L}} &:= W^{\layer{L}} y^{\layer{L-1}} + b^{\layer{L}}. \end{align} The architecture is depicted in \fig{fig:network}. We call the new architecture Equation Learner (\method{}) and denote the function it defines by $\psi$. \subsection{Discussion of the architecture} The proposed network architecture differs in two main aspects from typical feed-forward networks: the existence of multiplication units and the possibility of \emph{sine} and \emph{cosine} as nonlinearities for the unary units. Both design choices are motivated by our objective of learning a system of equations that govern a physical system and can extrapolate to new parts of the input space. \emph{Sigmoid} nonlinearities are the canonical choice of \emph{activation function} for \emph{artificial neural networks} (ANN) and have proved to be successful. In fact, we include sigmoids in our architecture, making it a superclass of ANNs. However, they are typically disabled by the training procedure, which corresponds to their absence from the physical equations we consider. We do not include other, predominantly local nonlinearities, in particular \emph{radial basis functions}~\cite{broomhead1988radial}, since one cannot expect them to extrapolate at all. Further nonlinearities, such as \emph{(square) roots} and \emph{logarithms}, could in principle be useful for learning physical equations, but they pose problems because their domains of definition are restricted to positive inputs. We leave the task of incorporating them in a principled way to future work. The ability to multiply two values is a second crucial component of our network architecture. Again, it is inspired by the typical form of physical equations, where multiplication of components is arguably the second most common basic operation after addition (which the linear layers can perform).
Multiplication was introduced into neural networks long ago in the form of product units~\cite{DurbinRumelhart1989:ProductUnits} and Pi-Sigma units~\cite{ShinGhosh1991:pi-sigma}. Product units have a large fan-in and compute products over all their inputs, potentiated by the respective weights. The result is typically the behavior of a high-order polynomial; such polynomials are powerful function approximators but rarely occur in physical equations. Polynomials are also known to require careful fine-tuning in order not to overfit, which makes them a risky choice for the purpose of extrapolation. The Pi-Sigma units are multiplication units with a fixed number of factors, and our multiplication units are the special case with 2 factors. We find that multiplying just two values at a time is well adjusted to the task we aim at, as it allows us to control the maximal degree of the learned polynomial by the depth of the network. Finally, each layer of the network contains unary units that act as \emph{identity} maps, which in particular gives the network the option to learn functions with a smaller number of nonlinearities than the total network depth. \subsection{Network training} The \method{} is fully differentiable in its free parameters $\theta=\{W^{(1)},\dots,W^{(L)},b^{(1)},\dots,b^{(L)}\}$, which allows us to train it in an end-to-end fashion using back-propagation. We adopt a Lasso-like objective~\cite{tibshirani1996regression}, \begin{align} L(D)&=\frac{1}{N}\sum^{|D|}_{i=1}\|\psi(x_i) - y_i\|^2 + \lambda \sum_{l=1}^L\big|W^\l\big|_1\,,\label{eqn:loss} \end{align} that is, a linear combination of $L_2$ loss and $L_1$ regularization, and apply a stochastic gradient descent algorithm with mini-batches and Adam~\cite{KingmaBa2015:Adam} for calculating the updates: \begin{align} \theta_{t+1} &= \theta_{t} + \text{Adam}\left(\frac{\partial L(D_{(t)})}{\partial \theta}, \alpha\right), \end{align} where $D_{(t)}$ denotes the current mini-batch and $\alpha$ is the stepsize parameter. The choice of Adam is not critical and standard stochastic gradient descent also works. In all numerical experiments we use $\alpha=0.001$ and a mini-batch size of 20. The role of the $L_1$ regularization is to encourage networks with sparse connections, matching the intuition that a typical formula describing a physical system contains only a small number of terms, each operating only on a few variables. However, in a non-convex setting where local minima are likely to occur, this type of regularization can have an undesirable side-effect: during the course of the optimization the weights hardly ever change their sign. The reason is that the regularization leads to a constant rate of weight decay, whereas the counteracting derivative with respect to the square loss is proportional to the backpropagated error signal and the input to the unit. The latter contributions are often smaller along paths with small weights, such that many weights go to zero and stay there. Additionally, any non-zero regularization term causes the learned weights to reflect a trade-off between minimizing the loss and the regularizer. Although this can lead to improved generalization, it also results in a systematic underestimation of the function values. Therefore, we follow a hybrid regularization strategy: at the beginning of the training procedure ($t<t_1$) we use no regularization ($\lambda=0$), so that parameters can vary freely and reach reasonable starting points.
Afterwards, we switch on the regularization by setting $\lambda$ to a nonzero value, which has the effect that a sparse network structure emerges. Finally, for the last steps of the training ($t>t_2$) we disable $L_1$ regularization ($\lambda=0$) but enforce the same $L_0$ norm of the weights. This is achieved by keeping all weights $w\in W^{1\dots L}$ that are close to 0 at 0, \ie if $|w|<0.001$ then $w=0$ during the remaining epochs. This ensures that the learned model finds not only a function of the right parametric form, but also fits the observed values as closely as possible. We observed that the exact choice of breakpoints $t_1$ and $t_2$ is not critical. In practice, we use $t_1 = \frac{1}{4} T$ and $t_2=\frac{19}{20} T$, where $T$ is total number of update steps. $T$ was selected large enough to ensure convergence. Note, that convergence to a sparse structure is important here, so early stopping will be disadvantageous. \subsection{Model selection for extrapolation}\label{sec:modelsel} \method{} networks have a number of hyper-parameters, \eg the number of layers, the number of units and the regularization constant. Unfortunately, standard techniques for model selection, such as evaluation on a hold-out set or cross-validation, will not be optimal for our purpose, since they rely on interpolation quality. In order to extrapolate the network has to find the ``right'' formula. But how can we tell? Using Occams razor principle: the simplest formula is most likely the right one. Intuitively, if we have the choice between $cos(x)$ and its truncated power series approximation $1-x^2/2 + x^4/24$, the first one is preferred. We use the number of active hidden units in the network as a proxy for the complexity of the formula, see Appendix A1 %\sec{sec:modelsel:app} for details. One could also think of differentiating between the unit types. In any case, this argumentation is only correct if the model explains the data well, \ie it has a low validation error. So we have a dual objective to minimize, which we solve by ranking the instances \wrt validation error and sparsity and select the one with the smallest $L_2$ norm (in rank-space), see \eqn{eqn:model:sel}. Furthermore, the optimization process may only find a local optimum of the training objective, which depends on the initialization of the parameters. We use independent runs to quantify expected performance deviations. \documentclass[a4paper]{article} % For LaTeX2e \usepackage[margin=2.5cm,top=2cm]{geometry} \usepackage[square,sort]{natbib} \bibliographystyle{abbrvnat} \renewcommand{\cite}[1]{\citep{#1}} \usepackage[utf8]{inputenc} % allow utf-8 input \usepackage[T1]{fontenc} % use 8-bit T1 fonts % professional-quality tables % compact symbols for 1/2, etc. % microtypography % hyperlinks % simple URL typesetting \graphicspath{{../graphics/}} \newcommand{\Fig}[1]{Figure~\ref{#1}} \newcommand{\fig}[1]{Fig.~\ref{#1}} \newcommand{\Tab}[1]{Table~\ref{#1}} \newcommand{\tab}[1]{Tab.~\ref{#1}} \newcommand{\Eqn}[1]{Equation \eqref{#1}} \newcommand{\eqn}[1]{Eq.~\eqref{#1}} % Eq. (1.1) \newcommand{\eqnp}[1]{(Eq.~\ref{#1})} % (Eq. 
1.1) \renewcommand{\sec}[1]{Section~\ref{#1}} % Section 1 \newcommand{\ie}{i.\,e.~} \newcommand{\eg}{e.\,g.~} \newcommand{\wrt}{w.\,r.\,t.~} \newcommand{\Real}{\ensuremath{\mathbb R}} % Real numbers \newcommand{\Unit}{\ensuremath{\mathbb I}} % Unit Matrix \newcommand{\T}{\ensuremath{\top}} % Transpose \newcommand{\sigm}{\ensuremath{\text{sigm}}} % sigmoid \newcommand\Tstrut{\rule{0pt}{2.6ex}} \DeclareMathOperator*{\argmin}{arg\,min} \newcommand{\method}{EQL} %{\textcolor{green}{EQL}}%{EQL,ABFNet} \newcommand{\typ}{\ensuremath{I}} % unit type \newcommand{\D}{\ensuremath{\mathbf{D}}} % dataset \newcommand{\train}{\ensuremath{\text{Tr}}} % training \newcommand{\test}{\ensuremath{\text{T}}} % training \newcommand{\val}{\ensuremath{\text{V}}} % training \newcommand{\extra}{\ensuremath{\text{X}}} % training \newcommand{\restr}{\ensuremath{\text{XTr}}} % training \newcommand{\restrval}{\ensuremath{\text{XV}}} % training \newcommand{\layer}[1]{{(#1)}} % layer l \renewcommand{\l}{{\layer{l}}} % layer l \newcommand{\lm}{{\layer{l-1}}} % layer l-1 \usepackage[disable]{todonotes} \setlength{\marginparwidth}{3cm} \newcommand{\Geo}[2][inline]{\todo[#1,color=yellow!60,size=\scriptsize]{#2}} \newcommand{\Chl}[2][inline]{\todo[#1,color=green!70,size=\scriptsize]{#2}} \pdfinfo{ /Title (Extrapolation and learning equations) /Author (Georg Martius and Christoph H. Lampert)} \setcounter{secnumdepth}{0} \begin{document} \title{Extrapolation and learning equations} \author{% Georg Martius \& Christoph H. Lampert\\ IST Austria\\ Am Campus 1, 3400 Klosterneuburg, Austria\\ \texttt{\{gmartius,chl\}@ist.ac.at} } \maketitle \begin{abstract} In classical machine learning, regression is treated as a black box process of identifying a suitable function from a hypothesis set without attempting to gain insight into the mechanism connecting inputs and outputs. In the natural sciences, however, finding an interpretable function for a phenomenon is the prime goal as it allows to understand and generalize results. This paper proposes a novel type of function learning network, called equation learner (EQL), that can learn analytical expressions and is able to extrapolate to unseen domains. It is implemented as an end-to-end differentiable feed-forward network and allows for efficient gradient based training. Due to sparsity regularization concise interpretable expressions can be obtained. Often the true underlying source expression is identified. \end{abstract} \input{intro.tex} \input{methods.tex} \input{relatedwork.tex} \input{results.tex} \vspace*{-.2em} \section{Conclusions}\vspace*{-.2em} We presented a new network architecture called \method{} that can learn analytic expressions that typically occur in equations governing physical, in particular mechanical, systems. The network is fully differentiable, which allows end-to-end training using backpropagation. By sequencing $L_1$ regularization and fixing $L_0$ norm we achieve sparse representations with unbiased estimation of factors within the learned equations. We also introduce a model selection procedure specifically designed to select for good extrapolation quality by a multiobjective criterion based on validation error and sparsity. The proposed method is able to learn functional relations and extrapolate them to unseen parts of the data space, as we demonstrate by experiments on synthetic as well as real data. 
The approach learns concise functional forms that may provide insights into the relationships within the data, as we show on physical measurements of x-ray transition energies. The optimization problem is nontrivial and has many local minima. We have shown cases where the algorithm is not reliably finding the right equation but instead finds an approximation only, in which case extrapolation may be poor. If the origin of the data is not in the hypothesis class, \ie the underlying expression cannot be represented by the network, good extrapolation performance cannot be achieved. Thus it is important to increase the model class by incorporating more base functions which we will address in future work alongside the application to larger examples. We expect good scaling capabilities to larger systems due to the gradient based optimization. Apart from the extrapolation we also expect improved interpolation results in high-dimensional spaces, where data is less dense. \subsubsection*{Acknowledgments} GM received funding from the People Programme (Marie Curie Actions) of the European Union's Seventh Framework Programme (FP7/2007-2013) under REA grant agreement no.~[291734]. \begin{thebibliography}{24} \providecommand{\natexlab}[1]{#1} \providecommand{\url}[1]{\texttt{#1}} \expandafter\ifx\csname urlstyle\endcsname\relax \providecommand{\doi}[1]{doi: #1}\else \providecommand{\doi}{doi: \begingroup \urlstyle{rm}\Url}\fi \bibitem[Basak et~al.(2007)Basak, Pal, and Patranabis]{basak2007:SVR} D.~Basak, S.~Pal, and D.~C. Patranabis. \newblock Support vector regression. \newblock \emph{Neural Information Processing-Letters and Reviews}, 11\penalty0 (10):\penalty0 203--224, 2007. \bibitem[Ben-David et~al.(2010)Ben-David, Blitzer, Crammer, Kulesza, Pereira, and Vaughan]{ben2010theory} S.~Ben-David, J.~Blitzer, K.~Crammer, A.~Kulesza, F.~Pereira, and J.~W. Vaughan. \newblock A theory of learning from different domains. \newblock \emph{Machine Learning}, 79\penalty0 (1-2):\penalty0 151--175, 2010. \bibitem[Bengio(2009)]{bengio2009learning} Y.~Bengio. \newblock Learning deep architectures for {AI}. \newblock \emph{Foundations and Trends in Machine Learning}, 2\penalty0 (1):\penalty0 1--127, 2009. \bibitem[Bengio et~al.(2013)Bengio, Courville, and Vincent]{bengio2013representation} Y.~Bengio, A.~Courville, and P.~Vincent. \newblock Representation learning: A review and new perspectives. \newblock \emph{IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)}, 35\penalty0 (8):\penalty0 1798--1828, 2013. \bibitem[Bishop(1995)]{bishop1995neural} C.~M. Bishop. \newblock \emph{Neural networks for pattern recognition}. \newblock Oxford University Press, 1995. \bibitem[Broomhead and Lowe(1988)]{broomhead1988radial} D.~S. Broomhead and D.~Lowe. \newblock Radial basis functions, multi-variable functional interpolation and adaptive networks. \newblock Technical report, DTIC Document, 1988. \bibitem[Deslattes et~al.(2003)Deslattes, Kessler~Jr, Indelicato, De~Billy, Lindroth, and Anton]{Deslattes2003:XrayTransEnergies} R.~D. Deslattes, E.~G. Kessler~Jr, P.~Indelicato, L.~De~Billy, E.~Lindroth, and J.~Anton. \newblock X-ray transition energies: new approach to a comprehensive evaluation. \newblock \emph{Reviews of Modern Physics}, 75\penalty0 (1):\penalty0 35, 2003. \bibitem[Durbin and Rumelhart(1989)]{DurbinRumelhart1989:ProductUnits} R.~Durbin and D.~E. Rumelhart. \newblock Product units: A computationally powerful and biologically plausible extension to backpropagation networks. 
\newblock \emph{Neural Computation}, 1\penalty0 (1):\penalty0 133--142, Mar. 1989. \newblock ISSN 0899-7667. \newblock \doi{10.1162/neco.1989.1.1.133}. \newblock URL \url{http://dx.doi.org/10.1162/neco.1989.1.1.133}. \bibitem[Gy{\"o}rfi et~al.(2013)Gy{\"o}rfi, H{\"a}rdle, Sarda, and Vieu]{gyorfi2013nonparametric} L.~Gy{\"o}rfi, W.~H{\"a}rdle, P.~Sarda, and P.~Vieu. \newblock \emph{Nonparametric curve estimation from time series}, volume~60. \newblock Springer, 2013. \bibitem[Kingma and Ba(2015)]{KingmaBa2015:Adam} D.~Kingma and J.~Ba. \newblock Adam: A method for stochastic optimization. \newblock In \emph{in Proceedings of ICLR}, 2015. \bibitem[M{\"u}ller et~al.(1997)M{\"u}ller, Smola, R{\"a}tsch, Sch{\"o}lkopf, Kohlmorgen, and Vapnik]{muller1997predicting} K.-R. M{\"u}ller, A.~J. Smola, G.~R{\"a}tsch, B.~Sch{\"o}lkopf, J.~Kohlmorgen, and V.~Vapnik. \newblock Predicting time series with support vector machines. \newblock In \emph{Artificial Neural Networks (ICANN)}, pages 999--1004. Springer, 1997. \bibitem[Pearl(2000)]{Pearl2000} J.~Pearl. \newblock \emph{Causality}. \newblock Cambridge {U}niversity {P}ress, 2000. \bibitem[Peters et~al.(2014)Peters, Mooij, Janzing, and Sch{\"o}lkopf]{PetersMJS2014} J.~Peters, J.~Mooij, D.~Janzing, and B.~Sch{\"o}lkopf. \newblock Causal discovery with continuous additive noise models. \newblock \emph{Journal of Machine Learning Research (JMLR)}, 15:\penalty0 2009--2053, 2014. \bibitem[Poon and Domingos(2012)]{PoonDomingos2011:sum-product-networks} H.~Poon and P.~M. Domingos. \newblock Sum-product networks: {A} new deep architecture, 2012. \bibitem[Quionero-Candela et~al.(2009)Quionero-Candela, Sugiyama, Schwaighofer, and Lawrence]{quionero2009dataset} J.~Quionero-Candela, M.~Sugiyama, A.~Schwaighofer, and N.~D. Lawrence. \newblock \emph{Dataset shift in machine learning}. \newblock The MIT Press, 2009. \bibitem[Schmidt and Lipson(2009)]{SchmidtLipson2009:learnnaturallaws} M.~Schmidt and H.~Lipson. \newblock Distilling free-form natural laws from experimental data. \newblock \emph{Science}, 324\penalty0 (5923):\penalty0 81--85, 2009. \newblock ISSN 0036-8075. \newblock \doi{10.1126/science.1165893}. \newblock URL \url{http://science.sciencemag.org/content/324/5923/81}. \bibitem[Shin and Ghosh(1991)]{ShinGhosh1991:pi-sigma} Y.~Shin and J.~Ghosh. \newblock The pi-sigma network : An efficient higher-order neural network for pattern classification and function approximation. \newblock In \emph{in Proceedings of the International Joint Conference on Neural Networks}, pages 13--18, 1991. \bibitem[Smola and Sch{\"o}lkopf(2004)]{smola2004tutorial} A.~J. Smola and B.~Sch{\"o}lkopf. \newblock A tutorial on support vector regression. \newblock \emph{Statistics and computing}, 14\penalty0 (3):\penalty0 199--222, 2004. \bibitem[Specht(1991)]{specht1991general} D.~F. Specht. \newblock A general regression neural network. \newblock \emph{IEEE Transactions on Neural Networks (TNN)}, 2\penalty0 (6):\penalty0 568--576, 1991. \bibitem[{Theano Development Team}(2016)]{2016arXiv160502688short} {Theano Development Team}. \newblock {Theano: A {Python} framework for fast computation of mathematical expressions}. \newblock \emph{arXiv e-prints}, abs/1605.02688, May 2016. \newblock URL \url{http://arxiv.org/abs/1605.02688}. \bibitem[Tibshirani(1996)]{tibshirani1996regression} R.~Tibshirani. \newblock Regression shrinkage and selection via the lasso. \newblock \emph{Journal of the Royal Statistical Society. Series B (Methodological)}, pages 267--288, 1996. 
\bibitem[Wiener(1949)]{wiener1949extrapolation} N.~Wiener. \newblock \emph{Extrapolation, interpolation, and smoothing of stationary time series}, volume~2. \newblock The MIT Press, 1949. \bibitem[Williams and Rasmussen(2006)]{williams2006gaussian} C.~K.~I. Williams and C.~E. Rasmussen. \newblock \emph{Gaussian processes for machine learning}. \newblock The MIT Press, 2006. \bibitem[Zaremba et~al.(2014)Zaremba, Kurach, and Fergus]{ZarembaFergus2014:LearnMathIdentities} W.~Zaremba, K.~Kurach, and R.~Fergus. \newblock Learning to discover efficient mathematical identities. \newblock In Z.~Ghahramani, M.~Welling, C.~Cortes, N.~Lawrence, and K.~Weinberger, editors, \emph{Advances in Neural Information Processing Systems 27}, pages 1278--1286. Curran Associates, Inc., 2014. \end{thebibliography} \input{appendix.tex} \end{document} \appendix \section{Appendix} \section{A1: Model selection details}\label{sec:modelsel:app} \subsection{Quantifying sparsity} We actually want a measure of the complexity of the formula; however, since it is not clear what the right choice of measure is, we use sparsity instead, counting the number of active/used hidden units, denoted by $s$. For a given network $\phi$ we get \begin{align} s(\phi) = \sum_{l=1}^L\sum_{i=1}^k\Theta( |W^\l_{i,\cdot}| * |W^{\layer{l+1}}_{\cdot,i}| - 0.01)\,,\label{eqn:s} \end{align} where $\Theta$ is the Heaviside function and 0.01 is an arbitrary threshold. For the multiplication units the norms of the incoming weights for both inputs are added (omitted to avoid clutter in the formula). \subsection{Selection criteria} As stated in the main text, we strive to choose a model that is both simple and performs well on the validation set. Since the two quantities have different scales, we propose to compare models based on their rankings. Let $r^v(\phi)$ and $r^s(\phi)$ be the ranks of the network $\phi$ \wrt the validation error and the sparsity $s(\phi)$, respectively; then the network with minimal squared rank norm is selected: \begin{align} \argmin_\phi\left[ r^v(\phi)^2 + r^s(\phi)^2\right] \label{eqn:model:sel} \end{align} In \fig{fig:model:sel} the extrapolation performance of all considered networks for the kin2D-3 dataset is visualized as a function of validation error and sparsity. It becomes evident that the best-performing networks are both sparse and have a low validation error. \section{A2: Dependence on noise and number of data points}\label{sec:dep:noise-pts} In order to understand how the method depends on the amount of noise and the number of data points, we scan through the two parameters and present the empirical results in \fig{fig:dep:noise-pts}. In general, the method is robust to noise and, as expected, more noise can be compensated for by more data.
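To make the selection criterion above concrete, the following is a minimal NumPy sketch (not the released implementation) of the sparsity count in \eqn{eqn:s} and the rank-based choice in \eqn{eqn:model:sel}; for readability it assumes layers with only unary units (so that per-unit incoming and outgoing weights line up directly), and the helper names are illustrative.
\begin{verbatim}
# Minimal sketch (not the released code): sparsity proxy and rank-based
# model selection. Each candidate network is assumed to be given as a list
# of weight matrices plus a precomputed validation error; only unary units
# are considered here, so in- and out-going weights of a unit align directly.
import numpy as np

def count_active_units(weights, thresh=0.01):
    active = 0
    for W_in, W_out in zip(weights[:-1], weights[1:]):
        in_mass = np.abs(W_in).sum(axis=1)    # weight mass entering each unit
        out_mass = np.abs(W_out).sum(axis=0)  # weight mass leaving each unit
        active += int(np.sum(in_mass * out_mass > thresh))
    return active

def select_model(candidates):
    """candidates: list of (validation_error, weights) pairs."""
    v = np.array([err for err, _ in candidates])
    s = np.array([count_active_units(w) for _, w in candidates])
    r_v = np.argsort(np.argsort(v))   # rank 0 = lowest validation error
    r_s = np.argsort(np.argsort(s))   # rank 0 = sparsest network
    return int(np.argmin(r_v**2 + r_s**2))
\end{verbatim}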
Extrapolation and learning equations
1610.02995
Table 4: Interpolation and extrapolation performance for cart-pendulum dynamics. See Tab. 1 for details. Note that predicting 0 would yield an error of 0.96 on the far test set.
[ "[EMPTY]", "interpolation", "extrapol. (near)", "extrapol. (far)" ]
[ [ "EQL", "0.0103±0.0000", "0.0621±0.0208", "0.180±0.056" ], [ "MLP", "0.0101±0.0000", "0.0184±0.0008", "0.195±0.006" ], [ "SVR", "0.0118", "0.227", "0.639" ] ]
We set up a regression problem with four outputs from the corresponding system of ordinary differential equations, where y1 = ẋ1 = x3, y2 = ẋ2 = x4 and y3 = (−x1 − 0.01 x3 + x4^2 sin(x2) + 0.1 x4 cos(x2) + 9.81 sin(x2) cos(x2)) / (sin^2(x2) + 1), y4 = (−0.2 x4 − 19.62 sin(x2) + x1 cos(x2) + 0.01 x3 cos(x2) − x4^2 sin(x2) cos(x2)) / (sin^2(x2) + 1). The formulas contain divisions which are not included in our architecture due to their singularities. Incorporating them in a principled manner is left for future work. Thus, the cart-pendulum dynamics is outside the hypothesis class. In this case we cannot expect great extrapolation performance, and this is confirmed by the experiments. In Fig. the extrapolation performance is illustrated by slicing through the input space. The near-extrapolation performance is still acceptable for both EQL and MLP, but as soon as the training region is left further, even the best instances differ considerably from the true values; see also the numeric results in Tab. The SVR also performs poorly in the near-extrapolation range. Inspecting the learned expressions, we find that the sigmoid functions are rarely used.
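For reference, the four cart-pendulum targets above can be evaluated with a short NumPy sketch; this is only an illustration of the regression setup (the function name and the batched state convention are assumptions, not part of the original code).
\begin{verbatim}
# Illustrative evaluation of the cart-pendulum targets y1..y4 from a batch
# of state vectors x = (x1, x2, x3, x4) as defined above. Not original code.
import numpy as np

def cart_pendulum_targets(x):
    """x: array of shape (N, 4); returns targets of shape (N, 4)."""
    x1, x2, x3, x4 = x.T
    denom = np.sin(x2)**2 + 1.0
    y1 = x3
    y2 = x4
    y3 = (-x1 - 0.01*x3 + x4**2*np.sin(x2) + 0.1*x4*np.cos(x2)
          + 9.81*np.sin(x2)*np.cos(x2)) / denom
    y4 = (-0.2*x4 - 19.62*np.sin(x2) + x1*np.cos(x2) + 0.01*x3*np.cos(x2)
          - x4**2*np.sin(x2)*np.cos(x2)) / denom
    return np.stack([y1, y2, y3, y4], axis=1)
\end{verbatim}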
\section{Introduction}\label{sec:intro} The quality of a model is typically measured by its ability to generalize from a training set to previously unseen data from the same distribution. In regression tasks generalization essentially boils down to interpolation if the training data is sufficiently dense. As long as models are selected correctly, \ie in a way to not overfit the data, the regression problem is well understood and can -- at least conceptually -- be considered solved. % However, when working with data from real-world devices, \eg controlling a robotic arm, interpolation might not be sufficient. It could happen that future data lies outside of the training domain, \eg when the arm is temporarily operated outside of its specifications. For the sake of robustness and safety it is desirable in such a case to have a regression model that continues to make good predictions, or at least does not fail catastrophically. This setting, which we call \emph{extrapolation generalization}, is the topic of the present paper. We are particularly interested in regression tasks for systems that can be described by real-valued analytic expression, \eg mechanical systems such as a pendulum or a robotic arm. These are typically governed by a highly nonlinear function but it is nevertheless possible, in principle, to infer their behavior on an extrapolation domain from their behavior elsewhere. We make two main contributions: 1) a new type of network that can learn analytical expressions and is able to extrapolate to unseen domains and 2) a model selection strategy tailored to the extrapolation setting. The following section describes the setting of regression and extrapolation. Afterwards we introduce our method and discuss the architecture, its training, and its relation to prior art. %\sec{sec:method}) We present our results in the Section \emph{Experimental evaluation} %\sec{sec:results} and close with conclusions. \subsection{Related work}% In the field of machine learning, regression is often treated as a black box process of identifying a suitable real-valued function from a hypothesis set, \eg a reproducing kernel Hilbert space for Gaussian Processes Regression (GPR)~\cite{williams2006gaussian} or Support Vector Regression (SVR)~\cite{smola2004tutorial}, or a multi-layer network of suitable expressive power~\cite{specht1991general}. The goal is to find a prediction function that leads to a small expected error on future data, not necessarily to gain insight into the mechanism of how the output values derive from the inputs. The goal of finding an interpretable function is rather common in the natural sciences, such as biology, where high noise levels and strong inter-system variability often make it important to rely on external prior knowledge, and finding a ``biologically plausible'' model is often preferable over finding one that makes the highest prediction accuracy. As a consequence, model classes are often highly constrained, \eg allowing only for sparse linear models. The task of learning a true, nonlinear, functional dependence from observing a physical system, has received little attention in the machine learning literature so far, but forms the basis of the field of \emph{system identification}. There, typically the functional form of the system is known and only the parameters have to be identified. Another approach is to model the time evolution with autoregressive models or higher order convolution integrals (Volterra series) but learning analytic formulas is not common. 
\emph{Causal learning} is an area of recent research that aims at identifying a causal relation between multiple observables, % phenomena, which are typically the result of a physical process. Classically, this tasks reduces to finding a minimal graphical model based only on tests of conditional independence~\cite{Pearl2000}. Although very successful in some fields, this classical approach only provides a factorization of the problem, separating causes and effects, but it leaves the exact functional dependency unexplained. %\Geo[]{actually they explain it in terms of the conditional probability.}. Recent extensions of causal learning can take a functional view, but typically do not constrain the regression functions to physically plausible ones, but rather constrain the noise distributions~\cite{PetersMJS2014}. % one expects to observe The topic of learning a regression function with emphasis on \emph{extrapolation} performance has not been studied much in the literature so far. Existing work on time series prediction deals with extrapolation in the temporal domain, \ie predict the next value(s)~\cite{wiener1949extrapolation}. By our nomenclature, this is typically rather an interpolation task, when the prediction is based on the behaviour of the series at earlier time steps but with similar value distribution~\cite{muller1997predicting,gyorfi2013nonparametric}. Extrapolating in the data domain implies that the data distribution at prediction time will differ from the data distribution at training time. This is traditionally called the \emph{domain adaptation} setting. In particular, since we assume a common labeling function, our setting would fall under the \emph{covariate shift} setting~\cite{quionero2009dataset}. Unfortunately, this connection is not particularly useful for our problem. As domain adaptation typically does not make additional assumptions about how the data distribution may change, existing methods need access to some unlabeled data from the test distribution already at training time~\cite{ben2010theory}. In %,ben2010impossibility our setting this is not possible to obtain. On the technical level, \method{} networks are an instance of general feed-forward networks for function approximation~\cite{bishop1995neural}. In contrast to recent trends towards \emph{deep learning}~\cite{bengio2009learning,bengio2013representation}, our goal is not to learn any data representation, but to learn a function which compactly represents the input-output relation and generalizes between different regions of the data space, like a physical formula. Structurally, \method{} networks resemble \emph{sum-product networks (SPNs)}~\cite{PoonDomingos2011:sum-product-networks} and \emph{Pi-Sigma networks (PSNs)}~\cite{ShinGhosh1991:pi-sigma}, in the sense that both are based on directed acyclic graphs with computational units that allows for summation and multiplication. Otherwise, SPNs are different as they act as efficient alternative to probabilistic graphical models for representing probability distributions, whereas \method{} networks are meant for the classical task of function approximation. In PSNs each output needs to be passed through multiplicative units, whereas in \method{} multiplication is optional. Finding equations for observations is also known as symbolic regression where a search is performed in a certain function space, typically done with evolutionary computation. 
With these techniques it is possible to discover physical laws such as invariants and conserved quantities~\cite{SchmidtLipson2009:learnnaturallaws}. Unfortunately, the computational complexity/search time explodes for larger expressions and high-dimensional problems. We attempt to circumvent this by modeling the task as a gradient based optimization problem. Related to symbolic regression is the search for mathematical identities, for instance to find computationally more efficient expressions. In \cite{ZarembaFergus2014:LearnMathIdentities} this was done using machine learning to overcome the potentially exponential search space. \section{Experimental evaluation}\label{sec:results} We demonstrate the ability of \method{} to learn physically inspired models with good extrapolation quality by experiments on synthetic and real data. For this, we implemented the network training and evaluation procedure in \emph{python} based on the \emph{theano} framework~\cite{2016arXiv160502688short}. We will make the code for training and evaluation public after acceptance of the manuscript. %Todo \paragraph{Pendulum.} We first present the results of learning the equations of motion for a very simple physical system: a pendulum. The state space of a pendulum is $X=\Real\times\Real$, where the first value is the angle of the pole in radians and the second value is the angular velocity. In the physics literature, these are usually denoted as $(\theta,\omega)$, but for our purposes, we call them $(x_1,x_2)$ in order to keep the notation consistent between experiments. The pendulum's dynamic behavior is governed by the following two ordinary differential equations: \begin{equation} \dot x_1 = x_2 \qquad\qquad\text{and}\qquad\qquad \dot x_2 = -g \sin(x_1)\,,\label{eqn:pend} \end{equation} where $g=9.81$ is the gravitational constant. We divide each equation by $g$ in order to balance the output scales and form a regression problem with two output values, $y_1=\frac{1}{g}x_2$ and $y_2=-\sin(x_1)$. As training data, we sample 1000 points uniformly in the hypercube {\small $[-h,h] \times [-h,h]$} for $h=2$. Note that this domain contains more than half of a sine period, so it should be sufficient to identify the analytic expression. The target values are disturbed by Gaussian noise with standard deviation $\sigma=0.01$. We also define three test sets, each with 1000 points. The \emph{interpolation test set} is sampled from the same data distribution as the training set. The \emph{extrapolation (near) test set} contains data sampled uniformly from the data domain {\small $[-\frac32 h,\frac32 h] \times [-\frac32 h,\frac32 h]\setminus [-h,h] \times [-h,h]$}, which is relatively near the training region, and the \emph{extrapolation (far) test set} extends the region further outside: {\small $[-2h,2h] \times [-2h,2h]\setminus [-h,h] \times [-h,h]$}. We train a 2-layer \method{} and perform model selection among the hyper-parameters: the regularization strength {\small $\lambda\in10^{\{-7,-6.3,-6,-5.3,-5,-4.3,-4,-3.3,-3\}}$} and the number of nodes {\small $\frac 1 4 u=v\in\{1,3,5\}$}. All weights are randomly initialized from a normal distribution with {\small $\sigma = \sqrt{1/(k'+d)}$}. The unit selection $\typ{}$ is set such that all unit types occur equally often. To ensure convergence we chose $T=10000$ epochs.
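As an illustration of this data setup, a minimal NumPy sketch for generating the pendulum training set and the two extrapolation test sets could look as follows; the rejection sampling of the ring-shaped extrapolation domains and all function names are assumptions made for the example, not the original implementation.
\begin{verbatim}
# Illustrative generation of the pendulum data sets (not the original code).
import numpy as np

G = 9.81

def pendulum_targets(x, noise, rng):
    """x: (N, 2) array of (angle, angular velocity)."""
    y = np.stack([x[:, 1] / G, -np.sin(x[:, 0])], axis=1)
    return y + rng.normal(0.0, noise, y.shape) if noise > 0 else y

def sample_box(n, half_width, rng):
    return rng.uniform(-half_width, half_width, size=(n, 2))

def sample_ring(n, inner, outer, rng):
    """Uniform samples in [-outer, outer]^2 outside [-inner, inner]^2."""
    pts = []
    while len(pts) < n:
        p = rng.uniform(-outer, outer, size=2)
        if np.max(np.abs(p)) > inner:
            pts.append(p)
    return np.array(pts)

rng = np.random.default_rng(0)
h = 2.0
x_train = sample_box(1000, h, rng)
y_train = pendulum_targets(x_train, noise=0.01, rng=rng)
x_near = sample_ring(1000, h, 1.5 * h, rng)   # extrapolation (near)
x_far  = sample_ring(1000, h, 2.0 * h, rng)   # extrapolation (far)
\end{verbatim}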
We compare our algorithm to a standard multilayer perceptron (MLP) with $\tanh$ activation functions and possible hyperparameters: $\lambda$ as for \method, number of layers {\small $L\in\{2,3\}$}, and number of neurons {\small $k\in\{5,10,20\}$}. A second baseline is given by epsilon support vector regression (SVR)~\cite{basak2007:SVR} with two hyperparameters {\small $C\in10^{\{-3,-2,-1,0,1,2,3,3.5\}}$} and {\small $\epsilon \in 10^{\{-3,-2,-1,0\}}$} using a radial basis function kernel with width {\small $\gamma\in \{0.05,0.1,0.2,0.5,1.0\}$}. Numeric results are reported in \tab{tab:pend:results}. As expected, all models are able to interpolate well, with a test error on the order of the noise level ($\sigma=0.01$). For extrapolation, however, the performance differs between the approaches. For the MLP the prediction quality decreases quickly when leaving the training domain. SVR remains a bit better in the near extrapolation range but also fails catastrophically on the far extrapolation data. \method, on the other hand, extrapolates well, both near and far away from the training domain. The reasons can be seen in Figure~\ref{fig:pend}: while the MLP and SVR simply learn a function that interpolates the training values, \method{} finds the correct functional expression and therefore predicts the correct values for any input data. \paragraph{Double pendulum kinematics.} The second system we consider is a real double pendulum for which the forward kinematics should be learned. For that we use recorded trajectories of a real double pendulum \cite{SchmidtLipson2009:learnnaturallaws}. The task here is to learn the position of the tips of the double pendulum segments from the given joint angles ($x_1,x_2$). These positions were not measured, so we supply them by the following formula: $y_1=\cos(x_1), y_2=\cos(x_1)+\cos(x_1+x_2), y_3=\sin(x_1), y_4=\sin(x_1)+\sin(x_1+x_2)$, where $(y_1,y_3)$ and $(y_2,y_4)$ correspond to the x-y-coordinates of the first and second end-point, respectively. The dataset contains two short trajectories. The first covers only part of the domain (input as well as output) and consists of 819 samples, of which 10\% were used as validation set (randomly sampled), see \fig{fig:dpk}(a). The second trajectory corresponds to a behavior with several spins of both pendulum segments such that a much larger domain is covered. Nevertheless the angle values are confined to $[-\pi,\pi]$. We use this trajectory as the extrapolation test set. The trajectory and the outputs of our method are shown in \fig{fig:dpk}(b). The prediction for unseen domains is perfect, which is also illustrated in a systematic sweep, see \fig{fig:dpk}(c). The performance of the MLP is off already near the training domain. SVR is a bit better, but still does not give usable predictions for the test data, see also the root mean square error in \fig{fig:dpk}(d). Model selection is performed to determine $\lambda$ as above, $u=v\in\{3,5\}$ (MLP: $k\in\{5,10,20\}$), and the number of layers $L\in\{2,3\}$. \paragraph{Robotic arms.} A more complicated task is to learn the forward kinematics of multi-segment robotic arms. We consider planar arms with 3, 4, and 5 joints, where each segment is 0.5 units long. For training, the arm is controlled by sinusoidal joint target angles with amplitude in $[-\nicefrac{\pi}{2},\nicefrac{\pi}{2}]$, each joint with a different frequency. The numbers of data points are 3000, 6000, and 18000 for the 3-, 4-, and 5-segment arms, respectively, with added noise as above.
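The forward-kinematics targets used here can be sketched in a few lines of NumPy; the double-pendulum part follows the formula given above, while the planar-arm helper assumes the standard kinematic chain with relative joint angles and segment length 0.5 (the exact convention of the kin datasets is not spelled out here, so treat it as an illustration).
\begin{verbatim}
# Illustrative forward kinematics (not the original code).
import numpy as np

def double_pendulum_tips(x1, x2):
    """Tip coordinates from joint angles, matching y1..y4 above:
    (y1, y3) is the first tip, (y2, y4) the second."""
    y1 = np.cos(x1)
    y2 = np.cos(x1) + np.cos(x1 + x2)
    y3 = np.sin(x1)
    y4 = np.sin(x1) + np.sin(x1 + x2)
    return y1, y2, y3, y4

def planar_arm_points(angles, segment_length=0.5):
    """angles: (N, J) relative joint angles; returns (N, J, 2) coordinates
    of all segment end-points (assumed convention, illustration only)."""
    cum = np.cumsum(angles, axis=1)
    dx = segment_length * np.cos(cum)
    dy = segment_length * np.sin(cum)
    return np.stack([np.cumsum(dx, axis=1), np.cumsum(dy, axis=1)], axis=2)
\end{verbatim}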
For testing extrapolation performance the amplitude $[-\pi,\pi]$ was used. Note that the extrapolation space is much larger than the training space. The task is to predict the coordinates of the end-effector of the arms (\emph{kin-3-end}, \emph{kin-4-end}) and the coordinates of all segment positions \emph{kin-5-all}. The numerical results, see \tab{tab:kin}, shows that our method is able to extrapolate in these cases. Model selection as above with $u=v\in\{10,20\}$, (MLP: $k\in\{10,50\}$) and layer number $L\in\{2,3,4\}$. To illustrate the dependence on the amount of noise and the number of available training points we provide a quantification in Appendix A2. In short, increasing noise can be compensated by increasing amount of data to keep the performance. % linear relationship? \paragraph{Learning complex formula.} In order to find out whether \method{} can also learn more complicated formulas, we consider three examples with four-dimensional input and one-dimensional output: \begin{align} y &= \nicefrac{1}{3} \left(\sin(\pi x_1) + \sin\left(2 \pi x_2 + \nicefrac{\pi}{8}\right)+x_2 - x_3 x_4 \right)&\text{F-1}\label{eqn:syn1}\\ y &= \nicefrac{1}{3} \left(\sin(\pi x_1) + x_2 \cos(2\pi x_1 + \nicefrac{\pi}{4}) + x_3-x_4^2\right) &\text{F-2}\label{eqn:syn2}\\ y &= \nicefrac{1}{3} \left( (1+x_2) \sin(\pi x_1) + x_2 x_3 x_4\right) &\text{F-3}\label{eqn:syn3} \end{align} The first equation requires only one hidden layer to be represented. The second equation and third equation should requires two hidden layers. In particular, F-2 contains a product of $x_2$ and $\cos$ and F-3 contains a product of three terms, and we use it to test if our restriction to only pairwise product units causes problems for more complex target functions. We follow the same procedure as in the pendulum case for building training and test sets, though with $h=1$ as input data range. We use 10000 points for training set and validation set (90\%-10\% split) and 5000 points for each of the test sets. Model selection for \method{} is performed as above using the number of layers {\small $L\in{2,3,4}$}. The number of units is set to $\frac{1}{4}u=v=10$. For the MLP, we select $L$ and $\lambda$ from the same set as above as well as {\small $k\in\{10,30\}$}. \Tab{tab:syn:results} shows the numerical results. Again, all methods are able to interpolate, but only \method{} achieves good extrapolation results, except for equation F-3. There it settles in 9 out of 10 cases into a local minimum and finds only an approximating equation that deviates outside the training domain. Interestingly, if we restrict the base functions to not contain cosine, the algorithm finds the right formula. Note, the sparsity of the correct formula is lower than those of the approximation, so it should be selected if found. Figure~\fig{fig:syn} illustrates the performance and the learned networks visually. It shows one of the model-selected instances for each case. For F-1 the correct formula was identified, so correct predictions can be made even far outside the training region (much further than illustrated). For F-2 the network provided us with a surprise, because it yields good extrapolation performance with only one hidden layer! How can it implement $x_2\cos(a x_1+b)$? Apparently it uses $1.21 \cos(a x_1 + \pi + b + 0.41 x_2) + \sin(a x_1 + b + 0.41 x_2)$ which is a good approximation for $x_2 \in [-2,2]$. The sparsity of this solution is $5$ whereas the true solution needs at least $6$, which explains its selection. 
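For completeness, the three synthetic targets F-1 to F-3 can be generated directly from the definitions above; the following NumPy sketch (with assumed function names, not the original code) mirrors the sampling protocol of the pendulum experiment with $h=1$.
\begin{verbatim}
# Illustrative generation of the F-1, F-2, F-3 data sets (not original code).
import numpy as np

def f1(x):
    return (np.sin(np.pi*x[:, 0]) + np.sin(2*np.pi*x[:, 1] + np.pi/8)
            + x[:, 1] - x[:, 2]*x[:, 3]) / 3.0

def f2(x):
    return (np.sin(np.pi*x[:, 0]) + x[:, 1]*np.cos(2*np.pi*x[:, 0] + np.pi/4)
            + x[:, 2] - x[:, 3]**2) / 3.0

def f3(x):
    return ((1 + x[:, 1])*np.sin(np.pi*x[:, 0]) + x[:, 1]*x[:, 2]*x[:, 3]) / 3.0

rng = np.random.default_rng(0)
h = 1.0
x = rng.uniform(-h, h, size=(10000, 4))
targets = {"F-1": f1(x), "F-2": f2(x), "F-3": f3(x)}
\end{verbatim}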
For F-3 the suboptimal local minima uses some strange way of approximating $(1+x_2)\sin(x_1)$ using $(x_1 + x_1 x_2)\cos(\beta x_1)$, which deviates fast, however the true solution would be sparser but was not found. Only if we remove cosine from the base functions we get always the correct formula, see \fig{fig:syn}(c). \paragraph{X-Ray transition energies.} As a further example we consider data measured in atomic physics. When shooting electron beams onto atoms one can excite them and they consequently emit x-ray radiation with characteristic peak energies. For each element/isotope these energies are different as they correspond to the potential difference between the electron shells, such that one can identify elements in a probe this way. The data is taken from~\cite{Deslattes2003:XrayTransEnergies}, where we consider one specific transition, called the $K\,\alpha_2$ line, because it was measured for all elements. The true relationship between atomic number $Z$ and transition energies is complicated, as it involves many body interactions and no closed-form solution exists. Nevertheless we can find out which relationships our system proposes. It is known that the main relationship is $K\,\alpha_2 \propto Z^2$ according to Moseley's law. Further correction terms for elements with larger $Z$ are potentially of higher order. We have data for elements with $10\le Z \le 100$, which is split into training/validation sets in the range $[10,91]$ (70/10 data points) and extrapolation test set in the interval $[92,100]$ (14 data points because of isotops). Since we have so little data we evaluate the performance for 10 independent training/validation splits. The data is scaled to lie in $[0,1]$, \ie $x= Z/100$ and $y=K\alpha_2/100000$. Model selection is here based on validation error only. The selection for sparsity and validation error only yields the $Z^2$ relationship. Mini-batch size is 2 here and $T=50000$ was used. \Fig{fig:xray} presents the data, the predictions, the learned formulas and the numerical results. \method{} and SVR achieve similar performance and MLP is significantly worse. However, \method{} also yields interpretable formulas, see \fig{fig:xray}(e) that can be used to gain insights into the potential relationship. \subsection{Poor extrapolation out of model class --- cart-pendulum system} Let us now go beyond our assumptions and consider cases where the true target function is not an element of the hypothesis set. Consider a pendulum attached to a cart that can move horizontally along a rail but that is attached to a spring damper system, see \fig{fig:cp}(a). The system is parametrized by 4 unknowns: the position of the cart, the velocity of the cart, the angle of the pendulum and the angular velocity of the pendulum. We combine these into a four-dimensional vector $x=(x_1,\dots,x_4)$. We set up a regression problem with four outputs from the corresponding system of ordinary differential equations where $y_1 = \dot x_1 = x_3$, $y_2 = \dot x_2 = x_4$ and \begin{align} y_3&= \frac{-x_1-0.01 x_3+x_4^2 \sin\left(x_2\right)+0.1 x_4 \cos \left(x_2\right)+9.81 \sin \left(x_2\right) \cos \left(x_2\right)}{\sin ^2\left(x_2\right)+1}\label{eqn:cp}, \\ y_4&= \frac{-0.2 x_4 - 19.62 \sin \left(x_2\right) + x_1 \cos \left(x_2\right) + 0.01 x_3 \cos \left(x_2\right) - x_4^2 \sin \left(x_2\right)\cos \left(x_2\right)} {\sin^2\left(x_2\right)+1}.\nonumber \end{align} The formulas contain divisions which are not included in our architecture due to their singularities. 
To incorporate them in a principled manner is left for future work. Thus, the cart-pendulum dynamics is outside the hypothesis class. In this case we {\bf cannot} expect great extrapolation performance and this is confirmed by the experiments. In \fig{fig:cp}(b,c) the extrapolation performance is illustrated by slicing through the input space. The near extrapolation performance is still acceptable for both \method{} and MLP, but as soon as the training region is left further even the best instances differ considerably from the true values, see also the numeric results in \tab{tab:cp:results}. The SVR is performing poorly also for near extrapolation range. Inspecting the learned expressions we find that the sigmoid functions are rarely used. \section{Regression and extrapolation}\label{sec:setting} We consider a multivariate regression problem with a training set $\{(x_1,y_1),\dots,(x_N,y_N)\}$ with $x \in \Real^n$, $y\in \Real^m$. % sampled from a data distribution $p(x,y)$. Because our main interest lies on extrapolation in the context of learning the dynamics of physical systems we assume the data originates from an unknown analytical function (or system of functions), $\phi:\Real^n\to\Real^m$ with additive zero-mean noise, $\xi$, \ie $y=\phi(x)+\xi$ and $\mathbb{E}\xi=0$. The function $\phi$ may, for instance, reflect a system of ordinary differential equations that govern the movements of a robot arm or the like. The general task is to learn a function $\psi:\Real^n\to\Real^m$ that approximates the true functional relation as well as possible in the squared loss sense, \ie achieves minimal expected error $\mathbb{E}\|\psi(x) - \phi(x)\|^2$. In practice, we only have particular examples of the function values available and measure the quality of predicting in terms of the empirical error on training or test data $D$, %=[(x_i,y_i)]$, \begin{align} E(D)&=\frac{1}{N}\sum^{N}_{i=1}\|\psi(x_i) - y_i\|^2\,. \label{eqn:error} \end{align} If training and test data are sampled from the same distribution then we speak about an \emph{interpolation} problem. In the \emph{extrapolation} setting the training data is assumed to cover only a limited range of the data domain. In the example of the robot arm, for instance, the training may be restricted to a certain joint angle range or maximal velocity. For testing we want to make predictions about the unseen domains, \eg for higher velocities. As usual, we split the data that is available at training time into a part for model training %, $\D^\train$, and a part for validation or model selection. %, $\D^\val$. \section{Learning a network for function extrapolation}\label{sec:method}%Learning physical equations The main model we propose is a multi-layered feed-forward network with computational units specifically designed for the extrapolation regression tasks. For an $L$-layer network, there are $L-1$ hidden layers, each consisting of a linear mapping followed by non-linear transformations. For simplicity of notation, we explain the network as if each hidden layer had the same structure ($k'$ inputs, $k$ outputs). In practice, each layer can be designed independently of the others, of course, as long as input/output dimensions match. The linear mapping at level $l$ maps the $k'$-dimensional input $y^{\lm}$ to the $d$-dimensional intermediate representation $z$ given by \begin{align} z^\l &= W^\l y^\lm + b^\l, \end{align} where $y^\lm$ is the output of the previous layer, with the convention $y^{(0)}=x$. 
The weight matrix $W^\l\in \Real^{d \times k'}$ and the bias vector $b^\l\in\Real^{d}$ are free parameters that are learned during training. The non-linear transformation contains $u$ \emph{unary units}, $f_i:\Real\to\Real$, for $i=1,\dots,u$, and $v$ \emph{binary units}, $g_j:\Real\times\Real\to\Real$ for $j=1,\dots,v$. Their outputs are concatenated to form the layer output \begin{align} y^\l &:= \Big(f_1(z^\l_1),f_2(z^\l_2),\dots,f_{u}(z^\l_{u}),\nonumber\\ & \qquad g_{1}(z^\l_{u+1},z^\l_{u+2}),\dots,g_{v}(z^\l_{u+2v-1},z^\l_{u+2v}) \Big)\,. \end{align} In total, the nonlinear stage has $k = u + v$ outputs and $d = u + 2 v$ inputs. The unary units, $f_1,\dots,f_u$ receive the respective component, $z_1,\dots,z_u$ as inputs, and each unit may be one of the following base functions as specified in a fixed type parameter $\typ_i\in\{0,1,2,3\}$ \begin{align} f_i(z_i) &:= \begin{cases} z_i & \text{ if } \typ_i=0,\\ \sin(z_i) & \text{ if } \typ_i=1,\\ \cos(z_i) & \text{ if } \typ_i=2,\\ \sigm(z_i) & \text{ if } \typ_i=3, \end{cases}&\text{ for } i=1,\dots,u, \end{align} where $\sigm(z)=\frac{1}{1+e^{-z}}$ is the standard sigmoid function. The binary units, $g_1,\dots,g_v$ receive the remaining component, $z_{u+1},\dots,z_{u+2v}$, as input in pairs of two. They are \emph{multiplication units} that compute the product of their two input values: \begin{align} g_j(z_{u+2j-1}, z_{u+2j}) &:= z_{u+2j-1} \cdot z_{u+2j}&\text{ for }j=1,\dots,v. \end{align} Finally, the $L$-th and last layer computes the regression values by a linear read-out \begin{align} y^{\layer{L}} &:= W^{\layer{L}} y^{\layer{L-1}} + b^{\layer{L}}. \end{align} The architecture is depicted in \fig{fig:network}. We call the new architecture Equation Learner (\method{}) and denote the function it defines by $\psi$. \subsection{Discussion of the architecture} The proposed network architecture differs in two main aspects from typical feed-forward networks: the existence of multiplication units and the possibility of \emph{sine} and \emph{cosine} as nonlinearities for the unary units. Both design choices are motivated by our objective of learning a system of equations that govern a physical system and can extrapolate to new parts of the input space. \emph{Sigmoid} nonlinearities are the canonical choice of \emph{activation function} for \emph{artificial neural networks} (ANN) and proved to be successful. In fact, we include sigmoids in our architecture, making it a super class of ANNs. However, they were typically disabled by the training procedure corresponding to their absence in the considered physical equations. Other, predominantly local nonlinearities, in particular \emph{radial basis functions}~\cite{broomhead1988radial} we do not include, since one cannot expect them to extrapolate at all. Further nonlinearities, such as \emph{(square) roots} and \emph{logarithms}, could in principle be useful for learning physical equations, but they pose problems because their domains of definition is restricted to positive inputs. We leave the task of incorporating them in a principled way to future work. The ability to multiply two values is a second crucial component of our network architecture. Again, it is inspired by the typical form of physical equations, where multiplication of components is arguably second common basic operation after addition (which the linear layers can perform). 
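To make the layer definition above concrete, here is a minimal NumPy sketch of the forward pass of one \method{} hidden layer and of a full network with a linear read-out; it is a simplified illustration (plain NumPy instead of the Theano implementation, with assumed function and variable names), not the authors' code.
\begin{verbatim}
# Simplified forward pass of EQL (illustration only, not the original code).
import numpy as np

UNARY = {0: lambda z: z,                       # identity
         1: np.sin, 2: np.cos,                 # sine, cosine
         3: lambda z: 1.0/(1.0 + np.exp(-z))}  # sigmoid

def eql_layer(y_prev, W, b, unit_types, v):
    """y_prev: (N, k'); W: (d, k'), b: (d,) with d = u + 2v;
    unit_types: length-u list of codes for the unary units;
    v: number of pairwise multiplication units. Returns (N, u + v)."""
    z = y_prev @ W.T + b
    u = len(unit_types)
    unary = [UNARY[t](z[:, i]) for i, t in enumerate(unit_types)]
    prods = [z[:, u + 2*j] * z[:, u + 2*j + 1] for j in range(v)]
    return np.stack(unary + prods, axis=1)

def eql_forward(x, hidden, W_out, b_out):
    """hidden: list of (W, b, unit_types, v) tuples for the hidden layers;
    the last stage is the linear read-out."""
    y = x
    for W, b, types, v in hidden:
        y = eql_layer(y, W, b, types, v)
    return y @ W_out.T + b_out
\end{verbatim}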
Multiplication was introduced into neural networks long ago as product-units~\cite{DurbinRumelhart1989:ProductUnits} and Pi-Sigma-unit~\cite{ShinGhosh1991:pi-sigma}. The product-units have large fan-in that compute products over all their inputs, potentiated by the respective weights. The result is typically the behavior of a high order polynomial, which are powerful function approximators, but rarely occur in physical equations. Polynomials are also known to require careful fine-tuning in order not to overfit, which makes them a risky choice for the purpose of extrapolation. The Pi-Sigma units are multiplication units with a fixed number of factors and our multiplication units are a special for 2 factors. We find that multiplying just two values at a time is well adjusted to the task we aim at, as it allows to control the maximal degree of the learned polynomial by the depth of the network. Finally, each layer of the network contains unary units that act as \emph{identity} maps, which in particular gives the network the option to learn functions with smaller number of nonlinearities than the total network depths. \subsection{Network training} The \method{} is fully differentiable in its free parameters $\theta=\{W^{(1)},\dots,W^{(L)},b^{(1)},\dots,b^{(L)}\}$, which allows us to train it in an end-to-end fashion using back-propagation. We adopt a Lasso-like objective~\cite{tibshirani1996regression}, \begin{align} L(D)&=\frac{1}{N}\sum^{|D|}_{i=1}\|\psi(x_i) - y_i\|^2 + \lambda \sum_{l=1}^L\big|W^\l\big|_1\,,\label{eqn:loss} \end{align} that is, a linear combination of $L_2$ loss and $L_1$ regularization, and apply a stochastic gradient descent algorithm with mini-batches and Adam~\cite{KingmaBa2015:Adam} for calculating the updates: \begin{align} \theta_{t+1} &= \theta_{t} + \text{Adam}\left(\frac{\partial L(D_{(t)})}{\partial \theta}, \alpha\right), \end{align} where $D_{(t)}$ denotes the current mini-batch and $\alpha$ is the stepsize parameter. The choice of Adam is not critical and standard stochastic gradient descent also works. In all numerical experiments we use $\alpha=0.001$ and a mini-batch size of 20. The role of the $L_1$ regularization is to encourage networks with sparse connections, matching the intuition that a typical formula describing a physical system contains only a small number of terms, each operating only on a few variables. However, in a non-convex setting where local minima are likely to occur, this type of regularization can have an undesirable side-effect: during the course of the optimization the weights hardly ever change their sign. The reason is that the regularization leads to a constant rate of weight decay whereas the counteracting derivative with respect to the square loss is proportional to the backpropagated error signal and the input to the unit. The latter contributions are often smaller along paths with small weights, such that many weights go to zero and stay there. Additionally, any non-zero regularization term causes the learned weights to reflect a trade-off between minimizing the loss and the regularizer. Although, this can lead to improved generalization, it also results in a systematic underestimation of the function values. Therefore, we follow a hybrid regularization strategy: at the beginning of the training procedure ($t<t_1$) we use no regularization ($\lambda=0$), such that parameters can vary freely and reach reasonable starting points. 
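Before turning to the later phases of the schedule, the objective of \eqn{eqn:loss} and the phase-wise handling of $\lambda$ can be summarized in a short sketch (plain NumPy with assumed helper names; the original Theano training loop is not reproduced here):
\begin{verbatim}
# Sketch of the training objective and the phased regularization schedule
# (breakpoints t1 = T/4 and t2 = 19T/20 as given in the text). Illustration only.
import numpy as np

def eql_objective(pred, target, weights, lam):
    """Squared loss plus L1 penalty on all weight matrices."""
    mse = np.mean(np.sum((pred - target)**2, axis=1))
    return mse + lam * sum(np.abs(W).sum() for W in weights)

def lambda_at(t, T, lam):
    """Phase 1 (t < T/4): no regularization; phase 2: L1 active;
    phase 3 (t > 19T/20): L1 off, small weights are clamped instead."""
    return 0.0 if (t < T/4 or t > 19*T/20) else lam

def clamp_small_weights(weights, thresh=1e-3):
    """Applied during phase 3 to keep the L0 norm of the weights fixed."""
    return [np.where(np.abs(W) < thresh, 0.0, W) for W in weights]
\end{verbatim}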
Afterwards, we switch on the regularization by setting $\lambda$ to a nonzero value, which has the effect that a sparse network structure emerges. Finally, for the last steps of the training ($t>t_2$) we disable $L_1$ regularization ($\lambda=0$) but enforce the same $L_0$ norm of the weights. This is achieved by keeping all weights $w\in W^{1\dots L}$ that are close to 0 at 0, \ie if $|w|<0.001$ then $w=0$ during the remaining epochs. This ensures that the learned model finds not only a function of the right parametric form, but also fits the observed values as closely as possible. We observed that the exact choice of breakpoints $t_1$ and $t_2$ is not critical. In practice, we use $t_1 = \frac{1}{4} T$ and $t_2=\frac{19}{20} T$, where $T$ is total number of update steps. $T$ was selected large enough to ensure convergence. Note, that convergence to a sparse structure is important here, so early stopping will be disadvantageous. \subsection{Model selection for extrapolation}\label{sec:modelsel} \method{} networks have a number of hyper-parameters, \eg the number of layers, the number of units and the regularization constant. Unfortunately, standard techniques for model selection, such as evaluation on a hold-out set or cross-validation, will not be optimal for our purpose, since they rely on interpolation quality. In order to extrapolate the network has to find the ``right'' formula. But how can we tell? Using Occams razor principle: the simplest formula is most likely the right one. Intuitively, if we have the choice between $cos(x)$ and its truncated power series approximation $1-x^2/2 + x^4/24$, the first one is preferred. We use the number of active hidden units in the network as a proxy for the complexity of the formula, see Appendix A1 %\sec{sec:modelsel:app} for details. One could also think of differentiating between the unit types. In any case, this argumentation is only correct if the model explains the data well, \ie it has a low validation error. So we have a dual objective to minimize, which we solve by ranking the instances \wrt validation error and sparsity and select the one with the smallest $L_2$ norm (in rank-space), see \eqn{eqn:model:sel}. Furthermore, the optimization process may only find a local optimum of the training objective, which depends on the initialization of the parameters. We use independent runs to quantify expected performance deviations. \documentclass[a4paper]{article} % For LaTeX2e \usepackage[margin=2.5cm,top=2cm]{geometry} \usepackage[square,sort]{natbib} \bibliographystyle{abbrvnat} \renewcommand{\cite}[1]{\citep{#1}} \usepackage[utf8]{inputenc} % allow utf-8 input \usepackage[T1]{fontenc} % use 8-bit T1 fonts % professional-quality tables % compact symbols for 1/2, etc. % microtypography % hyperlinks % simple URL typesetting \graphicspath{{../graphics/}} \newcommand{\Fig}[1]{Figure~\ref{#1}} \newcommand{\fig}[1]{Fig.~\ref{#1}} \newcommand{\Tab}[1]{Table~\ref{#1}} \newcommand{\tab}[1]{Tab.~\ref{#1}} \newcommand{\Eqn}[1]{Equation \eqref{#1}} \newcommand{\eqn}[1]{Eq.~\eqref{#1}} % Eq. (1.1) \newcommand{\eqnp}[1]{(Eq.~\ref{#1})} % (Eq. 
1.1) \renewcommand{\sec}[1]{Section~\ref{#1}} % Section 1 \newcommand{\ie}{i.\,e.~} \newcommand{\eg}{e.\,g.~} \newcommand{\wrt}{w.\,r.\,t.~} \newcommand{\Real}{\ensuremath{\mathbb R}} % Real numbers \newcommand{\Unit}{\ensuremath{\mathbb I}} % Unit Matrix \newcommand{\T}{\ensuremath{\top}} % Transpose \newcommand{\sigm}{\ensuremath{\text{sigm}}} % sigmoid \newcommand\Tstrut{\rule{0pt}{2.6ex}} \DeclareMathOperator*{\argmin}{arg\,min} \newcommand{\method}{EQL} %{\textcolor{green}{EQL}}%{EQL,ABFNet} \newcommand{\typ}{\ensuremath{I}} % unit type \newcommand{\D}{\ensuremath{\mathbf{D}}} % dataset \newcommand{\train}{\ensuremath{\text{Tr}}} % training \newcommand{\test}{\ensuremath{\text{T}}} % training \newcommand{\val}{\ensuremath{\text{V}}} % training \newcommand{\extra}{\ensuremath{\text{X}}} % training \newcommand{\restr}{\ensuremath{\text{XTr}}} % training \newcommand{\restrval}{\ensuremath{\text{XV}}} % training \newcommand{\layer}[1]{{(#1)}} % layer l \renewcommand{\l}{{\layer{l}}} % layer l \newcommand{\lm}{{\layer{l-1}}} % layer l-1 \usepackage[disable]{todonotes} \setlength{\marginparwidth}{3cm} \newcommand{\Geo}[2][inline]{\todo[#1,color=yellow!60,size=\scriptsize]{#2}} \newcommand{\Chl}[2][inline]{\todo[#1,color=green!70,size=\scriptsize]{#2}} \pdfinfo{ /Title (Extrapolation and learning equations) /Author (Georg Martius and Christoph H. Lampert)} \setcounter{secnumdepth}{0} \begin{document} \title{Extrapolation and learning equations} \author{% Georg Martius \& Christoph H. Lampert\\ IST Austria\\ Am Campus 1, 3400 Klosterneuburg, Austria\\ \texttt{\{gmartius,chl\}@ist.ac.at} } \maketitle \begin{abstract} In classical machine learning, regression is treated as a black box process of identifying a suitable function from a hypothesis set without attempting to gain insight into the mechanism connecting inputs and outputs. In the natural sciences, however, finding an interpretable function for a phenomenon is the prime goal as it allows to understand and generalize results. This paper proposes a novel type of function learning network, called equation learner (EQL), that can learn analytical expressions and is able to extrapolate to unseen domains. It is implemented as an end-to-end differentiable feed-forward network and allows for efficient gradient based training. Due to sparsity regularization concise interpretable expressions can be obtained. Often the true underlying source expression is identified. \end{abstract} \input{intro.tex} \input{methods.tex} \input{relatedwork.tex} \input{results.tex} \vspace*{-.2em} \section{Conclusions}\vspace*{-.2em} We presented a new network architecture called \method{} that can learn analytic expressions that typically occur in equations governing physical, in particular mechanical, systems. The network is fully differentiable, which allows end-to-end training using backpropagation. By sequencing $L_1$ regularization and fixing $L_0$ norm we achieve sparse representations with unbiased estimation of factors within the learned equations. We also introduce a model selection procedure specifically designed to select for good extrapolation quality by a multiobjective criterion based on validation error and sparsity. The proposed method is able to learn functional relations and extrapolate them to unseen parts of the data space, as we demonstrate by experiments on synthetic as well as real data. 
The approach learns concise functional forms that may provide insights into the relationships within the data, as we show on physical measurements of x-ray transition energies. The optimization problem is nontrivial and has many local minima. We have shown cases where the algorithm is not reliably finding the right equation but instead finds an approximation only, in which case extrapolation may be poor. If the origin of the data is not in the hypothesis class, \ie the underlying expression cannot be represented by the network, good extrapolation performance cannot be achieved. Thus it is important to increase the model class by incorporating more base functions which we will address in future work alongside the application to larger examples. We expect good scaling capabilities to larger systems due to the gradient based optimization. Apart from the extrapolation we also expect improved interpolation results in high-dimensional spaces, where data is less dense. \subsubsection*{Acknowledgments} GM received funding from the People Programme (Marie Curie Actions) of the European Union's Seventh Framework Programme (FP7/2007-2013) under REA grant agreement no.~[291734]. \begin{thebibliography}{24} \providecommand{\natexlab}[1]{#1} \providecommand{\url}[1]{\texttt{#1}} \expandafter\ifx\csname urlstyle\endcsname\relax \providecommand{\doi}[1]{doi: #1}\else \providecommand{\doi}{doi: \begingroup \urlstyle{rm}\Url}\fi \bibitem[Basak et~al.(2007)Basak, Pal, and Patranabis]{basak2007:SVR} D.~Basak, S.~Pal, and D.~C. Patranabis. \newblock Support vector regression. \newblock \emph{Neural Information Processing-Letters and Reviews}, 11\penalty0 (10):\penalty0 203--224, 2007. \bibitem[Ben-David et~al.(2010)Ben-David, Blitzer, Crammer, Kulesza, Pereira, and Vaughan]{ben2010theory} S.~Ben-David, J.~Blitzer, K.~Crammer, A.~Kulesza, F.~Pereira, and J.~W. Vaughan. \newblock A theory of learning from different domains. \newblock \emph{Machine Learning}, 79\penalty0 (1-2):\penalty0 151--175, 2010. \bibitem[Bengio(2009)]{bengio2009learning} Y.~Bengio. \newblock Learning deep architectures for {AI}. \newblock \emph{Foundations and Trends in Machine Learning}, 2\penalty0 (1):\penalty0 1--127, 2009. \bibitem[Bengio et~al.(2013)Bengio, Courville, and Vincent]{bengio2013representation} Y.~Bengio, A.~Courville, and P.~Vincent. \newblock Representation learning: A review and new perspectives. \newblock \emph{IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)}, 35\penalty0 (8):\penalty0 1798--1828, 2013. \bibitem[Bishop(1995)]{bishop1995neural} C.~M. Bishop. \newblock \emph{Neural networks for pattern recognition}. \newblock Oxford University Press, 1995. \bibitem[Broomhead and Lowe(1988)]{broomhead1988radial} D.~S. Broomhead and D.~Lowe. \newblock Radial basis functions, multi-variable functional interpolation and adaptive networks. \newblock Technical report, DTIC Document, 1988. \bibitem[Deslattes et~al.(2003)Deslattes, Kessler~Jr, Indelicato, De~Billy, Lindroth, and Anton]{Deslattes2003:XrayTransEnergies} R.~D. Deslattes, E.~G. Kessler~Jr, P.~Indelicato, L.~De~Billy, E.~Lindroth, and J.~Anton. \newblock X-ray transition energies: new approach to a comprehensive evaluation. \newblock \emph{Reviews of Modern Physics}, 75\penalty0 (1):\penalty0 35, 2003. \bibitem[Durbin and Rumelhart(1989)]{DurbinRumelhart1989:ProductUnits} R.~Durbin and D.~E. Rumelhart. \newblock Product units: A computationally powerful and biologically plausible extension to backpropagation networks. 
\newblock \emph{Neural Computation}, 1\penalty0 (1):\penalty0 133--142, Mar. 1989. \newblock ISSN 0899-7667. \newblock \doi{10.1162/neco.1989.1.1.133}. \newblock URL \url{http://dx.doi.org/10.1162/neco.1989.1.1.133}. \bibitem[Gy{\"o}rfi et~al.(2013)Gy{\"o}rfi, H{\"a}rdle, Sarda, and Vieu]{gyorfi2013nonparametric} L.~Gy{\"o}rfi, W.~H{\"a}rdle, P.~Sarda, and P.~Vieu. \newblock \emph{Nonparametric curve estimation from time series}, volume~60. \newblock Springer, 2013. \bibitem[Kingma and Ba(2015)]{KingmaBa2015:Adam} D.~Kingma and J.~Ba. \newblock Adam: A method for stochastic optimization. \newblock In \emph{in Proceedings of ICLR}, 2015. \bibitem[M{\"u}ller et~al.(1997)M{\"u}ller, Smola, R{\"a}tsch, Sch{\"o}lkopf, Kohlmorgen, and Vapnik]{muller1997predicting} K.-R. M{\"u}ller, A.~J. Smola, G.~R{\"a}tsch, B.~Sch{\"o}lkopf, J.~Kohlmorgen, and V.~Vapnik. \newblock Predicting time series with support vector machines. \newblock In \emph{Artificial Neural Networks (ICANN)}, pages 999--1004. Springer, 1997. \bibitem[Pearl(2000)]{Pearl2000} J.~Pearl. \newblock \emph{Causality}. \newblock Cambridge {U}niversity {P}ress, 2000. \bibitem[Peters et~al.(2014)Peters, Mooij, Janzing, and Sch{\"o}lkopf]{PetersMJS2014} J.~Peters, J.~Mooij, D.~Janzing, and B.~Sch{\"o}lkopf. \newblock Causal discovery with continuous additive noise models. \newblock \emph{Journal of Machine Learning Research (JMLR)}, 15:\penalty0 2009--2053, 2014. \bibitem[Poon and Domingos(2012)]{PoonDomingos2011:sum-product-networks} H.~Poon and P.~M. Domingos. \newblock Sum-product networks: {A} new deep architecture, 2012. \bibitem[Quionero-Candela et~al.(2009)Quionero-Candela, Sugiyama, Schwaighofer, and Lawrence]{quionero2009dataset} J.~Quionero-Candela, M.~Sugiyama, A.~Schwaighofer, and N.~D. Lawrence. \newblock \emph{Dataset shift in machine learning}. \newblock The MIT Press, 2009. \bibitem[Schmidt and Lipson(2009)]{SchmidtLipson2009:learnnaturallaws} M.~Schmidt and H.~Lipson. \newblock Distilling free-form natural laws from experimental data. \newblock \emph{Science}, 324\penalty0 (5923):\penalty0 81--85, 2009. \newblock ISSN 0036-8075. \newblock \doi{10.1126/science.1165893}. \newblock URL \url{http://science.sciencemag.org/content/324/5923/81}. \bibitem[Shin and Ghosh(1991)]{ShinGhosh1991:pi-sigma} Y.~Shin and J.~Ghosh. \newblock The pi-sigma network : An efficient higher-order neural network for pattern classification and function approximation. \newblock In \emph{in Proceedings of the International Joint Conference on Neural Networks}, pages 13--18, 1991. \bibitem[Smola and Sch{\"o}lkopf(2004)]{smola2004tutorial} A.~J. Smola and B.~Sch{\"o}lkopf. \newblock A tutorial on support vector regression. \newblock \emph{Statistics and computing}, 14\penalty0 (3):\penalty0 199--222, 2004. \bibitem[Specht(1991)]{specht1991general} D.~F. Specht. \newblock A general regression neural network. \newblock \emph{IEEE Transactions on Neural Networks (TNN)}, 2\penalty0 (6):\penalty0 568--576, 1991. \bibitem[{Theano Development Team}(2016)]{2016arXiv160502688short} {Theano Development Team}. \newblock {Theano: A {Python} framework for fast computation of mathematical expressions}. \newblock \emph{arXiv e-prints}, abs/1605.02688, May 2016. \newblock URL \url{http://arxiv.org/abs/1605.02688}. \bibitem[Tibshirani(1996)]{tibshirani1996regression} R.~Tibshirani. \newblock Regression shrinkage and selection via the lasso. \newblock \emph{Journal of the Royal Statistical Society. Series B (Methodological)}, pages 267--288, 1996. 
\bibitem[Wiener(1949)]{wiener1949extrapolation} N.~Wiener. \newblock \emph{Extrapolation, interpolation, and smoothing of stationary time series}, volume~2. \newblock The MIT Press, 1949. \bibitem[Williams and Rasmussen(2006)]{williams2006gaussian} C.~K.~I. Williams and C.~E. Rasmussen. \newblock \emph{Gaussian processes for machine learning}. \newblock The MIT Press, 2006. \bibitem[Zaremba et~al.(2014)Zaremba, Kurach, and Fergus]{ZarembaFergus2014:LearnMathIdentities} W.~Zaremba, K.~Kurach, and R.~Fergus. \newblock Learning to discover efficient mathematical identities. \newblock In Z.~Ghahramani, M.~Welling, C.~Cortes, N.~Lawrence, and K.~Weinberger, editors, \emph{Advances in Neural Information Processing Systems 27}, pages 1278--1286. Curran Associates, Inc., 2014. \end{thebibliography} \input{appendix.tex} \end{document} \appendix \section{Appendix} \section{A1: Model selection details}\label{sec:modelsel:app} \subsection{Quantifying sparsity} We actually want a measure of complexity of the formula, however, since it is not clear what is the right choice of a measure, we use the sparsity instead, by counting the number of active/used hidden units denoted by $s$. For a given network $phi$ we get \begin{align} s(\phi) = \sum_{l=1}^L\sum_{i=1}^k\Theta( |W^\l_{i,\cdot}| * |W^{\layer{l+1}}_{\cdot,i}| - 0.01)\,,\label{eqn:s} \end{align} where $\Theta$ is the heavyside function and 0.01 is an arbitrary threshold. For the multiplication units the norm of the incoming weights for both inputs are added (omitted to avoid clutter in the formula). \subsection{Selection criteria} As stated in the main text, we strive to choose the model that is both simple and has good performance in terms of the validation set. Since both quantities have different scales, we proposed to choose them based on their ranking. Let $r^v(\phi)$ and $r^s(\phi)$ be the ranks of the network $\phi$ \wrt the validation error and sparsity $s(\phi)$respectively, then the network with minimal squared rank norm is selected: \begin{align} \argmin_\phi\left[ r^v(\phi)^2 + r^s(\phi)^2\right] \label{eqn:model:sel} \end{align} In \fig{fig:model:sel} the extrapolation performance of all considered networks for the kin2D-3 dataset is visualized in dependence of validation error and the sparsity. It becomes evident that the best performing networks are both sparse and have a low validation error. \section{A2: Dependence on noise and number of data points}\label{sec:dep:noise-pts} In order to understand how the method depends on the amount of noise and the number of datapoints we scan through the two parameters and present the empirical results in \fig{fig:dep:noise-pts}. In general the method is robust to noise and as expected, more noise can be compensated by more data.
Tensorial Mixture Models
1610.04167
(a) MNIST with i.i.d. corruption
[ "[width=5em, height= 1.7em] [ITALIC] ptrain [ITALIC] ptest", "0.25", "0.50", "0.75", "0.90", "0.95", "0.99" ]
[ [ "0.25", "98.9", "97.8", "78.9", "32.4", "17.6", "11.0" ], [ "0.50", "[BOLD] 99.1", "98.6", "94.6", "68.1", "37.9", "12.9" ], [ "0.75", "98.9", "[BOLD] 98.7", "[BOLD] 97.2", "83.9", "56.4", "16.7" ], [ "0.90", "97.6", "97.5", "96.7", "[BOLD] 89.0", "71.0", "21.3" ], [ "0.95", "95.7", "95.6", "94.8", "88.3", "[BOLD] 74.0", "30.5" ], [ "0.99", "87.3", "86.7", "85.0", "78.2", "66.2", "[BOLD] 31.3" ], [ "i.i.d. (rand)", "98.7", "98.4", "97.0", "87.6", "70.6", "29.6" ], [ "rects (rand)", "98.2", "95.7", "83.2", "54.7", "35.8", "17.5" ] ]
Heading on to multi-class prediction under missing data, we focus on the challenging “blind” setting, where the missingness distribution at test time is completely unknown during training. We simulate two kinds of MAR missingness distributions: (i) an i.i.d. mask with a fixed probability p∈[0,1] of dropping each pixel, and (ii) a mask composed of the union of n (possibly overlapping) rectangles of width and height W, each positioned randomly in the image (uniform distribution). We first demonstrate that purely discriminative classifiers cannot generalize to all missingness distributions, by training the standard LeNet ConvNet on one set of distributions and then testing it on others (see the corresponding figure). Next, we present our main results. We compare our model against three different approaches. First, as a baseline, we use K-Nearest Neighbors (KNN) to vote on the most likely class, augmented with an l2-metric that disregards missing coordinates. KNN actually scores better than most methods, but its missingness-aware distance metric prevents the common memory and runtime optimizations, making it impractical for real-world settings. Second, we test various data-imputation methods, ranging from simply filling missing pixels with zeros or their mean, to modern generative models suited to inpainting. Data imputation is followed by a ConvNet prediction on the completed image. In general, we find that this approach only works well when few pixels are missing. Finally, we test generative classifiers other than our model, including MP-DBM and SPN (sum-product networks). MP-DBM is notable for being limited to approximations, and its results show the importance of using exact inference instead. For SPN, we have augmented the model from Poon and Domingos with a class variable and trained it to maximize the joint probability. The inferior performance of SPN suggests that the structure of TMMs, which are in fact a special case, is advantageous. Due to limitations of available public code and time, not all methods were tested on all datasets and distributions. See the corresponding figure for the complete results.
\newcommand*{\NIPS}{} \newcommand*{\CAMREADY}{} \newcommand*{\ARXIV}{} \ifdefined\ICML \documentclass{article} \ifdefined\CAMREADY \usepackage[accepted]{icml2017} \else \fi \ifdefined\ARXIV \makeatletter \renewcommand{\ICML@appearing}{} \makeatother \fi \fi \ifdefined\NIPS \documentclass{article} \PassOptionsToPackage{numbers}{natbib} \ifdefined\CAMREADY \usepackage[final]{nips_2017} \else \fi \ifdefined\ARXIV \makeatletter \renewcommand{\@noticestring}{} \makeatother \fi \fi \usepackage[titletoc,title]{appendix} \newtheorem{definition}{Definition} \newtheorem{lemma}{Lemma} \newtheorem{corollary}{Corollary} \newtheorem{theorem}{Theorem} \newtheorem{proposition}{Proposition} \newtheorem{assumption}{Assumption} \newtheorem{example}{Example} \newtheorem{remark}{Remark} \newtheorem{claim}{Claim} \newtheorem{fact}{Fact} \newcommand\todo[1]{\textcolor{red}{(TODO: #1)}} \newcommand{\h}{{\mathbf h}} \newcommand{\m}{{\mathbf m}} \newcommand{\x}{{\mathbf x}} \newcommand{\y}{{\mathbf y}} \newcommand{\z}{{\mathbf z}} \newcommand{\q}{{\mathbf q}} \newcommand{\uu}{{\mathbf u}} \newcommand{\vv}{{\mathbf v}} \newcommand{\w}{{\mathbf w}} \newcommand{\s}{{\mathbf s}} \newcommand{\e}{{\mathbf e}} \newcommand{\aaa}{{\mathbf a}} \newcommand{\bb}{{\mathbf b}} \newcommand{\cc}{{\mathbf c}} \newcommand{\dd}{{\mathbf d}} \newcommand{\p}{{\mathbf p}} \newcommand{\K}{{\mathbf K}} \newcommand{\Z}{{\mathbf Z}} \newcommand{\1}{{\mathbf 1}} \newcommand{\0}{{\mathbf 0}} \newcommand{\A}{{\mathcal A}} \newcommand{\B}{{\mathcal B}} \newcommand{\D}{{\mathcal D}} \newcommand{\E}{{\mathcal E}} \newcommand{\F}{{\mathcal F}} \newcommand{\G}{{\mathcal G}} \newcommand{\HH}{{\mathcal H}} \newcommand{\KK}{{\mathcal K}} \newcommand{\M}{{\mathcal M}} \newcommand{\PP}{{\mathbb P}} \newcommand{\T}{{\mathcal T}} \newcommand{\X}{{\mathcal X}} \newcommand{\Y}{{\mathcal Y}} \newcommand{\OO}{{\mathcal O}} \newcommand{\I}{{\mathcal I}} \newcommand{\C}{{\mathbb C}} \newcommand{\EE}{{\mathbb E}} \newcommand{\R}{{\mathbb R}} \newcommand{\N}{{\mathbb N}} \newcommand{\NN}{{\mathcal N}} \newcommand{\Q}{{\mathcal Q}} \newcommand{\nores}{\multicolumn{1}{c}{--}} \newcommand{\alphabf}{{\boldsymbol{\alpha}}} \newcommand{\betabf}{{\boldsymbol{\beta}}} \newcommand{\gammabf}{{\boldsymbol{\gamma}}} \newcommand{\lambdabf}{{\boldsymbol{\lambda}}} \newcommand{\mubf}{{\boldsymbol{\mu}}} \newcommand{\sigmabf}{{\boldsymbol{\sigma}}} \newcommand{\psibf}{{\boldsymbol{\psi}}} \newcommand{\abs}[1]{\left\lvert#1 \right\rvert} \newcommand{\norm}[1]{\left\|#1 \right\|} \newcommand{\rank}[1]{\textrm{rank}\left( #1 \right)} \newcommand{\indc}[1]{\mathbbm{1}\left[#1\right]} \newcommand{\setprod}[2]{\underset{#1}{{\overset{#2}{\times}}}} \newcommand{\inprod}[2] {\left\langle{#1},{#2}\right\rangle} \newcommand{\shortminus}{\scalebox{1}[1]{-}} \newcommand{\shortcdots}{\scalebox{0.5}[1]{\text{$\cdots$}}} \newcommand{\shorteq}{\scalebox{0.8}[1]{\text{$=$}}} \DeclareMathOperator*{\argmax}{argmax} \DeclareMathOperator*{\argmin}{argmin} \DeclareMathOperator*{\prob}{\mathbb{P}} \newcommand{\mexu}[2] {\underset{#2}{MEX_{#1}}} \renewcommand{\bibsection}{} \ifdefined\CAMREADY \newcommand{\githuburl}[1]{\url{https://github.com/HUJI-Deep/#1}} \else \newcommand{\githuburl}[1]{\url{https://<anonymized>}} \fi \newcommand\nadav[1]{\textcolor{blue}{(Nadav: #1)}} \newcommand\ronen[1]{\textcolor{magenta}{(Ronen: #1)}} \newcommand\gapaftersection{\vspace{0mm}} \newcommand\gapaftersubsection{\vspace{0mm}} \newcommand\gapbeforesection{\vspace{0mm}} \newcommand\gapbeforesubsection{\vspace{0mm}} 
\ifdefined\ICML \icmltitlerunning{Tensorial Mixture Models} \fi \begin{document} \ifdefined\ICML \twocolumn[ \icmltitle{Tensorial Mixture Models} \begin{icmlauthorlist} \icmlauthor{Or Sharir}{huji} \icmlauthor{Ronen Tamari}{huji} \icmlauthor{Nadav Cohen}{huji} \icmlauthor{Amnon Shashua}{huji} \end{icmlauthorlist} \icmlaffiliation{huji}{Hebrew University of Jerusalem, Israel} \icmlcorrespondingauthor{Or Sharir}{or.sharir@cs.huji.ac.il} \icmlkeywords{Deep Learning, Generative Models, Tractable Inference, Missing Data, Tensors} \vskip 0.3in ] \printAffiliationsAndNotice{} \fi \ifdefined\NIPS \title{Tensorial Mixture Models} \author{ Or~Sharir\\ Department of Computer Science\\ The Hebrew University of Jerusalem\\ Israel \\ \texttt{or.sharir@cs.huji.ac.il} \\ \And Ronen~Tamari\\ Department of Computer Science\\ The Hebrew University of Jerusalem\\ Israel \\ \texttt{ronent@cs.huji.ac.il} \\ \And Nadav~Cohen\\ Department of Computer Science\\ The Hebrew University of Jerusalem\\ Israel \\ \texttt{cohennadav@cs.huji.ac.il} \\ \And Amnon~Shashua\\ Department of Computer Science\\ The Hebrew University of Jerusalem\\ Israel \\ \texttt{shashua@cs.huji.ac.il} \\ } \maketitle \fi \begin{abstract} Casting neural networks in generative frameworks is a highly sought-after endeavor these days. Contemporary methods, such as Generative Adversarial Networks, capture some of the generative capabilities, but not all. In particular, they lack the ability of tractable marginalization, and thus are not suitable for many tasks. Other methods, based on arithmetic circuits and sum-product networks, do allow tractable marginalization, but their performance is challenged by the need to learn the structure of a circuit. Building on the tractability of arithmetic circuits, we leverage concepts from tensor analysis, and derive a family of generative models we call Tensorial Mixture Models (TMMs). TMMs assume a simple convolutional network structure, and in addition, lend themselves to theoretical analyses that allow comprehensive understanding of the relation between their structure and their expressive properties. We thus obtain a generative model that is tractable on one hand, and on the other hand, allows effective representation of rich distributions in an easily controlled manner. These two capabilities are brought together in the task of classification under missing data, where TMMs deliver state of the art accuracies with seamless implementation and design. \end{abstract} \section{Introduction} \label{sec:intro} \gapaftersection There have been many attempts in recent years to marry generative models with neural networks, including successful methods, such as Generative Adversarial Networks~\citep{Goodfellow:2014td}, Variational Auto-Encoders~\citep{Kingma:2013tz}, NADE~\citep{JMLR:v17:16-272}, and PixelRNN~\citep{vandenOord:2016um}. Though each of the above methods has demonstrated its usefulness on some tasks, it remains unclear whether their advantage strictly lies in their generative nature or in some other attribute. More broadly, we ask whether combining generative models with neural networks could lead to methods that have a {clear advantage} over purely discriminative models. On the most fundamental level, if $X$ stands for an instance and~$Y$ for its class, generative models learn $\PP(X,Y)$, from which we can also infer $\PP(Y|X)$, while discriminative models learn only $\PP(Y|X)$. It might not be immediately apparent if this sole difference leads to any advantage.
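For concreteness, the inference alluded to here is a direct application of Bayes' rule (a standard identity, stated only to make the comparison explicit):
\begin{align*}
\PP(Y{=}y|X) \;=\; \frac{\PP(X,Y{=}y)}{\sum\nolimits_{y'} \PP(X,Y{=}y')},
\end{align*}
so a model of the joint distribution immediately yields the conditional used for classification, whereas the converse does not hold.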
In~\citet{Ng:2001wg}, this question was studied with respect to the sample complexity, proving that in \emph{some cases} it can be significantly lower in favor of the generative classifier. We wish to highlight a more clear-cut case, by examining the problem of classification under missing data~--~where the values of some of the entries of $X$ are unknown at prediction time. Under these settings, discriminative classifiers typically rely on some form of data imputation, i.e. filling missing values by some auxiliary method prior to prediction. Generative classifiers, on the other hand, are naturally suited to handle missing values through marginalization~--~effectively assessing every possible completion of the missing values. Moreover, under mild assumptions, this method is optimal \emph{regardless of the process by which values become missing} (see sec.~\ref{sec:missing_data}). It is evident that such application of generative models assumes we can efficiently and exactly compute $\PP(X,Y)$, a process known as \emph{tractable inference}. Moreover, it assumes we may efficiently marginalize over any subset of $X$, a procedure we refer to as \emph{tractable marginalization}. Not all generative models have both of these properties, and specifically not the ones mentioned in the beginning of this section. Known models that do possess these properties, e.g. Latent Tree Model~\citep{Mourad:2013kz}, have other limitations. A detailed discussion can be found in sec.~\ref{sec:related_works}, but in broad terms, all known generative models suffer from one of the following shortcomings: (i) they are insufficiently expressive to model high-dimensional data (images, audio, etc.), (ii) they require explicitly designing all the dependencies of the data, or (iii)~they do not have tractable marginalization. Models based on neural networks typically solve~(i) and~(ii), but are incapable of~(iii), while more classical methods, e.g. mixture models, solve~(iii) but suffer from~(i) and~(ii). There is a long history of specifying tractable generative models through arithmetic circuits and sum-product networks~\citep{Darwiche:2003hx,Poon:2012vd}~--~computational graphs comprised solely of product and weighted sum nodes. To address the shortcomings above, we take a similar approach, but go one step further and leverage tensor analysis to distill it to a specific family of models we call Tensorial Mixture Models. A Tensorial Mixture Model assumes a convolutional network structure, but as opposed to previous methods tying generative models with neural networks, lends itself to theoretical analyses that allow a thorough understanding of the relation between its structure and its expressive properties. We thus obtain a generative model that is tractable on one hand, and on the other hand, allows effective representation of rich distributions in an easily controlled manner. \gapbeforesection \section{Tensorial Mixture Models} \label{sec:model} \gapaftersection One of the simplest types of tractable generative models are mixture models, where the probability distribution is defined as the convex combination of $M$ mixing components $\{\PP(\x|d;\theta_d)\}_{d=1}^M$ (e.g. Normal distributions): $\PP(\x) = \sum\nolimits_{d=1}^M \PP(d) \PP(\x|d;\theta_d)$. Mixture models are very easy to learn, and many of them are able to approximate any probability distribution, given a sufficient number of components, rendering them suitable for a variety of tasks.
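As a minimal illustration of the tractability just described (our own sketch, not part of the model definition; it assumes diagonal-Gaussian components and uses SciPy, with arbitrary identifier names), both the density of such a mixture and its marginal over any subset of coordinates can be evaluated in closed form:
\begin{verbatim}
import numpy as np
from scipy.stats import norm

# toy diagonal-Gaussian mixture over R^2: P(x) = sum_d P(d) P(x|d)
priors = np.array([0.3, 0.7])                # P(d)
means  = np.array([[0.0, 0.0], [2.0, 1.0]])  # mu_d
stds   = np.array([[1.0, 1.0], [0.5, 2.0]])  # diagonal sigma_d

def density(x, observed=None):
    """Density of the observed coordinates: marginalising a diagonal
    Gaussian over missing coordinates simply drops them from the product."""
    obs = np.arange(len(x)) if observed is None else np.asarray(observed)
    comp = [norm.pdf(x[obs], means[d, obs], stds[d, obs]).prod()
            for d in range(len(priors))]
    return float(np.dot(priors, comp))

print(density(np.array([1.0, 0.5])))                # full density
print(density(np.array([1.0, 0.5]), observed=[0]))  # marginal over x_1
\end{verbatim}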
The disadvantage of classic mixture models is that they do not scale well to high-dimensional data (``curse of dimensionality''). To address this challenge, we extend mixture models, leveraging the fact that many high-dimensional domains (e.g.~images) are typically comprised of small, simple local structures. We represent a high-dimensional instance as $X = (\x_1,\ldots,\x_N)$~--~an $N$-length sequence of $s$-dimensional vectors $\x_1,\ldots,\x_N~\in~\R^s$ (called \emph{local structures}). $X$ is typically thought of as an image, where each local structure $\x_i$ corresponds to a local patch from that image, with no two patches overlapping. We assume that the distribution of individual local structures can be efficiently modeled by some mixture model with a few components, which, for natural image patches, was shown to be the case~\citep{Zoran:2011jn}. Formally, for all $i \in [N]$ there exists $d_i \in [M]$ such that $\x_i \sim P(\x|d_i;\theta_{d_i})$, where $d_i$ is a hidden variable specifying the matching component for the $i$-th local structure. The probability density of sampling $X$ is thus described by: \begin{align}\label{eq:tmm} P(X) &= \sum\nolimits_{d_1,\ldots,d_N=1}^M P(d_1,\ldots,d_N) \prod\nolimits_{i=1}^N P(\x_i | d_i; \theta_{d_i}) \end{align} where $P(d_1,\ldots,d_N)$ represents the prior probability of assigning components $d_1,\ldots,d_N$ to their respective local structures $\x_1,\ldots,\x_N$. As with classical mixture models, any probability density function $\PP(X)$ could be approximated arbitrarily well by eq.~\ref{eq:tmm}, as $M \to \infty$ (see app.~\ref{app:universal}). At first glance, eq.~\ref{eq:tmm} seems to be impractical, having an exponential number of terms. In the literature, this equation is known as the ``Network Polynomial''~\citep{Darwiche:2003hx}, and the traditional method to overcome its intractability is to express $P(d_1,\ldots,d_N)$ by an arithmetic circuit, or a sum-product network, subject to certain constraints (decomposability and completeness). We augment this method by viewing $P(d_1,\ldots,d_N)$ from an algebraic perspective, treating it as a tensor of order $N$ and dimension $M$ in each mode, i.e., as a multi-dimensional array, $\A_{d_1,\ldots,d_N}$, specified by $N$ indices $d_1,\ldots,d_N$, each ranging in $[M]$, where $[M]{\equiv}\{1,\ldots,M\}$. We refer to $\A_{d_1,\ldots,d_N}{\equiv}P(d_1,\ldots,d_N)$ as the \emph{prior tensor}. Under this perspective, eq.~\ref{eq:tmm} can be thought of as a mixture model with tensorial mixing weights, thus we call the arising models \emph{Tensorial Mixture Models}, or TMMs for short. \gapbeforesubsection \subsection{Tensor Factorization, Tractability, and Convolutional Arithmetic Circuits} \gapaftersubsection Not only is it intractable to compute eq.~\ref{eq:tmm}, but it is also impossible to even store the prior tensor. We argue that addressing the latter is intrinsically tied to addressing the former. For example, if we impose a sparsity constraint on the prior tensor, then we only need to compute the few non-zero terms of eq.~\ref{eq:tmm}. TMMs with sparsity constraints can represent common generative models, e.g. GMMs~(see app.~\ref{app:sparsity_example}). However, they do not take full advantage of the prior tensor. Instead, we consider constraining TMMs with prior tensors that adhere to non-negative low-rank factorizations. We begin by examining the simplest case, where the prior tensor $\A$ takes a \emph{rank-1} form, i.e.
there exist vectors $\vv^{(1)},\ldots,\vv^{(N)} \in \R^M$ such that $\A_{d_1,\ldots,d_N} = \prod_{i=1}^N v^{(i)}_{d_i}$, or in tensor product notation, $\A = \vv^{(1)} \otimes \cdots \otimes \vv^{(N)}$. If we interpret\footnote{$\A$ represents a probability, and w.l.o.g. we can assume all entries of $\vv^{(i)}$ are non-negative and $\sum_{d{=}1}^M v^{(i)}_d{\shorteq}1$} $v^{(i)}_d = P(d_i{=}d)$ as a probability over $d_i$, and so $P(d_1,\ldots,d_N) = \prod_i P(d_i)$, then it reveals that imposing a rank-1 constraint is actually equivalent to assuming the hidden variables $d_1,\ldots,d_N$ are statistically independent. Applying it to eq.~\ref{eq:tmm} results in the tractable form $P(X)=\prod_{i=1}^N \sum_{d=1}^M P(d_i{=}d) P(\x_i|d_i,\theta_{d_i})$, or in other words, a product of mixture models. Despite the familiar setting, this strict assumption severely limits expressivity. In a broader setting, we look at general factorization schemes that given sufficient resources could represent any tensor. Namely, the CANDECOMP/PARAFAC~(CP) and the Hierarchical Tucker~(HT) factorizations. The CP factorization is simply a sum of rank-1 tensors, extending the previous case, and HT factorization can be seen as a recursive application of CP (see def. in app.~\ref{app:tensor_background}). Since both factorization schemes are solely based on product and weighted sum operations, they could be realized through arithmetic circuits. As shown by \citet{expressive_power}, this gives rise to a specific class of convolutional networks named Convolutional Arithmetic Circuits~(ConvACs), which consist of $1{\times}1$-convolutions, non-overlapping product pooling layers, and linear activations. More specifically, the CP factorization corresponds to shallow ConvACs, HT corresponds to deep ConvACs, and the number of channels in each layer corresponds to the respective concept of ``rank'' in each factorization scheme. In general, when a tensor factorization is applied to eq.~\ref{eq:tmm}, inference is equivalent to first computing the likelihoods of all mixing components $\{P(\x_i|d;\theta_d)\}_{d{\shorteq}1,i{\shorteq}1}^{M,N}$, in what we call the \emph{representation} layer, followed by a ConvAC. A complete network is illustrated in fig.~\ref{fig:generative_convac}. \begin{wrapfigure}{r}{.3\columnwidth} \centering \includegraphics[width=\linewidth]{figures/ht_graphical_model} \caption{Graphical model description of HT-TMM} \label{fig:graphical_model} \end{wrapfigure} When restricting the prior tensor of eq.~\ref{eq:tmm} to a factorization, we must ensure it represents actual probabilities, i.e. it is non-negative and its entries sum to one. This can be addressed through a restriction to non-negative factorizations, which translates to limiting the parameters of each convolutional kernel to the simplex. There is a vast literature on the relations between non-negative factorizations and generative models~\citep{Hofmann:1999vka,Mourad:2013kz}. As opposed to most of these works, we apply factorizations merely to derive our model and analyze its expressivity~--~not for learning its parameters~(see sec.~\ref{sec:training}). From a generative perspective, the restriction of convolutional kernels to the simplex results in a latent tree graphical model, as illustrated in fig.~\ref{fig:graphical_model}. Each \emph{hidden layer} in the ConvAC network~--~a pair of convolution and pooling operations, corresponds to a transition between two levels in the tree. 
More specifically, each level is comprised of multiple latent variables, one for each spatial position in the input to a hidden layer in the network. Each latent variable in the input to the $l$-th layer takes values in $[r_{l-1}]$~--~the number of channels in the layer that precedes it. Pooling operations in the network correspond to the parent-child relationships in the tree~--~a set of latent variables are siblings with a shared parent in the tree if they are positioned in the same pooling window in the network. The weights of convolution operations correspond to the transition matrix between a parent and each of its children, i.e.~if $H_p$ is the parent latent variable, taking values in $[r_l]$, and $H_{child}$ is one of its child variables, taking values in $[r_{l-1}]$, then $P(H_{child}{\shorteq}i| H_p{\shorteq}c) \shorteq w^{(c)}_i$, where $\w^{(c)}$ is the $1{\times}1$ convolutional kernel for the $c$-th output channel. With the above graphical representation in place, we can easily draw samples from our model. To conclude this subsection, by leveraging an algebraic perspective of the network polynomial (eq.~\ref{eq:tmm}), we show that tractability is related to the tensor properties of the priors, and in particular, that low-rank factorizations are equivalent to inference via ConvACs. The application of arithmetic circuits to achieve tractability is by itself not a novelty. However, the particular convolutional arithmetic circuits we propose lead to a comprehensive understanding of representational abilities, and as a result, to a straightforward architectural design of TMMs. \gapbeforesubsection \subsection{Controlling the Expressivity and Inductive Bias of TMMs}\label{sec:theory} \gapaftersubsection As discussed in sec.~\ref{sec:intro}, it is not enough for a generative model to be tractable~--~it must also be sufficiently expressive, and moreover, we must also be able to understand how its structure affects its expressivity. In this section we explain how our algebraic perspective enables us to achieve this. To begin with, since we derived our model by factorizing the prior tensor, it immediately follows that given a sufficient number of channels in the ConvAC, i.e.~given sufficient ranks in the tensor factorization, any distribution could be approximated arbitrarily well (assuming~$M$ is allowed to grow). In short, this amounts to saying that TMMs are universal. Though many other generative models are known to be universal, it is typically not clear how one may assess what a given structure of finite size can and cannot express. In contrast, the expressivity of ConvACs has been thoroughly studied in a series of works~\citep{expressive_power,inductive_bias,Cohen:0ZJHmEow,Levine:2017wt}, each of which examined a different attribute of its structure. In \citet{expressive_power} it was proven that ConvACs exhibit the Depth Efficiency property, i.e. deep networks are exponentially more expressive than shallow ones. In \citet{inductive_bias} it was shown that deep networks can efficiently model some input correlations but not all, and that by designing appropriate pooling schemes, different preferences may be encoded, i.e.~the inductive bias may be controlled. In \citet{Cohen:0ZJHmEow} this result was extended to more complex connectivity patterns, involving mixtures of pooling schemes.
Finally, in \citet{Levine:2017wt}, an exact relation between the number of channels and the correlations supported by a network has been found, enabling tight control over expressivity and inductive bias. All of these results are brought forth by the relations of ConvACs to tensor factorizations. They allow TMMs to be analyzed and designed in much more principled ways than alternative high-dimensional generative models.\footnote{ As a demonstration of the fact that ConvAC analyses are not affected by the non-negativity and normalization restrictions of our generative variant, we prove in app.~\ref{app:depth_efficiency} that the Depth Efficiency property still holds.} \gapbeforesubsection \subsection{Classification and Learning}\label{sec:training} \gapaftersubsection TMMs realized through ConvACs, sharing many of the same traits as ConvNets, are especially suitable to serve as classifiers. We begin by introducing a class variable $Y$, and model the conditional likelihood $\PP(X|Y{\shorteq}y)$ for each $y\in [K]$. Though it is possible to have separate generative models for each class, it is much more efficient to leverage the relation to ConvNets and use a shared ConvAC instead, which is equivalent to a joint-factorization of the prior tensors for all classes. This results in a single network, where instead of a single scalar output representing $\PP(X)$, multiple outputs are driven by the network, representing $\PP(X|Y{\shorteq}y)$ for each class~$y$. Predicting the class of a given instance is carried through Maximum A-Posteriori, i.e.~by returning the most likely class. In the common setting of uniform class priors, i.e. $\PP(Y{\shorteq}y){\equiv}\frac{1}{K}$, this corresponds to classification by maximal network output, as customary with ConvNets. We note that in practice, na\"{\i}ve implementation of ConvACs is not numerically stable\footnote{Since high degree polynomials (as computed by ACs) are susceptible to numerical underflow or overflow.}, and this is treated by performing all computations in log-space, which transforms ConvACs into \emph{SimNets}~--~a recently introduced deep learning architecture~\citep{simnets1,simnets2}. Suppose now that we are given a training set $S=\{(X^{(i)}{\in}(\R^s)^N,Y^{(i)}{\in}[K])\}_{i=1}^{|S|}$ of instances and labels, and would like to fit the parameters $\Theta$ of our model according to the Maximum Likelihood principle, or equivalently, by minimizing the Negative Log-Likelihood~(NLL) loss function: $\mathcal{L}(\Theta) = \mathbb{E}[-\log \PP_\Theta(X,Y)]$. The latter can be factorized into two separate loss terms: \begin{align*} \mathcal{L}(\Theta) = \mathbb{E}[-\log \PP_\Theta(Y|X)] + \mathbb{E}[-\log \PP_\Theta(X)] \end{align*} where $\mathbb{E}[-\log \PP_\Theta(Y|X)]$, which we refer to as the \emph{discriminative loss}, is commonly known as the cross-entropy loss, and $\mathbb{E}[-\log \PP_\Theta(X)]$, which corresponds to maximizing the prior likelihood $\PP(X)$, has no analogy in standard discriminative classification. It is this term that captures the generative nature of the model, and we accordingly refer to it as the \emph{generative loss}. Now, let $N_\Theta(X^{(i)};y){:=}\log \PP_\Theta(X^{(i)}|Y{=}y)$ stand for the $y$'th output of the SimNet (ConvAC in log-space) realizing our model with parameters~$\Theta$. 
In the case of uniform class priors ($\PP(Y{=}y)\equiv\nicefrac{1}{K}$), the empirical estimation of $\mathcal{L}(\Theta)$ may be written as: \begin{align} \mathcal{L}(\Theta;S) = -\frac{1}{\abs{S}}\sum\nolimits_{i=1}^{\abs{S}} \log \frac{e^{N_\Theta(X^{(i)};Y^{(i)})}}{\sum\nolimits_{y=1}^{K}e^{N_\Theta(X^{(i)};y)}} - \frac{1}{\abs{S}}\sum\nolimits_{i=1}^{\abs{S}} \log \sum\nolimits_{y=1}^{K}e^{N_\Theta(X^{(i)};y)} \label{eq:objective} \end{align} This objective includes the standard softmax loss as its first term, and an additional generative loss as its second. Rather than employing dedicated Maximum Likelihood methods for training (e.g. Expectation Maximization), we leverage once more the resemblance between our networks and ConvNets, and optimize the above objective using Stochastic Gradient Descent~(SGD). \gapbeforesection \section{Classification under Missing Data through Marginalization}\label{sec:missing_data} \gapaftersection A major advantage of generative models over discriminative ones lies in their ability to cope with missing data, specifically in the context of classification. By and large, discriminative methods either attempt to complete missing parts of the data before classification (a process known as \emph{data imputation}), or learn directly to classify data with missing values~\citep{Little:2002uj}. The first of these approaches relies on the quality of data completion, a much more difficult task than the original one of classification under missing data. Even if the completion were optimal, the resulting classifier is known to be sub-optimal~(see app.~\ref{app:mbayes_proof}). The second approach does not rely on data completion, but nonetheless assumes that the distributions of missing values at train and test time are similar, a condition which often does not hold in practice. Indeed, \citet{Globerson:2006jv} coined the term ``nightmare at test time'' to refer to the common situation where a classifier must cope with missing data whose distribution is different from that encountered in training. As opposed to discriminative methods, generative models are endowed with a natural mechanism for classification under missing data. Namely, a generative model can simply marginalize over missing values, effectively classifying under all possible completions, weighing each completion according to its probability. This, however, requires tractable inference and marginalization. We have already shown in sec.~\ref{sec:model} that TMMs support the former, and will show in sec.~\ref{sec:missing_data:margin} that marginalization can be just as efficient. Beforehand, we lay out the formulation of classification under missing data. Let $\X$ be a random vector in~$\R^s$ representing an object, and let $\Y$ be a random variable in~$[K]$ representing its label. Denote by~$\D(\X,\Y)$ the joint distribution of~$(\X,\Y)$, and by $(\x{\in}\R^s,y{\in}[K])$ specific realizations thereof. Assume that after sampling a specific instance $(\x,y)$, a random binary vector $\M$ is drawn conditioned on $\X{=}\x$. More concretely, we sample a binary mask $\m{\in}\{0,1\}^s$ (realization of~$\M$) according to a distribution $\Q(\cdot|\X{=}\x)$. $x_i$ is considered missing if~$m_i$ is equal to zero, and observed otherwise. Formally, we consider the vector $\x{\odot}\m$, whose~$i$'th coordinate is defined to hold~$x_i$ if $m_i{=}1$, and the wildcard~$*$ if $m_i{=}0$. The classification task is then to predict~$y$ given access solely to $\x{\odot}\m$.
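To make the observation model concrete, the following small sketch (ours, purely illustrative) represents $\x{\odot}\m$ in code, using NaN for the wildcard $*$:
\begin{verbatim}
import numpy as np

def observe(x, m):
    """x (.) m: keep x_i where m_i = 1, put a wildcard (NaN) where m_i = 0."""
    x_obs = np.array(x, dtype=float)
    x_obs[np.asarray(m) == 0] = np.nan
    return x_obs

def coincides(x_full, x_obs):
    """Does x_full agree with x_obs on every observed coordinate?"""
    observed = ~np.isnan(x_obs)
    return bool(np.all(x_full[observed] == x_obs[observed]))

x = np.array([0.2, 0.7, 0.1, 0.9])
m = np.array([1, 0, 1, 0])
print(observe(x, m))               # [0.2, nan, 0.1, nan]
print(coincides(x, observe(x, m))) # True
\end{verbatim}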
Following the works of~\citet{Rubin:1976gv,Little:2002uj}, we consider three cases for the missingness distribution $\Q(\M{=}\m|\X{=}\x)$: missing completely at random~(\emph{MCAR}), where~$\M$ is independent of~$\X$, i.e.~$\Q(\M{=}\m|\X{=}\x)$ is a function of~$\m$ but not of~$\x$; missing at random~(\emph{MAR}), where~$\M$ is independent of the missing values in~$\X$, i.e.~$\Q(\M{=}\m|\X{=}\x)$ is a function of both~$\m$ and $\x$, but is not affected by changes in~$x_i$ if~$m_i{=}0$; and missing not at random~(\emph{MNAR}), covering the rest of the distributions for which~$\M$ depends on missing values in~$\X$, i.e.~$\Q(\M{=}\m|\X{=}\x)$ is a function of both~$\m$ and $\x$, which at least sometimes is sensitive to changes in~$x_i$ when~$m_i{=}0$. Let $\PP$ be the joint distribution of the object~$\X$, label~$\Y$, and missingness mask~$\M$: \begin{align*} \PP(\X{\shorteq}\x,\Y{\shorteq}y,\M{\shorteq}\m) = \D\left(\X{\shorteq}\x, \Y{\shorteq}y\right) \cdot \Q(\M{\shorteq}\m|\X{\shorteq}\x) \end{align*} For given $\x{\in}\R^s$ and $\m{\in}\{0,1\}^s$, denote by $o(\x,\m)$ the event where the random vector~$\X$ coincides with~$\x$ on the coordinates~$i$ for which $m_i{=}1$. For example, if~$\m$ is an all-zero vector, $o(\x,\m)$ covers the entire probability space, and if~$\m$ is an all-one vector, $o(\x,\m)$ corresponds to the event $\X{=}\x$. With these notations in hand, we are now ready to characterize the optimal predictor in the presence of missing data. The proofs are common knowledge, but provided in app.~\ref{app:mbayes_proof} for completeness. \begin{claim}\label{claim:optimal_rule} For any data distribution~$\D$ and missingness distribution~$\Q$, the optimal classification rule in terms of 0-1 loss is given by predicting the class $y\in[K]$, that maximizes $\PP(\Y{\shorteq}y|o(\x,\m))\cdot\PP(\M{\shorteq}\m|o(\x,\m),\Y{\shorteq}y)$, for an instance $\x {\odot} \m$. \end{claim} When the distribution~$\Q$ is MAR (or MCAR), the optimal classifier admits a simpler form, referred to as the \emph{marginalized Bayes predictor}: \begin{corollary}\label{corollary:mar} Under the conditions of claim~\ref{claim:optimal_rule}, if the distribution $\Q$ is MAR (or MCAR), the optimal classification rule may be written as: \begin{equation} h^*(\x \odot \m) = \argmax\nolimits_y~\PP(\Y{=}y|o(\x,\m)) \label{eq:mbayes} \end{equation} \end{corollary} Corollary~\ref{corollary:mar} indicates that in the MAR setting, which is frequently encountered in practice, optimal classification does \emph{not} require prior knowledge regarding the missingness distribution~$\Q$. As long as one is able to realize the marginalized Bayes predictor (eq.~\ref{eq:mbayes}), or equivalently, to compute the likelihoods of observed values conditioned on labels ($\PP(o(\x,\m)|Y{=}y)$), classification under missing data is guaranteed to be optimal, regardless of the corruption process taking place. This is in stark contrast to discriminative methods, which require access to the missingness distribution during training, and thus are not able to cope with unknown conditions at test time. Most of this section dealt with the task of prediction given an input with missing data, where we assumed we had access to a ``clean'' training set, and only faced missingness during prediction. However, many times we wish to tackle the reverse task, where the training set itself is riddled with missing data. 
Tractability leads to an advantage here as well: under the MAR assumption, learning from missing data with the marginalized likelihood objective results in an unbiased classifier~\citep{Little:2002uj}. In the case of TMMs, marginalizing over missing values is just as efficient as plain inference~--~it requires only a single pass through the corresponding network. The exact mechanism is carried out in a similar fashion to sum-product networks, and is covered in app.~\ref{sec:missing_data:margin}. Accordingly, the marginalized Bayes predictor (eq.~\ref{eq:mbayes}) is realized efficiently, and classification under missing data (in the MAR setting) is optimal (under the generative assumption), regardless of the missingness distribution. \gapbeforesection \section{Related Works} \label{sec:related_works} \gapaftersection There are many generative models realized through neural networks, and convolutional networks in particular, e.g. Generative Adversarial Networks~\citep{Goodfellow:2014td}, Variational Auto-Encoders~\citep{Kingma:2013tz}, and NADE~\citep{JMLR:v17:16-272}. However, most do not possess tractable inference, and of the few that do, none possess tractable marginalization over any set of variables. Due to limits of space, we defer the discussion on the above to app.~\ref{app:extended_related_works}, and in the remainder of this section focus instead on the most relevant works. As mentioned in sec.~\ref{sec:model}, we build on the approach of specifying generative models through Arithmetic Circuits~(ACs)~\citep{Darwiche:2003hx}, and specifically, our model is a strict subclass of the well-known Sum-Product Networks~(SPNs)~\citep{Poon:2012vd}, under the decomposable and complete restrictions. Where our work differs is in our algebraic approach to eq.~\ref{eq:tmm}, which gives rise to a specific structure of ACs, called ConvACs, and a deep theory regarding their expressivity and inductive bias (see sec.~\ref{sec:theory}). In contrast to the structure we proposed, the current literature on general SPNs does not prescribe any specific structures, and its theory is limited to either very specific instances~\citep{Delalleau:2011vh}, or very broad classes, e.g. fixed-depth circuits~\citep{Martens:2014tr}. In the early works on SPNs, specialized networks of complex structure were designed for each task based mainly on heuristics, often bearing little resemblance to common neural networks. Contemporary works have since moved on to focus mainly on learning the structure of SPNs directly from data~\citep{Peharz:2013cl,Gens:2013ufa,Adel:2015wf,Rooshenas:2014wb}, leading to improved results in many domains. Despite that, only a few published studies have applied this method to natural domains (images, audio, etc.), on which only limited performance, compared to other common methods, was reported, specifically on the MNIST dataset~\citep{Adel:2015wf}. The above suggests that choosing the right architecture of general SPNs, at least on some domains, remains an unsolved problem. In addition, both the previously studied manually-designed SPNs, as well as ones with a learned structure, lead to models which, according to recent works on GPU-optimized algorithms~\citep{maps-multi}, cannot be efficiently implemented due to their irregular memory access patterns. This is in stark contrast to our model, which leverages the same patterns as modern ConvNets, and thus enjoys similar run-time performance. An additional difference in our work is that we manage to successfully train our model using standard SGD.
Even though this approach has already been considered by~\citet{Poon:2012vd}, they deemed it lacking and advocated for specialized optimization algorithms instead. Outside the realm of generative networks, tractable graphical models, e.g. Latent Tree Models~(LTMs)~\citep{Mourad:2013kz}, are the most common method for tractable inference. Similar to SPNs, it is not straightforward to find the proper structure of graphical models for a particular problem, and most of the same arguments apply here as well. Nevertheless, it is noteworthy that recent progress in structure and parameters learning of LTMs~\citep{Huang:2015tb,Anandkumar:2014uc} was also brought forth by connections to tensor factorizations, similar to our approach. Unlike the aforementioned algorithms, we utilize tensor factorizations solely for deriving our model and analyzing its expressivity, while leaving learning to SGD~--~the most successful method for training neural networks. Leveraging their perspective to analyze the optimization properties of our model is viewed as a promising avenue for future research. \gapbeforesection \section{Experiments} \label{sec:exp} \gapaftersection We demonstrate the properties of TMMs through both qualitative and quantitative experiments. In sec.~\ref{subsec:exp:missing} we present state of the art results on image classification under missing data, with robustness to various missingness distributions. In sec.~\ref{subsec:exp:timit} we show that our results are not limited to images, by applying TMMs for speech recognition. Finally, in app.~\ref{app:exp:vis} we show visualizations of samples drawn from TMMs, shedding light on their generative nature. Our implementation, based on Caffe~\citep{Jia:2014up} and MAPS~\citep{maps-multi} (toolbox for efficient GPU code generation), as well as all other code for reproducing our experiments, are available at: \githuburl{Generative-ConvACs}. Extended details regarding the experiments are provided in app.~\ref{app:exp_details}. \gapbeforesubsection \subsection{Image Classification under Missing Data}\label{subsec:exp:missing} \gapaftersubsection In this section we experiment on two datasets: MNIST~\citep{LeCun:1998hy} for digit classification, and small NORB~\citep{LeCun:2004wl} for 3D object recognition. In our results, we refer to models using shallow networks as CP-TMM, and to those using deep networks as HT-TMM, in accordance with the respective tensor factorizations~(see sec.~\ref{sec:model}). The theory discussed in sec.~\ref{sec:theory} guided our exact choice of architectures. Namely, we used the fact~\citep{Levine:2017wt} that the capacity to model either short- or long-range correlations in the input, is related to the number of channels in the beginning or end of a network, respectively. In MNIST, discriminating between digits has more to do with long-range correlations than the basic strokes digits are made of, hence we chose to start with few channels and end with many~--~layer widths were set to 64-128-256-512. In contrast, the classes of NORB differ in much finer details, requiring more channels in the first layers, hence layer widths were set to 256-256-256-512. In both cases, $M=32$ Gaussian mixing components were used. We begin by comparing our generative approach to missing data against classical methods, namely, methods based on \citet{Globerson:2006jv}. 
They regard missing data as ``feature deletion'' noise, replace missing entries by zeros, and devise a learning algorithm over linear predictors that takes the number of missing features, $n$, into account. The method was later improved by \citet{Dekel:2008jb}. We compare TMMs to the latter, with $n$ non-zero pixels randomly chosen and changed to zero, in the two-class prediction task derived from each pair of MNIST digits. Due to limits of their implementation, only 300 images per digit are used for training. Despite this, and the fact that the evaluated scenario is of the MNAR type (on which optimality is not guaranteed~--~see sec.~\ref{sec:missing_data}), we achieve significantly better results (see table~\ref{table:exp_shamir}), and unlike their method, which requires several classifiers and knowing $n$, we use a single TMM with no prior knowledge. Heading on to multi-class prediction under missing data, we focus on the challenging ``blind'' setting, where the missingness distribution at test time is completely unknown during training. We simulate two kinds of MAR missingness distributions: (i)~an i.i.d. mask with a fixed probability~$p\in[0,1]$ of dropping each pixel, and (ii)~a mask composed of the union of $n$ (possibly overlapping) rectangles of width and height~$W$, each positioned randomly in the image (uniform distribution). We first demonstrate that purely discriminative classifiers cannot generalize to all missingness distributions, by training the standard LeNet ConvNet~\citep{LeCun:1998hy} on one set of distributions and then testing it on others (see fig.~\ref{fig:convnet}). Next, we present our main results. We compare our model against three different approaches. First, as a baseline, we use K-Nearest Neighbors~(KNN) to vote on the most likely class, augmented with an $l^2$-metric that disregards missing coordinates. KNN actually scores better than most methods, but its missingness-aware distance metric prevents the common memory and runtime optimizations, making it impractical for real-world settings. Second, we test various data-imputation methods, ranging from simply filling missing pixels with zeros or their mean, to modern generative models suited to inpainting. Data imputation is followed by a ConvNet prediction on the completed image. In general, we find that this approach only works well when few pixels are missing. Finally, we test generative classifiers other than our model, including MP-DBM and SPN (sum-product networks). MP-DBM is notable for being limited to approximations, and its results show the importance of using exact inference instead. For SPN, we have augmented the model from~\citet{Poon:2012vd} with a class variable $Y$, and trained it to maximize the joint probability $P(X,Y)$ using the code of~\citet{Zhao:2016va}. The inferior performance of SPN suggests that the structure of TMMs, which are in fact a special case, is advantageous. Due to limitations of available public code and time, not all methods were tested on all datasets and distributions. See fig.~\ref{fig:exp:multiclass} for the complete results. To conclude, TMMs significantly outperform all other methods tested on image classification with missing data. Although they are a special case of SPNs, their particular structure appears to be more effective than the ones existing in the literature. We attribute this superiority to the fact that their architectural design is backed by comprehensive theoretical studies (see sec.~\ref{sec:theory}).
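For reference, the two missingness distributions simulated above are straightforward to generate; the following is a rough sketch (our own, with arbitrary parameter names) for a $28{\times}28$ image, where missing pixels are marked by zeros in the mask:
\begin{verbatim}
import numpy as np

def iid_mask(shape, p, rng=np.random):
    """(i) i.i.d. mask: each pixel is dropped with a fixed probability p."""
    return (rng.random_sample(shape) >= p).astype(np.uint8)

def rects_mask(shape, n, w, rng=np.random):
    """(ii) union of n (possibly overlapping) w-by-w rectangles,
    each positioned uniformly at random."""
    mask = np.ones(shape, dtype=np.uint8)
    H, W = shape
    for _ in range(n):
        top  = rng.randint(0, max(H - w, 0) + 1)
        left = rng.randint(0, max(W - w, 0) + 1)
        mask[top:top + w, left:left + w] = 0
    return mask

m1 = iid_mask((28, 28), p=0.9)      # roughly 90% of the pixels missing
m2 = rects_mask((28, 28), n=3, w=7)
\end{verbatim}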
\gapbeforesubsection \subsection{Speech Recognition under Missing Data}\label{subsec:exp:timit} \gapaftersubsection To demonstrate the versatility of TMMs, we also conducted limited experiments on the TIMIT speech recognition dataset, following the same protocols as in sec.~\ref{subsec:exp:missing}. We trained a TMM and a standard ConvNet on 256ms windows of raw data at a 16kHz sample rate to predict the phoneme at the center of a window. Both the TMM and the ConvNet reached $78\%$ accuracy on the clean dataset, but when half of the audio is missing i.i.d., the accuracy of the ConvNet with mean imputation drops to $34\%$, while the TMM remains at $63\%$. Utilizing common audio inpainting methods~\citep{Adler:2012dd} only improves the accuracy of the ConvNet to $48\%$, well below that of the TMM. \gapbeforesection \section{Summary} \label{sec:summary} \gapaftersection This paper focuses on generative models which admit tractable inference and marginalization, capabilities that lie outside the realm of contemporary neural network-based generative methods. We build on prior works on tractable models based on arithmetic circuits and sum-product networks, and leverage concepts from tensor analysis to derive a sub-class of models we call Tensorial Mixture Models (TMMs). In contrast to existing methods, our algebraic approach leads to a comprehensive understanding of the relation between model structure and representational properties. In practice, utilizing this understanding for the design of TMMs has led to state of the art performance in classification under missing data. We are currently investigating several avenues for future research, including semi-supervised learning, and examining more intricate ConvAC architectures, such as the ones suggested by \citet{Cohen:0ZJHmEow}. \newcommand{\acknowledgments}{This work is supported by Intel grant ICRI-CI \#9-2012-6133, by ISF Center grant 1790/12 and by the European Research Council (TheoryDL project). Nadav Cohen is supported by a Google Fellowship in Machine Learning.} \ifdefined\CAMREADY \subsubsection*{Acknowledgments} \acknowledgments \fi \subsubsection*{References} \small{ \bibliographystyle{plainnat} } \clearpage \appendix \section{The Universality of Tensorial Mixture Models}\label{app:universal} In this section we prove the universality property of Generative ConvACs, as discussed in sec.~\ref{sec:model}. We begin by borrowing a notion from functional analysis and defining a new property called \emph{PDF total set}, which is similar in concept to a \emph{total set}, followed by proving that this property is invariant under the cartesian product of functions, which entails the universality of these models as a corollary. \begin{definition} Let $\mathcal{F}$ be a set of PDFs over $\R^s$. $\mathcal{F}$ is PDF total iff for any PDF $h(\x)$ over $\R^s$ and for all $\epsilon > 0$ there exists $M~\in~\N$, $\{f_1(\x),\ldots,f_M(\x)\} \subset \mathcal{F}$ and $\w \in \triangle^{M-1}$ s.t. $\left\| h(\x) - \sum_{i=1}^M w_i f_i(\x) \right\|_1 < \epsilon$. In other words, a set is a PDF total set if its convex span is a dense set under the $L^1$ norm. \end{definition} \begin{claim} Let $\mathcal{F}$ be a set of PDFs over $\R^s$ and let $\mathcal{F}^{\otimes N} = \{\prod_{i=1}^N f_i(\x_i) | \forall i, f_i(\x) \in \mathcal{F} \}$ be a set of PDFs over the product space $(\R^s)^N$. If $\mathcal{F}$ is a PDF total set then $\mathcal{F}^{\otimes N}$ is a PDF total set.
\end{claim} \begin{proof} If $\mathcal{F}$ is the set of Gaussian PDFs over $\R^s$ with diagonal covariance matrices, which is known to be a PDF total set, then $\mathcal{F}^{\otimes N}$ is the set of Gaussian PDFs over $(\R^s)^N$ with diagonal covariance matrices and the claim is trivially true. Otherwise, let $h(\x_1,\ldots,\x_N)$ be a PDF over $(\R^s)^N$ and let $\epsilon~>~0$. From the above, there exist $M_1~\in~\N$, $\w~\in~\triangle^{M_1 - 1}$ and a set of diagonal Gaussians $\{g_{ij}(\x)\}_{i \in [M_1], j \in [N]}$ s.t. \begin{align}\label{eq:univ:gaussian} \left\| h(\x_1,\ldots,\x_N) - \sum_{i=1}^{M_1} w_i \prod_{j=1}^N g_{ij}(\x_j) \right\|_1 < \frac{\epsilon}{2} \end{align} Additionally, since $\mathcal{F}$ is a PDF total set, there exist $M_2 \in \N$, $\{ f_k(\x)\}_{k\in[M_2]} \subset \mathcal{F}$ and $\{\w_{ij} \in \triangle^{M_2 -1}\}_{i \in [M_1], j \in [N]}$ s.t. for all $i \in [M_1], j \in [N]$ it holds that $\left\| g_{ij}(\x) - \sum_{k=1}^{M_2} w_{ijk} f_k(\x)\right\|_1 < \frac{\epsilon}{2N}$, from which it is trivially proven using a telescopic sum and the triangle inequality that: \begin{align}\label{eq:univ:gaussian_approx} \left\| \sum_{i=1}^{M_1} w_i \prod_{j=1}^N g_{ij}(\x_j) - \sum_{i=1}^{M_1} w_i \prod_{j=1}^N \sum_{k=1}^{M_2} w_{ijk} f_k(\x_j) \right\|_1 &< \frac{\epsilon}{2} \end{align} From eq.~\ref{eq:univ:gaussian}, eq.~\ref{eq:univ:gaussian_approx} and the triangle inequality it holds that: \begin{align*} \left\| h(\x_1,\ldots,\x_N) - \sum_{k_1,\ldots,k_N=1}^{M_2} \A_{k_1,\ldots,k_N} \prod_{j=1}^N f_{k_j}(\x_j) \right\|_1 &< \epsilon \end{align*} where $\A_{k_1,\ldots,k_N} = \sum_{i=1}^{M_1} w_i \prod_{j=1}^N w_{ijk_j}$, which satisfies $\sum_{k_1,\ldots,k_N=1}^{M_2} \A_{k_1,\ldots,k_N} = 1$. Taking $M~=~M_2^N$, $\{\prod_{j=1}^N f_{k_j}(\x_j)\}_{k_1 \in [M_2],\ldots, k_N \in [M_2]} \subset \mathcal{F}^{\otimes N}$ and $\w = \textrm{vec}(\A)$ completes the proof. \end{proof} \begin{corollary} Let $\mathcal{F}$ be a PDF total set of PDFs over $\R^s$, then the family of Generative ConvACs with mixture components from $\mathcal{F}$ can approximate any PDF over $(\R^s)^N$ arbitrarily well, given arbitrarily many components. \end{corollary} \section{TMMs with Sparsity Constraints Can Represent Gaussian Mixture Models} \label{app:sparsity_example} As discussed in sec.~\ref{sec:model}, TMMs become tractable when a sparsity constraint is imposed on the prior tensor, i.e. most of the entries of the tensor are replaced with zeros. In this section, we demonstrate that in such a case, TMMs can represent Gaussian Mixture Models with diagonal covariance matrices, probably the most common type of mixture models.
With the same notations as sec.~\ref{sec:model}, assume the number of mixing components of the TMM is $M = N \cdot K$ for some $K \in \N$, let $\{\mathcal{N}(\x; \mubf_{ki}, \textrm{diag}(\sigmabf^2_{ki}))\}_{k,i}^{K,N}$ be these components, and finally, assume the prior tensor has the following structure: \begin{align*} P(d_1,\ldots,d_N) &= \begin{cases} w_k & \forall i\in [N], \,\,\,d_i {=} N {\cdot} (k{-}1) {+} i \\ 0 & \textrm{Otherwise} \end{cases} \end{align*} then eq.~\ref{eq:tmm} reduces to: \begin{align*} P(X) &= \sum\nolimits_{k=1}^K w_k \prod\nolimits_{i=1}^N \mathcal{N}(\x_i; \mubf_{ki}, \textrm{diag}(\sigmabf^2_{ki})) = \sum\nolimits_{k=1}^K w_k \mathcal{N}(X; \tilde{\mubf}_k, \textrm{diag}(\tilde{\sigmabf}^2_k)) \\ \tilde{\mubf}_k &= (\mubf_{k1}^T, \ldots, \mubf_{kN}^T)^T \quad\quad\quad \tilde{\sigmabf}^2_k = ((\sigmabf^2_{k1})^T, \ldots, (\sigmabf^2_{kN})^T)^T \end{align*} which is equivalent to a diagonal GMM with mixing weights $\w \in \triangle^{K-1}$ (where $\triangle^{K-1}$ is the $K$-dimensional simplex) and Gaussian mixture components with means $\{\tilde{\mubf}_k\}_{k=1}^K$ and covariances $\{\textrm{diag}(\tilde{\sigmabf}^2_k)\}_{k=1}^K$. \section{Background on Tensor Factorizations} \label{app:tensor_background} In this section we establish the minimal background in the field of tensor analysis required for following our work. A tensor is best thought of as a multi-dimensional array $\A_{d_1,\ldots,d_N}\in\R$, where $\forall i\in[N], d_i \in [M_i]$. The number of indexing entries in the array, which are also called \emph{modes}, is referred to as the \emph{order} of the tensor. The number of values an index of a particular mode can take is referred to as the \emph{dimension} of the mode. The tensor $\A \in \R^{M_1 \otimes \ldots \otimes M_N}$ mentioned above is thus of order $N$ with dimension $M_i$ in its $i$-th mode. For our purposes we typically assume that $M_1 = \ldots = M_N = M$, and simply denote it as $\A \in (\R^M)^{\otimes N}$. The fundamental operator in tensor analysis is the \emph{tensor product}. The tensor product operator, denoted by $\otimes$, is a generalization of the outer product of vectors (tensors of order 1) to any pair of tensors. Specifically, let $\A$ and $\B$ be tensors of order $P$ and $Q$ respectively, then the tensor product $\A \otimes \B$ results in a tensor of order $P+Q$, defined by: $(\A \otimes \B)_{d_1,\ldots,d_{P+Q}} = \A_{d_1,\ldots,d_P} \cdot \B_{d_{P+1},\ldots,d_{P+Q}}$. The main concept from tensor analysis we use in our work is that of tensor decompositions. The most straightforward and common tensor decomposition format is the rank-1 decomposition, also known as a CANDECOMP/PARAFAC decomposition, or in short, a \emph{CP decomposition}. The CP decomposition is a natural extension of low-rank matrix decomposition to general tensors, both built upon the concept of a linear combination of rank-1 elements. Similarly to matrices, tensors of the form $\vv^{(1)} \otimes \cdots \otimes \vv^{(N)}$, where $\vv^{(i)} \in \R^{M_i}$ are non-zero vectors, are regarded as $N$-ordered rank-1 tensors, thus the rank-$Z$ CP decomposition of a tensor $\A$ is naturally defined by: \begin{align} \label{eq:cp_decomp} \A &= \sum_{z=1}^Z a_z \aaa^{z,1} \otimes \cdots \otimes \aaa^{z,N} \nonumber \\ \Rightarrow \A_{d_1,\ldots,d_N} &= \sum_{z=1}^Z a_z \prod_{i=1}^N a_{d_i}^{z,i} \end{align} where $\{\aaa^{z,i} \in \R^{M_i}\}_{i=1,z=1}^{N,Z}$ and $\aaa \in \R^Z$ are the parameters of the decomposition.
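As a small numerical illustration of eq.~\ref{eq:cp_decomp} (our own sketch; the identifiers are arbitrary), a tensor can be reconstructed from the factors of its CP decomposition as follows:
\begin{verbatim}
import numpy as np

def cp_reconstruct(a, factors):
    """Rebuild A_{d1,...,dN} = sum_z a_z * prod_i factors[z][i][d_i].
    a: (Z,) weights; factors: list of Z lists, each holding N vectors."""
    A = None
    for z in range(len(a)):
        rank1 = factors[z][0]
        for i in range(1, len(factors[z])):
            # tensor (outer) product with the next vector
            rank1 = np.tensordot(rank1, factors[z][i], axes=0)
        A = a[z] * rank1 if A is None else A + a[z] * rank1
    return A

# toy example: a rank-2 decomposition of a 3x3x3 tensor (N = 3, M = 3, Z = 2)
rng = np.random.RandomState(0)
a = np.array([0.4, 0.6])
factors = [[rng.rand(3) for _ in range(3)] for _ in range(2)]
print(cp_reconstruct(a, factors).shape)  # (3, 3, 3)
\end{verbatim}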
As mentioned above, for $N=2$ it is equivalent to low-rank matrix factorization. It is simple to show that any tensor $\A$ can be represented by the CP decomposition for some $Z$, where the minimal such $Z$ is known as its \emph{tensor rank}. Another decomposition we will use in this paper is of a hierarchical nature and known as the Hierarchical Tucker decomposition \citep{Hackbusch:2009jj}, which we will refer to as \emph{HT decomposition}. While the CP decomposition combines vectors into higher-order tensors in a single step, the HT decomposition does that more gradually, combining vectors into matrices, these matrices into 4th-order tensors and so on recursively in a hierarchical fashion. Specifically, the following describes the recursive formula of the HT decomposition\footnote{ More precisely, we use a special case of the canonical HT decomposition as presented in \citet{Hackbusch:2009jj}. In the terminology of the latter, the matrices $A^{l,j,\gamma}$ are diagonal and equal to $diag(\aaa^{l,j,\gamma})$ (using the notations from eq.~\ref{eq:ht_decomp}).} for a tensor $\A \in (\R^M)^{\otimes N}$ where $N = 2^L$, i.e. $N$ is a power of two\footnote{The requirement for $N$ to be a power of two is solely for simplifying the definition of the HT decomposition. More generally, instead of defining it through a complete binary tree describing the order of operations, the canonical decomposition can use any balanced binary tree.}: \begin{align} \phi^{1,j,\gamma} &= \sum_{\alpha=1}^{r_0} a_\alpha^{1,j,\gamma} \aaa^{0,2j-1,\alpha} \otimes \aaa^{0,2j,\alpha} \nonumber \\ &\cdots \nonumber\\ \phi^{l,j,\gamma} &= \sum_{\alpha=1}^{r_{l-1}} a_\alpha^{l,j,\gamma} \underbrace{\phi^{l-1,2j-1,\alpha}}_{\text{order $2^{l-1}$}} \otimes \underbrace{\phi^{l-1,2j,\alpha}}_{\text{order $2^{l-1}$}} \nonumber\\ &\cdots \nonumber\\ \phi^{L-1,j,\gamma} &= \sum_{\alpha=1}^{r_{L-2}} a_\alpha^{L-1,j,\gamma} \underbrace{\phi^{L-2,2j-1,\alpha}}_{\text{order $\frac{N}{4}$}} \otimes \underbrace{\phi^{L-2,2j,\alpha}}_{\text{order $\frac{N}{4}$}} \nonumber\\ \A &= \sum_{\alpha=1}^{r_{L-1}} a_\alpha^L \underbrace{\phi^{L-1,1,\alpha}}_{\text{order $\frac{N}{2}$}} \otimes \underbrace{\phi^{L-1,2,\alpha}}_{\text{order $\frac{N}{2}$}} \label{eq:ht_decomp} \end{align} where the parameters of the decomposition are the vectors $\{\aaa^{l,j,\gamma}{\in}\R^{r_{l-1}}\}_{l\in\{0,\ldots,L-1\}, j\in[\nicefrac{N}{2^l}], \gamma \in [r_l]}$ and the top level vector $\aaa^L \in \R^{r_{L-1}}$, and the scalars $r_0,\ldots,r_{L-1} \in \N$ are referred to as the \emph{ranks of the decomposition}. Similar to the CP decomposition, any tensor can be represented by an HT decomposition. Moreover, any given CP decomposition can be converted to an HT decomposition by only a polynomial increase in the number of parameters. Finally, since we are dealing with generative models, the tensors we study are non-negative and sum to one, i.e.
the vectorization of $\A$ (rearranging its entries to the shape of a vector), denoted by $\textrm{vec}(\A)$, is constrained to lie in the multi-dimensional simplex, denoted by: \begin{align}\label{eq:simplex} \triangle^k &:= \left\{\x \in \R^{k+1} | \sum\nolimits_{i=1}^{k+1} x_i = 1, \forall i \in [k+1]: x_i \geq 0\right\} \end{align} \section{Proof for the Depth Efficiency of Convolutional Arithmetic Circuits with Simplex Constraints}\label{app:depth_efficiency} In this section we prove that the depth efficiency property of ConvACs that was proved in~\citet{expressive_power} applies also to the generative variant of ConvACs we have introduced in sec.~\ref{sec:model}. Our analysis relies on basic knowledge of tensor analysis and its relation to ConvACs, specifically, that the concept of ``ranks'' of each factorization scheme is equivalent to the number of channels in these networks. For completeness, we provide a short introduction to tensor analysis in app.~\ref{app:tensor_background}. We prove the following theorem, which is the generative analog of theorem~1 from~\citep{expressive_power}: \begin{theorem} \label{thm:tensor_rank} Let $\A^y$ be a tensor of order $N$ and dimension $M$ in each mode, generated by the recursive formulas in eq.~\ref{eq:ht_decomp}, under the simplex constraints introduced in sec.~\ref{sec:model}. Define $r~:=~\min\{r_0,M\}$, and consider the space of all possible configurations for the parameters of the decomposition~--~$\{\aaa^{l,j,\gamma}~\in~\triangle^{r_{l-1}-1}\}_{l,j,\gamma}$. In this space, the generated tensor $\A^y$ will have CP-rank of at least $r^{\nicefrac{N}{2}}$ almost everywhere (w.r.t. the product measure of simplex spaces). Put differently, the configurations for which the CP-rank of $\A^y$ is less than $r^{\nicefrac{N}{2}}$ form a set of measure zero. The exact same result holds if we constrain the decomposition to be ``shared'', i.e. set $\aaa^{l,j,\gamma}\equiv\aaa^{l,\gamma}$ and consider the space of $\{\aaa^{l,\gamma}~\in~\triangle^{r_{l-1}-1}\}_{l,\gamma}$ configurations. \end{theorem} The only differences between ConvACs and their generative counter-parts are the simplex constraints applied to the parameters of the models, which necessitate a careful treatment of the measure-theoretic arguments of the original proof. More specifically, while the $k$-dimensional simplex $\triangle^{k}$ is a subset of the $k+1$-dimensional space~$\R^{k+1}$, it has a zero measure with respect to the Lebesgue measure over $\R^{k+1}$. The standard method to define a measure over $\triangle^k$ is by the Lebesgue measure over $\R^k$ of its projection to that space, i.e. let $\lambda$ be the Lebesgue measure over $\R^k$, $p:\R^{k+1} \to \R^k, p(\x) = (x_1, \ldots, x_k)^T$ be a projection, and $A \subset \triangle^k$ be a subset of the simplex, then the latter's measure is defined as $\lambda(p(A))$. Notice that $p(\triangle^k)$ has a positive measure, and moreover that $p$ is invertible over the set $p(\triangle^k)$, and that its inverse is given by $p^{-1}(x_1,\ldots,x_k) = (x_1,\ldots,x_k,1-\sum_{i=1}^k x_i)$. In our case, the parameter space is the cartesian product of several simplex spaces of different dimensions, for each of which the measure is defined as above, and the measure over their cartesian product is uniquely defined by the product measure.
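The projection $p$ and its inverse are simple to state explicitly; a minimal sketch (ours, for illustration only):
\begin{verbatim}
import numpy as np

def project(w):
    """p : R^{k+1} -> R^k, drop the last coordinate of a simplex point."""
    return np.asarray(w)[:-1]

def lift(y):
    """p^{-1} over p(simplex): append the coordinate 1 - sum(y)."""
    y = np.asarray(y, dtype=float)
    return np.append(y, 1.0 - y.sum())

w = np.array([0.2, 0.5, 0.3])  # a point of the simplex in R^3
assert np.allclose(lift(project(w)), w)
\end{verbatim}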
Though standard, the choice of the projection function $p$ above could be seen as a limitation; however, the collection of zero-measure subsets of $\triangle^k$ is identical for any reasonable choice of a projection $\pi$ (e.g. all polynomial mappings). More specifically, for any projection $\pi:\R^{k+1}\to\R^k$ that is invertible over $\pi(\triangle^k)$, whose inverse $\pi^{-1}$ is differentiable, and whose Jacobian is bounded over $\pi(\triangle^k)$, a subset $A \subset \triangle^k$ is of measure zero w.r.t. the projection $\pi$ iff it is of measure zero w.r.t. $p$ (as defined above). This implies that if we sample the weights of the generative decomposition (eq.~\ref{eq:ht_decomp} with simplex constraints) from a continuous distribution, a property that holds with probability 1 under the standard parameterization (projection $p$) will hold with probability 1 under any reasonable parameterization. We now state and prove a lemma that will be needed for our proof of theorem~\ref{thm:tensor_rank}. \begin{lemma}\label{lemma:rank_everywhere} Let $M, N, K \in \N$, $1 \leq r \leq \min\{M,N\}$, and let $A:\R^K \to \R^{M \times N}$ be a polynomial mapping (i.e. for every $i \in [M]$ and $j\in [N]$, the entry $A_{ij}:\R^K \to \R$ is a polynomial function). If there exists a point $\x \in \R^K$ s.t. $\rank{A(\x)} \geq r$, then the set $\{\x \in \R^K | \rank{A(\x)} < r\}$ has zero measure. \end{lemma} \begin{proof} Recall that $\rank{A(\x)} \geq r$ iff there exists a non-zero $r \times r$ minor of $A(\x)$, which is polynomial in the entries of $A(\x)$, and so it is polynomial in $\x$ as well. Let $c = {M \choose r} \cdot {N \choose r}$ be the number of $r \times r$ minors of $A$, denote the minors by $\{f_i(\x)\}_{i=1}^c$, and define the polynomial function $f(\x) = \sum_{i=1}^c f_i(\x)^2$. It thus holds that $f(\x) = 0$ iff for all $i \in [c]$ it holds that $f_i(\x) = 0$, i.e. $f(\x) = 0$ iff $\rank{A(\x)} < r$. Now, $f(\x)$ is a polynomial in the entries of $\x$, and so it either vanishes on a set of zero measure, or it is the zero polynomial~(see \citet{caron2005zero} for proof). Since we assumed that there exists $\x \in \R^K$ s.t. $\textrm{rank}(A(\x)) \geq r$, the latter option is not possible. \end{proof} Following the work of~\citet{expressive_power}, our main proof relies on the following notations and facts: \begin{itemize} \item We denote by $[\A]$ the matricization of an $N$-order tensor $\A$ (for simplicity, $N$ is assumed to be even), where rows and columns correspond to odd and even modes, respectively. Specifically, if $\A \in \R^{M_1 \times \cdots \times M_N}$, the matrix $[\A]$ has $M_1 \cdot M_3 \cdot \ldots \cdot M_{N-1}$ rows and $M_2 \cdot M_4 \cdot \ldots \cdot M_N$ columns, rearranging the entries of the tensor such that $\A_{d_1,\ldots,d_N}$ is stored in row index $1 + \sum_{i=1}^{\nicefrac{N}{2}}(d_{2i-1} - 1) \prod_{j=i+1}^{\nicefrac{N}{2}} M_{2j-1}$ and column index $1 + \sum_{i=1}^{\nicefrac{N}{2}}(d_{2i} - 1) \prod_{j=i+1}^{\nicefrac{N}{2}} M_{2j}$. Additionally, the matricization is a linear operator, i.e. for all scalars $\alpha_1,\alpha_2$ and tensors $\A_1,\A_2$ with the same order and dimensions in every mode, it holds that $[\alpha_1 \A_1 + \alpha_2 \A_2] = \alpha_1[\A_1] + \alpha_2 [\A_2]$. \item The relation between the Kronecker product (denoted by $\odot$) and the tensor product (denoted by $\otimes$) is given by $[\A \otimes \B] = [\A] \odot [\B]$. \item For any two matrices $A$ and $B$, it holds that $\rank{A \odot B} = \rank{A} \cdot \rank{B}$.
\item Let $Z$ be the CP-rank of $\A$; then it holds that $\rank{[\A]}~\leq~Z$ (see~\citep{expressive_power} for proof). \end{itemize} \begin{proof}[Proof of theorem~\ref{thm:tensor_rank}] Stemming from the above stated facts, to show that the CP-rank of $\A^y$ is at least $r^{\nicefrac{N}{2}}$, it is sufficient to examine its matricization $[\A^y]$ and prove that $\rank{[\A^y]}\geq~r^{\nicefrac{N}{2}}$. Notice from the construction of $[\A^y]$, according to the recursive formula of the HT-decomposition, that its entries are polynomial in the parameters of the decomposition, that its dimensions are $M^{\nicefrac{N}{2}}$ each, and that $1 \leq r^{\nicefrac{N}{2}} \leq M^{\nicefrac{N}{2}}$. In accordance with the discussion on the measure of simplex spaces, for each vector parameter $\aaa^{l,j,\gamma} \in \triangle^{r_{l-1} - 1}$, we instead examine its projection $\tilde{\aaa}^{l,j,\gamma} = p(\aaa^{l,j,\gamma}) \in \R^{r_{l-1}-1}$, and notice that $p^{-1}(\tilde{\aaa}^{l,j,\gamma})$ is a polynomial mapping\footnote{As we mentioned earlier, $p$ is invertible only over $p(\triangle^k)$, for which its inverse is given by $p^{-1}(x_1,\ldots,x_k) = (x_1,\ldots,x_k,1-\sum_{i=1}^k x_i)$. However, to simplify the proof and notations, we use $p^{-1}$ as defined here over the entire space $\R^{k}$, even where it does not serve as the inverse of~$p$.} w.r.t. $\tilde{\aaa}^{l,j,\gamma}$. Thus, $[\A^y]$ is a polynomial mapping w.r.t. the projected parameters $\{\tilde{\aaa}^{l,j,\gamma}\}_{l,j,\gamma}$, and using lemma~\ref{lemma:rank_everywhere} it is sufficient to show that there exists a set of parameters for which $\rank{[\A^y]} \geq r^{\nicefrac{N}{2}}$. Denoting for convenience $\phi^{L,1,1}:=\A^y$ and $r_L=1$, we will construct by induction over $l=1,...,L$ a set of parameters, $\{\aaa^{l,j,\gamma}\}_{l,j,\gamma}$, for which the ranks of the matrices $\{[\phi^{l,j,\gamma}]\}_{j\in[\nicefrac{N}{2^l}],\gamma\in[r_l]}$ are at least $r^{\nicefrac{2^l}{2}}$, while enforcing the simplex constraints on the parameters. Moreover, we will construct these parameters s.t. $\aaa^{l,j,\gamma} = \aaa^{l,\gamma}$, thus proving both the ``unshared'' and ``shared'' cases. For the case $l=1$ we have: $$\phi^{1,j,\gamma}= \sum_{\alpha=1}^{r_0} a_\alpha^{1,j,\gamma} \aaa^{0,2j-1,\alpha} \otimes \aaa^{0,2j,\alpha}$$ and let $a^{1,j,\gamma}_\alpha = \frac{1_{\alpha \leq r}}{r}$ and $a^{0,j,\alpha}_i = 1_{\alpha = i}$ for all $i,j,\gamma$ and $\alpha \leq M$, and $a^{0,j,\alpha}_i = 1_{i=1}$ for all $i$ and $\alpha > M$, and so $$[\phi^{1,j,\gamma}]_{i,j} = \begin{cases} \nicefrac{1}{r} & i = j \wedge i \leq r \\ 0 & \text{otherwise} \end{cases} $$ which means $\rank{[\phi^{1,j,\gamma}]} = r$, while preserving the simplex constraints, which proves our inductive hypothesis for $l=1$. Assume now that $\rank{[\phi^{l-1,j',\gamma'}]} \geq r^{\nicefrac{2^{l-1}}{2}}$ for all $j'\in[\nicefrac{N}{2^{l-1}}]$ and $\gamma'\in[r_{l-1}]$. For some specific choice of $j\in[\nicefrac{N}{2^l}]$ and $\gamma\in[r_l]$ we have: \begin{align*} &&\phi^{l,j,\gamma} = \sum_{\alpha=1}^{r_{l-1}} a_\alpha^{l,j,\gamma} \phi^{l-1,2j-1,\alpha} \otimes \phi^{l-1,2j,\alpha} \\ &&\implies [\phi^{l,j,\gamma}] = \sum_{\alpha=1}^{r_{l-1}} a_\alpha^{l,j,\gamma} [\phi^{l-1,2j-1,\alpha}] \odot [\phi^{l-1,2j,\alpha}] \end{align*} Denote $M_\alpha := [\phi^{l-1,2j-1,\alpha}] \odot [\phi^{l-1,2j,\alpha}]$ for $\alpha=1,...,r_{l-1}$.
By our inductive assumption, and by the general property $\rank{A \odot B}~=~\rank{A}~\cdot~\rank{B}$, we have that the ranks of all matrices $M_\alpha$ are at least $r^{\nicefrac{2^{l-1}}{2}}\cdot r^{\nicefrac{2^{l-1}}{2}}=r^{\nicefrac{2^l}{2}}$. Writing $[\phi^{l,j,\gamma}] = \sum_{\alpha=1}^{r_{l-1}} a_\alpha^{l,j,\gamma} \cdot M_\alpha$, and noticing that $\{M_\alpha\}$ do not depend on $\aaa^{l,j,\gamma}$, we simply pick $a^{l,j,\gamma}_\alpha = 1_{\alpha = 1}$, and thus $[\phi^{l,j,\gamma}] = M_1$, which is of rank at least $r^{\nicefrac{2^l}{2}}$. This completes the proof of the theorem. \end{proof} From the perspective of ConvACs with simplex constraints, theorem~\ref{thm:tensor_rank} leads to the following corollary: \begin{corollary} Assume the mixing components $\mathcal{M}~=~\{f_i(\x)~\in~L^2(\R^s) \cap L^1(\R^s)\}_{i=1}^M$ are square integrable\footnote{It is important to note that most commonly used distribution functions are square integrable, e.g. most members of the exponential family such as the Gaussian distribution.} probability density functions, which form a linearly independent set. Consider a deep ConvAC model with simplex constraints of polynomial size whose parameters are drawn at random from some continuous distribution. Then, with probability~1, the distribution realized by this network requires exponential size in order to be realized (or approximated w.r.t. the $L^2$ distance) by the shallow ConvAC model with simplex constraints. The claim holds regardless of whether the parameters of the deep model are shared or not. \end{corollary} \begin{proof} Given a coefficient tensor $\A$, the CP-rank of $\A$ is a lower bound on the number of channels (of its next-to-last layer) required to represent that tensor by the ConvAC following the CP factorization. Additionally, since the mixing components are linearly independent, their products $\{\prod_{i=1}^N f_i(\x_i) | f_i \in \mathcal{M}\}$ are linearly independent as well, which entails that any distribution representable by the generative variant of ConvAC with mixing components $\mathcal{M}$ has a unique coefficient tensor $\A$. From theorem~\ref{thm:tensor_rank}, the set of parameters of a deep ConvAC model (under the simplex constraints) with a coefficient tensor of polynomial CP-rank~--~the requirement for a polynomially-sized shallow ConvAC model with simplex constraints realizing that same distribution exactly~--~forms a set of measure zero. It is left to prove that not only is it impossible to exactly represent a distribution whose coefficient tensor has exponential CP-rank by a polynomially-sized shallow model, but that it is also impossible to approximate it. This follows directly from lemma~7 in appendix~B of~\citet{expressive_power}, as our case meets the requirement of that lemma. \end{proof} \section{Proof for the Optimality of Marginalized Bayes Predictor}\label{app:mbayes_proof} In this section we give short proofs for the claims from sec.~\ref{sec:missing_data} on the optimality of the marginalized Bayes predictor under a missing-at-random~(MAR) distribution when the missingness mechanism is unknown, as well as in the general case where no additional assumptions are made. In addition, we present a counter-example proving that classification through data imputation leads to suboptimal performance. We begin by introducing several notations that augment the notations already introduced in the body of the article.
Given a specific mask realization $\m \in \{0,1\}^s$, we use the following notations to denote partial assignments to the random vector $\X$. For the observed indices of $\X$, i.e. the indices for which $m_i = 1$, we denote a partial assignment by $\X \setminus \m = \x_o$, where $\x_o \in \R^{d_o}$ is a vector of length $d_o$ equal to the number of observed indices. Similarly, we denote by $\X \cap \m = \x_m$ a partial assignment to the missing indices according to $\m$, where $\x_m \in \R^{d_m}$ is a vector of length $d_m$ equal to the number of missing indices. As an example of the notation, for given realizations $\x \in \R^s$ and $\m \in \{0,1\}^s$, we defined in sec.~\ref{sec:missing_data} the event $o(\x,\m)$, which using current notation is marked by the partial assignment $\X \setminus \m = \x_o$ where $\x_o$ matches the observed values of the vector $\x$ according to $\m$. With the above notations in place, we move on to prove claim~\ref{claim:optimal_rule}, which describes the general solution to the optimal prediction rule given both the data and missingness distributions, and without adding any additional assumptions. \begin{proof}[Proof of claim~\ref{claim:optimal_rule}] Fix an arbitrary prediction rule $h$. We will show that $L(h^*) \leq L(h)$, where $L$ is the expected 0-1 loss. \begin{align*} &1 - L(h) {=} E_{(\x,\m,y)\sim(\X,\M,\Y)}[1_{h(\x \odot \m) = y}] \\ &{=} {\sum_{\m \in \{0,1\}^s}} {\sum_{y \in [k]}} {\int_{\R^s}} \PP(\M{=}\m, \X{=}\x, \Y{=}y) 1_{h(\x {\odot} \m) {=} y} d\x \\ &{=} {\sum_{\m \in \{0,1\}^s}} {\sum_{y \in [k]}} {\int_{\R^{d_o}}} {\int_{\R^{d_m}}} \\ &\phantom{{=}}\PP(\M{=}\m, \X{\setminus}\m {=} \x_o, \X{\cap}\m {=} \x_m, \Y{=}y) 1_{h(\x {\otimes} \m) {=} y} d\x_o d\x_m \\ &{=_1} {\sum_{\m \in \{0,1\}^s}} {\sum_{y \in [k]}} {\int_{\R^{d_o}}} 1_{h(\x {\odot} \m) {=} y} d\x_o \\ &\phantom{{=_1}}{\int_{\R^{d_m}}} \PP(\M{=}\m, \X{\setminus}\m {=} \x_o, \X{\cap}\m {=} \x_m, \Y{=}y) d\x_m \\ &{=_2} {\sum_{\m \in \{0,1\}^s}} {\sum_{y \in [k]}} {\int_{\R^{d_o}}} 1_{h(\x {\odot} \m) {=} y} \PP(\M{=}\m, \X{\setminus}\m{=}\x_o, \Y{=}y) d\x_o \\ &{=_3} {\sum_{\m \in \{0,1\}^s}} {\int_{\R^{d_o}}} \PP(\X{\setminus}\m{=}\x_o) {\sum_{y \in [k]}} 1_{h(\x {\odot} \m) {=} y} \PP(\Y{=}y | \X{\setminus}\m{=}\x_o) \\ &\phantom{{=_3} }\PP(\M{=}\m | \X{\setminus}\m{=}\x_o, \Y{=}y) d\x_o \\ &{\leq_4} {\sum_{\m \in \{0,1\}^s}} {\int_{\R^{d_o}}} \PP(\X{\setminus}\m{=}\x_o) {\sum_{y \in [k]}} 1_{h^*(\x {\odot} \m) {=} y} \PP(\Y{=}y | \X{\setminus}\m{=}\x_o) \\ &\phantom{\leq_4}\PP(\M{=}\m | \X{\setminus}\m{=}\x_o, \Y{=}y) d\x_o \\ &{=} 1- L(h^*) \end{align*} Where (1) is because the output of $h(\x \odot \m)$ is independent of the missing values, (2) by marginalization, (3) by conditional probability definition and (4) because by definition $h^*(\x \odot \m)$ maximizes the expression $\PP(\Y{=}y | \X{\setminus}\m{=}\x_o) \PP(\M{=}\m | \X{\setminus}\m{=}\x_o, \Y{=}y)$ w.r.t. the possible values of $y$ for fixed vectors~$\m$ and~$\x_o$. Finally, by replacing integrals with sums, the proof holds exactly the same when instances ($\X$) are discrete. \end{proof} We now continue and prove corollary~\ref{corollary:mar}, a direct implication of claim~\ref{claim:optimal_rule} which shows that in the MAR setting, the missingness distribution can be ignored, and the optimal prediction rule is given by the marginalized Bayes predictor. 
\begin{proof}[Proof of corollary~\ref{corollary:mar}] Using the same notation as in the previous proof, and denoting by $\x_o$ the partial vector containing the observed values of $\x \odot \m$, the following holds: \begin{align*} &\PP(\M{=}\m|o(\x,\m), \Y{=}y) := \PP(\M{=}\m|\X {\setminus} \m {=} \x_o, \Y{=}y) \\ &{=} \int_{\R^{d_m}} \PP(\M{=}\m, \X \cap \m {=} \x_m | \X {\setminus} \m {=} \x_o, \Y{=}y) d\x_m \\ &{=} \int_{\R^{d_m}} \PP(\X {\cap} \m {=} \x_m | \X {\setminus} \m {=} \x_o, \Y{=}y)\\ &\phantom{{=}} \cdot \PP(\M{=}\m | \X {\cap} \m {=} \x_m, \X {\setminus} \m {=} \x_o, \Y{=}y) d\x_m \\ &{=_1} \int_{\R^{d_m}} \PP(\X {\cap} \m {=} \x_m | \X {\setminus} \m {=} \x_o, \Y{=}y) \\ &\phantom{=_1} \cdot \PP(\M{=}\m | \X {\cap} \m {=} \x_m, \X {\setminus} \m {=} \x_o) d\x_m \\ &{=_2} \int_{\R^{d_m}} \PP(\X {\cap} \m {=} \x_m | \X {\setminus} \m {=} \x_o, \Y{=}y) \cdot \PP(\M{=}\m | \X {\setminus} \m {=} \x_o) d\x_m \\ &{=} \PP(\M{=}\m | \X {\setminus} \m {=} \x_o) {\int_{\R^{d_m}}} \PP(\X {\cap} \m {=} \x_m | \X {\setminus} \m {=} \x_o, \Y{=}y) d\x_m \\ &{=} \PP(\M{=}\m | o(\x,\m)) \end{align*} Where $(1)$ is due to the independence assumption of the events $\Y=y$ and $\M=\m$ conditioned on $\X=\x$, while noting that $(\X \setminus \m = x_o) \wedge (\X \cap \m = x_m)$ is a complete assignment of $\X$. $(2)$ is due to the MAR assumption, i.e. that for a given $\m$ and $\x_o$ it holds for all $\x_m \in \R^{d_m}$: $$\PP(\M{=}\m|\X {\setminus} \m {=} \x_o, \X {\cap} \m {=} \x_m) = \PP(\M{=}\m|\X {\setminus} \m {=} \x_o)$$ We have shown that $\PP(\M{=}\m|o(\x,\m), \Y=y)$ does not depend on $y$, and thus does not affect the optimal prediction rule in claim~\ref{claim:optimal_rule}. It may therefore be dropped, and we obtain the marginalized Bayes predictor. \end{proof} Having proved that in the MAR setting, classification through marginalization leads to optimal performance, we now move on to show that the same is not true for classification through data-imputation. Though there are many methods to perform data-imputation, i.e. to complete missing values given the observed ones, all of these methods can be seen as the solution of the following optimization problem, or more typically its approximation: \begin{align*} g(\x \odot \m) = \argmax_{\x' \in \R^s \wedge \forall i: m_i = 1 \rightarrow x'_i = x_i} \PP(\X=\x') \end{align*} Where $g(\x \odot \m)$ is the most likely completion of $\x \odot \m$. When data-imputation is carried out for classification purposes, one is often interested in data-imputation conditioned on a given class $Y=y$, i.e.: \begin{align*} g(\x \odot \m; y) = \argmax_{\x' \in \R^s \wedge \forall i: m_i = 1 \rightarrow x'_i = x_i} \PP(\X=\x'|\Y=y) \end{align*} Given a classifier $h:\R^s \to [K]$ and an instance $\x$ with missing values according to $\m$, classification through data-imputation is simply the result of applying $h$ on the output of $g$. When $h$ is the optimal classifier for complete data, i.e.~the Bayes predictor, we end up with one of the following prediction rules: \begin{align*} \textrm{Unconditional:} &\, h(\x \odot \m) = \argmax_y \PP(\Y = y | \X = g(\x \odot \m))\\ \textrm{Conditional:} & \, h(\x \odot \m) = \argmax_y \PP(\Y = y | \X = g(\x \odot \m; y)) \end{align*} \begin{claim} \label{claim:data_imp_subopt} There exists a data distribution $\D$ and MAR missingness distribution $\Q$ s.t. 
the accuracy of classification through data-imputation is almost half the accuracy of the optimal marginalized Bayes predictor, with an absolute gap of more than $33$ percentage points. \end{claim} \begin{proof} For simplicity, we will give an example for a discrete distribution over the binary set $\X~{\times}~\Y~{=}~\{0,1\}^2~{\times}~\{0,1\}$. Let $1~{>}~\epsilon~{>}~0$ be some small positive number, and we define $\D$ according to table~\ref{table:counter_example}, where each triplet $(x_1,x_2,y) \in \X{\times}\Y$ is assigned a positive weight, which through normalization defines a distribution over $\X{\times}\Y$. The missingness distribution $\Q$ is defined s.t. $P_\Q(M_1 = 1, M_2 = 0 | X = \x) = 1$ for all $\x \in \X$, i.e. $X_1$ is always observed and $X_2$ is always missing, which is a trivial MAR distribution. Given the above data distribution $\D$, we can easily calculate the exact accuracy of the optimal data-imputation classifier and the marginalized Bayes predictor under the missingness distribution $\Q$, as well as the standard Bayes predictor under full-observability. First notice that whether we apply conditional or unconditional data-imputation, and whether $X_1$ is equal to $0$ or $1$, the completion will always be $X_2 = 1$ and the predicted class will always be $Y=1$. Since the data-imputation classifiers always predict the same class $Y=1$ regardless of their input, the probability of success is simply the probability $P(Y=1) = \frac{1 + \epsilon}{3}$ (for $\epsilon = 10^{-4}$ it equals approximately $33.337\%$). Similarly, the marginalized Bayes predictor always predicts $Y=0$ regardless of its input, and so its probability of success is $P(Y = 0) = \frac{2 - \epsilon}{3}$ (for $\epsilon = 10^{-4}$ it equals approximately $66.663\%$), which is almost double the accuracy achieved by the data-imputation classifier. Additionally, notice that the marginalized Bayes predictor achieves almost the same accuracy as the Bayes predictor under full-observability, which equals exactly $\frac{2}{3}$. \end{proof} \section{Efficient Marginalization with Tensorial Mixture Models} \label{sec:missing_data:margin} As discussed above, with generative models optimal classification under missing data (in the MAR setting) is oblivious to the specific missingness distribution. However, it requires tractable marginalization over missing values. In this section we show that TMMs bring forth extremely efficient marginalization, requiring only a single forward pass through the corresponding ConvAC. Recall from sec.~\ref{sec:model} and~\ref{sec:training} that a TMM classifier realizes the following form: \begin{align} P(\x_1,\ldots,\x_N|Y{=}y) &= \sum\nolimits_{d_1,\ldots,d_N}^M P(d_1,\ldots,d_N|Y{=}y) \prod\nolimits_{i=1}^{N} P(\x_i | d_i; \theta_{d_i}) \label{eq:mc_tmm} \end{align} Suppose now that only the local structures $\x_{i_1}\ldots\x_{i_V}$ are observed, and we would like to marginalize over the rest. Integrating eq.~\ref{eq:mc_tmm} gives: \begin{align*} P(\x_{i_1},\ldots,\x_{i_V}|Y{=}y) &= \sum\nolimits_{d_1,\ldots,d_N}^M P(d_1,\ldots,d_N|Y{=}y) \prod\nolimits_{v=1}^{V} P(\x_{i_v} | d_{i_v}; \theta_{d_{i_v}}) \end{align*} from which it is evident that the same network used to compute $P(\x_1,\ldots,\x_N|Y{=}y)$, can be used to compute $P(\x_{i_1},\ldots,\x_{i_V}|Y{=}y)$~--~all it requires is a slight adaptation of the representation layer. 
Namely, the latter would represent observed values through the usual likelihoods, whereas missing (marginalized) values would now be represented via constant ones: \begin{align*} \textrm{rep}(i,d) &= \begin{cases} \qquad1 & \textrm{, $\x_i$ is missing (marginalized)} \\ P(\x_i|d;\Theta) & \textrm{, $\x_i$ is visible (not marginalized)} \end{cases} \end{align*} More generally, to marginalize over individual coordinates of the local structure $\x_i$, it is sufficient to replace $\textrm{rep}(i,d)$ by its respective marginalized mixing component. To conclude, with TMMs marginalizing over missing values is just as efficient as plain inference~--~requires only a single pass through the corresponding network. Accordingly, the marginalized Bayes predictor (eq.~\ref{eq:mbayes}) is realized efficiently, and classification under missing data (in the MAR setting) is optimal (under generative assumption), regardless of the missingness distribution. \section{Extended Discussion on Generative Models Based on Neural Networks} \label{app:extended_related_works} There are many generative models realized through neural networks, and convolutional networks in particular. Of these models, one of the most successful to date is the method of Generative Adversarial Networks~\citep{Goodfellow:2014td}, where a network is trained to generate instances from the data distribution, through a two-player mini-max game. While there are numerous applications for learning to generate data points, e.g. inpainting and super-resolution, it cannot be used for computing the likelihood of the data. Other generative networks do offer inference, but only approximate. Variational Auto-Encoders~\citep{Kingma:2013tz} use a variational lower-bound on the likelihood function. GSNs~\citep{Bengio:2013vx}, DPMs~\citep{SohlDickstein:2015vq} and MPDBMs~\citep{Goodfellow:2013vm} are additional methods along this line. The latter is especially noteworthy for being a generative classifier that can approximate the marginal likelihoods conditioned on each class, and for being tested on classification under missing data. Some generative neural networks are capable of tractable inference, but not of tractable marginalization. \citet{Dinh:2014vu} suggest a method for designing neural networks that realize an invertible transformation from a simple distribution to the data distribution. Inverting the network brings forth tractable inference, yet partial integration of its density function is still intractable. Another popular method for tractable inference, central to both PixelRNN~\citep{vandenOord:2016um} and NADE~\citep{JMLR:v17:16-272}, is the factorization of the probability distribution according to $\PP(x_1,\ldots,x_d) = \prod_{i=1}^d \PP(x_i | x_{i-1},\ldots,x_1)$, and realization of $\PP(x_i|x_{i-1},\ldots,x_1)$ as a neural network. Based on this construction, certain marginal distributions are indeed tractable to compute, but most are not. Orderless-NADE partially addresses this issue by using ensembles of models over different orderings of its input. However, it can only estimate the marginal distributions, and has no classifier analogue that can compute class-conditional marginal likelihoods, as required for classification under missing data. 
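In contrast, the representation-layer marginalization of sec.~\ref{sec:missing_data:margin} handles any missingness pattern with a single forward pass. As a sanity check, the following minimal sketch (ours; the ConvAC forward pass is emulated by an explicit sum over a deliberately tiny latent space, and all sizes are illustrative) computes a class-conditional marginal likelihood by placing ones at the missing entries of the representation layer:
\begin{verbatim}
import numpy as np
from itertools import product
from scipy.stats import norm

rng = np.random.default_rng(0)
N, M = 3, 2                                   # 3 local structures, 2 mixing components
prior = rng.dirichlet(np.ones(M ** N)).reshape((M,) * N)   # P(d_1,...,d_N | Y=y)
mus, sigmas = rng.normal(size=M), np.ones(M)               # 1-d Gaussian components

x = np.array([0.3, -1.2, 0.7])
missing = np.array([False, True, False])      # x_2 is marginalized over

# Representation layer: likelihoods for observed entries, ones for missing ones.
rep = np.ones((N, M))
for i in range(N):
    if not missing[i]:
        rep[i] = norm.pdf(x[i], loc=mus, scale=sigmas)

# "Forward pass": contract the prior tensor with the representation layer.
lik = sum(prior[d] * np.prod([rep[i, d[i]] for i in range(N)])
          for d in product(range(M), repeat=N))
print(lik)    # P(x_1, x_3 | Y=y), with x_2 integrated out
\end{verbatim}
For a deep TMM the same quantity is obtained by feeding these ones into the representation layer of the corresponding ConvAC, so marginalization costs exactly one forward pass.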
\section{Image Generation and Network Visualization}\label{app:exp:vis} The graphical model perspective of our models allows us not only to generate random instances from the distribution, but also to generate the most likely patches for each neuron in the network, effectively explaining its role in the classification process. We remind the reader that every neuron in the network corresponds to a possible assignment of a latent variable in the graphical model. By looking for the most likely assignments of each of its child nodes in the graphical tree model, we can generate a patch that describes that neuron. Unlike previously suggested methods for visualizing neural networks~\citep{Zeiler:2014fra}, which often rely on brute-force search or on solving an optimization problem to find the most likely image, our method emerges naturally from the probabilistic interpretation of our model. In fig.~\ref{fig:fulldigits} we show conditional samples generated for each digit, while in fig.~\ref{fig:visnet} we show a visualization of the top-level layers of the network, where each small patch corresponds to a different neuron in the network. The common wisdom of how ConvNets work is that simple low-level features are composed together to create more and more complex features, where each subsequent layer denotes features of higher abstraction~--~the visualization of our network clearly demonstrates this hypothesis to be true in our case, showing small strokes iteratively being composed into complete digits. \section{Detailed Description of the Experiments}\label{app:exp_details} Experiments are meaningful only if they can be reproduced by other proficient individuals. Providing sufficient details to enable others to replicate our results is the goal of this section. We hope to accomplish this by making our code public, as well as by documenting our experiments to a degree that allows their reproduction from scratch. Our complete implementation of the models presented in this paper, as well as our modifications to other open-source projects and scripts used in the process of conducting our experiments, are available at our Github repository: \githuburl{Generative-ConvACs}. We additionally invite readers to contact the authors if they deem the following details insufficient for reproducing our results. \subsection{Description of Methods} In the following we give concise descriptions of each classification method we have used in our experiments. The results of the experiment on MP-DBM~\citep{Goodfellow:2013vm} were taken directly from the respective paper and the experiment was not conducted by us, hence we do not cover it in this section. We direct the reader to that article for exact details on how to reproduce their results. \subsubsection{Robust Linear Classifier} In~\cite{Dekel:2008jb}, binary linear classifiers were trained by formulating their optimization as a quadratic program under the constraint that some of the features could be deleted, i.e. their original values changed to zero. While the original source code was never published, the authors have kindly agreed to share their code with us, which we used to reproduce their results, but on larger datasets. The algorithm has only a couple of hyper-parameters, which were chosen by grid search through a cross-validation process. For details on the exact protocol for testing binary classifiers on missing data, please see sec.~\ref{app:sec:exp:binary}.
\subsubsection{K-Nearest Neighbors} K-Nearest Neighbors~(KNN) is a classical machine learning algorithm used for both regression and classification tasks. Its underlying mechanism is finding the $k$~nearest examples (called neighbors) from the training set, $(\x_1,y_1),\ldots,(\x_k,y_k) \in S$, according to some metric function~$d(\cdot,\cdot):\X \times \X \to \R_+$, after which a summarizing function~$f$ is applied to the targets of the $k$~nearest neighbors to produce the output $y^* = f(y_1,\ldots,y_k)$. When KNN is used for classification, $f$~is typically the majority voting function, returning the class most common among the $k$ nearest neighbors. In our experiments we use KNN for classification under missing data, where the training set consists of complete examples with no missing data, but at classification time the inputs have missing values. Given an input with missing values $\x \odot \m$ and an example $\x'$ from the training set, we use a modified Euclidean distance metric, where we compute the distance only over the non-missing coordinates of $\x$, i.e. the metric is defined by~$d(\x', \x~\odot~\m)~=~\sum_{i:m_i = 1} \left(x'_i - x_i \right)^2$. Through a process of cross-validation we have chosen $k=5$ for all of our experiments. Our implementation of KNN is based on the popular \emph{scikit-learn} python library~\citep{scikit-learn}. \subsubsection{Convolutional Neural Networks} The most widespread and successful discriminative models nowadays are Convolutional Neural Networks~(ConvNets). Standard ConvNets are represented by a computational graph consisting of different kinds of nodes, called layers, with convolution-like operators applied to their inputs, followed by a non-linear point-wise activation function, e.g. $\max(0,x)$, known as ReLU. For our experiments on MNIST, both with and without missing data, we have used the LeNet ConvNet architecture~\citep{LeCun:1998hy} that is bundled with Caffe~\citep{Jia:2014up}, trained for 20,000 iterations using SGD with $0.9$ momentum and a $0.01$ base learning rate, which remained constant for 10,000 iterations, followed by a linear decrease to $0.001$ over another 5,000 iterations, followed by a linear decrease to a learning rate of $0$ over the remaining 5,000 iterations. The model also used $l_2$-regularization (also known as weight decay), whose coefficient was chosen through cross-validation for each experiment separately. No other modifications were made to the model or its training procedure. For our experiments on NORB, we have used an ensemble of 3 ConvNets, each using the following architecture: $5{\times}5$ convolution with 128 output channels, $3{\times}3$ max pooling with stride 2, ReLU activation, $5{\times}5$ convolution with 128 output channels, ReLU activation, dropout layer with probability 0.5, $3{\times}3$ average pooling with stride 2, $5{\times}5$ convolution with 256 output channels, ReLU activation, dropout layer with probability 0.5, $3{\times}3$ average pooling with stride 2, fully-connected layer with 768 output channels, ReLU activation, dropout layer with probability 0.5, and ends with a fully-connected layer with 5 output channels. The stereo images were represented as a two-channel input image when fed to the network. During training we have used data augmentation consisting of random scaling and rotation transforms.
The networks were trained for 40,000 iterations using SGD with $0.99$ momentum and a $0.001$ base learning rate, which remained constant for 30,000 iterations, followed by a linear decrease to $0.0001$ over 6,000 iterations, followed by a linear decrease to a learning rate of $0$ over the remaining 4,000 iterations. The model also used $0.0001$ weight decay for additional regularization. When ConvNets were trained on images containing missing values, we passed the network the original image with missing values zeroed out, and an additional binary image as a separate channel, containing $1$ for missing values at the same spatial position, and $0$ otherwise -- this missing data format is sometimes known as \emph{flag data imputation}. Other formats for representing missing values were tested (e.g. just using zeros for missing values); however, the above scheme performed significantly better than the other formats. In our experiments, we assumed that the training set was complete and missing values were only present in the test set. In order to design ConvNets that are robust against specific missingness distributions, we have simulated missing values during training, sampling a different mask of missing values for each image in each mini-batch. As covered in sec.~\ref{sec:exp}, training ConvNets directly on simulated missingness distributions resulted in classifiers biased towards the specific distribution used in training, which performed worse on other distributions than ConvNets trained directly on those distributions. In addition to training ConvNets directly on missing data, we have also used them as the classifier for testing different data imputation methods, as described in the next section. \subsubsection{Classification Through Data Imputation} The most common method for handling missing data, while leveraging available discriminative classifiers, is through the application of \emph{data imputation}~--~an algorithm for the completion of missing values~--~and then passing the results to a classifier trained on the uncorrupted dataset. We have tested five different data imputation algorithms: \begin{itemize} \item Zero data imputation: replacing every missing value by zero. \item Mean data imputation: replacing every missing value by the mean value computed over the dataset. \item Generative data imputation: training a generative model and using it to complete the missing values by finding the most likely instance that coincides with the observed values, i.e. solving the following \begin{align*} g(\x \odot \m) &= \argmax_{\x' \in \R^s \wedge \forall i, m_i = 1 \rightarrow x'_i = x_i} P(X=\x') \end{align*} We have tested the following generative models: \begin{itemize} \item Generative Stochastic Networks~(GSN)~\citep{Bengio:2013vx}: We have used their original source code from \url{https://github.com/yaoli/GSN}, and trained their example model on MNIST for 1000 epochs. Whereas in the original article they have tested completing only the left or right side of a given image, we have modified their code to support general masks. Our modified implementation can be found at \githuburl{GSN}. \item Non-linear Independent Components Estimation~(NICE)~\citep{Dinh:2014vu}: We have used their original source code from \url{https://github.com/laurent-dinh/nice}, and trained it on MNIST using their example code without changes. Similarly to our modification to the GSN code, here too we have adapted their code to support general masks over the input.
Additionally, their original inpainting code required 110,000 iterations, which we have reduced to just 8,000 iterations, since the effect on classification accuracy was marginal. For the NORB dataset, we have used their CIFAR10 example, with a lower learning rate of $10^{-4}$. Our modified code can be found at \githuburl{nice}. \item Diffusion Probabilistic Models~(DPM)~\citep{SohlDickstein:2015vq}: We have used their original source code from \url{https://github.com/Sohl-Dickstein/Diffusion-Probabilistic-Models}, and trained it on MNIST using their example code without changes. Similarly to our modifications to GSN, we have added support for a general mask of missing values, but other than that kept the rest of the inpainting parameters unchanged. For NORB we have used the same model as for MNIST. We tried using their CIFAR10 example for NORB; however, it produced exceptions during training. Our modified code can be found at \githuburl{Diffusion-Probabilistic-Models}. \end{itemize} \end{itemize} \subsubsection{Tensorial Mixture Models} For a complete theoretical description of our model please see the body of the article. Our models were implemented by performing all intermediate computations in log-space, using numerically aware operations. In practice, this meant our models were realized by the SimNets architecture~\citep{simnets1, simnets2}, which consists of similarity layers representing Gaussian distributions, MEX layers representing weighted sums performed on log-space inputs and outputs, as well as standard pooling operations. The learned parameters of the MEX layers are called \emph{offsets}, which represent the weights of the weighted sum, stored in log-space. The parameters of the MEX layers can optionally be shared between spatial regions, or alternatively left with no parameter sharing at all. Additionally, when used to implement our generative models, the offsets are normalized to have a soft-max~(i.e., $\log\left(\sum_i \exp(x_i)\right)$) of zero. The network architectures we have tested in this article consist of $M$ different Gaussian mixture components with diagonal covariance matrices over non-overlapping $2 \times 2$ patches of the input, implemented by a similarity layer as specified by the SimNets architecture, but with an added Gaussian normalization term. We first describe the architectures used for the MNIST dataset. For the CP-TMM model, we used $M=800$, and following the similarity layer is a $1 \times 1$~MEX layer with no parameter sharing over spatial regions and $10$ output channels. The model ends with a global sum pooling operation, followed by another $1 \times 1$ MEX layer with $10$ outputs, one for each class. The HT-TMM model starts with the similarity layer with $M=32$, followed by a sequence of four pairs of a $1 \times 1$~MEX layer followed by a $2 \times 2$ sum pooling layer, and after these pairs an additional $1 \times 1$~MEX layer lowering the output of the model to $10$ channels, matching the number of classes. The numbers of output channels of the MEX layers are as follows: 64-128-256-512-10. All the MEX layers in this network do not use parameter sharing, except the first MEX layer, which uses a repeated sharing pattern of $2 \times 2$ offsets that is analogous to a $2 \times 2$~convolution layer with stride $2$. Both models were trained with the losses described in sec.~\ref{sec:training}, using the Adam SGD variant for optimizing the parameters, with a base learning rate of $0.03$, and $\beta_1 = \beta_2 = 0.9$.
The models were trained for 25,000 iterations, where the learning rate was multiplied by $0.1$ after 20,000 iterations. For the NORB dataset, we have trained only the HT-TMM model, with $M=128$ for the similarity layer. The MEX layers use the same parameter sharing scheme as the one for MNIST, and the numbers of output channels of the MEX layers are as follows: 256-256-256-512-5. Training was identical to the MNIST models, with the exception of using 40,000 iterations instead of just 25,000. Additionally, we have used an ensemble of 4 models trained separately, each trained using a different generative loss weight (see below for more information). We have also used the same data augmentation methods (scaling and rotation) which were used in training the ConvNets for NORB in this article. The standard $L_2$ weight regularization~(sometimes known as weight decay) did not work well on our models, which led us to adapt it to better fit log-space weights, by minimizing $\lambda \sum_i \left(\exp\left(x_i\right)\right)^2$ instead of $\lambda \| \x \|_2^2~=~\lambda\sum_i x_i^2$, where the parameter $\lambda$ was chosen through cross-validation. Additionally, since even with large values of $\lambda$ our model was still overfitting, we have added another form of regularization in the form of \emph{random marginalization} layers. A random marginalization layer is similar in concept to dropout, but instead of zeroing activations completely at random, it chooses spatial locations at random and then zeros out the activations at those locations for all the channels. Under our model, zeroing all the activations in a layer at a specific location is equivalent to marginalizing over all the inputs in the receptive field of that location. We have used random marginalization layers in between all our layers during training, where the probability of zeroing out activations was chosen through cross-validation for each layer separately. Though it might raise the concern that random marginalization layers could bias the results toward the missingness distributions we test on, in practice the addition of those layers only helped improve our results in cases where only a few pixels were missing. Finally, we wish to discuss a few optimization tricks which had a minor effect compared to the above, but were nevertheless very useful in achieving slightly better results. First, instead of directly optimizing the objective defined by eq.~\ref{eq:objective}, we add a smoothing parameter $\beta$ between the two terms, as follows: \begin{align*} \Theta^* &= \argmin_\Theta -\sum_{i=1}^{\abs{S}} \log \frac{e^{N_\Theta(X^{(i)};Y^{(i)})}}{\sum\nolimits_{y=1}^{K}e^{N_\Theta(X^{(i)};y)}} \\ &\phantom{=\argmin_\Theta}- \beta\sum_{i=1}^{\abs{S}} \log \sum_{y=1}^{K}e^{N_\Theta(X^{(i)};y)} \end{align*} Setting $\beta$ too low diminishes the generative capabilities of our models, while setting it too high diminishes the discriminative performance. Through cross-validation, we decided on the value $\beta=0.01$ for the models trained on MNIST, while for NORB we have used a different value of $\beta$ for each of the models, ranging in $\{0.01,0.1,0.5,1\}$. Second, we found that performance increased if we normalized the activations before applying the $1 \times 1$ MEX operations. Specifically, we calculate the soft-max over the channels for each spatial location, which we call the activation norm, and then subtract it from every respective activation.
After applying the MEX operation, we add back the activation norm. Though it might not be obvious at first, subtracting a constant from the input of a MEX operation and adding it back to its output does not change the mathematical operation. However, it does resolve the numerical issue of adding very large activations to very small offsets, which might result in a loss of precision. Finally, we apply our model to different translations of the input and then average the class predictions. Since our model can marginalize over inputs, we do not need to crop the original image, and can instead mask the unknown parts after translation as missing. Applying a similar trick to standard ConvNets on MNIST does not seem to improve their results. We believe this method is especially fitting for our model because it does not have a natural treatment of overlapping patches like ConvNets do, and because it is able to marginalize over missing pixels easily, not limiting it just to crop translations as is typically done. \subsection{Description of Experiments} In this section we give a detailed description of the protocol we have used during our experiments. \subsubsection{Binary Digit Classification under Feature Deletion Missing Data}\label{app:sec:exp:binary} This experiment focuses on the binary classification problem derived from MNIST by limiting the number of classes to two different digits at a time. We use the same non-zero feature deletion distribution as suggested by \citet{Globerson:2006jv}, i.e. for a given image we uniformly sample a set of $N$ non-zero pixels from the image (if the image has fewer than $N$ non-zero pixels, then all of its non-zero pixels are chosen), and replace their values with zeros. This type of missingness distribution falls under the MNAR type defined in sec.~\ref{sec:missing_data}. We test values of $N$ in $\{0, 25, 50, 75, 100, 125, 150\}$. For a given value of $N$, we train a separate classifier for each digit pair on a randomly picked subset of the dataset containing 300 images per digit (600 in total). During training we use a fixed validation set with 1000 images per digit. After picking the best classifier according to the validation set, the classifier is tested against a test set with 1000 images per digit, with missing values randomly chosen according to the value of $N$. This experiment is repeated 10 times for each digit pair, each time using a different subset for the training set and a new corrupted test set. After conducting all the different experiments, the accuracies are averaged for each value of $N$ and reported in table~\ref{table:exp_shamir}. \subsubsection{Multi-class Digit Classification under MAR Missing Data} This experiment focuses on the complete multi-class digit classification of the MNIST dataset, in the presence of missing data according to different missingness distributions. Under this setting, only the test set contains missing values, whereas the training set does not. We test two kinds of missingness distributions, both of which fall under the MAR type defined in sec.~\ref{sec:missing_data}. In the first kind, which we call \emph{i.i.d. corruption}, each pixel is missing with a fixed probability $p$. In the second kind, which we call \emph{missing rectangles corruption}, the positions of $N$ rectangles of width $W$ are chosen uniformly in the picture, where the rectangles can overlap one another.
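For concreteness, the following sketch shows one possible implementation (ours; the exact sampling code used in the experiments may differ in minor details, e.g. in how rectangles near the image border are handled) of these two missingness distributions:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def iid_corruption_mask(shape, p):
    """Each pixel is missing (mask value 0) independently with probability p."""
    return (rng.random(shape) >= p).astype(np.uint8)

def missing_rectangles_mask(shape, n_rects, width):
    """n_rects square patches of side `width` (we assume square patches),
    positioned uniformly inside the image; patches may overlap."""
    mask = np.ones(shape, dtype=np.uint8)
    height, w = shape
    for _ in range(n_rects):
        top = rng.integers(0, height - width + 1)
        left = rng.integers(0, w - width + 1)
        mask[top:top + width, left:left + width] = 0
    return mask

image = rng.random((28, 28))
m1 = iid_corruption_mask(image.shape, p=0.5)
m2 = missing_rectangles_mask(image.shape, n_rects=3, width=7)
observed = image * m1   # entries where the mask is 0 are treated as missing
\end{verbatim}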
During the training stage, the models to be tested must not be biased toward the specific missingness distributions we have chosen, and during the test stage, the same classifier is tested against all types of missingness distributions, without supplying it with the parameters or type of the missingness distribution it is tested against. This rule prevents the use of ConvNets trained on simulated missingness distributions. To demonstrate that the latter leads to biased classifiers, we conducted a separate experiment just for ConvNets, in which the previous rule is ignored and we train a separate ConvNet classifier for each type and parameter of the missingness distributions we have used. We then tested each of those ConvNets on all other missingness distributions; the results, shown in fig.~\ref{fig:convnet}, confirmed our hypothesis. \end{document}
Emergence of foveal image sampling from learning to attend in visual scenes
1611.09430
Table 2: Classification Error on Cluttered MNIST
[ "Sampling Lattice Model", "Dataset 1 (%)", "Dataset 2 (%)" ]
[ [ "Fixed Lattice", "11.8", "31.9" ], [ "Translation Only", "5.1", "24.4" ], [ "Translation and Zoom", "4.0", "24.1" ] ]
There is a significant drop in performance when the retinal sampling lattice is fixed and not learnable, confirming that the model is benefitting from learning the high-acuity region. The classification performance between the Translation Only and Translation and Zoom model is competitive. This supports the hypothesis that the functionality of a high acuity region with a low resolution periphery is similar to that of zoom.
\documentclass{article} % For LaTeX2e % professional-quality tables \title{Emergence of foveal image sampling from learning to attend in visual scenes} \author{Brian Cheung, Eric Weiss, Bruno Olshausen\\ Redwood Center\\ UC Berkeley\\ \texttt{\{bcheung,eaweiss,baolshausen\}@berkeley.edu}\\ } \newcommand{\fix}{\marginpar{FIX}} \newcommand{\new}{\marginpar{NEW}} \iclrfinalcopy % Uncomment for camera-ready version \begin{document} \maketitle \begin{abstract} We describe a neural attention model with a learnable retinal sampling lattice. The model is trained on a visual search task requiring the classification of an object embedded in a visual scene amidst background distractors using the smallest number of fixations. We explore the tiling properties that emerge in the model's retinal sampling lattice after training. Specifically, we show that this lattice resembles the eccentricity dependent sampling lattice of the primate retina, with a high resolution region in the fovea surrounded by a low resolution periphery. Furthermore, we find conditions where these emergent properties are amplified or eliminated providing clues to their function. \end{abstract} \section{Introduction} A striking design feature of the primate retina is the manner in which images are spatially sampled by retinal ganglion cells. Sample spacing and receptive fields are smallest in the fovea and then increase linearly with eccentricity, as shown in Figure \ref{fig:rgc_plot}. Thus, we have highest spatial resolution at the center of fixation and lowest resolution in the periphery, with a gradual fall-off in resolution as one proceeds from the center to periphery. The question we attempt to address here is, \emph{why} is the retina designed in this manner - i.e., how is it beneficial to vision? The commonly accepted explanation for this eccentricity dependent sampling is that it provides us with both high resolution and broad coverage of the visual field with a limited amount of neural resources. The human retina contains 1.5 million ganglion cells, whose axons form the sole output of the retina. These essentially constitute about 300,000 distinct ‘samples’ of the image due to the multiplicity of cell types coding different aspects such as ‘on’ vs. ‘off’ channels \citep{van1995information}. If these were packed uniformly at highest resolution (120 samples/deg, the Nyquist-dictated sampling rate corresponding to the spatial-frequencies admitted by the lens), they would subtend an image area spanning just 5x5 deg$^2$. Thus we would have high-resolution but essentially tunnel vision. Alternatively if they were spread out uniformly over the entire monocular visual field spanning roughly 150 deg$^2$ we would have wide field of coverage but with very blurry vision, with each sample subtending 0.25 deg (which would make even the largest letters on a Snellen eye chart illegible). Thus, the primate solution makes intuitive sense as a way to achieve the best of both of these worlds. However we are still lacking a quantitative demonstration that such a sampling strategy emerges as the optimal design for subserving some set of visual tasks. Here, we explore what is the optimal retinal sampling lattice for an (overt) attentional system performing a simple visual search task requiring the classification of an object. We propose a learnable retinal sampling lattice to explore what properties are best suited for this task. 
While evolutionary pressure has tuned the retinal configurations found in the primate retina, we instead utilize gradient descent optimization for our in-silico model by constructing a fully differentiable dynamically controlled model of attention. Our choice of visual search task follows a paradigm widely used in the study of overt attention in humans and other primates \citep{geisler2011models}. In many forms of this task, a single target is randomly located on a display among distractor objects. The goal of the subject is to find the target as rapidly as possible. \citet{itti2000saliency} propose a selection mechanism based on manually defined low level features of real images to locate various search targets. Here the neural network must learn what features are most informative for directing attention. While ‘neural attention’ models have been applied successfully to a variety of engineering applications \citep{bahdanau2014neural, jaderberg2015spatial, xu2015show, graves2014neural}, there has been little work in relating the properties of these attention mechanisms back to biological vision. An important property which distinguishes neural networks from most other neurobiological models is their ability to learn \emph{internal} (latent) features directly from data. But existing neural network models specify the input sampling lattice {\em a priori}. \citet{larochelle2010learning} employ an eccentricity dependent sampling lattice mimicking the primate retina, and \citet{mnih2014recurrent} utilize a multi scale ‘glimpse window' that forms a piece-wise approximation of this scheme. While it seems reasonable to think that these design choices contribute to the good performance of these systems, it remains to be seen if this arrangement emerges as the optimal solution. We further extend the learning paradigm of neural networks to the \emph{structural} features of the glimpse mechanism of an attention model. To explore emergent properties of our learned retinal configurations, we train on artificial datasets where the factors of variation are easily controllable. Despite this departure from biology and natural stimuli, we find our model learns to create an eccentricity dependent layout where a distinct central region of high acuity emerges surrounded by a low acuity periphery. We show that the properties of this layout are highly dependent on the variations present in the task constraints. When we depart from physiology by augmenting our attention model with the ability to spatially rescale or ‘zoom’ on its input, we find our model learns a more uniform layout which has properties more similar to the ‘glimpse window’ proposed in \citet{jaderberg2015spatial, gregor2015draw}. These findings help us to understand the task conditions and constraints in which an eccentricity dependent sampling lattice emerges. \section{Retinal Tiling in Neural Networks with Attention} Attention in neural networks may be formulated in terms of a differentiable feedforward function. This allows the parameters of these models to be trained jointly with backpropagation. Most formulations of visual attention over the input image assume some structure in the kernel filters. For example, the recent attention models proposed by \citet{jaderberg2015spatial, mnih2014recurrent, gregor2015draw, ba2014multiple} assume each kernel filter lies on a rectangular grid. To create a learnable retinal sampling lattice, we relax this assumption by allowing the kernels to tile the image independently. 
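The glimpse operation is made precise in the next subsection (eq.~\ref{eq:generic_attention} and eq.~\ref{eq:factored_kernel}); the following minimal NumPy sketch (ours; array names, sizes and the initialization are illustrative only and not the code used for the experiments) previews it, with each output feature computed from its own independently positioned, factorized Gaussian kernel:
\begin{verbatim}
import numpy as np

def gaussian(t, mu, sigma):
    return np.exp(-0.5 * ((t - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def glimpse(U, mu, sigma, s_c, s_z):
    """G[i] = sum_{n,m} U[n,m] N(m; mu_x[i], sig[i]) N(n; mu_y[i], sig[i]),
    with kernel centers and widths modulated by the control variables."""
    H, W = U.shape
    mu_adj = (s_c - mu) * s_z                 # translated / zoomed kernel centers
    sig_adj = sigma * s_z                     # rescaled kernel widths
    m, n = np.arange(W), np.arange(H)
    Kx = gaussian(m[None, :], mu_adj[:, 0:1], sig_adj[:, None])   # (kernels, W)
    Ky = gaussian(n[None, :], mu_adj[:, 1:2], sig_adj[:, None])   # (kernels, H)
    return np.einsum('nm,im,in->i', U, Kx, Ky)                    # one value per kernel

rng = np.random.default_rng(0)
U = rng.random((100, 100))                    # a 100x100 visual scene
# 12x12 lattice of learnable kernel offsets, initialized as a uniform grid.
xs = np.linspace(30, 70, 12)
grid = np.stack(np.meshgrid(xs, xs), axis=-1).reshape(-1, 2)
G = glimpse(U, mu=grid, sigma=np.full(144, 2.0),
            s_c=np.array([50.0, 50.0]), s_z=1.0)   # G has 144 entries
\end{verbatim}
Because these weighted sums are differentiable with respect to $\mu[i]$, $\sigma[i]$ and the control variables, both the lattice layout and the attention policy can be learned by backpropagation.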
\subsection{Generating a Glimpse} We interpret a glimpse as a form of routing where a subset of the visual scene $U$ is sampled to form a smaller output glimpse $G$. The routing is defined by a set of kernels $k[\bullet](s)$, where each kernel $i$ specifies which part of the input $U[\bullet]$ will contribute to a particular output $G[i]$. A control variable $s$ is used to control the routing by adjusting the position and scale of the entire array of kernels. With this in mind, many attention models can be reformulated into a generic equation written as \begin{equation} \label{eq:generic_attention} G[i] = \sum_{n}^{H} \sum_{m}^{W} U[n,m] k[m,n,i](s) \end{equation} where $m$ and $n$ index input pixels of $U$ and $i$ indexes output glimpse features. The pixels in the input image $U$ are thus mapped to a smaller glimpse $G$. \subsection{Retinal Glimpse} The center of each kernel filter, $\acute \mu[i]$, is calculated with respect to the control variables $s_c$ and $s_z$ and a learnable offset $\mu[i]$. The control variables specify the position and zoom of the entire glimpse. $\mu[i]$ and $\sigma[i]$ specify the position and spread, respectively, of an individual kernel $k[-,-,i]$. These parameters are learned during training with backpropagation. We describe how the control variables are computed in the next section. The kernels are thus specified as follows: \begin{align} \acute \mu[i] &= (s_{c} - \mu[i])s_{z} \\ \acute \sigma[i] &= \sigma[i] s_{z} \\ \label{eq:factored_kernel} k[m,n,i](s) &= \mathcal{N}(m; \acute \mu_{x}[i], \acute \sigma[i])\mathcal{N}(n; \acute \mu_{y}[i], \acute \sigma[i]) \end{align} We assume the kernel filters factorize between the horizontal $m$ and vertical $n$ dimensions of the input image. This factorization is shown in equation \ref{eq:factored_kernel}, where the kernel is defined as an isotropic gaussian $\mathcal{N}$. For each kernel filter, given a center $\acute \mu[i]$ and scalar variance $\acute \sigma[i]$, a two-dimensional gaussian is defined over the input image, as shown in Figure \ref{fig:gaussian_kernel_attention}. These gaussian kernel filters can be thought of as a simplified approximation to the receptive fields of retinal ganglion cells in primates \citep{van1995information}. While this factored formulation reduces the space of possible transformations from input to output, it can still form many different mappings from an input $U$ to an output $G$. Figure \ref{fig:generic_retina_abilities}B shows the possible windows by which an input image can be mapped to an output $G$. The yellow circles denote the central location of a particular kernel, while their size denotes the standard deviation. Each kernel maps to one of the outputs $G[i]$. Positional control $s_c$ can be considered analogous to the motor control signals which execute saccades of the eye, whereas $s_z$ would correspond to controlling a zoom lens in the eye (which has no counterpart in biology). In contrast, \textit{training} defines \textit{structural} adjustments to individual kernels, which include their position in the lattice as well as their variance. These adjustments are only possible during training and are fixed afterwards. Training adjustments can be considered analogous to the incremental adjustments in the layout of the retinal sampling lattice which occur over many generations, directed by evolutionary pressure in biology. \section{Recurrent Neural Architecture for Attention} A glimpse at a specific timepoint, $G_t$, is processed by a fully-connected recurrent network $f_{rnn}()$.
\begin{align} h_t &= f_{rnn}(G_t,h_{t-1}) \\ \label{eq:localization_network} [s_{c,t}; s_{z,t}] &= f_{control}(h_t) \end{align} The global center $s_{c,t}$ and zoom $s_{z,t}$ are predicted by the control network $f_{control}()$, which is parameterized by a fully-connected neural network. In this work, we investigate three variants of the proposed recurrent model: \begin{itemize} \item \textbf{Fixed Lattice:} The kernel parameters $\mu[i]$ and $\sigma[i]$ for each retinal cell are \emph{not} learnable. The model can only translate the kernel filters, $s_{c,t} = f_{control}(h_t)$, and the global zoom is fixed, $s_{z,t} = 1$. \item \textbf{Translation Only:} Unlike the fixed lattice model, $\mu[i]$ and $\sigma[i]$ are learnable (via backpropagation). \item \textbf{Translation and Zoom:} This model follows equation \ref{eq:localization_network} where it can both zoom and translate the kernels. \end{itemize} A summary comparing these variants is shown in Table \ref{tab:attention_variants}. Prior to training, the kernel filters are initialized as a 12x12 grid (144 kernel filters), tiling uniformly over the central region of the input image and creating a retinal sampling lattice as shown in Figure \ref{fig:retina_structure_training} before training. Our recurrent network, $f_{rnn}$, is a two-layer traditional recurrent network with 512-512 units. Our control network, $f_{control}$, is a fully-connected network with 512-3 units (x, y, zoom) in each layer. Similarly, our prediction networks are fully-connected networks with 512-10 units for predicting the class. We use ReLU non-linearities for all hidden unit layers. Our model, as shown in Figure \ref{fig:generic_retina_abilities}C, is differentiable and trained end-to-end via backpropagation through time. Note that this allows us to train the control network indirectly from signals backpropagated from the task cost. For stochastic gradient descent optimization we use Adam \citep{kingma2014adam} and construct our models in Theano \citep{bastien2012theano}. \section{Datasets and Tasks} \subsection{Modified Cluttered MNIST Dataset} Example images from our dataset are shown in Figure \ref{fig:cluttered_mnist_examples}. Handwritten digits from the original MNIST dataset \cite{lecun1998mnist} are randomly placed over a 100x100 image with varying amounts of distractors (clutter). Distractors are generated by extracting random segments of non-target MNIST digits, which are placed randomly with uniform probability over the image. In contrast to the cluttered MNIST dataset proposed in \cite{mnih2014recurrent}, the number of distractors for each image varies randomly from 0 to 20 pieces. This prevents the attention model from learning a solution which depends on the number of `on' pixels in a given region. In addition, we create another dataset (Dataset 2) with an additional factor of variation: the original MNIST digit is randomly resized by a factor of 0.33x to 3.0x. Examples of this dataset are shown in the second row of Figure \ref{fig:cluttered_mnist_examples}. \subsection{Visual Search Task} We define our visual search task as a recognition task in a cluttered scene. The recurrent attention model we propose must output the class $\hat c$ of the single MNIST digit appearing in the image via the prediction network $f_{predict}()$. The task loss, $\mathcal{L}$, is specified in equation \ref{eq:cost_function}.
To minimize the classification error, we use cross-entropy cost: \begin{align} \label{eq:f_predict} \hat c_{t,n} &= f_{predict}(h_{t,n})\\ \label{eq:cost_function} \mathcal{L} &= -\sum^N_{n}\sum^T_{t} c_{n}\log(\hat c_{t,n}) \end{align} Analogous to the visual search experiments performed in physiological studies, we pressure our attention model to accomplish the visual search as quickly as possible. By applying the task loss to every timepoint, the model is forced to accurately recognize and localize the target MNIST digit in as few iterations as possible. In our classification experiments, the model is given $T=4$ glimpses. \section{Results} Figure \ref{fig:retina_structure_training} shows the layouts of the learned kernels for a Translation Only model at different stages during training. The filters are smoothly transforming from a uniform grid of kernels to an eccentricity dependent lattice. Furthermore, the kernel filters spread their individual centers to create a sampling lattice which covers the full image. This is sensible as the target MNIST digit can appear anywhere in the image with uniform probability. When we include variable sized digits as an additional factor in the dataset, the translation only model shows an even greater diversity of variances for the kernel filters. This is shown visually in the first row of Figure \ref{fig:fovea_and_eccentricities}. Furthermore, the second row shows a strong dependence of the sampling interval and standard deviation of the retinal sampling lattice on eccentricity from the center. This dependency increases when training on variable sized MNIST digits (Dataset 2). This relationship has also been observed in the primate visual system \citep{perry1984retinal, van1995information}. When the proposed attention model is able to zoom its retinal sampling lattice, a very different layout emerges. There is much less diversity in the distribution of kernel filter variances as evidenced in Figure \ref{fig:fovea_and_eccentricities}. Both the sampling interval and standard deviation of the retinal sampling lattice have far less of a dependence on eccentricity. As shown in the last column of Figure \ref{fig:fovea_and_eccentricities}, we also trained this model on variable sized digits and noticed no significant differences in sampling lattice configuration. Figure \ref{fig:retina_animation} shows how each model variant makes use of its retinal sampling lattice after training. The strategy each variant adopts to solve the visual search task helps explain the drastic difference in lattice configuration. The translation only variant simply translates its high acuity region to recognize and localize the target digit. The translation and zoom model both rescales and translates its sampling lattice to fit the target digit. Remarkably, Figure \ref{fig:retina_animation} shows that both models detect the digit early on and make minor corrective adjustments in the following iterations. Table \ref{tab:task_performance} compares the classification performance of each model variant on the cluttered MNIST dataset with fixed sized digits (Dataset 1). There is a significant drop in performance when the retinal sampling lattice is fixed and not learnable, confirming that the model is benefitting from learning the high-acuity region. The classification performance of the Translation Only and the Translation and Zoom models is comparable.
This supports the hypothesis that the functionality of a high acuity region with a low resolution periphery is similar to that of zoom. \section{Conclusion} When constrained to a glimpse window that can translate only, similar to the eye, the kernels converge to a sampling lattice similar to that found in the primate retina \citep{curcio1990topography, van1995information}. This layout is composed of a high acuity region at the center surrounded by a wider region of low acuity. \citet{van1995information} postulate that the linear relationship between eccentricity and sampling interval leads to a form of scale invariance in the primate retina. Our results from the Translation Only model with variable sized digits support this conclusion. Additionally, we observe that zoom appears to supplant the need to learn a high acuity region for the visual search task. This implies that the high acuity region serves a purpose resembling that of a zoomable sampling lattice. The low acuity periphery is used to detect the search target and the high acuity `fovea' more finely recognizes and localizes the target. These results, while obtained on an admittedly simplified domain of visual scenes, point to the possibility of using deep learning as a tool to explore the optimal sample tiling for a retina in a data-driven and task-dependent manner. Exploring how or if these results change for more challenging tasks in naturalistic visual scenes is a future goal of our research. \subsubsection*{Acknowledgments} We would like to acknowledge everyone at the Redwood Center for their helpful discussion and comments. We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Tesla K40 GPUs used for this research. \bibliographystyle{iclr2017_conference} \end{document}
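As a complement to the glimpse formulation in the sections above, the following is a minimal NumPy sketch of the factored Gaussian-kernel glimpse ($G[i]$ with $\acute\mu[i] = (s_c - \mu[i])s_z$ and $\acute\sigma[i] = \sigma[i]s_z$). The 100x100 input, the normalized $[-1,1]$ coordinate convention, the 12x12 lattice initialization and the discrete normalization of each kernel are assumptions for illustration; in the full model the control variables $(s_c, s_z)$ are predicted by the recurrent control network rather than supplied by the caller.
\begin{verbatim}
# Minimal NumPy sketch of the factored Gaussian-kernel glimpse described above.
# Assumptions (not from the paper's released code): a 100x100 input, a 12x12
# lattice of kernels, normalized [-1, 1] coordinates, and control variables
# (s_c, s_z) supplied by the caller.
import numpy as np

H = W = 100          # input image size
N_K = 12 * 12        # number of kernel filters in the lattice

# Learnable per-kernel offsets mu[i] (x, y) and spreads sigma[i]; here they
# are simply initialized as a uniform 12x12 grid over the central region.
grid = np.linspace(-0.5, 0.5, 12)
mu = np.stack(np.meshgrid(grid, grid), axis=-1).reshape(N_K, 2)
sigma = np.full(N_K, 0.05)

def glimpse(U, s_c, s_z):
    """Route the image U (H x W) into a glimpse G with N_K features.

    Implements G[i] = sum_{n,m} U[n, m] * N(m; mu_x'[i], sigma'[i])
                                       * N(n; mu_y'[i], sigma'[i]),
    with mu'[i] = (s_c - mu[i]) * s_z and sigma'[i] = sigma[i] * s_z.
    """
    mu_p = (s_c - mu) * s_z            # (N_K, 2) shifted/zoomed centers
    sig_p = sigma * s_z                # (N_K,)  zoomed spreads
    xs = np.linspace(-1.0, 1.0, W)     # normalized pixel coordinates
    ys = np.linspace(-1.0, 1.0, H)
    # One 1-D Gaussian per kernel along each axis (factored formulation).
    kx = np.exp(-0.5 * ((xs[None, :] - mu_p[:, 0:1]) / sig_p[:, None]) ** 2)
    ky = np.exp(-0.5 * ((ys[None, :] - mu_p[:, 1:2]) / sig_p[:, None]) ** 2)
    kx /= kx.sum(axis=1, keepdims=True) + 1e-8   # discrete normalization
    ky /= ky.sum(axis=1, keepdims=True) + 1e-8   # (an implementation choice)
    # G[i] = sum_n sum_m U[n, m] * ky[i, n] * kx[i, m]
    return np.einsum('in,nm,im->i', ky, U, kx)

# Example: a centered glimpse at unit zoom on a random image.
G = glimpse(np.random.rand(H, W), s_c=np.zeros(2), s_z=1.0)
print(G.shape)   # (144,)
\end{verbatim}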
Boosted Generative Models
1702.08484
Table 1: Average test NLL for mixture of Gaussians.
[ "Model", "NLL (in nats, with std. error)" ]
[ [ "Base model", "4.69±0.01" ], [ "Add model", "4.64±0.02" ], [ "GenBGM", "4.58±0.10" ], [ "DiscBGM-NCE", "4.42±0.01" ], [ "DiscBGM-HD", "[BOLD] 4.35± [BOLD] 0.01" ] ]
Multiplicative boosting algorithms outperform additive boosting in correcting for model misspecification. GenBGM initially leans towards maximizing coverage, whereas both versions of DiscBGM are relatively more conservative in assigning high densities to data points away from the modes.
\onecolumn \section*{Appendices} \begin{appendices} \section{Proofs of theoretical results} \subsection{Theorem~\ref{thm:add_KL_red}}\label{proof:add_KL_red} \begin{proof} \text{The reduction in KL-divergence can be simplified as:}\\ \begin{align*} \delta^t_{KL}(h_t, \hat{\alpha}_t) &= \mathbb{E}_P\left[\log \frac{p}{q_{t-1}}\right] - \mathbb{E}_P\left[\log \frac{p}{q_t}\right] \\ &= \mathbb{E}_P\left[\log \frac{q_t}{q_{t-1}} \right] \\ &= \mathbb{E}_P\left[\log \left[ (1-\hat{\alpha}_t) + \hat{\alpha}_t \frac{h_t}{q_{t-1}}\right]\right].& \end{align*} \noindent We first derive the \textbf{sufficient condition} by lower bounding $\delta^t_{KL}(h_t, \hat{\alpha}_t)$. \begin{align*} \delta^t_{KL}(h_t, \hat{\alpha}_t) &=\mathbb{E}_P\left[\log \left[ (1-\hat{\alpha}_t) + \hat{\alpha}_t \frac{h_t}{q_{t-1}}\right]\right] \\ &\geq\mathbb{E}_P\left[(1-\hat{\alpha}_t) \log 1 + \hat{\alpha}_t \log \frac{h_t}{q_{t-1}}\right] &\text{(Arithmetic Mean} \geq \text{Geometric Mean)}\\ &= \hat{\alpha}_t \mathbb{E}_P\left[\log \frac{h_t}{q_{t-1}}\right]. &\text{(Linearity of expectation)} \end{align*} If the lower bound is non-negative, then so is $\delta^t_{KL}(h_t, \hat{\alpha}_t)$. Hence: \begin{align*} \mathbb{E}_P\left[\log \frac{h_t}{q_{t-1}}\right] &\geq 0 & \end{align*} which is the stated sufficient condition. \\ \noindent For the \textbf{necessary condition} to hold, we know that: \begin{align*} 0 &\leq \delta^t_{KL}(h_t, \hat{\alpha}_t) \\ &= \mathbb{E}_P\left[\log \left[ (1-\hat{\alpha}_t) + \hat{\alpha}_t \frac{h_t}{q_{t-1}}\right]\right] \\ &\leq \log \mathbb{E}_P\left[ (1-\hat{\alpha}_t) + \hat{\alpha}_t \frac{h_t}{q_{t-1}}\right] & \text{(Jensen's inequality)}\\ &= \log \left [ (1-\hat{\alpha}_t) + \hat{\alpha}_t \mathbb{E}_P\left[ \frac{h_t}{q_{t-1}}\right]\right] & \text{(Linearity of expectation)} \end{align*} Taking exponential on both sides, we get: \begin{align*} (1-\hat{\alpha}_t) + \hat{\alpha}_t \mathbb{E}_P\left[ \frac{h_t}{q_{t-1}}\right] &\geq 1\nonumber \\ \mathbb{E}_P\left[ \frac{h_t}{q_{t-1}}\right] &\geq 1 & \end{align*} which is the stated necessary condition. \end{proof} \subsection{Theorem~\ref{thm:KL_red}}\label{proof:KL_red} \begin{proof} We first derive the \textbf{sufficient condition}. \begin{align}\label{eq:greedy_objective} \delta^t_{KL}(h_t, \alpha_t) &= \int p \log q_t \,\mathrm{d}\mathbf{x} - \int p \log q_{t-1} \,\mathrm{d}\mathbf{x}\nonumber \\\nonumber &= \int p \log \frac{h_t^{\alpha_t} \cdot q_{t-1}}{Z_t} - \int p \log q_{t-1} &\text{(using Eq.~\eqref{eq:q_t})}\\ &= \alpha_t \cdot \mathbb{E}_P[\log h_t] - \log \mathbb{E}_{Q_{t-1}}[h_t^{\alpha_t}]\\\nonumber &\geq \alpha_t \cdot \mathbb{E}_P[\log h_t] - \log \mathbb{E}_{Q_{t-1}}[h_t]^{\alpha_t} & \text{(Jensen's inequality)}\\\nonumber &= \alpha_t \cdot \big[\mathbb{E}_P[\log h_t] - \log \mathbb{E}_{Q_{t-1}}[h_t]\big] \\\nonumber &\geq 0. & \text{(by assumption)} \end{align} Note that if $\alpha_t=1$, the sufficient condition is also necessary. \\ \noindent For the \textbf{necessary condition} to hold, we know that: \begin{align*} 0 \leq \delta^t_{KL}(h_t, \alpha_t) &= \alpha_t \cdot \mathbb{E}_P[\log h_t] - \log \mathbb{E}_{Q_{t-1}}[h_t^{\alpha_t}]\\ &\leq \alpha_t \cdot \mathbb{E}_P[\log h_t] - \mathbb{E}_{Q_{t-1}}[\log h_t^{\alpha_t}]& \text{(Jensen's inequality)}\\ &=\alpha_t \cdot [\mathbb{E}_P[\log h_t] - \mathbb{E}_{Q_{t-1}} [\log h_t]] & \text{(Linearity of expectation)}\\ &\leq \mathbb{E}_P[\log h_t] - \mathbb{E}_{Q_{t-1}} [\log h_t]. 
& (\text{since } \alpha_t > 0) \end{align*} \end{proof} \subsection{Proposition~\ref{thm:genbgm_reweight}}\label{proof:genbgm_reweight} \begin{proof} By assumption, we can optimize Eq.~\eqref{eq:genbgm_obj} to get: \begin{align*} h_t &\propto \left(\frac{p}{q_{t-1}}\right)^{\beta_t}&. \end{align*} \noindent Substituting for $h_t$ in the multiplicative boosting formulation in Eq.~\eqref{eq:q_t}: \begin{align*} q_t &\propto \frac{q_{t-1} \cdot h_t}{Z_{q_t}}\\ &\propto q_{t-1} \cdot \left(\frac{p}{q_{t-1}}\right)^{\beta_t}\\ &= \frac{ p^{\beta_t} \cdot q_{t-1}^{1-\beta_t} }{Z_{q_t}}& \end{align*} where the partition function $Z_{q_t} = \int p^{\beta_t} \cdot q_{t-1}^{1-\beta_t} \,\mathrm{d}\mathbf{x}$. \\ \noindent In order to prove the inequality, we first obtain an upper bound on the log-partition function, $Z_{q_t}$. For any given point, we have: \begin{align*} p^{\beta_t} \cdot q_{t-1}^{1-\beta_t} &\leq \beta_t p + (1-\beta_t) q_{t-1}. & \text{(Arithmetic Mean $\geq$ Geometric Mean)} \end{align*} Integrating over all points in the domain, we get: \begin{align}\label{eq:lower_bound_Z} \log Z_{q_t} &\leq \log \left[\beta_t Z_p + (1-\beta_t) Z_{q_{t-1}} \right] \nonumber \\ &= 0 & \end{align} where we have used the fact that $p$ and $q_{t-1}$ are normalized densities. \\ \noindent Now, consider the following quantity: \begin{align*} D_{KL}(P \Vert Q_t) &= \mathbb{E}_P \left[\log \frac{p}{q_t}\right] \\ &= \mathbb{E}_P \left[\log \frac{p}{\frac{p^{\beta_t} \cdot q_{t-1}^{1-\beta_t}}{Z_{q_t}}}\right] \\ &= (1-\beta_t) \mathbb{E}_P \left[\log \frac{p}{q_{t-1}}\right] + \log Z_{q_t}\\ &\leq (1-\beta_t) \mathbb{E}_P \left[\log \frac{p}{q_{t-1}}\right] & \text{(using Eq.~\eqref{eq:lower_bound_Z})} \\ &\leq \mathbb{E}_P \left[\log \frac{p}{q_{t-1}}\right] & (\text{since } \beta_t \geq 0) \\ &= D_{KL}(P \Vert Q_{t-1}). \end{align*} \end{proof} \subsection{Proposition~\ref{thm:f_optimal}}\label{proof:f_optimal} \begin{proof} By the $f$-optimality assumption, we know that: \begin{align*} r_t &= f'\left(\frac{p}{q_{t-1}}\right).& \end{align*} Hence, $h_t = \frac{p}{q_{t-1}}$. From Eq.~\eqref{eq:q_t}, we get: \begin{align*} q_t &= q_{t-1} \cdot h_t^{\alpha_t} = p& \end{align*} finishing the proof. \end{proof} \subsection{Corollary~\ref{thm:bayes_optimal}}\label{proof:bayes_optimal} \begin{proof} Let $u_t$ denote the joint distribution over $(\mathbf{x}, y)$ at round $t$. We will prove a slightly more general result where we have $m$ positive training examples sampled from $p$ and $k$ negative training examples sampled from $q_{t-1}$.\footnote{In the statement for Corollary~\ref{thm:bayes_optimal}, the classes are assumed to be balanced for simplicity \textit{i.e.}, $m=k$.} Hence, we can express the conditional and prior densities as: \begin{align} p &= u(\mathbf{x} \vert y=1) \label{eq:binary_class_1} \\ q_{t-1} &= u(\mathbf{x} \vert y=0) \label{eq:binary_class_2}\\ u(y=1) &= \frac{m}{m+k} \label{eq:prior_class_1}\\ u(y=0) &= \frac{k}{m+k} \label{eq:prior_class_2}.& \end{align} The Bayes optimal density $c_t$ can be expressed as: \begin{align} c_t &= u(y=1 \vert \mathbf{x}) \nonumber\\ &= u(\mathbf{x} \vert y=1) u(y=1) / u(\mathbf{x}) \label{eq:bayes_d_1}.& \end{align} Similarly, we have: \begin{align} 1-c_t &= u(\mathbf{x} \vert y=0) u(y=0) / u(\mathbf{x}) \label{eq:bayes_d_2}.& \end{align} From Eqs.~(\ref{eq:binary_class_1}-\ref{eq:prior_class_2}, \ref{eq:bayes_d_1}-\ref{eq:bayes_d_2}), we have: \begin{align*} h_t &= \gamma \cdot \frac{c_t}{1-c_t} = \frac{p}{q_{t-1}}.& \end{align*} where $\gamma = \frac{k}{m}$.
\\ \noindent Finally, from Eq.~\eqref{eq:q_t}, we get: \begin{align*} q_t &= q_{t-1} \cdot h_t^{\alpha_t} = p & \end{align*} finishing the proof. \end{proof} In Corollary~\ref{thm:adversarial_bayes_optimal} below, we present an additional theoretical result that derives the optimal model weight $\alpha_t$ for an \textit{adversarial Bayes optimal classifier}. \subsection{Corollary~\ref{thm:adversarial_bayes_optimal}} \begin{corollary}~\label{thm:adversarial_bayes_optimal} [to Corollary~\ref{thm:bayes_optimal}] Define an adversarial Bayes optimal classifier $c_t'$ as one that assigns the density $c_t' = 1 - c_t$ where $c_t$ is the Bayes optimal classifier. For an adversarial Bayes optimal classifier $c_t'$, $\delta^t_{KL}$ attains a maximum of zero when $\alpha_t=0$. \end{corollary} \begin{proof} For an adversarial Bayes optimal classifier, \begin{align} c_t' &= u(\mathbf{x} \vert y=0) u(y=0) / u(\mathbf{x})\label{eq:adv_bayes_d_1}\\ 1-c_t' &= u(\mathbf{x} \vert y=1) u(y=1) / u(\mathbf{x})\label{eq:adv_bayes_d_2}.& \end{align} From Eqs.~(\ref{eq:binary_class_1}-\ref{eq:prior_class_2}, \ref{eq:adv_bayes_d_1}-\ref{eq:adv_bayes_d_2}), we have: \begin{align*} h_t &= \gamma \cdot \frac{c_t'}{1-c_t'} = \frac{q_{t-1}}{p}.& \end{align*} Substituting the above intermediate model in Eq.~\eqref{eq:greedy_objective}, \begin{align*} \delta^t_{KL}(h_t, \alpha_t) &= \alpha_t \cdot \mathbb{E}_P\left[\log \frac{q_{t-1}}{p}\right] - \log \mathbb{E}_{Q_{t-1}}\left[\frac{q_{t-1}}{p}\right]^{\alpha_t}\\ &\leq \alpha_t \cdot \mathbb{E}_P\left[\log \frac{q_{t-1}}{p}\right] - \mathbb{E}_{Q_{t-1}}\left[\alpha_t \cdot \log \frac{q_{t-1}}{p}\right] & \text{(Jensen's inequality)}\\ &= \alpha_t \cdot \left [\mathbb{E}_P\left[\log \frac{q_{t-1}}{p}\right] - \mathbb{E}_{Q_{t-1}}\left[\log \frac{q_{t-1}}{p}\right]\right] &\text{(Linearity of expectation)}\\ &= -\alpha_t \left[D_{KL}(P \parallel Q_{t-1}) + D_{KL}(Q_{t-1} \parallel P) \right]\\ &\leq 0 & (D_{KL}\text{ is non-negative)}.\\ \end{align*} By inspection, the equality holds when $\alpha_t=0$, finishing the proof. \end{proof} \section{Additional implementation details}~\label{app:exp} \subsection{Density estimation on synthetic dataset} \paragraph{Model weights.} For DiscBGM, all model weights $\alpha$ are set to unity. For GenBGM, the model weights $\alpha$ are set uniformly to $1/(T+1)$ and the reweighting coefficients $\beta$ are set to unity. \subsection{Density estimation on benchmark datasets} \paragraph{Generator learning procedure details.} We use the default open source implementations of mixture of Bernoullis (MoB) and sum-product networks (SPN) as given in \texttt{https://github.com/AmazaspShumik/sklearn-bayes} and \texttt{https://github.com/KalraA/Tachyon} respectively for baseline models. \paragraph{Discriminator learning procedure details.} The discriminator considered for these experiments is a multilayer perceptron with two hidden layers consisting of $100$ units each and ReLU activations, learned using the Adam optimizer~\cite{kingma2014adam} with a learning rate of $10^{-4}$. The training is for $100$ epochs with a mini-batch size of $100$, and finally the model checkpoint with the best validation error during training is selected to specify the intermediate model to be added to the ensemble. \paragraph{Model weights.} Model weights for the multiplicative boosting algorithms, GenBGM and DiscBGM, are set based on the best validation set performance among the heuristic weighting strategies.
The partition function is estimated using importance sampling with the baseline model (MoB or SPN) as a proposal and a sample size of $1,000,000$. \subsection{Sample generation}\label{app:mnist} \paragraph{VAE architecture and learning procedure details.} Only the last layer in every VAE is stochastic; the rest are deterministic. The inference network specifying the posterior uses the same hidden-layer architecture as the generative network. The prior over the latent variables is standard Gaussian, the hidden layer activations are ReLU, and learning is done using Adam~\cite{kingma2014adam} with a learning rate of $10^{-3}$ and mini-batches of size $100$. \paragraph{CNN architecture and learning procedure details.} The CNN contains two convolutional layers and a single fully connected layer with $1024$ units. Convolution layers have kernel size $5\times 5$, and $32$ and $64$ output channels, respectively. We apply ReLUs and $2\times 2$ max pooling after each convolution. The net is randomly initialized prior to training, and learning is done using the Adam~\cite{kingma2014adam} optimizer with a learning rate of $10^{-3}$ and mini-batches of size $100$. \paragraph{Sampling procedure for BGM sequences.} Samples from the GenDiscBGM are drawn from a Markov chain run using the Metropolis-Hastings algorithm with a discrete, uniformly random proposal and the BGM distribution as the stationary distribution for the chain. Every sample in Figure~\ref{fig:mnist_sampling} (d) is drawn from an independent Markov chain with a burn-in period of $100,000$ samples and a different start seed state. \end{appendices}\def\year{2018}\relax \documentclass[letterpaper]{article} %DO NOT CHANGE THIS %Required %Required %Required %Required %Required %Required \newtheorem{theorem}{Theorem} \newtheorem{property}{Property} \newtheorem{definition}{Definition} \newtheorem{corollary}{Corollary} \newtheorem{lemma}{Lemma} \newtheorem{fact}{Fact} \newtheorem{proposition}{Proposition} \DeclareMathOperator*{\argmin}{arg\,min} \DeclareMathOperator*{\argmax}{arg\,max} \def\Plus{\texttt{+}} \def\Minus{\texttt{-}} \newcommand{\eg}{\emph{e.g.}} \newcommand{\ie}{\emph{i.e.}} \newcommand\Tstrut{\rule{0pt}{2.6ex}} \frenchspacing %Required \setlength{\pdfpagewidth}{8.5in} %Required \setlength{\pdfpageheight}{11in} %Required \pdfinfo{ /Title (Boosted Generative Models) /Author (Aditya Grover, Stefano Ermon)} \setcounter{secnumdepth}{2} \begin{document} \title{Boosted Generative Models} \author{Aditya Grover, Stefano Ermon\\ Computer Science Department\\ Stanford University\\ \texttt{\{adityag, ermon\}@cs.stanford.edu}\\ } \maketitle \begin{abstract} We propose a novel approach for using unsupervised boosting to create an ensemble of generative models, where models are trained in sequence to correct earlier mistakes. Our meta-algorithmic framework can leverage any existing base learner that permits likelihood evaluation, including recent deep expressive models. Further, our approach allows the ensemble to include discriminative models trained to distinguish real data from model-generated data. We show theoretical conditions under which incorporating a new model in the ensemble will improve the fit and empirically demonstrate the effectiveness of our black-box boosting algorithms on density estimation, classification, and sample generation on benchmark datasets for a wide range of generative models.
\end{abstract} \section{Introduction}\label{sec:intro} A variety of deep generative models have shown promising results on tasks spanning computer vision, speech recognition, natural language processing, and imitation learning~\cite{poon2011sum,oord2016pixel,kingma-iclr2014,goodfellow2014generative,zhao2017learning,li2017inferring}. These parametric models differ from each other in the forms of tractable inference they support, their learning algorithms, and their objectives. Despite significant progress, existing generative models cannot fit complex distributions with a sufficiently high degree of accuracy, limiting their applicability and leaving room for improvement. In this paper, we propose a technique for ensembling (imperfect) generative models to improve their overall performance. Our meta-algorithm is inspired by boosting, a technique used in supervised learning to combine weak classifiers (\eg, decision stumps or trees), which individually might not perform well on a given classification task, into a more powerful ensemble. The boosting algorithm will attempt to learn a classifier that corrects for the mistakes made so far by reweighting the original dataset, and repeat this procedure recursively. Under some conditions on the weak classifiers' effectiveness, this procedure can drive the (training) error to zero~\cite{freund1999short}. Boosting can also be thought of as a feature learning algorithm, where at each round a new feature is learned by training a classifier on a reweighted version of the original dataset. In practice, algorithms based on boosting perform extremely well in machine learning competitions~\cite{caruana2006empirical}. We show that a similar procedure can be applied to generative models. Given an initial generative model that provides an imperfect fit to the data distribution, we construct a second model to correct for the error, and repeat recursively. The second model is also a generative one, which is trained on a reweighted version of the original training set. Our meta-algorithm is general and can construct ensembles of any existing generative model that permits (approximate) likelihood evaluation such as fully-observed belief networks, sum-product networks, and variational autoencoders. Interestingly, our method can also leverage powerful discriminative models. Specifically, we train a binary classifier to distinguish true data samples from ``fake'' ones generated by the current model and provide a principled way to include this discriminator in the ensemble. A prior attempt at boosting density estimation proposed a \textit{sum-of-experts} formulation~\cite{rosset2002boosting}. This approach is similar to supervised boosting, where at every round of boosting we derive a reweighted additive estimate of the boosted model density. In contrast, our proposed framework uses multiplicative boosting which multiplies the ensemble model densities and can be interpreted as a \textit{product-of-experts} formulation. We provide a holistic theoretical and algorithmic framework for multiplicative boosting, contrasting it with competing additive approaches. Unlike prior use cases of product-of-experts formulations, our approach is \textit{black-box}, and we empirically test the proposed algorithms on several generative models, from simple ones such as mixture models to expressive parametric models such as sum-product networks and variational autoencoders.
Overall, this paper makes the following contributions: \begin{enumerate} \item We provide theoretical conditions for additive and multiplicative boosting under which incorporating a new model is guaranteed to improve the ensemble fit. \item We design and analyze a flexible meta-algorithmic boosting framework for including both generative and discriminative models in the ensemble. \item We demonstrate the empirical effectiveness of our algorithms for density estimation, generative classification, and sample generation on several benchmark datasets. \end{enumerate} \section{Unsupervised boosting}\label{sec:theory} Supervised boosting provides an algorithmic formalization of the hypothesis that a sequence of weak learners can create a single strong learner~\cite{schapire2012boosting}. Here, we propose a framework that extends boosting to unsupervised settings for learning generative models. For ease of presentation, all distributions are with respect to any arbitrary $\mathbf{x} \in \mathbb{R}^d$, unless otherwise specified. We use upper-case symbols to denote probability distributions and assume they all admit absolutely continuous densities (denoted by the corresponding lower-case notation) on a reference measure $\mathrm{d}\mathbf{x}$. Our analysis naturally extends to discrete distributions, which we skip for brevity. Formally, we consider the following maximum likelihood estimation (MLE) setting. Given some data points $X=\{\mathbf{x}_i \in \mathbb{R}^d\}_{i=1}^{m}$ sampled i.i.d. from an unknown distribution $P$, we provide a model class $\mathcal{Q}$ parameterizing the distributions that can be represented by the generative model and minimize the Kullback-Leibler (KL) divergence with respect to the true distribution: \begin{align}\label{eq:MLE_objective} \min_{Q \in \mathcal{Q}} D_{KL}(P \Vert Q). \end{align} In practice, we only observe samples from $P$ and hence maximize the log-likelihood of the observed data $X$. Selecting the model class for maximum likelihood learning is non-trivial; MLE w.r.t. a small class can be far from $P$, whereas a large class poses the risk of overfitting in the absence of sufficient data, or even underfitting due to difficulty in optimizing non-convex objectives that frequently arise due to the use of latent variable models, neural networks, etc. The boosting intuition is to greedily increase model capacity by learning a sequence of weak intermediate models $\{h_t \in \mathcal{H}_t\}_{t=0}^T$ that can correct for mistakes made by previous models in the ensemble. Here, $\mathcal{H}_t$ is a predefined model class (such as $\mathcal{Q}$) for $h_t$. We defer the algorithms pertaining to the learning of such intermediate models to the next section, and first discuss two mechanisms for deriving the final estimate $q_T$ from the individual density estimates at each round, $\{h_t\}_{t=0}^T$. \subsection{Additive boosting} In additive boosting, the final density estimate is an arithmetic average of the intermediate models: \begin{align*} q_T = \sum_{t=0}^T \alpha_t \cdot h_t \end{align*} where $0 \leq \alpha_t\leq 1$ denote the weights assigned to the intermediate models. The weights are re-normalized at every round to sum to 1, which gives us a valid probability density estimate. Starting with a base model $h_0$, we can express the density estimate after a round of boosting recursively as: \begin{align*} q_t = (1-\hat{\alpha}_t) \cdot q_{t-1} + \hat{\alpha}_t \cdot h_t \end{align*} where $\hat{\alpha}_t$ denotes the normalized weight for $h_t$ at round $t$.
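To make the additive update concrete, here is a minimal Python sketch of the sum-of-experts estimate; the \texttt{scipy.stats} Gaussian experts are purely illustrative assumptions.
\begin{verbatim}
# Minimal sketch of the additive (sum-of-experts) estimate described above.
# Assumptions: each intermediate model is a callable returning a normalized
# density h_t(x); the alphas are re-normalized to sum to one, so q_T is a
# valid mixture density.
import numpy as np

def additive_estimate(models, alphas):
    """Return q_T(x) = sum_t alpha_t * h_t(x) with the alphas normalized."""
    alphas = np.asarray(alphas, dtype=float)
    alphas = alphas / alphas.sum()
    def q(x):
        return sum(a * h(x) for a, h in zip(alphas, models))
    return q

# Example with two hypothetical 1-D Gaussian experts.
from scipy.stats import norm
h0 = norm(loc=-1.0, scale=1.0).pdf
h1 = norm(loc=+2.0, scale=0.5).pdf
q = additive_estimate([h0, h1], alphas=[1.0, 0.5])
print(q(0.3))
\end{verbatim}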
We now derive conditions on the intermediate models that guarantee ``progress'' in every round of boosting. \begin{theorem}\label{thm:add_KL_red} Let \small${\delta^t_{KL}(h_t, \hat{\alpha}_t) = D_{KL}(P \Vert Q_{t-1}) - D_{KL}(P \Vert Q_t)}$\normalsize{} denote the reduction in KL-divergence at the $t^{th}$ round of additive boosting. The following conditions hold: \begin{enumerate} \item Sufficient: If $\mathbb{E}_P \left[\log \frac{h_t}{q_{t-1}}\right] \geq 0$, then $\delta^t_{KL}(h_t, \hat{\alpha}_t) \geq 0$ for all $\hat{\alpha}_t \in [0,1]$. \item Necessary: If $\exists \hat{\alpha}_t \in (0, 1]$ such that $\delta^t_{KL}(h_t, \hat{\alpha}_t) \geq 0$, then $\mathbb{E}_P \left[\frac{h_t}{q_{t-1}}\right] \geq 1$. \end{enumerate} \end{theorem} \begin{proof} In Appendix~\ref{proof:add_KL_red}. \end{proof} The sufficient and necessary conditions require that, under the true distribution, the expected log-likelihood and likelihood (respectively) of the current intermediate model $h_t$ are at least as large as those of the combined previous model $q_{t-1}$ when compared using density ratios. Next, we consider an alternative formulation of multiplicative boosting for improving the model fit to an arbitrary data distribution. \subsection{Multiplicative boosting} In multiplicative boosting, we factorize the final density estimate as a geometric average of $T+1$ intermediate models $\{h_t\}_{t=0}^T$, each assigned an exponentiated weight $\alpha_t$: \begin{align*} q_T = \frac{\prod_{t=0}^T h_t^{\alpha_t}}{Z_T} \end{align*} where the partition function $Z_T = \int \prod_{t=0}^T h_t^{\alpha_t} \,\mathrm{d}\mathbf{x}$. Recursively, we can specify the density estimate as: \begin{align}\label{eq:q_t} \tilde{q}_t &= h_t^{\alpha_t} \cdot \tilde{q}_{t-1} \end{align} where $\tilde{q}_t$ is the unnormalized estimate at round $t$. The base model $h_0$ is learned using MLE. The conditions on the intermediate models for reducing KL-divergence at every round are stated below. \begin{theorem}\label{thm:KL_red} Let \small${\delta^t_{KL}(h_t, \alpha_t) = D_{KL}(P \Vert Q_{t-1}) - D_{KL}(P \Vert Q_t)}$\normalsize{} denote the reduction in KL-divergence at the $t^{th}$ round of multiplicative boosting. The following conditions hold: \begin{enumerate} \item Sufficient: If $\mathbb{E}_P [\log h_t] \geq \log \mathbb{E}_{Q_{t-1}}[h_t]$, then $\delta^t_{KL}(h_t, \alpha_t) \geq 0$ for all $\alpha_t \in [0, 1]$. \item Necessary: If $\exists \alpha_t \in (0, 1]$ such that $\delta^t_{KL}(h_t, \alpha_t) \geq 0$, then $\mathbb{E}_P [\log h_t] \geq \mathbb{E}_{Q_{t-1}}[\log h_t]$. \end{enumerate} \end{theorem} \begin{proof} In Appendix~\ref{proof:KL_red}. \end{proof} In contrast to additive boosting, the conditions above compare expectations under the true distribution with expectations under the {\em model distribution} in the previous round, $Q_{t-1}$. The equality in the conditions holds for $\alpha_t=0$, which corresponds to the trivial case where the current intermediate model is ignored in Eq.~\eqref{eq:q_t}. For other valid $\alpha_t$, the non-degenerate version of the sufficient inequality guarantees progress towards the true data distribution. Note that the intermediate models increase the overall capacity of the ensemble at every round. As we shall demonstrate later, we find models fit using multiplicative boosting to outperform their additive counterparts empirically, suggesting the conditions in Theorem~\ref{thm:KL_red} are easier to fulfill in practice.
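To make the multiplicative update concrete, here is a minimal Python sketch that evaluates $\log \tilde{q}_T$ and estimates the partition function by importance sampling with the base model as the proposal (the strategy used in the appendix for the benchmark experiments); the toy Gaussian models are assumptions for illustration.
\begin{verbatim}
# Minimal sketch of the multiplicative (product-of-experts) estimate above:
# log q~_T(x) = sum_t alpha_t * log h_t(x), with the partition function Z_T
# estimated by importance sampling using the base model h_0 as the proposal.
# The toy Gaussian models are assumptions.
import numpy as np
from scipy.stats import norm

def unnorm_logpdf(x, models, alphas):
    """log q~_T(x) = sum_t alpha_t * log h_t(x)."""
    return sum(a * h.logpdf(x) for a, h in zip(alphas, models))

def log_partition(models, alphas, n_samples=100_000, seed=0):
    """Estimate log Z_T = log E_{h_0}[ q~_T(x) / h_0(x) ] by importance sampling."""
    base = models[0]
    x = base.rvs(size=n_samples, random_state=seed)
    log_w = unnorm_logpdf(x, models, alphas) - base.logpdf(x)
    m = log_w.max()                                  # log-mean-exp for stability
    return m + np.log(np.mean(np.exp(log_w - m)))

models = [norm(0.0, 2.0), norm(1.0, 1.0)]   # h_0 (base) and one intermediate model
alphas = [1.0, 1.0]
logZ = log_partition(models, alphas)
print(unnorm_logpdf(0.5, models, alphas) - logZ)   # normalized log q_T(0.5)
\end{verbatim}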
From the necessary condition, we see that a ``good'' intermediate model $h_t$ assigns at least as high a log-likelihood under the true distribution as under the model distribution, $Q_{t-1}$. This condition suggests two learning algorithms for intermediate models, which we discuss next. \section{Boosted generative models}\label{sec:algo} In this section, we design and analyze meta-algorithms for multiplicative boosting of generative models. Given any base model which permits (approximate) likelihood evaluation, we provide a mechanism for boosting this model using an ensemble of generative and/or discriminative models. \begin{algorithm}[t] \caption{GenBGM($X = \{\mathbf{x}_i\}_{i=1}^m, T$ rounds)} \label{alg:genbgm} \begin{algorithmic} \State Initialize $d_0(\mathbf{x}_i) = \nicefrac{1}{m}$ for all $i \in \{1,2, \ldots, m\}$. \State Obtain base generative model $h_0$. \State Set (unnormalized) density estimate $\tilde{q}_0 = h_0$ \For{$t = 1, 2,\ldots, T$} \State - Choose $\beta_t$ and update $d_{t}$ using Eq.~\eqref{eq:reweight_distribution}. \State - Train generative model $h_t$ to maximize Eq.~\eqref{eq:genbgm_obj}. \State - Choose $\alpha_t$. \State - Set density estimate $\tilde{q}_t = h_t^{\alpha_t} \cdot \tilde{q}_{t-1}$. \EndFor \State Estimate $Z_T=\int \tilde{q}_T \;\mathrm{d}\mathbf{x} $. \\ \Return $q_T = \tilde{q}_T/Z_T$. \end{algorithmic} \end{algorithm} \subsection{Generative boosting} Supervised boosting algorithms such as AdaBoost typically involve a reweighting procedure for training weak learners~\cite{freund1995desicion}. We can similarly train an ensemble of generative models for unsupervised boosting, where every subsequent model performs MLE w.r.t. a reweighted data distribution $D_t$: \begin{align} \max_{h_t}\mathbb{E}_{D_t}[\log h_t] \label{eq:genbgm_obj}\\ \text{where }d_t \propto \left(\frac{p}{q_{t-1}}\right)^{\beta_t} \label{eq:reweight_distribution} \end{align} and $\beta_t \in [0, 1]$ is the reweighting coefficient at round $t$. Note that these coefficients are in general different from the model weights $\alpha_t$ that appear in Eq.~\eqref{eq:q_t}. \begin{proposition}\label{thm:genbgm_reweight} If we can maximize the objective in Eq.~\eqref{eq:genbgm_obj} optimally, then $\delta_{KL}^t(h_t, \alpha_t) \geq 0$ for any $\beta_t \in [0, 1]$ with the equality holding for $\beta_t=0$. \end{proposition} \begin{proof} In Appendix~\ref{proof:genbgm_reweight}. \end{proof} While the objective in Eq.~\eqref{eq:genbgm_obj} can be hard to optimize in practice, the target distribution becomes easier to approximate as we reduce the reweighting coefficient. For the extreme case of $\beta_t=0$, the reweighted data distribution is simply uniform. There is no free lunch, however, since a low $\beta_t$ results in a slower reduction in KL-divergence leading to a computational-statistical trade-off. The pseudocode for the corresponding boosting meta-algorithm, referred to as GenBGM, is given in Algorithm~\ref{alg:genbgm}. In practice, we only observe samples from the true data distribution, and hence approximate $p$ based on the empirical data distribution which is defined to be uniform over the dataset $X$. At every subsequent round, GenBGM learns an intermediate model that maximizes the log-likelihood of data sampled from a reweighted data distribution. \begin{algorithm}[t] \caption{DiscBGM($X = \{\mathbf{x}_i\}_{i=1}^m, T$ rounds, $f$-div)} \label{alg:discbgm} \begin{algorithmic} \State Initialize $d_0(\mathbf{x}_i) = \nicefrac{1}{m}$ for all $i \in \{1,2, \ldots, m\}$.
\State Obtain base generative model $h_0$. \State Set (unnormalized) density estimate $\tilde{q}_0 = h_0$ \For{$t = 1, 2, \ldots, T$} \State - Generate negative samples from $q_{t-1}$ \State - Optimize $r_t$ to maximize RHS in Eq.~\eqref{eq:f_disc_obj}. \State - Set $h_t = \left[f'\right]^{-1} (r_t)$. \State - Choose $\alpha_t$. \State - Set density estimate $\tilde{q}_t = h_t^{\alpha_t} \cdot \tilde{q}_{t-1}$. \EndFor \State Estimate $Z_T=\int \tilde{q}_T\; \mathrm{d}\mathbf{x} $. \\ \Return $q_T = \tilde{q}_T/Z_T$. \end{algorithmic} \end{algorithm} \subsection{Discriminative boosting} A base generative model can be boosted using a discriminative approach as well. Here, the intermediate model is specified as the density ratio obtained from a binary classifier. Consider the following setup: we observe an equal number of samples drawn i.i.d. from the true data distribution (w.l.o.g. assigned the label $y=+1$) and the model distribution in the previous round $Q_{t-1}$ (label $y=0$). \begin{definition} Let $f: \mathbb{R}^{+}\rightarrow \mathbb{R}$ be any convex, lower semi-continuous function satisfying $f(1) = 0$. The $f$-divergence between $P$ and $Q$ is defined as, $D_f(P \Vert Q) = \int q \cdot f\left(\nicefrac{p}{q}\right) \mathrm{d} \mathbf{x}$. \end{definition} Notable examples include the Kullback-Leibler (KL) divergence, Hellinger distance, and the Jensen-Shannon (JS) divergence among many others. The binary classifier in discriminative boosting maximizes a variational lower bound on any $f$-divergence at round $t$: \begin{align}\label{eq:f_disc_obj} D_f\left(P \Vert Q_{t-1}\right) \geq \sup_{r_t\in \mathcal{R}_t} \left (\mathbb{E}_P[r_t] - \mathbb{E}_{Q_{t-1}}[f^\star(r_t)]\right). \end{align} where $f^\star$ denotes the Fenchel conjugate of $f$ and $r_t:\mathbb{R}^d \rightarrow \mathrm{dom}_{f^\star}$ parameterizes the classifier. Under mild conditions on $f$~\cite{nguyen2010estimating}, the lower bound in Eq.~\eqref{eq:f_disc_obj} is tight if $r_t^\star = f'\left( \nicefrac{p}{q_{t-1}}\right)$. Hence, a solution to Eq.~\eqref{eq:f_disc_obj} can be used to estimate density ratios. The density ratios naturally fit into the multiplicative boosting framework and provide a justification for the use of objectives of the form Eq.~\eqref{eq:f_disc_obj} for learning intermediate models as formalized in the proposition below. \begin{proposition}\label{thm:f_optimal} For any given $f$-divergence, let $r_t^\star$ denote the optimal solution to Eq.~\eqref{eq:f_disc_obj} in the $t^{th}$ round of boosting. Then, the model density at the end of the boosting round matches the true density if we set $\alpha_t=1$ and $h_t = \left[f'\right]^{-1} (r_t^\star)$ where $\left[f'\right]^{-1}$ denotes the inverse of the derivative of $f$. \end{proposition} \begin{proof} In Appendix~\ref{proof:f_optimal}. \end{proof} The pseudocode for the corresponding meta-algorithm, DiscBGM, is given in Algorithm~\ref{alg:discbgm}. At every round, we train a binary classifier to optimize the objective in Eq.~\eqref{eq:f_disc_obj} for a chosen $f$-divergence. As a special case, the negative of the cross-entropy loss commonly used for binary classification is also a lower bound on an $f$-divergence. While Algorithm~\ref{alg:discbgm} is applicable for any $f$-divergence, we will focus on cross-entropy henceforth to streamline the discussion.
\begin{corollary}\label{thm:bayes_optimal} Consider the (negative) cross-entropy objective maximized by a binary classifier: \begin{align}\label{eq:disc_obj} \sup_{c_t\in \mathcal{C}_t} \mathbb{E}_{P}[\log c_t] + \mathbb{E}_{Q_{t-1}}[\log(1-c_t)]. \end{align} If a binary classifier $c_t$ trained to optimize Eq.~\eqref{eq:disc_obj} is Bayes optimal, then the model density after round $t$ matches the true density if we set $\alpha_t=1$ and $h_t= \nicefrac{c_t}{1-c_t}$. \end{corollary} \begin{proof} In Appendix~\ref{proof:bayes_optimal}. \end{proof} In practice, a classifier with limited capacity trained on a finite dataset will not generally be Bayes optimal. The above corollary, however, suggests that a good classifier can provide a `direction of improvement', in a similar spirit to gradient boosting for supervised learning~\cite{freund1995desicion}. Additionally, if the intermediate model distribution $h_t$ obtained using the above corollary satisfies the conditions in Theorem~\ref{thm:KL_red}, it is guaranteed to improve the fit. The weights $\alpha_t\in [0,1]$ can be interpreted as our confidence in the classification estimates, akin to the step size used in gradient descent. While in practice we heuristically assign weights to the intermediate models, the greedy optimum value of these weights at every round is a critical point for $\delta^t_{KL}$ (defined in Theorem~\ref{thm:KL_red}). For example, in the extreme case where $c_t$ is uninformative, \ie, $c_t \equiv 0.5$, we have $\delta^t_{KL}(h_t, \alpha_t)=0$ for all $\alpha_t\in [0,1]$. If $c_t$ is Bayes optimal, then $\delta^t_{KL}$ attains a maximum when $\alpha_t=1$ (Corollary~\ref{thm:bayes_optimal}). \subsection{Hybrid boosting} Intermediate models need not be exclusively generators or discriminators; we can design a boosting ensemble with any combination of generators and discriminators. If an intermediate model is chosen to be a generator, we learn a generative model using MLE after appropriately reweighting the data points. If a discriminator is used to implicitly specify an intermediate model, we set up a binary classification problem. \subsection{Regularization} In practice, we want boosted generative models (BGM) to generalize to data outside the training set $X$. Regularization in BGMs is imposed primarily in two ways. First, every intermediate model can be independently regularized by incorporating explicit terms in the learning objective, early stopping based on validation error, heuristics such as dropout, etc. Second, restricting the number of rounds of boosting is another effective mechanism for regularizing BGMs. Fewer rounds of boosting are required if the intermediate models are sufficiently expressive. \section{Empirical evaluation}\label{sec:exp} Our experiments are designed to demonstrate the superiority of the proposed boosting meta-algorithms on a wide variety of generative models and tasks. A reference implementation of the boosting meta-algorithms is available at \texttt{https://github.com/ermongroup/bgm}. Additional implementation details for the experiments below are given in Appendix~\ref{app:exp}. \subsection{Multiplicative vs. additive boosting} A common pitfall with learning parametric generative models is model misspecification with respect to the true underlying data distribution. For a quantitative and qualitative understanding of the behavior of additive and multiplicative boosting, we begin by considering a synthetic setting for density estimation on a mixture of Gaussians.
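To make the discriminative update above concrete, here is a minimal Python sketch of one DiscBGM round with a cross-entropy classifier (Corollary~\ref{thm:bayes_optimal}); the logistic-regression classifier and toy Gaussian samples are assumptions for illustration (the experiments below use multi-layer perceptrons).
\begin{verbatim}
# Minimal sketch of one discriminative boosting (DiscBGM) round with a
# cross-entropy classifier: train c_t to separate data samples (y=1) from
# samples of the current model q_{t-1} (y=0), then use h_t = c_t / (1 - c_t)
# as the multiplicative correction. The logistic-regression classifier and
# the toy Gaussian samples are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
x_data = rng.normal(loc=2.0, scale=1.0, size=(1000, 1))    # samples ~ p
x_model = rng.normal(loc=0.0, scale=1.0, size=(1000, 1))   # samples ~ q_{t-1}

X = np.vstack([x_data, x_model])
y = np.concatenate([np.ones(len(x_data)), np.zeros(len(x_model))])
clf = LogisticRegression().fit(X, y)

def log_h(x, eps=1e-6):
    """log h_t(x) = log c_t(x) - log(1 - c_t(x)), the estimated log p/q_{t-1}."""
    c = np.clip(clf.predict_proba(np.atleast_2d(x))[:, 1], eps, 1 - eps)
    return np.log(c) - np.log(1 - c)

# With alpha_t = 1, the new unnormalized log density is
# log q~_t(x) = log q~_{t-1}(x) + log h_t(x).
print(log_h(np.array([[1.0]])))
\end{verbatim}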
\paragraph{Density estimation on synthetic dataset.}\label{sec:mog} The true data distribution is an equi-weighted mixture of four Gaussians centered symmetrically around the origin, each having an identity covariance matrix. The contours of the underlying density are shown in Figure~\ref{fig:mog_setup_target}. We observe $1,000$ training samples drawn independently from the data distribution (shown as black dots in Figure~\ref{fig:mog_density_estimation}), and the task is to learn this distribution. The test set contains $1,000$ samples from the same distribution. We repeat the process $10$ times for statistical significance. As a base (misspecified) model, we fit a mixture of two Gaussians to the data; the contours for an example instance are shown in Figure~\ref{fig:mog_setup_base}. We compare multiplicative and additive boosting, each run for $T=2$ rounds. For additive boosting (Add), we extend the algorithm proposed by \citeauthor{rosset2002boosting}~\shortcite{rosset2002boosting}, setting $\hat{\alpha}_0$ to unity and doing a line search over $\hat{\alpha}_1, \hat{\alpha}_2 \in [0, 1]$. For Add and GenBGM, the intermediate models are mixtures of two Gaussians as well. The classifiers for DiscBGM are multi-layer perceptrons with two hidden layers of 100 units each and ReLU activations, trained to maximize $f$-divergences corresponding to the negative cross-entropy (NCE) and Hellinger distance (HD) using the Adam optimizer~\cite{kingma-iclr2014}. The test negative log-likelihood (NLL) estimates are listed in Table~\ref{tab:mog_density_estimation}. Qualitatively, the contour plots for the estimated densities after every boosting round on a sample instance are shown in Figure~\ref{fig:mog_density_estimation}. Multiplicative boosting algorithms outperform additive boosting in correcting for model misspecification. GenBGM initially leans towards maximizing coverage, whereas both versions of DiscBGM are relatively more conservative in assigning high densities to data points away from the modes. \paragraph{Heuristic model weighting strategies.} The multiplicative boosting algorithms require as hyperparameters the number of rounds of boosting and weights assigned to the intermediate models. For any practical setting, these hyperparameters are specific to the dataset and task under consideration and should be set based on cross-validation. While automatically setting model weights is an important direction for future work, we propose some heuristic weighting strategies. Specifically, the \textit{unity} heuristic assigns a weight of $1$ to every model in the ensemble, the \textit{uniform} heuristic assigns a weight of $1/(T+1)$ to every model, and the \textit{decay} heuristic assigns a weight of $1/2^t$ to the $t^{th}$ model in the ensemble. In Figure~\ref{fig:heuristics}, we observe that the performance of the algorithms is sensitive to the weighting strategies. In particular, DiscBGM produces worse estimates as $T$ increases for the ``uniform'' (red) strategy. The performance of GenBGM also degrades slightly with increasing $T$ for the ``unity'' (green) strategy. Notably, the ``decay'' (cyan) strategy achieves stable performance for both algorithms. Intuitively, this heuristic follows the rationale of reducing the step size in gradient-based stochastic optimization algorithms, and we expect this strategy to work better even in other settings. However, this strategy could potentially result in slower convergence as opposed to the unity strategy.
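A minimal Python sketch of the three weighting heuristics described above; treating the base model as the $t=0$ member of the ensemble is an assumption about the indexing convention.
\begin{verbatim}
# Minimal sketch of the heuristic weighting strategies compared above.
# Returns the weights assigned to models 0..T (base model assumed at t=0).
def heuristic_weights(T, strategy="decay"):
    if strategy == "unity":          # every model gets weight 1
        return [1.0] * (T + 1)
    if strategy == "uniform":        # every model gets weight 1/(T+1)
        return [1.0 / (T + 1)] * (T + 1)
    if strategy == "decay":          # the t-th model gets weight 1/2^t
        return [1.0 / (2 ** t) for t in range(T + 1)]
    raise ValueError(f"unknown strategy: {strategy}")

print(heuristic_weights(3, "decay"))   # [1.0, 0.5, 0.25, 0.125]
\end{verbatim}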
\paragraph{Density estimation on benchmark datasets.} We now evaluate the performance of additive and multiplicative boosting for density estimation on real-world benchmark datasets~\cite{van2012markov}. We consider two generative model families: mixture of Bernoullis (MoB) and sum-product networks~\cite{poon2011sum}. While our results for multiplicative boosting with sum-product networks (SPN) are competitive with the state-of-the-art, the goal of these experiments is to perform a robust comparison of boosting algorithms as well as demonstrate their applicability to various model families. We set $T=2$ rounds for additive boosting and GenBGM. Since DiscBGM requires samples from the model density at every round, we set $T=1$ to ensure computational fairness such that the samples can be obtained efficiently from the base model, sidestepping expensive Markov chain runs. Model weights are chosen based on cross-validation. The results on density estimation are reported in Table~\ref{tab:binary_density_estimation}. Since multiplicative boosting estimates are unnormalized, we use importance sampling to estimate the partition function. When the base model is MoB, the Add model underperforms and is often worse than even the baseline model for the best performing validated non-zero model weights. GenBGM consistently outperforms Add and improves over the baseline model in most cases (4/6 datasets). DiscBGM performs the best and convincingly outperforms the baseline, Add, and GenBGM on all datasets. For results on SPNs, the boosted models all outperform the baseline. GenBGM again edges out Add models (4/6 datasets), whereas DiscBGM models outperform all other models on all datasets. These results demonstrate the usefulness of boosted expressive model families, especially the DiscBGM approach, which performs the best, while GenBGM is preferable to Add. \subsection{Applications of generative models} \paragraph{Classification.} Here, we evaluate the performance of boosting algorithms for classification. Since the datasets above do not have any explicit labels, we choose one of the dimensions to be the label (say $y$). Letting $\mathbf{x}_{\bar{y}}$ denote the remaining dimensions, we can obtain a prediction for $y$ as, \[p(y=1\vert \mathbf{x}_{\bar{y}})= \frac{p(y=1, \mathbf{x}_{\bar{y}})}{p(y=1, \mathbf{x}_{\bar{y}}) + p(y=0, \mathbf{x}_{\bar{y}})}\] which is efficient to compute even for unnormalized models. We repeat the above procedure for all the variables, predicting one variable at a time using the values assigned to the remaining variables. The results are reported in Table~\ref{tab:binary_classification}. When the base model is a MoB, we observe that the Add approach could often be worse than the base model whereas GenBGM performs slightly better than the baseline (4/6 datasets). The DiscBGM approach consistently performs well, and is only outperformed by GenBGM for two datasets for MoB. When SPNs are used instead, both Add and GenBGM improve upon the baseline model while DiscBGM again is the best performing model on all but one dataset. \paragraph{Sample generation.} We compare boosting algorithms based on their ability to generate image samples for the binarized MNIST dataset of handwritten digits~\cite{lecun2010mnist}. We use variational autoencoders (VAE) as the base model~\cite{kingma-iclr2014}. While any sufficiently expressive VAE can generate impressive examples, we design the experiment to control for model complexity, approximated as the number of learnable parameters.
Ancestral samples obtained by the baseline VAE model are shown in Figure~\ref{fig:mnist_sampling_base}. We use the evidence lower bound (ELBO) as a proxy for approximately evaluating the marginal log-likelihood during learning. The conventional approach to improving the performance of a latent variable model is to increase its representational capacity by adding hidden layers (Base + depth) or increasing the number of hidden units in the existing layers (Base + width). These lead to a marginal improvement in sample quality as seen in Figure~\ref{fig:mnist_sampling_base_depth} and Figure~\ref{fig:mnist_sampling_base_width}. In contrast, boosting makes steady improvements in sample quality. We start with a VAE with far fewer parameters and generate samples using a hybrid boosting GenDiscBGM sequence VAE$\rightarrow$CNN$\rightarrow$VAE (Figure~\ref{fig:mnist_sampling_bgm}). The discriminator used is a convolutional neural network (CNN)~\cite{lecun1995convolutional} trained to maximize the negative cross-entropy. We then generate samples using independent Markov chain Monte Carlo (MCMC) runs. The boosted sequences generate sharper samples than all baselines in spite of having similar model capacity. \section{Discussion and related work}\label{sec:rel} In this work, we revisited boosting, a class of meta-algorithms developed in response to a seminal question: \textit{Can a set of weak learners create a single strong learner?} Boosting has offered interesting theoretical insights into the fundamental limits of supervised learning and led to the development of algorithms that work well in practice~\cite{schapire1990strength,freund1999short,friedman2002stochastic,caruana2006empirical}. Our work provides a foundational framework for unsupervised boosting with connections to prior work discussed below. \paragraph{Sum-of-experts.} \citeauthor{rosset2002boosting}~\shortcite{rosset2002boosting} proposed an algorithm for density estimation using Bayesian networks similar to gradient boosting. These models are normalized and easy to sample, but are generally outperformed by multiplicative formulations for correcting for model misspecification, as we show in this work. Similar additive approaches have been used for improving approximate posteriors for specific algorithms for variational inference~\cite{guo2016boosting,miller2016variational} and generative adversarial networks~\cite{tolstikhin2017adagan}. For variations of additive ensembling in unsupervised settings, refer to the survey by~\citeauthor{bourel2012aggregating}~\shortcite{bourel2012aggregating}. \paragraph{Product-of-experts.} Our multiplicative boosting formulation can be interpreted as a product-of-experts approach, which was initially proposed for feature learning in energy-based models such as Boltzmann machines. For example, the hidden units in a restricted Boltzmann machine can be interpreted as weak learners performing MLE. If the number of weak learners is fixed, they can be efficiently updated in parallel but there is a risk of learning redundant features~\cite{hinton1999products,hinton2002training}. Weak learners can also be added incrementally based on the learner's ability to distinguish observed data and model-generated data~\cite{welling2002self}. \citeauthor{tu2007learning}~\shortcite{tu2007learning} generalized the latter to boost arbitrary probabilistic models; their algorithm is a special case of DiscBGM with all $\alpha$'s set to 1 and the discriminator itself a boosted classifier.
DiscBGM additionally accounts for imperfections in learning classifiers through flexible model weights. Further, it can include any classifier trained to maximize any $f$-divergence. Related techniques such as noise-contrastive estimation, ratio matching, and score matching methods can be cast as minimization of Bregman divergences, akin to DiscBGM with unit model weights~\cite{gutmann2012bregman}. A non-parametric algorithm similar to GenBGM was proposed by \citeauthor{di2004boosting}~\shortcite{di2004boosting} where an ensemble of weighted kernel density estimates is learned to approximate the data distribution. In contrast, our framework allows for both parametric and non-parametric learners and uses a different scheme for reweighting data points than the one proposed in the above work. \paragraph{Unsupervised-as-supervised learning.} The use of density ratios learned by a binary classifier for estimation was first proposed by~\citeauthor{friedman2001elements}~\shortcite{friedman2001elements} and has been subsequently applied elsewhere, notably for parameter estimation using noise-contrastive estimation~\cite{gutmann2010noise} and sample generation in generative adversarial networks (GAN)~\cite{goodfellow2014generative}. While GANs consist of a discriminator distinguishing real data from model-generated data, similar to DiscBGM for a suitable $f$-divergence, they differ in the learning objective for the generator~\cite{nowozin2016f}. The generator of a GAN performs an adversarial minimization of the same objective the discriminator maximizes, whereas DiscBGM uses the likelihood estimate of the base generator (learned using MLE) and the density ratios derived from the discriminator(s) to estimate the model density for the ensemble. \paragraph{Limitations and future work.} In the multiplicative boosting framework, the model density needs to be specified only up to a normalization constant at any given round of boosting. Additionally, while many applications of generative modeling such as feature learning and classification can sidestep computing the partition function, if needed it can be estimated using techniques such as Annealed Importance Sampling~\cite{neal2001annealed}. Similarly, Markov chain Monte Carlo methods can be used to generate samples. The lack of implicit normalization can, however, be limiting for applications requiring fast log-likelihood evaluation and sampling. In order to sidestep this issue, a promising direction for future work is to consider boosting of normalizing flow models~\cite{dinh2014nice,dinh2016density,grover2017flow}. These models specify an invertible multiplicative transformation from one distribution to another using the change-of-variables formula such that the resulting distribution is self-normalized and efficient ancestral sampling is possible. The GenBGM algorithm can be adapted to normalizing flow models whereby every transformation is interpreted as a weak learner. The parameters for every transformation can be trained greedily after suitable reweighting, resulting in a self-normalized boosted generative model.
We demonstrated the effectiveness of these models over baseline models and additive boosting for the tasks of density estimation, classification, and sample generation. Extensions to semi-supervised learning~\cite{kingma2014semi} and structured prediction~\cite{sohn2015learning} are exciting directions for future work. \section*{Acknowledgements} We are thankful to Neal Jean, Daniel Levy, and Russell Stewart for helpful critique. This research was supported by a Microsoft Research PhD fellowship in machine learning for the first author, NSF grants $\#1651565$, $\#1522054$, $\#1733686$, a Future of Life Institute grant, and Intel. \fontsize{9pt}{10pt} \selectfont \bibliographystyle{aaai} \newpage \input{supplementary} \end{document}
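As a supplement to the GenBGM meta-algorithm described in the paper above, the following is a minimal Python sketch of its data-reweighting step, $d_t(\mathbf{x}_i) \propto (p(\mathbf{x}_i)/q_{t-1}(\mathbf{x}_i))^{\beta_t}$, with $p$ approximated by the empirical (uniform) distribution over the training set as in the paper; the self-normalized weights and the toy model density are implementation assumptions, and the resulting weights would feed a weighted-MLE fit of the next intermediate model.
\begin{verbatim}
# Minimal sketch of the GenBGM reweighting step: with p approximated by the
# empirical (uniform) distribution, d_t reduces to self-normalized weights
# proportional to (1 / q_{t-1}(x_i))^beta_t.
import numpy as np

def genbgm_weights(X, log_q_prev, beta_t):
    """Per-example weights d_t over the training set X (normalized to sum to 1)."""
    log_w = -beta_t * log_q_prev(X)          # (1/q_{t-1})^beta_t, in log space
    log_w -= log_w.max()                     # for numerical stability
    w = np.exp(log_w)
    return w / w.sum()

# Example with a hypothetical current model q_{t-1} = N(0, 1).
from scipy.stats import norm
X = np.linspace(-3, 3, 7)
d_t = genbgm_weights(X, norm(0.0, 1.0).logpdf, beta_t=0.5)
print(d_t)   # points poorly covered by q_{t-1} receive larger weight
\end{verbatim}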
Boosted Generative Models
1702.08484
Table 3: Experimental results for classification. Prediction accuracy for predicting one variable given the rest. Higher is better with best performing models in bold. Multiplicative boosting again outperforms additive boosting and baseline models specified as Mixture of Bernoullis (MoB, middle columns) and Sum Product Networks (SPN, right columns).
[ "Dataset Accidents", "#test 283,161", "MoB Base 0.8395", "Add 0.8393", "GenBGM 0.8473", "DiscBGM [BOLD] 0.9043", "SPN Base 0.9258", "Add 0.9266", "GenBGM 0.9298", "DiscBGM [BOLD] 0.9416" ]
[ [ "Retail", "595,080", "0.9776", "0.9776", "0.9776", "[BOLD] 0.9792", "0.9780", "0.9790", "0.9789", "[BOLD] 0.9791" ], [ "Pumsb-star", "399,676", "0.8461", "0.8501", "0.8819", "[BOLD] 0.9267", "0.9599", "0.9610", "0.9611", "[BOLD] 0.9636" ], [ "DNA", "213,480", "0.7517", "0.7515", "[BOLD] 0.7531", "0.7526", "0.7799", "0.7817", "[BOLD] 0.7828", "0.7811" ], [ "Kosarek", "1,268,250", "0.9817", "0.9816", "0.9818", "[BOLD] 0.9831", "0.9824", "[BOLD] 0.9838", "[BOLD] 0.9838", "[BOLD] 0.9838" ], [ "Ad", "763,996", "0.9922", "0.9923", "0.9818", "[BOLD] 0.9927", "[BOLD] 0.9982", "0.9981", "[BOLD] 0.9982", "[BOLD] 0.9982" ] ]
Here, we evaluate the performance of boosting algorithms for classification. Since the datasets above do not have any explicit labels, we choose one of the dimensions to be the label (say y). Letting x_ȳ denote the remaining dimensions, we can obtain a prediction for y as p(y=1|x_ȳ) = p(y=1, x_ȳ) / [p(y=1, x_ȳ) + p(y=0, x_ȳ)], which is efficient to compute even for unnormalized models. We repeat the above procedure for all the variables predicting one variable at a time using the values assigned to the remaining variables. When the base model is a MoB, we observe that the Add approach could often be worse than the base model whereas GenBGM performs slightly better than the baseline (4/6 datasets). The DiscBGM approach consistently performs well, and is only outperformed by GenBGM for two datasets for MoB. When SPNs are used instead, both Add and GenBGM improve upon the baseline model while DiscBGM again is the best performing model on all but one dataset.
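A minimal Python sketch of the prediction rule described above, assuming a generic (possibly unnormalized) joint density callable; the product-of-Bernoullis toy joint is purely illustrative.
\begin{verbatim}
# Minimal sketch of the prediction rule above. Any normalization constant of
# the joint density cancels in the ratio, which is why the rule remains
# efficient for unnormalized boosted models.
import numpy as np

def predict_prob(joint, x_rest):
    """p(y=1 | x_rest) = joint(1, x_rest) / (joint(1, x_rest) + joint(0, x_rest))."""
    p1 = joint(1, x_rest)
    p0 = joint(0, x_rest)
    return p1 / (p1 + p0)

# Toy example: a product-of-Bernoullis joint over (y, x_rest), purely illustrative.
theta_y1 = np.array([0.8, 0.2])
theta_y0 = np.array([0.3, 0.7])
def toy_joint(y, x_rest):
    theta = theta_y1 if y == 1 else theta_y0
    return 0.5 * np.prod(theta ** x_rest * (1 - theta) ** (1 - x_rest))

print(predict_prob(toy_joint, np.array([1, 0])))
\end{verbatim}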
\onecolumn \section*{Appendices} \begin{appendices} \section{Proofs of theoretical results} \subsection{Theorem~\ref{thm:add_KL_red}}\label{proof:add_KL_red} \begin{proof} \text{The reduction in KL-divergence can be simplified as:}\\ \begin{align*} \delta^t_{KL}(h_t, \hat{\alpha}_t) &= \mathbb{E}_P\left[\log \frac{p}{q_{t-1}}\right] - \mathbb{E}_P\left[\log \frac{p}{q_t}\right] \\ &= \mathbb{E}_P\left[\log \frac{q_t}{q_{t-1}} \right] \\ &= \mathbb{E}_P\left[\log \left[ (1-\hat{\alpha}_t) + \hat{\alpha}_t \frac{h_t}{q_{t-1}}\right]\right].& \end{align*} \noindent We first derive the \textbf{sufficient condition} by lower bounding $\delta^t_{KL}(h_t, \hat{\alpha}_t)$. \begin{align*} \delta^t_{KL}(h_t, \hat{\alpha}_t) &=\mathbb{E}_P\left[\log \left[ (1-\hat{\alpha}_t) + \hat{\alpha}_t \frac{h_t}{q_{t-1}}\right]\right] \\ &\geq\mathbb{E}_P\left[(1-\hat{\alpha}_t) \log 1 + \hat{\alpha}_t \log \frac{h_t}{q_{t-1}}\right] &\text{(Arithmetic Mean} \geq \text{Geometric Mean)}\\ &= \hat{\alpha}_t \mathbb{E}_P\left[\log \frac{h_t}{q_{t-1}}\right]. &\text{(Linearity of expectation)} \end{align*} If the lower bound is non-negative, then so is $\delta^t_{KL}(h_t, \hat{\alpha}_t)$. Hence: \begin{align*} \mathbb{E}_P\left[\log \frac{h_t}{q_{t-1}}\right] &\geq 0 & \end{align*} which is the stated sufficient condition. \\ \noindent For the \textbf{necessary condition} to hold, we know that: \begin{align*} 0 &\leq \delta^t_{KL}(h_t, \hat{\alpha}_t) \\ &= \mathbb{E}_P\left[\log \left[ (1-\hat{\alpha}_t) + \hat{\alpha}_t \frac{h_t}{q_{t-1}}\right]\right] \\ &\leq \log \mathbb{E}_P\left[ (1-\hat{\alpha}_t) + \hat{\alpha}_t \frac{h_t}{q_{t-1}}\right] & \text{(Jensen's inequality)}\\ &= \log \left [ (1-\hat{\alpha}_t) + \hat{\alpha}_t \mathbb{E}_P\left[ \frac{h_t}{q_{t-1}}\right]\right] & \text{(Linearity of expectation)} \end{align*} Taking exponential on both sides, we get: \begin{align*} (1-\hat{\alpha}_t) + \hat{\alpha}_t \mathbb{E}_P\left[ \frac{h_t}{q_{t-1}}\right] &\geq 1\nonumber \\ \mathbb{E}_P\left[ \frac{h_t}{q_{t-1}}\right] &\geq 1 & \end{align*} which is the stated necessary condition. \end{proof} \subsection{Theorem~\ref{thm:KL_red}}\label{proof:KL_red} \begin{proof} We first derive the \textbf{sufficient condition}. \begin{align}\label{eq:greedy_objective} \delta^t_{KL}(h_t, \alpha_t) &= \int p \log q_t \,\mathrm{d}\mathbf{x} - \int p \log q_{t-1} \,\mathrm{d}\mathbf{x}\nonumber \\\nonumber &= \int p \log \frac{h_t^{\alpha_t} \cdot q_{t-1}}{Z_t} - \int p \log q_{t-1} &\text{(using Eq.~\eqref{eq:q_t})}\\ &= \alpha_t \cdot \mathbb{E}_P[\log h_t] - \log \mathbb{E}_{Q_{t-1}}[h_t^{\alpha_t}]\\\nonumber &\geq \alpha_t \cdot \mathbb{E}_P[\log h_t] - \log \mathbb{E}_{Q_{t-1}}[h_t]^{\alpha_t} & \text{(Jensen's inequality)}\\\nonumber &= \alpha_t \cdot \big[\mathbb{E}_P[\log h_t] - \log \mathbb{E}_{Q_{t-1}}[h_t]\big] \\\nonumber &\geq 0. & \text{(by assumption)} \end{align} Note that if $\alpha_t=1$, the sufficient condition is also necessary. \\ \noindent For the \textbf{necessary condition} to hold, we know that: \begin{align*} 0 \leq \delta^t_{KL}(h_t, \alpha_t) &= \alpha_t \cdot \mathbb{E}_P[\log h_t] - \log \mathbb{E}_{Q_{t-1}}[h_t^{\alpha_t}]\\ &\leq \alpha_t \cdot \mathbb{E}_P[\log h_t] - \mathbb{E}_{Q_{t-1}}[\log h_t^{\alpha_t}]& \text{(Jensen's inequality)}\\ &=\alpha_t \cdot [\mathbb{E}_P[\log h_t] - \mathbb{E}_{Q_{t-1}} [\log h_t]] & \text{(Linearity of expectation)}\\ &\leq \mathbb{E}_P[\log h_t] - \mathbb{E}_{Q_{t-1}} [\log h_t]. 
& (\text{since } \alpha_t > 0) \end{align*} \end{proof} \subsection{Proposition~\ref{thm:genbgm_reweight}}\label{proof:genbgm_reweight} \begin{proof} By assumption, we can optimize Eq.~\eqref{eq:genbgm_obj} to get: \begin{align*} h_t &\propto \left(\frac{p}{q_{t-1}}\right)^{\beta_t}&. \end{align*} \noindent Substituting for $h_t$ in the multiplicative boosting formulation in Eq.~\eqref{eq:q_t},: \begin{align*} q_t &\propto \frac{q_{t-1} \cdot h_t}{Z_{q_t}}\\ &\propto q_{t-1} \cdot \left(\frac{p}{q_{t-1}}\right)^{\beta_t}\\ &= \frac{ p^{\beta_t} \cdot q_{t-1}^{1-\beta_t} }{Z_{q_t}}& \end{align*} where the partition function $Z_{q_t} = \int p^{\beta_t} \cdot q_{t-1}^{1-\beta_t} $. \\ \noindent In order to prove the inequality, we first obtain a lower bound on the log-partition function, $Z_{q_t}$. For any given point, we have: \begin{align*} p^{\beta_t} \cdot q_{t-1}^{1-\beta_t} &\leq \beta_t p + (1-\beta_t) q_{t-1}. & \text{(Arithmetic Mean $\geq$ Geometric Mean)} \end{align*} Integrating over all points in the domain, we get: \begin{align}\label{eq:lower_bound_Z} \log Z_{q_t} &\leq \log \left[\beta Z_p + (1-\beta) Z_{q_{t-1}} \right] \nonumber \\ &= 0 & \end{align} where we have used the fact that $p$ and $q_{t-1}$ are normalized densities. \\ \noindent Now, consider the following quantity: \begin{align*} D_{KL}(P \Vert Q_t) &= \mathbb{E}_P \left[\log \frac{p}{q_t}\right] \\ &= \mathbb{E}_P \left[\log \frac{p}{\frac{p^{\beta_t} \cdot q_{t-1}^{1-\beta_t}}{Z_{q_t}}}\right] \\ &= (1-\beta_t) \mathbb{E}_P \left[\log \frac{p}{q_{t-1}}\right] + \log Z_{q_t}\\ &\leq (1-\beta_t) \mathbb{E}_P \left[\log \frac{p}{q_{t-1}}\right] & \text{(using Eq.~\eqref{eq:lower_bound_Z})} \\ &\leq \mathbb{E}_P \left[\log \frac{p}{q_{t-1}}\right] & (\text{since } \beta_t \geq 0) \\ &= D_{KL}(P \Vert Q_{t-1}). \end{align*} \end{proof} \subsection{Proposition~\ref{thm:f_optimal}}\label{proof:f_optimal} \begin{proof} By the $f$-optimality assumption, we know that: \begin{align*} r_t &= f'\left(\frac{p}{q_{t-1}}\right).& \end{align*} Hence, $h_t = \frac{p}{q_{t-1}}$. From Eq.~\eqref{eq:q_t}, we get: \begin{align*} q_t &= q_{t-1} \cdot h_t^{\alpha_t} = p& \end{align*} finishing the proof. \end{proof} \subsection{Corollary~\ref{thm:bayes_optimal}}\label{proof:bayes_optimal} \begin{proof} Let $u_t$ denote the joint distribution over $(\mathbf{x}, y)$ at round $t$. We will prove a slightly more general result where we have $m$ positive training examples sampled from $p$ and the $k$ negative training examples sampled from $q_{t-1}$.\footnote{In the statement for Corollary~\ref{thm:bayes_optimal}, the classes are assumed to be balanced for simplicity \textit{i.e.}, $m=k$.} Hence, we can express the conditional and prior densities as: \begin{align} p &= u(\mathbf{x} \vert y=1) \label{eq:binary_class_1} \\ q_{t-1} &= u(\mathbf{x} \vert y=0) \label{eq:binary_class_2}\\ u(y=1) &= \frac{m}{m+k} \label{eq:prior_class_1}\\ u(y=0) &= \frac{k}{m+k} \label{eq:prior_class_2}.& \end{align} The Bayes optimal density $c_t$ can be expressed as: \begin{align} c_t &= u(y=1 \vert \mathbf{x}) \nonumber\\ &= u(\mathbf{x} \vert y=1) u(y=1) / u(\mathbf{x}) \label{eq:bayes_d_1}.& \end{align} Similarly, we have: \begin{align} 1-c_t &= u(\mathbf{x} \vert y=0) u(y=0) / u(\mathbf{x}) \label{eq:bayes_d_2}.& \end{align} From Eqs.~(\ref{eq:binary_class_1}-\ref{eq:prior_class_2}, \ref{eq:bayes_d_1}-\ref{eq:bayes_d_2}), we have: \begin{align*} h_t &= \gamma \cdot \frac{c_t}{1-c_t} = \frac{p}{q_{t-1}}.& \end{align*} where $\gamma = \frac{k}{m}$. 
\\ \noindent Finally from Eq.~\eqref{eq:q_t}, we get: \begin{align*} q_t &= q_{t-1} \cdot h_t^{\alpha_t} = p & \end{align*} finishing the proof. \end{proof} In Corollary~\ref{thm:adversarial_bayes_optimal} below, we present an additional theoretical result below that derives the optimal model weight, $\alpha_t$ for an \textit{adversarial Bayes optimal classifier}. \subsection{Corollary~\ref{thm:adversarial_bayes_optimal}} \begin{corollary}~\label{thm:adversarial_bayes_optimal} [to Corollary~\ref{thm:bayes_optimal}] Define an adversarial Bayes optimal classifier $c_t'$ as one that assigns the density $c_t' = 1 - c_t$ where $c_t$ is the Bayes optimal classifier. For an adversarial Bayes optimal classifier $c_t'$, $\delta^t_{KL}$ attains a maxima of zero when $\alpha_t=0$. \end{corollary} \begin{proof} For an adversarial Bayes optimal classifier, \begin{align} c_t' &= u(\mathbf{x} \vert y=0) u(y=0) / u(\mathbf{x})\label{eq:adv_bayes_d_1}\\ 1-c_t' &= u(\mathbf{x} \vert y=1) u(y=1) / u(\mathbf{x})\label{eq:adv_bayes_d_2}.& \end{align} From Eqs.~(\ref{eq:binary_class_1}-\ref{eq:prior_class_2}, \ref{eq:adv_bayes_d_1}-\ref{eq:adv_bayes_d_2}), we have: \begin{align*} h_t &= \gamma \cdot \frac{c_t'}{1-c_t'} = \frac{q_{t-1}}{p}.& \end{align*} Substituting the above intermediate model in Eq.~\eqref{eq:greedy_objective}, \begin{align*} \delta^t_{KL}(h_t, \alpha_t) &= \alpha_t \cdot \mathbb{E}_P\left[\log \frac{q_{t-1}}{p}\right] - \log \mathbb{E}_{Q_{t-1}}\left[\frac{q_{t-1}}{p}\right]^{\alpha_t}\\ &\leq \alpha_t \cdot \mathbb{E}_P\left[\log \frac{q_{t-1}}{p}\right] - \mathbb{E}_{Q_{t-1}}\left[\alpha_t \cdot \log \frac{q_{t-1}}{p}\right] & \text{(Jensen's inequality)}\\ &= \alpha_t \cdot \left [\mathbb{E}_P\left[\log \frac{q_{t-1}}{p}\right] - \mathbb{E}_{Q_{t-1}}\left[\log \frac{q_{t-1}}{p}\right]\right] &\text{(Linearity of expectation)}\\ &= -\alpha_t \left[D_{KL}(P \parallel Q_{t-1}) + D_{KL}(Q_{t-1} \parallel P) \right]\\ &\leq 0 & (D_{KL}\text{ is non-negative)}.\\ \end{align*} By inspection, the equality holds when $\alpha_t=0$ finishing the proof. \end{proof} \section{Additional implementation details}~\label{app:exp} \subsection{Density estimation on synthetic dataset} \paragraph{Model weights.} For DiscBGM, all model weights, $\alpha$'s to unity. The model weights for GenBGM, $\alpha$'s are set uniformly to $1/(T+1)$ and reweighting coefficients, $\beta$'s are set to unity. \subsection{Density estimation on benchmark datasets} \paragraph{Generator learning procedure details.} We use the default open source implementations of mixture of Bernoullis (MoB) and sum-product networks (SPN) as given in \texttt{https://github.com/AmazaspShumik/sklearn-bayes} and \texttt{https://github.com/KalraA/Tachyon} respectively for baseline models. \paragraph{Discriminator learning procedure details.} The discriminator considered for these experiments is a multilayer perceptron with two hidden layers consisting of $100$ units each and ReLU activations learned using the Adam optimizer~\cite{kingma2014adam} with a learning rate of $1e-4$. The training is for $100$ epochs with a mini-batch size of $100$, and finally the model checkpoint with the best validation error during training is selected to specify the intermediate model to be added to the ensemble. \paragraph{Model weights.} Model weights for multiplicative boosting algorithms, GenBGM and DiscBGM, are set based on best validation set performance of the heuristic weighting strategies. 
Partition function is estimated using importance sampling with the baseline model (MoB or SPN) as a proposal and a sample size of $1,000,000$. \subsection{Sample generation}\label{app:mnist} \paragraph{VAE architecture and learning procedure details.} Only the last layer in every VAE is stochastic, rest are deterministic. The inference network specifying the posterior contains the same architecture for the hidden layer as the generative network. The prior over the latent variables is standard Gaussian, the hidden layer activations are ReLU, and learning is done using Adam~\cite{kingma2014adam} with a learning rate of $10^{-3}$ and mini-batches of size $100$. \paragraph{CNN architecture and learning procedure details.} The CNN contains two convolutional layers and a single full connected layer with $1024$ units. Convolution layers have kernel size $5\times 5$, and $32$ and $64$ output channels, respectively. We apply ReLUs and $2\times 2$ max pooling after each convolution. The net is randomly initialized prior to training, and learning is done using the Adam~\cite{kingma2014adam} optimizer with a learning rate of $10^{-3}$ and mini-batches of size $100$. \paragraph{Sampling procedure for BGM sequences.} Samples from the GenDiscBGM are drawn from a Markov chain run using the Metropolis-Hastings algorithm with a discrete, uniformly random proposal and the BGM distribution as the stationary distribution for the chain. Every sample in Figure~\ref{fig:mnist_sampling} (d) is drawn from an independent Markov chain with a burn-in period of $100,000$ samples and a different start seed state. \end{appendices}\def\year{2018}\relax \documentclass[letterpaper]{article} %DO NOT CHANGE THIS %Required %Required %Required %Required %Required %Required \newtheorem{theorem}{Theorem} \newtheorem{property}{Property} \newtheorem{definition}{Definition} \newtheorem{corollary}{Corollary} \newtheorem{lemma}{Lemma} \newtheorem{fact}{Fact} \newtheorem{proposition}{Proposition} \DeclareMathOperator*{\argmin}{arg\,min} \DeclareMathOperator*{\argmax}{arg\,max} \def\Plus{\texttt{+}} \def\Minus{\texttt{-}} \newcommand{\eg}{\emph{e.g.}} \newcommand{\ie}{\emph{i.e.}} \newcommand\Tstrut{\rule{0pt}{2.6ex}} \frenchspacing %Required \setlength{\pdfpagewidth}{8.5in} %Required \setlength{\pdfpageheight}{11in} %Required \pdfinfo{ /Title (Boosted Generative Models) /Author (Aditya Grover, Stefano Ermon)} \setcounter{secnumdepth}{2} \begin{document} \title{Boosted Generative Models} \author{Aditya Grover, Stefano Ermon\\ Computer Science Department\\ Stanford University\\ \texttt{\{adityag, ermon\}@cs.stanford.edu}\\ } \maketitle \begin{abstract} We propose a novel approach for using unsupervised boosting to create an ensemble of generative models, where models are trained in sequence to correct earlier mistakes. Our meta-algorithmic framework can leverage any existing base learner that permits likelihood evaluation, including recent deep expressive models. Further, our approach allows the ensemble to include discriminative models trained to distinguish real data from model-generated data. We show theoretical conditions under which incorporating a new model in the ensemble will improve the fit and empirically demonstrate the effectiveness of our black-box boosting algorithms on density estimation, classification, and sample generation on benchmark datasets for a wide range of generative models. 
\end{abstract} \section{Introduction}\label{sec:intro} A variety of deep generative models have shown promising results on tasks spanning computer vision, speech recognition, natural language processing, and imitation learning~\cite{poon2011sum,oord2016pixel,kingma-iclr2014,goodfellow2014generative,zhao2017learning,li2017inferring}. These parametric models differ from each other in their ability to perform various forms of tractable inference, learning algorithms, and objectives. Despite significant progress, existing generative models cannot fit complex distributions with a sufficiently high degree of accuracy, limiting their applicability and leaving room for improvement. In this paper, we propose a technique for ensembling (imperfect) generative models to improve their overall performance. Our meta-algorithm is inspired by boosting, a technique used in supervised learning to combine weak classifiers (\eg, decision stumps or trees), which individually might not perform well on a given classification task, into a more powerful ensemble. The boosting algorithm will attempt to learn a classifier to correct for the mistakes made by reweighting the original dataset, and repeat this procedure recursively. Under some conditions on the weak classifiers' effectiveness, this procedure can drive the (training) error to zero~\cite{freund1999short}. Boosting can also be thought as a feature learning algorithm, where at each round a new feature is learned by training a classifier on a reweighted version of the original dataset. In practice, algorithms based on boosting perform extremely well in machine learning competitions~\cite{caruana2006empirical}. We show that a similar procedure can be applied to generative models. Given an initial generative model that provides an imperfect fit to the data distribution, we construct a second model to correct for the error, and repeat recursively. The second model is also a generative one, which is trained on a reweighted version of the original training set. Our meta-algorithm is general and can construct ensembles of any existing generative model that permits (approximate) likelihood evaluation such as fully-observed belief networks, sum-product networks, and variational autoencoders. Interestingly, our method can also leverage powerful discriminative models. Specifically, we train a binary classifier to distinguish true data samples from ``fake'' ones generated by the current model and provide a principled way to include this discriminator in the ensemble. A prior attempt at boosting density estimation proposed a \textit{sum-of-experts} formulation~\cite{rosset2002boosting}. This approach is similar to supervised boosting where at every round of boosting we derive a reweighted additive estimate of the boosted model density. In contrast, our proposed framework uses multiplicative boosting which multiplies the ensemble model densities and can be interpreted as a \textit{product-of-experts} formulation. We provide a holistic theoretical and algorithmic framework for multiplicative boosting contrasting with competing additive approaches. Unlike prior use cases of product-of-experts formulations, our approach is \textit{black-box}, and we empirically test the proposed algorithms on several generative models from simple ones such as mixture models to expressive parameteric models such as sum-product networks and variational autoencoders. 
Overall, this paper makes the following contributions: \begin{enumerate} \item We provide theoretical conditions for additive and multiplicative boosting under which incorporating a new model is guaranteed to improve the ensemble fit. \item We design and analyze a flexible meta-algorithmic boosting framework for including both generative and discriminative models in the ensemble. \item We demonstrate the empirical effectiveness of our algorithms for density estimation, generative classification, and sample generation on several benchmark datasets. \end{enumerate} \section{Unsupervised boosting}\label{sec:theory} Supervised boosting provides an algorithmic formalization of the hypothesis that a sequence of weak learners can create a single strong learner~\cite{schapire2012boosting}. Here, we propose a framework that extends boosting to unsupervised settings for learning generative models. For ease of presentation, all distributions are with respect to any arbitrary $\mathbf{x} \in \mathbb{R}^d$, unless otherwise specified. We use upper-case symbols to denote probability distributions and assume they all admit absolutely continuous densities (denoted by the corresponding lower-case notation) on a reference measure $\mathrm{d}\mathbf{x}$. Our analysis naturally extends to discrete distributions, which we skip for brevity. Formally, we consider the following maximum likelihood estimation (MLE) setting. Given some data points $X=\{\mathbf{x}_i \in \mathbb{R}^d\}_{i=1}^{m}$ sampled i.i.d. from an unknown distribution $P$, we provide a model class $\mathcal{Q}$ parameterizing the distributions that can be represented by the generative model and minimize the Kullback-Liebler (KL) divergence with respect to the true distribution: \begin{align}\label{eq:MLE_objective} \min_{Q \in \mathcal{Q}} D_{KL}(P \Vert Q). \end{align} In practice, we only observe samples from $P$ and hence, maximize the log-likelihood of the observed data $X$. Selecting the model class for maximum likelihood learning is non-trivial; MLE w.r.t. a small class can be far from $P$, whereas a large class poses the risk of overfitting in the absence of sufficient data, or even underfitting due to difficulty in optimizing non-convex objectives that frequently arise due to the use of latent variable models, neural networks, etc. The boosting intuition is to greedily increase model capacity by learning a sequence of weak intermediate models $\{h_t \in \mathcal{H}_t\}_{t=0}^T$ that can correct for mistakes made by previous models in the ensemble. Here, $\mathcal{H}_t$ is a predefined model class (such as $\mathcal{Q}$) for $h_t$. We defer the algorithms pertaining to the learning of such intermediate models to the next section, and first discuss two mechanisms for deriving the final estimate $q_T$ from the individual density estimates at each round, $\{h_t\}_{t=0}^T$. \subsection{Additive boosting} In additive boosting, the final density estimate is an arithmetic average of the intermediate models: \begin{align*} q_T = \sum_{t=0}^T \alpha_t \cdot h_t \end{align*} where $0 \leq \alpha_t\leq 1$ denote the weights assigned to the intermediate models. The weights are re-normalized at every round to sum to 1 which gives us a valid probability density estimate. Starting with a base model $h_0$, we can express the density estimate after a round of boosting recursively as: \begin{align*} q_t = (1-\hat{\alpha}_t) \cdot q_{t-1} + \hat{\alpha}_t \cdot h_t \end{align*} where $\hat{\alpha}_t$ denotes the normalized weight for $h_t$ at round $t$. 
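For concreteness, the additive estimate above is simply a finite mixture of the intermediate densities. Below is a minimal sketch, assuming each intermediate model exposes a normalized \texttt{pdf} method; the interface is illustrative rather than part of the algorithm.
\begin{verbatim}
import numpy as np

def additive_ensemble_pdf(models, weights, x):
    # q_T(x) = sum_t alpha_t * h_t(x), with the weights renormalized
    # to sum to one so that q_T remains a valid density.
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return sum(wt * h.pdf(x) for wt, h in zip(w, models))
\end{verbatim}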
We now derive conditions on the intermediate models that guarantee ``progress'' in every round of boosting. \begin{theorem}\label{thm:add_KL_red} Let \small${\delta^t_{KL}(h_t, \hat{\alpha}_t) = D_{KL}(P \Vert Q_{t-1}) - D_{KL}(P \Vert Q_t)}$\normalsize{} denote the reduction in KL-divergence at the $t^{th}$ round of additive boosting. The following conditions hold: \begin{enumerate} \item Sufficient: If $\mathbb{E}_P \left[\log \frac{h_t}{q_{t-1}}\right] \geq 0$, then $\delta^t_{KL}(h_t, \hat{\alpha}_t) \geq 0$ for all $\hat{\alpha}_t \in [0,1]$. \item Necessary: If $\exists \hat{\alpha}_t \in (0, 1]$ such that $\delta^t_{KL}(h_t, \hat{\alpha}_t) \geq 0$, then $\mathbb{E}_P \left[\frac{h_t}{q_{t-1}}\right] \geq 1$. \end{enumerate} \end{theorem} \begin{proof} In Appendix~\ref{proof:add_KL_red}. \end{proof} The sufficient and necessary conditions require that the expected log-likelihood and likelihood respectively of the current intermediate model, $h_t$ are better-or-equal than those of the combined previous model, $q_{t-1}$ under the true distribution when compared using density ratios. Next, we consider an alternative formulation of multiplicative boosting for improving the model fit to an arbitrary data distribution. \subsection{Multiplicative boosting} In multiplicative boosting, we factorize the final density estimate as a geometric average of $T+1$ intermediate models $\{h_t\}_{t=0}^T$, each assigned an exponentiated weight $\alpha_t$: \begin{align*} q_T = \frac{\prod_{t=0}^T h_t^{\alpha_t}}{Z_T} \end{align*} where the partition function $Z_T = \int \prod_{t=0}^T h_t^{\alpha_t} \,\mathrm{d}\mathbf{x}$. Recursively, we can specify the density estimate as: \begin{align}\label{eq:q_t} \tilde{q}_t &= h_t^{\alpha_t} \cdot \tilde{q}_{t-1} \end{align} where $\tilde{q}_t$ is the unnormalized estimate at round $t$. The base model $h_0$ is learned using MLE. The conditions on the intermediate models for reducing KL-divergence at every round are stated below. \begin{theorem}\label{thm:KL_red} Let \small${\delta^t_{KL}(h_t, \alpha_t) = D_{KL}(P \Vert Q_{t-1}) - D_{KL}(P \Vert Q_t)}$\normalsize{} denote the reduction in KL-divergence at the $t^{th}$ round of multiplicative boosting. The following conditions hold: \begin{enumerate} \item Sufficient: If $\mathbb{E}_P [\log h_t] \geq \log \mathbb{E}_{Q_{t-1}}[h_t]$, then $\delta^t_{KL}(h_t, \alpha_t) \geq 0$ for all $\alpha_t \in [0, 1]$. \item Necessary: If $\exists \alpha_t \in (0, 1]$ such that $\delta^t_{KL}(h_t, \alpha_t) \geq 0$, then $\mathbb{E}_P [\log h_t] \geq \mathbb{E}_{Q_{t-1}}[\log h_t]$. \end{enumerate} \end{theorem} \begin{proof} In Appendix~\ref{proof:KL_red}. \end{proof} In contrast to additive boosting, the conditions above compare expectations under the true distribution with expectations under the {\em model distribution} in the previous round, $Q_{t-1}$. The equality in the conditions holds for $\alpha_t=0$, which corresponds to the trivial case where the current intermediate model is ignored in Eq.~\eqref{eq:q_t}. For other valid $\alpha_t$, the non-degenerate version of the sufficient inequality guarantees progress towards the true data distribution. Note that the intermediate models increase the overall capacity of the ensemble at every round. As we shall demonstrate later, we find models fit using multiplicative boosting to outperform their additive counterparts empirically suggesting the conditions in Theorem~\ref{thm:KL_red} are easier to fulfill in practice. 
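In contrast, the multiplicative estimate is most naturally evaluated in log-space, since the product of experts is unnormalized until $Z_T$ is estimated separately (e.g., by importance sampling, as in the experiments). A minimal sketch under the same illustrative interface as above:
\begin{verbatim}
import numpy as np

def multiplicative_ensemble_logpdf(models, alphas, x):
    # log q~_T(x) = sum_t alpha_t * log h_t(x); estimating the
    # partition function Z_T is left to a separate step.
    return sum(a * np.log(h.pdf(x)) for a, h in zip(alphas, models))
\end{verbatim}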
From the necessary condition, we see that a ``good" intermediate model $h_t$ assigns a better-or-equal log-likelihood under the true distribution as opposed to the model distribution, $Q_{t-1}$. This condition suggests two learning algorithms for intermediate models which we discuss next. \section{Boosted generative models}\label{sec:algo} In this section, we design and analyze meta-algorithms for multiplicative boosting of generative models. Given any base model which permits (approximate) likelihood evaluation, we provide a mechanism for boosting this model using an ensemble of generative and/or discriminative models. \begin{algorithm}[t] \caption{GenBGM($X = \{\mathbf{x}_i\}_{i=1}^m, T$ rounds)} \label{alg:genbgm} \begin{algorithmic} \State Initialize $d_0(\mathbf{x}_i) = \nicefrac{1}{m}$ for all $ i = {1,2, \ldots, m}$. \State Obtain base generative model $h_0$. \State Set (unnormalized) density estimate $\tilde{q}_0 = h_0$ \For{$t = 1, 2,\ldots, T$} \State - Choose $\beta_t$ and update $d_{t}$ using Eq.~\eqref{eq:reweight_distribution}. \State - Train generative model $h_t$ to maximize Eq.~\eqref{eq:genbgm_obj}. \State - Choose $\alpha_t$. \State - Set density estimate $\tilde{q}_t = h_t^{\alpha_t} \cdot \tilde{q}_{t-1}$. \EndFor \State Estimate $Z_T=\int \tilde{q}_T \;\mathrm{d}\mathbf{x} $. \\ \Return $q_T = \tilde{q}_T/Z_T$. \end{algorithmic} \end{algorithm} \subsection{Generative boosting} Supervised boosting algorithms such as AdaBoost typically involve a reweighting procedure for training weak learners~\cite{freund1995desicion}. We can similarly train an ensemble of generative models for unsupervised boosting, where every subsequent model performs MLE w.r.t a reweighted data distribution $D_t$: \begin{align} \max_{h_t}\mathbb{E}_{D_t}[\log h_t] \label{eq:genbgm_obj}\\ \text{where }d_t \propto \left(\frac{p}{q_{t-1}}\right)^{\beta_t} \label{eq:reweight_distribution} \end{align} and $\beta_t \in [0, 1]$ is the reweighting coefficient at round $t$. Note that these coefficients are in general different from the model weights $\alpha_t$ that appear in Eq.~\eqref{eq:q_t}. \begin{proposition}\label{thm:genbgm_reweight} If we can maximize the objective in Eq.~\eqref{eq:genbgm_obj} optimally, then $\delta_{KL}^t(h_t, \alpha_t) \geq 0$ for any $\beta_t \in [0, 1]$ with the equality holding for $\beta_t=0$. \end{proposition} \begin{proof} In Appendix~\ref{proof:genbgm_reweight}. \end{proof} While the objective in Eq.~\eqref{eq:genbgm_obj} can be hard to optimize in practice, the target distribution becomes easier to approximate as we reduce the reweighting coefficient. For the extreme case of $\beta_t=0$, the reweighted data distribution is simply uniform. There is no free lunch however, since a low $\beta_t$ results in a slower reduction in KL-divergence leading to a computational-statistical trade-off. The pseudocode for the corresponding boosting meta-algorithm, referred to as GenBGM, is given in Algorithm~\ref{alg:genbgm}. In practice, we only observe samples from the true data distribution, and hence, approximate $p$ based on the empirical data distribution which is defined to be uniform over the dataset $X$. At every subsequent round, GenBGM learns an intermediate model that maximizes the log-likelihood of data sampled from a reweighted data distribution. \begin{algorithm}[t] \caption{DiscBGM($X = \{\mathbf{x}_i\}_{i=1}^m, T$ rounds, $f$-div)} \label{alg:discbgm} \begin{algorithmic} \State Initialize $d_0(\mathbf{x}_i) = \nicefrac{1}{m}$ for all $ i = {1,2, \ldots, m}$. 
\State Obtain base generative model $h_0$. \State Set (unnormalized) density estimate $\tilde{q}_0 = h_0$ \For{$t = 1, 2, \ldots, T$} \State - Generate negative samples from $q_{t-1}$ \State - Optimize $r_t$ to maximize RHS in Eq.~\eqref{eq:f_disc_obj}. \State - Set $h_t = \left[f'\right]^{-1} (r_t)$. \State - Choose $\alpha_t$. \State - Set density estimate $\tilde{q}_t = h_t^{\alpha_t} \cdot \tilde{q}_{t-1}$. \EndFor \State Estimate $Z_T=\int \tilde{q}_T\; \mathrm{d}\mathbf{x} $. \\ \Return $q_T = \tilde{q}_T/Z_T$. \end{algorithmic} \end{algorithm} \subsection{Discriminative boosting} A base generative model can be boosted using a discriminative approach as well. Here, the intermediate model is specified as the density ratio obtained from a binary classifier. Consider the following setup: we observe an equal number of samples drawn i.i.d. from the true data distribution (w.l.o.g. assigned the label $y=+1$) and the model distribution in the previous round $Q_{t-1}$ (label $y=0$). \begin{definition} Let $f: \mathbb{R}^{+}\rightarrow \mathbb{R}$ be any convex, lower semi-continuous function satisfying $f(1) = 0$. The $f$-divergence between $P$ and $Q$ is defined as, $D_f(P \Vert Q) = \int q \cdot f\left(\nicefrac{p}{q}\right) \mathrm{d} \mathbf{x}$. \end{definition} Notable examples include the Kullback-Liebler (KL) divergence, Hellinger distance, and the Jenson-Shannon (JS) divergence among many others. The binary classifier in discriminative boosting maximizes a variational lower bound on any $f$-divergence at round $t$: \begin{align}\label{eq:f_disc_obj} D_f\left(P \Vert Q_{t-1}\right) \geq \sup_{r_t\in \mathcal{R}_t} \left (\mathbb{E}_P[r_t] - \mathbb{E}_{Q_{t-1}}[f^\star(r_t)]\right). \end{align} where $f^\star$ denotes the Fenchel conjugate of $f$ and $r_t:\mathbb{R}^d \rightarrow \mathrm{dom}_{f^\star}$ parameterizes the classifier. Under mild conditions on $f$~\cite{nguyen2010estimating}, the lower bound in Eq.~\eqref{eq:f_disc_obj} is tight if $r_t^\star = f'\left( \nicefrac{p}{q_{t-1}}\right)$. Hence, a solution to Eq.~\eqref{eq:f_disc_obj} can be used to estimate density ratios. The density ratios naturally fit into the multiplicative boosting framework and provide a justification for the use of objectives of the form Eq.~\eqref{eq:f_disc_obj} for learning intermediate models as formalized in the proposition below. \begin{proposition}\label{thm:f_optimal} For any given $f$-divergence, let $r_t^\star$ denote the optimal solution to Eq.~\eqref{eq:f_disc_obj} in the $t^{th}$ round of boosting. Then, the model density at the end of the boosting round matches the true density if we set $\alpha_t=1$ and $h_t = \left[f'\right]^{-1} (r_t^\star)$ where $\left[f'\right]^{-1}$ denotes the inverse of the derivative of $f$. \end{proposition} \begin{proof} In Appendix~\ref{proof:f_optimal}. \end{proof} The pseudocode for the corresponding meta-algorithm, DiscBGM is given in Algorithm~\ref{alg:discbgm}. At every round, we train a binary classifier to optimize the objective in Eq.~\eqref{eq:f_disc_obj} for a chosen $f$-divergence. As a special case, the negative of the cross-entropy loss commonly used for binary classification is also a lower bound on an $f$-divergence. While Algorithm~\ref{alg:discbgm} is applicable for any $f$-divergence, we will focus on cross-entropy henceforth to streamline the discussion. 
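To make the meta-algorithm concrete, the sketch below shows one DiscBGM round with the cross-entropy objective: a binary classifier is fit to separate data samples from samples of the current ensemble, and its probability ratio $c/(1-c)$, which is the form $[f']^{-1}(r_t)$ takes for the cross-entropy objective (formalized in the corollary below), is multiplied into the unnormalized ensemble density. The helpers \texttt{sample\_ensemble} and \texttt{log\_q\_prev} and the particular classifier are illustrative assumptions, not the paper's exact implementation; a GenBGM round is analogous but instead fits a generative model by maximum likelihood on data reweighted in proportion to $(p/q_{t-1})^{\beta_t}$, with $p$ approximated by the empirical distribution.
\begin{verbatim}
import numpy as np
from sklearn.neural_network import MLPClassifier

def discbgm_round(x_data, sample_ensemble, log_q_prev, alpha_t=1.0):
    # Negative examples are drawn from the current ensemble q_{t-1}.
    x_model = sample_ensemble(len(x_data))
    X = np.vstack([x_data, x_model])
    y = np.concatenate([np.ones(len(x_data)), np.zeros(len(x_model))])
    clf = MLPClassifier(hidden_layer_sizes=(100, 100)).fit(X, y)

    def log_q_new(x):
        c = np.clip(clf.predict_proba(x.reshape(1, -1))[0, 1], 1e-6, 1 - 1e-6)
        # h_t = c / (1 - c); add alpha_t * log h_t to the unnormalized
        # log-density of the previous ensemble.
        return log_q_prev(x) + alpha_t * (np.log(c) - np.log(1.0 - c))

    return log_q_new
\end{verbatim}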
\begin{corollary}\label{thm:bayes_optimal} Consider the (negative) cross-entropy objective maximized by a binary classifier: \begin{align}\label{eq:disc_obj} \sup_{c_t\in \mathcal{C}_t} \mathbb{E}_{P}[\log c_t] + \mathbb{E}_{Q_{t-1}}[\log(1-c_t)]. \end{align} If a binary classifier $c_t$ trained to optimize Eq.~\eqref{eq:disc_obj} is Bayes optimal, then the model density after round $t$ matches the true density if we set $\alpha_t=1$ and $h_t= \nicefrac{c_t}{1-c_t}$. \end{corollary} \begin{proof} In Appendix~\ref{proof:bayes_optimal}. \end{proof} In practice, a classifier with limited capacity trained on a finite dataset will not generally be Bayes optimal. The above corollary, however, suggests that a good classifier can provide a `direction of improvement', in a similar spirit to gradient boosting for supervised learning~\cite{freund1995desicion}. Additionally, if the intermediate model distribution $h_t$ obtained using the above corollary satisfies the conditions in Theorem~\ref{thm:KL_red}, it is guaranteed to improve the fit. The weights $\alpha_t\in [0,1]$ can be interpreted as our confidence in the classification estimates, akin to the step size used in gradient descent. While in practice we heuristically assign weights to the intermediate models, the greedy optimum value of these weights at every round is a critical point for $\delta^t_{KL}$ (defined in Theorem~\ref{thm:KL_red}). For example, in the extreme case where $c_t$ is uninformative, \ie, $c_t \equiv 0.5$, then $\delta^t_{KL}(h_t, \alpha_t)=0$ for all $\alpha_t\in [0,1] $. If $c_t$ is Bayes optimal, then $\delta^t_{KL}$ attains a maxima when $\alpha_t=1$ (Corollary~\ref{thm:bayes_optimal}). \subsection{Hybrid boosting} Intermediate models need not be exclusively generators or discriminators; we can design a boosting ensemble with any combination of generators and discriminators. If an intermediate model is chosen to be a generator, we learn a generative model using MLE after appropriately reweighting the data points. If a discriminator is used to implicitly specify an intermediate model, we set up a binary classification problem. \subsection{Regularization} In practice, we want boosted generative models (BGM) to generalize to data outside the training set $X$. Regularization in BGMs is imposed primarily in two ways. First, every intermediate model can be independently regularized by incorporating explicit terms in the learning objective, early stopping based on validation error, heuristics such as dropout, etc. Moreover, restricting the number of rounds of boosting is another effective mechanism for regularizing BGMs. Fewer rounds of boosting are required if the intermediate models are sufficiently expressive. \section{Empirical evaluation}\label{sec:exp} Our experiments are designed to demonstrate the superiority of the proposed boosting meta-algorithms on a wide variety of generative models and tasks. A reference implementation of the boosting meta-algorithms is available at \texttt{https://github.com/ermongroup/bgm}. Additional implementation details for the experiments below are given in Appendix~\ref{app:exp}. \subsection{Multiplicative vs. additive boosting} A common pitfall with learning parameteric generative models is model misspecification with respect to the true underlying data distribution. For a quantitative and qualitative understanding of the behavior of additive and multiplicative boosting, we begin by considering a synthetic setting for density estimation on a mixture of Gaussians. 
\paragraph{Density estimation on synthetic dataset.}\label{sec:mog} The true data distribution is a equi-weighted mixture of four Gaussians centered symmetrically around the origin, each having an identity covariance matrix. The contours of the underlying density are shown in Figure~\ref{fig:mog_setup_target}. We observe $1,000$ training samples drawn independently from the data distribution (shown as black dots in Figure~\ref{fig:mog_density_estimation}), and the task is to learn this distribution. The test set contains $1,000$ samples from the same distribution. We repeat the process $10$ times for statistical significance. As a base (misspecified) model, we fit a mixture of two Gaussians to the data; the contours for an example instance are shown in Figure~\ref{fig:mog_setup_base}. We compare multiplicative and additive boosting, each run for $T=2$ rounds. For additive boosting (Add), we extend the algorithm proposed by \citeauthor{rosset2002boosting}~\shortcite{rosset2002boosting} setting $\hat{\alpha}_0$ to unity and doing a line search over $\hat{\alpha}_1, \hat{\alpha}_2 \in [0, 1]$. For Add and GenBGM, the intermediate models are mixtures of two Gaussians as well. The classifiers for DiscBGM are multi-layer perceptrons with two hidden layers of 100 units each and ReLU activations, trained to maximize $f$-divergences corresponding to the negative cross-entropy (NCE) and Hellinger distance (HD) using the Adam optimizer~\cite{kingma-iclr2014}. The test negative log-likelihood (NLL) estimates are listed in Table~\ref{tab:mog_density_estimation}. Qualitatively, the contour plots for the estimated densities after every boosting round on a sample instance are shown in Figure~\ref{fig:mog_density_estimation}. Multiplicative boosting algorithms outperform additive boosting in correcting for model misspecification. GenBGM initially leans towards maximizing coverage, whereas both versions of DiscBGM are relatively more conservative in assigning high densities to data points away from the modes. \paragraph{Heuristic model weighting strategies.} The multiplicative boosting algorithms require as hyperparameters the number of rounds of boosting and weights assigned to the intermediate models. For any practical setting, these hyperparameters are specific to the dataset and task under consideration and should be set based on cross-validation. While automatically setting model weights is an important direction for future work, we propose some heuristic weighting strategies. Specifically, the \textit{unity} heuristic assigns a weight of $1$ to every model in the ensemble, the \textit{uniform} heuristic assigns a weight of $1/(T+1)$ to every model, and the \textit{decay} heuristic assigns as a weight of $1/2^t$ to the $t^{th}$ model in the ensemble. In Figure~\ref{fig:heuristics}, we observe that the performance of the algorithms is sensitive to the weighting strategies. In particular, DiscBGM produces worse estimates as $T$ increases for the ``uniform" (red) strategy. The performance of GenBGM also degrades slightly with increasing $T$ for the ``unity'' (green) strategy. Notably, the ``decay'' (cyan) strategy achieves stable performance for both the algorithms. Intuitively, this heuristic follows the rationale of reducing the step size in gradient based stochastic optimization algorithms, and we expect this strategy to work better even in other settings. However, this strategy could potentially result in slower convergence as opposed to the unity strategy. 
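The three heuristics are simple enough to state directly; a minimal sketch follows, where the function name is ours and the strategies mirror the ones described above.
\begin{verbatim}
def model_weights(T, strategy="decay"):
    # Weight assigned to intermediate model t = 0, ..., T.
    if strategy == "unity":
        return [1.0 for t in range(T + 1)]
    if strategy == "uniform":
        return [1.0 / (T + 1) for t in range(T + 1)]
    if strategy == "decay":
        return [1.0 / (2 ** t) for t in range(T + 1)]
    raise ValueError("unknown strategy")
\end{verbatim}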
\paragraph{Density estimation on benchmark datasets.} We now evaluate the performance of additive and multiplicative boosting for density estimation on real-world benchmark datasets~\cite{van2012markov}. We consider two generative model families: mixture of Bernoullis (MoB) and sum-product networks~\cite{poon2011sum}. While our results for multiplicative boosting with sum-product networks (SPN) are competitive with the state-of-the-art, the goal of these experiments is to perform a robust comparison of boosting algorithms as well as demonstrate their applicability to various model families. We set $T=2$ rounds for additive boosting and GenBGM. Since DiscBGM requires samples from the model density at every round, we set $T=1$ to ensure computational fairness such that the samples can be obtained efficiently from the base model sidestepping running expensive Markov chains. Model weights are chosen based on cross-validation. The results on density estimation are reported in Table~\ref{tab:binary_density_estimation}. Since multiplicative boosting estimates are unnormalized, we use importance sampling to estimate the partition function. When the base model is MoB, the Add model underperforms and is often worse than even the baseline model for the best performing validated non-zero model weights. GenBGM consistently outperforms Add and improves over the baseline model in a most cases (4/6 datasets). DiscBGM performs the best and convincingly outperforms the baseline, Add, and GenBGM on all datasets. For results on SPNs, the boosted models all outperform the baseline. GenBGM again edges out Add models (4/6 datasets), whereas DiscBGM models outperform all other models on all datasets. These results demonstrate the usefulness of boosted expressive model families, especially the DiscBGM approach, which performs the best, while GenBGM is preferable to Add. \subsection{Applications of generative models} \paragraph{Classification.} Here, we evaluate the performance of boosting algorithms for classification. Since the datasets above do not have any explicit labels, we choose one of the dimensions to be the label (say $y$). Letting $\mathbf{x}_{\bar{y}}$ denote the remaining dimensions, we can obtain a prediction for $y$ as, \[p(y=1\vert \mathbf{x}_{\bar{y}})= \frac{p(y=1, \mathbf{x}_{\bar{y}})}{p(y=1, \mathbf{x}_{\bar{y}}) + p(y=0, \mathbf{x}_{\bar{y}})}\] which is efficient to compute even for unnormalized models. We repeat the above procedure for all the variables predicting one variable at a time using the values assigned to the remaining variables. The results are reported in Table~\ref{tab:binary_classification}. When the base model is a MoB, we observe that the Add approach could often be worse than the base model whereas GenBGM performs slightly better than the baseline (4/6 datasets). The DiscBGM approach consistently performs well, and is only outperformed by GenBGM for two datasets for MoB. When SPNs are used instead, both Add and GenBGM improve upon the baseline model while DiscBGM again is the best performing model on all but one dataset. \paragraph{Sample generation.} We compare boosting algorithms based on their ability to generate image samples for the binarized MNIST dataset of handwritten digits~\cite{lecun2010mnist}. We use variational autoencoders (VAE) as the base model~\cite{kingma-iclr2014}. While any sufficiently expressive VAE can generate impressive examples, we design the experiment to evaluate the model complexity approximated as the number of learnable parameters. 
Ancestral samples obtained by the baseline VAE model are shown in Figure~\ref{fig:mnist_sampling_base}. We use the evidence lower bound (ELBO) as a proxy for approximately evaluating the marginal log-likelihood during learning. The conventional approach to improving the performance of a latent variable model is to increase its representational capacity by adding hidden layers (Base + depth) or increasing the number of hidden units in the existing layers (Base + width). These lead to a marginal improvement in sample quality as seen in Figure~\ref{fig:mnist_sampling_base_depth} and Figure~\ref{fig:mnist_sampling_base_width}. In contrast, boosting makes steady improvements in sample quality. We start with a VAE with much fewer parameters and generate samples using a hybrid boosting GenDiscBGM sequence VAE$\rightarrow$CNN$\rightarrow$VAE (Figure~\ref{fig:mnist_sampling_bgm}) . The discriminator used is a convolutional neural network (CNN)~\cite{lecun1995convolutional} trained to maximize the negative cross-entropy. We then generate samples using independent Markov chain Monte Carlo (MCMC) runs. The boosted sequences generate sharper samples than all baselines in spite of having similar model capacity. \section{Discussion and related work}\label{sec:rel} In this work, we revisited boosting, a class of meta-algorithms developed in response to a seminal question: \textit{Can a set of weak learners create a single strong learner?} Boosting has offered interesting theoretical insights into the fundamental limits of supervised learning and led to the development of algorithms that work well in practice ~\cite{schapire1990strength,freund1999short,friedman2002stochastic,caruana2006empirical}. Our work provides a foundational framework for unsupervised boosting with connections to prior work discussed below. \paragraph{Sum-of-experts.} \citeauthor{rosset2002boosting}~\shortcite{rosset2002boosting} proposed an algorithm for density estimation using Bayesian networks similar to gradient boosting. These models are normalized and easy to sample, but are generally outperformed by multiplicative formulations for correcting for model misspecification, as we show in this work. Similar additive approaches have been used for improving approximate posteriors for specific algorithms for variational inference~\cite{guo2016boosting,miller2016variational} and generative adversarial networks~\cite{tolstikhin2017adagan}. For a survey on variations of additive ensembling for unsupervised settings, refer to the survey by~\citeauthor{bourel2012aggregating}~\shortcite{bourel2012aggregating}. \paragraph{Product-of-experts.} Our multiplicative boosting formulation can be interpreted as a product-of-experts approach, which was initially proposed for feature learning in energy based models such as Boltzmann machines. For example, the hidden units in a restricted Boltzmann machine can be interpreted as weak learners performing MLE. If the number of weak learners is fixed, they can be efficiently updated in parallel but there is a risk of learning redundant features ~\cite{hinton1999products,hinton2002training}. Weak learners can also be added incrementally based on the learner's ability to distinguish observed data and model-generated data~\cite{welling2002self}. \citeauthor{tu2007learning}~\shortcite{tu2007learning} generalized the latter to boost arbitrary probabilistic models; their algorithm is a special case of DiscBGM with all $\alpha$'s set to 1 and the discriminator itself a boosted classifier. 
DiscBGM additionally accounts for imperfections in learning classifiers through flexible model weights. Further, it can include any classifier trained to maximize any $f$-divergence. Related techniques such as noise-contrastive estimation, ratio matching, and score matching methods can be cast as minimization of Bregman divergences, akin to DiscBGM with unit model weights~\cite{gutmann2012bregman}. A non-parametric algorithm similar to GenBGM was proposed by \citeauthor{di2004boosting}~\shortcite{di2004boosting} where an ensemble of weighted kernel density estimates are learned to approximate the data distribution. In contrast, our framework allows for both parametric and non-parametric learners and uses a different scheme for reweighting data points than proposed in the above work. \paragraph{Unsupervised-as-supervised learning.} The use of density ratios learned by a binary classifier for estimation was first proposed by~\citeauthor{friedman2001elements}~\shortcite{friedman2001elements} and has been subsequently applied elsewhere, notably for parameter estimation using noise-contrastive estimation~\cite{gutmann2010noise} and sample generation in generative adversarial networks (GAN)~\cite{goodfellow2014generative}. While GANs consist of a discriminator distinguishing real data from model generated data similar to DiscBGM for a suitable $f$-divergence, they differ in the learning objective for the generator~\cite{nowozin2016f}. The generator of a GAN performs an adversarial minimization of the same objective the discriminator maximizes, whereas DiscBGM uses the likelihood estimate of the base generator (learned using MLE) and the density ratios derived from the discriminator(s) to estimate the model density for the ensemble. \paragraph{Limitations and future work.} In the multiplicative boosting framework, the model density needs to be specified only up to a normalization constant at any given round of boosting. Additionally, while many applications of generative modeling such as feature learning and classification can sidestep computing the partition function, if needed it can be estimated using techniques such as Annealed Importance Sampling~\cite{neal2001annealed}. Similarly, Markov chain Monte Carlo methods can be used to generate samples. The lack of implicit normalization can however be limiting for applications requiring fast log-likelihood evaluation and sampling. In order to sidestep this issue, a promising direction for future work is to consider boosting of normalizing flow models~\cite{dinh2014nice,dinh2016density,grover2017flow}. These models specify an invertible multiplicative transformation from one distribution to another using the change-of-variables formula such that the resulting distribution is self-normalized and efficient ancestral sampling is possible. The GenBGM algorithm can be adapted to normalizing flow models whereby every transformation is interpreted as a weak learner. The parameters for every transformation can be trained greedily after suitable reweighting resulting in a self-normalized boosted generative model. \section{Conclusion}\label{sec:conc} We presented a general-purpose framework for boosting generative models by explicit factorization of the model likelihood as a product of simpler intermediate model densities. These intermediate models are learned greedily using discriminative or generative approaches, gradually increasing the overall model's capacity. 
We demonstrated the effectiveness of these models over baseline models and additive boosting for the tasks of density estimation, classification, and sample generation. Extensions to semi-supervised learning~\cite{kingma2014semi} and structured prediction~\cite{sohn2015learning} are exciting directions for future work. \section*{Acknowledgements} We are thankful to Neal Jean, Daniel Levy, and Russell Stewart for helpful critique. This research was supported by a Microsoft Research PhD fellowship in machine learning for the first author, NSF grants $\#1651565$, $\#1522054$, $\#1733686$, a Future of Life Institute grant, and Intel. \fontsize{9pt}{10pt} \selectfont \bibliographystyle{aaai} \newpage \input{supplementary} \end{document}
Memory-augmented Attention Modelling for Videos
1611.02261
Table 2: Video captioning evaluation on MSVD (670 videos).
[ "Method", "meteor", "bleu-1", "bleu-2", "bleu-3", "bleu-4", "cider" ]
[ [ "Venugopalan et al. ( 2015b )", "27.7", "−", "−", "−", "−", "−" ], [ "Venugopalan et al. ( 2015a )", "29.2", "−", "−", "−", "−", "−" ], [ "Pan et al. ( 2016b )", "29.5", "74.9", "60.9", "50.6", "40.2", "−" ], [ "Yu et al. ( 2016 )", "31.10", "77.30", "64.50", "54.60", "44.30", "−" ], [ "Pan et al. ( 2016a )", "[BOLD] 33.10–––––––", "79.20", "66.30", "55.10", "43.80", "−" ], [ "[BOLD] Our Model", "31.80", "[BOLD] 79.40–––––––", "[BOLD] 67.10–––––––", "[BOLD] 56.80–––––––", "[BOLD] 46.10–––––––", "[BOLD] 62.70–––––––" ], [ "Yao et al. ( 2015 ) + C3D", "29.60", "−", "−", "−", "41.92", "51.67" ], [ "Venugopalan et al. ( 2015a ) + Flow", "29.8", "−", "−", "−", "−", "−" ], [ "Ballas et al. ( 2016 ) + FT", "30.75", "−", "−", "−", "49.0", "59.37" ], [ "Pan et al. ( 2016b ) + C3D", "31.0", "78.80", "66.0", "55.4", "45.3", "−" ], [ "Yu et al. ( 2016 ) + C3D", "32.60", "81.50", "70.40", "60.4", "49.90", "−" ] ]
To extensively evaluate the proposed model, we compare with state-of-the-art models and baselines for the video caption generation task on the MSVD dataset. In this experiment, we use 8 frames per video as the inputs to the tem module. The closest-scoring comparison system, from Pan et al. (2016a), shows a trade-off between meteor and bleu: bleu prefers descriptions with short-distance fluency and high lexical overlap with the observed descriptions, while meteor permits less direct overlap and longer descriptions. A detailed study of the generated descriptions between the two systems would be needed to better understand these differences. Several comparison systems additionally use external features: optical flow (denoted Flow), 3-Dimensional Convolutional Network features (denoted C3D), or fine-tuned CNN features (denoted FT), which further enhance aspects such as action recognition by leveraging an external dataset such as UCF-101. The only system using external features that outperforms the model proposed here is from Yu et al. (2016). Here, we demonstrate that the proposed architecture maps visual space to language space with improved performance over previous work, before addition of further resources. The symbol − indicates that the score was not reported by the corresponding paper.
\section{Conclusion} We introduce a general framework for an memory-based sequence learning model, trained end-to-end. We apply this framework to the task of describing an input video with a natural language description. Our model utilizes a deep learning architecture that represents video with an explicit model of the video's temporal structure, and jointly models the video description and the temporal video sequence. This effectively connects the visual video space and the language description space. A memory-based attention mechanism helps guide where to attend and what to reason about as the description is generated. This allows the model to not only reason efficiently about local attention, but also to consider the full sequence of video frames during the generation of each word. Our experiments confirm that the memory components in our architecture, most notably from the {\sc iam} module, play a significant role in improving the performance of the entire network. Future work should raim to refine the temporal video frame model, {\sc tem}, and explore how to improve performance on capturing the ideal frames for each description. \section{Introduction} Deep neural architectures have led to remarkable progress in computer vision and natural language processing problems. Image captioning is one such problem, where the combination of convolutional structures~\citep{alexnet, lecun1998gradient}, %built from image classification, %challenges such as ImageNet \cite{ImageNetCite}, and sequential recurrent structures \citep{s2sIlya} leads to remarkable improvements over previous work \cite{FangEtAl2015,DevlinEtAl2015}. One of the emerging modelling paradigms, shared by models for image captioning as well as related vision-language problems, is the notion of an attention mechanism that guides the model to attend to certain parts of the image while generating \cite{icml2015_xuc15}. The attention models used for problems such as image captioning typically depend on the single image under consideration and the partial output generated so far, jointly capturing one region of an image and the words being generated. However, such models cannot directly capture the temporal reasoning necessary to effectively produce words that refer to actions and events taking place over multiple frames in a video. For example, in a video depicting ``someone waving a hand'', the ``waving'' action can start from any frame and can continue on for a variable number of following frames. At the same time, videos contain many frames that do not provide additional information over the smaller set of frames necessary to generate a summarizing description. Given these challenges, it is not surprising that even with recent advancements in image captioning~\cite{FangEtAl2015, icml2015_xuc15, densecap, Vinyals_2015_CVPR, lrcn2014}, video captioning has remained challenging. Motivated by these observations, we introduce a memory-based attention mechanism for video captioning and description. Our model utilizes memories of past attention in the video when reasoning about where to attend in a current time step. This allows the model to not only effectively leverage local attention, but also to consider the entire video as it generates each word. This mechanism effectively binds information from both vision and language sources into a coherent structure. 
%This mechanism is similar to the proposed {\it central executive} system in human cognition, which is thought to permit human performance on two simultaneous tasks (e.g., seeing and saying) using two separate perceptual domains (e.g., visual and linguistic) by binding information from both sources into coherent structure that enables coordination, selective attention, and inhibition. Our work shares the same goals as recent work on attention mechanisms for sequence-to-sequence architectures, such as \citet{RocktaschelGHKB15} and \citet{YangYWSC16}. %However, there are major differences between these works and our current work. ~\citet{RocktaschelGHKB15} consider the domain of entailment relations, where the goal is to determine entailment given two input sentences. They propose a soft attention model that is not only focused on the current state, but the previous as well. In our model, all previous attentions are explicitly stored into memory, and the system learns to memorize the encoded version of the input videos conditioned on previously seen words. \citet{YangYWSC16} and our work both try to solve the problem of locality of attention in vision-to-language, but while \citet{YangYWSC16} introduce a memory architecture optimized for single image caption generation, we introduce a memory architecture that operates on a streaming video's temporal sequence. % More specifically, they incorporate discriminative supervision into their ''reviewer'' mechanism, which is not the case in our model. Further, their model is applied to image caption generation, which is to some extent simpler than video caption generation because there is no temporal structure to model. The contributions of this work include: \begin{itemize} \item A deep learning architecture that represents video with an explicit model of the video's temporal structure. \vspace{-.5em} \item A method to jointly model the video description and temporal video sequence, connecting the visual video space and the language description space. \vspace{-.5em} \item A memory-based attention mechanism that learns iterative attention relationships in a simple and effective sequence-to-sequence memory structure. \vspace{-.5em} \item Extensive comparison of this work and previous work on the video captioning problem on the MSVD \citep{chencl11} and the Charades \citep{sigurdsson2016hollywood} datasets. \end{itemize} \noindent We focus on the video captioning problem, however, the proposed model is general enough to be applicable in other sequence problems where attention models are used (e.g., machine translation or recognizing entailment relations).\begin{abstract} We present a method to improve video description generation by modeling higher-order interactions between video frames and described concepts. By storing past visual attention in the video associated to previously generated words, the system is able to decide what to look at and describe in light of what it has already looked at and described. This enables not only more effective local attention, but tractable consideration of the video sequence while generating each word. Evaluation on the challenging and popular {\it MSVD} and {\it Charades} datasets demonstrates that the proposed architecture outperforms previous video description approaches without requiring external temporal video features. The source code for this paper is available on \url{https://github.com/rasoolfa/videocap}. 
\end{abstract}

\section{Experiments}
\paragraph*{Dataset}
We evaluate the model on the \textit{Charades}~\citep{sigurdsson2016hollywood} dataset and the {\it Microsoft Video Description Corpus (MSVD)}~\citep{chencl11}. Charades contains $9,848$ videos in total and provides $27,847$\footnote{Only $16,087$ of the $27,847$ annotations are used as descriptions in our evaluation, since the $27,847$ figure includes video scripts as well as descriptions.} video descriptions. We follow the same train/test splits as \citet{sigurdsson2016hollywood}, with $7,569$ training, $1,863$ test, and $400$ validation videos. A main difference between this dataset and others is that it uses a ``Hollywood in Homes'' approach to data collection, where ``actors'' are crowdsourced to act out different actions. This yields a diverse set of videos, each containing a specific action.
% -- useful for evaluating the basics of video description.
MSVD is a set of YouTube videos annotated by workers on Mechanical Turk,\footnote{\url{https://www.mturk.com/mturk/welcome}} who were asked to pick video clips representing an activity. In this dataset, each clip is annotated by multiple workers, each providing a single sentence. The dataset contains $1,970$ videos and about $80,000$ descriptions, of which $1,200$ videos are used for training, $670$ for testing, and the remaining $100$ for validation. In order for the results to be comparable to other approaches, we follow the \textit{\textbf{exact}} training/validation/test splits provided by \citet{venugopalannaacl15}.

\paragraph*{Evaluation metrics}
We report results on the video description generation task. In order to evaluate descriptions generated by our model, we use model-free automatic evaluation metrics. We adopt the \textsc{meteor}, \textsc{bleu-n}, and \textsc{cide}r metrics available from the Microsoft COCO Caption Evaluation code\footnote{\url{https://github.com/tylin/coco-caption}} to score the system.

\paragraph*{Video and Caption preprocessing}
We preprocess the captions for both datasets using the Natural Language Toolkit (NLTK)\footnote{\url{http://www.nltk.org/}} and truncate each description to at most $30$ words, since the majority are shorter.
%Beyond this, no other type of preprocessing is used.
We extract sample frames from each video and pass each frame through VGGnet~\citep{Simonyan14c} without fine-tuning. For the experiments in this paper, we use the feature maps from the \textit{conv5\_3} layer after applying \textit{ReLU}. The feature map in this layer is $14\times 14 \times 512$. Our {\sc tem} component operates on the flattened $196\times 512$ version of this feature cube; a schematic of this pipeline is sketched below, after the hyper-parameter details. For the ablation studies, features from the fully connected layer with $4096$ dimensions are used as well.

\paragraph*{Hyper-parameter optimization}
We use random search~\citep{Bergstra2012} on the validation set to select hyper-parameters on both datasets. The word-embedding size, hidden layer size (for both the {\sc tem} and the Decoder), and memory size of the best model on Charades are $237$, $1316$, and $437$, respectively. These values are $402$, $1479$, and $797$ for the model on the MSVD dataset. A stack of two LSTMs is used in both the Decoder and the {\sc tem}. The number of frame samples is a hyper-parameter, selected from $4$, $8$, $16$, and $40$ on the validation set. \textsc{att + No {\sc tem}} and \textsc{No iam + {\sc tem}} achieve their best validation results with $40$ frames, and we use this number of frames for all models in the ablation study.
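To make the frame-feature pipeline concrete, the following is a minimal sketch of the preprocessing described above. It assumes a PyTorch/torchvision VGG16 as a stand-in for the VGGnet features used in the paper; the helper name \texttt{frame\_features} and the layer slicing are illustrative assumptions, not taken from the released code.

\begin{verbatim}
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Convolutional layers of VGG16 up to and including the ReLU after conv5_3
# (no fine-tuning; the fully connected classifier layers are discarded).
vgg = models.vgg16(pretrained=True).eval()
conv5_3 = torch.nn.Sequential(*list(vgg.features.children())[:30])

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def frame_features(frame_paths):
    """Map N sampled frames to an (N, 196, 512) tensor for the TEM."""
    feats = []
    with torch.no_grad():
        for path in frame_paths:
            x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
            fmap = conv5_3(x)                       # (1, 512, 14, 14)
            fmap = fmap.flatten(2).transpose(1, 2)  # (1, 196, 512): L locations x D dims
            feats.append(fmap.squeeze(0))
    return torch.stack(feats)                       # (N, 196, 512)
\end{verbatim}

Each sampled frame thus becomes an $L \times D = 196 \times 512$ map of location features, which is exactly the input shape the {\sc tem} consumes.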
\subsection{Video Caption Generation}
We first present an ablation analysis to elucidate the contribution of the different components of our proposed model. Then, we compare the overall performance of our model to other recent models.

\subsection*{Ablation Analysis}
Ablation results are shown in Table~\ref{tab:abl}, evaluating on the MSVD test set. The first variant (\textsc{Att + No tem}) corresponds to a simpler version of our model in which we remove the {\sc tem} component and instead pass each frame of the video through a CNN, extracting features from the last fully-connected hidden layer.
% (e.g., \textit{fc7}).
In addition, we replace our {\sc iam} with a simpler version in which the model only memorizes the current step instead of all previous steps. The next variation (\textsc{Att + tem}) is the same as the first, except that we use the {\sc tem} instead of fully-connected CNN features. In the next ablation (\textsc{No iam + tem}), we remove the {\sc iam} component from our model and keep the rest of the model as-is. In the next variation (\textsc{iam + No tem}), we remove the {\sc tem} and compute features for each frame as in \textsc{Att + No tem}. Finally, the last row in the table is our proposed model (\textsc{iam + tem}) with all its components.

The {\sc iam} plays a significant role in the proposed model, and removing it causes a large drop in performance, as measured by both {\sc bleu} and {\sc meteor}. On the other hand, removing the {\sc tem} by itself does not degrade performance as much as removing the {\sc iam}. Put together, the two components complement one another, resulting in overall better performance on {\sc meteor}. However, further development of the {\sc tem} component in future work is warranted. In the \textsc{No iam + tem} condition, an entire video must be represented with a fixed-length vector, which may contribute to the lower performance~\citep{BahdanauCB14}. This is in contrast to the other models, which apply single-layer attention or the {\sc iam} to search for the parts of the video relevant to the description.

\subsection*{Performance Comparison}
To extensively evaluate the proposed model, we compare with state-of-the-art models and baselines for the video caption generation task on the MSVD dataset. In this experiment, we use $8$ frames per video as the input to the {\sc tem} module. As shown in Table \ref{tab:main_res},\footnote{The symbol $-$ indicates that the score was not reported by the corresponding paper. The horizontal line in Table \ref{tab:main_res} separates models that do or do not use external features for the video representation.} our proposed model achieves state-of-the-art scores on {\sc bleu}-4, and outperforms almost all systems on {\sc meteor}. The closest-scoring comparison system, from \citet{pan2016hierarchical}, shows a trade-off between {\sc meteor} and {\sc bleu}: {\sc bleu} prefers descriptions with short-distance fluency and high lexical overlap with the observed descriptions, while {\sc meteor} permits less direct overlap and longer descriptions. A detailed study of the descriptions generated by the two systems would be needed to better understand these differences.
The improvement over previous work is particularly noteworthy because we do not use external features for the video, such as Optical Flow \citep{Bro04a} (denoted Flow), 3-Dimensional Convolutional Network features~\citep{C3DTan} (denoted C3D), or fine-tuned CNN features (denoted FT), which further enhance aspects such as action recognition by leveraging external datasets such as UCF-101. The only system using external features that outperforms the model proposed here is that of \citet{YuWHYX15}, who use a slightly different version of the same dataset\footnote{\citet{YuWHYX15} use the MSVD dataset as reported in \cite{Guadarrama2013}, which has different preprocessing.} along with C3D features, yielding a large improvement in results (compare rows 4 and 11 of Table~\ref{tab:main_res}); future work may explore the utility of such external visual features for this model. Here, we demonstrate that the proposed architecture maps the visual space to the language space with improved performance over previous work, before the addition of further resources.

We additionally report results on the Charades dataset \cite{sigurdsson2016hollywood}, which is challenging to train on because there are only a few ($\approx 2$) captions per video. In this experiment, we use $16$ frames per video as the input to the {\sc tem} module. As shown in Table~\ref{tab:main_res_HW}, our method achieves a $10\%$ relative improvement over the \citet{venugopalan15iccv} model reported by \citet{sigurdsson2016hollywood}. It is worth noting that humans reach a {\sc meteor} score of $24$ and a {\sc bleu-4} score of $20$, illustrating the low upper bound on this task.\footnote{For comparison, the upper-bound {\sc bleu} score in machine translation for English to French is above 30.}

\subsection*{Results Discussion}
We show some example descriptions generated by our system in Figure \ref{fig:caps_samples}. The model generates mostly correct descriptions, with naturalistic variation from the ground truth. Errors illustrate a preference to describe items that have a higher likelihood of being mentioned, even if they appear in fewer of the frames. For example, in the ``a dog is on a trampoline'' video, our model focuses on the man, who appears in only a few frames, and generates the incorrect description ``a man is washing a bath''. The errors, alongside the ablation study shown in Table \ref{tab:abl}, suggest that the {\sc tem} module in particular may be further improved by focusing on how frames in the video sequence are captured and passed to the {\sc iam} module.

\section{Related Work}
One of the primary challenges in learning a mapping from a visual space (i.e., video or image) to a language space is learning a representation that not only effectively represents each of these modalities, but is also able to translate a representation from one space to the other. \citet{Rohrbachiccv2013} developed a model that generates a semantic representation of visual content that can be used as the source language for the language generation module. \citet{venugopalannaacl15} proposed a deep method to translate a video into a sentence, where an entire video is represented with a single vector based on the mean pool of frame features. However, it was recognized that representing a video by an average of its frames loses the temporal structure of the video.
To address this problem, recent work \citep{yao2015capgenvid, pan2016hierarchical, venugopalan15iccv, AndrewShinICP, PanMYLR15, XuXiChAAAI2015, BallasYPC15, YuWHYX15} proposed methods to model the temporal structure of videos as well as language. The majority of these methods are inspired by sequence-to-sequence~\citep{s2sIlya} and attention~\citep{BahdanauCB14} models. Sequence learning was proposed to map an input sequence in a source language to a target language~\citep{s2sIlya}. Applying this method, with an additional attention mechanism, to the problem of translating a video into a description showed promising initial results; however, it also revealed additional challenges. First, modelling the video content with a fixed-length vector in order to map it to a language space is a more complex problem than mapping from one language to another, given the complexity of visual content and the difference between the two modalities. Since not all frames in a video are equally salient for a short description, and an event can span multiple frames, it is important for a model to identify which frames are most salient. Further, such models need additional machinery to focus on points of interest within the video frames and select what to talk about. Even representing a video with a variable-length vector using attention \citep{yao2015capgenvid} has limitations: current attention methods are local~\cite{YangYWSC16}, since the attention mechanism works in a sequential structure and lacks the ability to capture global structure.

Moreover, casting a video and its description as a sequence-to-sequence problem motivates using some variant of a recurrent neural network (RNN) \citep{Hochreiter}: Given the limited capacity of a recurrent network to model very long sequences, memory networks~\citep{WestonCB14,SukhbaatarSWF15} have been introduced to help the RNN memorize sequences. However, these memory networks can be difficult to train. The model proposed by \citet{WestonCB14} requires supervision at each layer, which makes training with backpropagation a challenging task. \citet{SukhbaatarSWF15} proposed a memory network that can be trained end-to-end, and the current work follows this research line to tackle the challenging problem of modeling vision and language memories for video description generation.
%especially with write operation~\citep{GravesWD14}.

\section{Learning to Attend and Memorize}
A main challenge in video description is to find a mapping that can capture the connection between the video frames and the video description. Sequence-to-sequence models, which work well at connecting input and output sequences in machine translation~\citep{s2sIlya}, do not perform as well for this task, as there is not the same direct alignment between a full video sequence and its summarizing description. Our goal in the video description problem is to create an architecture that learns which moments to focus on in a video sequence in order to generate a summarizing natural language description. The modelling challenges we set forth for the video description problem are: (1) processing the temporal structure of the video; (2) learning to attend to important parts of the video; and (3) generating a description where each word is relevant to the video. At a high level, this can be understood as having three primary parts: {\it when} moments in the video are particularly salient; {\it what} concepts to focus on; and {\it how} to talk about them.
We directly address these issues in an end-to-end network with three primary corresponding components (Figure \ref{fig:our_model}): a Temporal Model ({\sc tem}), an Iterative Attention/Memory model ({\sc iam}), and a Decoder. In summary:
\begin{itemize}
\item {\bf When:} Frames within the video sequence -- the Temporal Model ({\sc tem}).
\item {\bf What:} Language-grounded concepts depicted in the video -- the Iterative Attention/Memory mechanism ({\sc iam}).
\item {\bf How:} Words that fluently describe the {\it what} and {\it when} -- the Decoder.
\end{itemize}
The Temporal Model is in place to capture the temporal structure of the video: it functions as the {\it when} component. The Iterative Attention/Memory is a main contribution of this work, functioning as the {\it what} component to remember relationships between words and video frames and to store longer-term memories. The Decoder generates language, functioning as the {\it how} component to create the final description.

To train the system end-to-end, we formulate the problem as sequence learning, maximizing the probability of generating a correct description given a video:
\begin{equation}
\Theta^* = \underset{\Theta}{\arg\max}\sum_{(S, {f_1,\dots, f_N})} \log p(S \mid f_1, \dots, f_N; \mathbf{\Theta})
\end{equation}
where $S$ is the description, $f_1, f_2, \dots, f_N$ are the input video frames, and $\Theta$ is the model parameter vector. A short illustrative sketch of how this objective decomposes into per-word terms is given below, after the notational note. In the next sections, we describe each component of the model and then explain the details of training and inference.

\small
\paragraph{\small Notational note:} Numbered equations use bold face to denote multi-dimensional learnable parameters, e.g., ${\mathbf{W^j_p}}$. To distinguish the two different sets of time steps, one for video frames and one for words in the description, we use the notation $t$ for video and $t^\prime$ for language. Throughout, the terms {\it description} and {\it caption} are used interchangeably.
\normalsize
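To make the objective concrete, the following is a small illustrative sketch (in PyTorch, with dummy tensors that are our own stand-ins, not part of the released code) of how $\log p(S \mid f_1,\dots,f_N;\Theta)$ decomposes, via the chain rule, into a sum of per-word log-probabilities produced by the Decoder's softmax.

\begin{verbatim}
import torch
import torch.nn.functional as F

# Dummy per-step vocabulary logits from a decoder: T' steps, |V| vocabulary entries.
T_prime, vocab = 12, 1000
logits = torch.randn(T_prime, vocab)                # stand-in for Decoder outputs
description = torch.randint(0, vocab, (T_prime,))   # stand-in word indices of S

# log p(S | f_1..f_N) = sum over t' of log p(s_t' | s_<t', video),
# where each term is one entry of a per-step log-softmax.
log_probs = F.log_softmax(logits, dim=-1)
log_p_S = log_probs[torch.arange(T_prime), description].sum()

# Maximizing this quantity over the training set is the objective above;
# minimizing its negative (plus L2 regularization) gives the loss used for training.
print(log_p_S.item())
\end{verbatim}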
\subsection{Temporal Model ({\sc tem})}\label{sec:TEM}
The first module we introduce encodes the temporal structure of the input video. A natural framework for this is a Recurrent Neural Network (RNN), which has been shown to be effective in modelling the temporal structure of sequential data such as video \citep{BallasYPC15, SharmaKS15, venugopalan15iccv} and speech \citep{graves14}. In order to apply this to video sequences for description generation, we seek to capture the fact that frame-to-frame temporal variation tends to be local \citep{BroxMalik2011} and critical in modeling motion~\citep{BallasYPC15}.
%, it is important to also consider a frame representation that can preserve frame-to-frame temporal variation.
Visual features extracted from the last fully connected layers of Convolutional Neural Networks (CNNs) have been shown to produce state-of-the-art results in image classification and recognition \citep{Simonyan14c, He_2016_CVPR}, and thus seem a good choice for modeling visual frames. However, these features tend to discard low-level information that is useful in modeling the motion in the video \citep{BallasYPC15}. To address these challenges, we implement an RNN we call the Temporal Model ({\sc tem}).
% to model the temporal structure of the video.
At each time step of the {\sc tem}, a video frame encoding from a CNN
%with $D$ dimensions
%with size $R^{D}$
serves as input. Rather than extracting video frame features from a fully connected layer
%last hidden layers
of the pretrained CNN, we extract intermediate convolutional maps.

In detail, for a given video $X$ with $N$ frames, $X = [X^1, X^2, \cdots, X^N]$, $N$ convolutional maps of size $R^{L \times D}$ are extracted, where $L$ is the number of locations in the input frame and $D$ is the number of feature dimensions (see {\sc tem} in Figure \ref{fig:our_model}). To enable the network to focus on the most important of the $L$ locations in each frame,
%given the hidden state of RNN,
we use a soft location attention mechanism, $f_{\mathbf{Latt}}$ \citep{BahdanauCB14, icml2015_xuc15, SharmaKS15}.
%called ``Location Attention (\textbf{Latt})''.
More specifically, we first use a softmax to compute $L$ probabilities that specify the importance of the different parts of the frame, and this creates an input map for $f_{\mathbf{Latt}}$. Formally, given a video frame at time $t$, $X^t \in R^{L \times D}$, the $f_{\mathbf{Latt}}$ mechanism is defined as follows:
\begin{align}
\label{eq:Latt}
& {\rho^t_j} = \frac{\exp(\mathbf{W_p^j} h^{t-1}_v)}{\sum_{k=1}^L \exp(\mathbf{W_p^k} h^{t-1}_v)} \\
& f_{\mathbf{Latt}}({X^t, h^{t-1}_v};\mathbf{W_p}) = \sum_{j=1}^L {\rho^t_j} {X^t_{j}}
\end{align}
where $h^{t-1}_v \in R^K$ is the hidden state of the {\sc tem} at time $t-1$ with $K$ dimensions, and $W_p \in R^{L \times K}$.
%and $F^{t} \in R^{D}$.
For each video frame time step, the {\sc tem} learns a vector representation by applying location attention to the frame convolution map, conditioned on all previously seen frames:
\begin{align}
\label{eq:temporal_Latt}
& {F^{t}} = f_{\mathbf{Latt}}({X^t, h^{t-1}_v};\mathbf{W_p}) \\
& {h^{t}_v} = f_\mathbf{v}({F^{t}},~{h^{t-1}_v};\mathbf{\Theta_v})
\end{align}
where $f_\mathbf{v}$ can be an RNN/LSTM/GRU cell and $\mathbf{\Theta_v}$ denotes the parameters of $f_\mathbf{v}$. Because vanilla RNNs suffer from vanishing and exploding gradients~\citep{pascanu2013difficulty}, we use gradient clipping, and an LSTM with the following update equations to handle potential vanishing gradients:
\begin{align*}
& {i^t} = \sigma(F^{t} \mathbf{W_{x_i}} + {(h^{t-1}_v)}^T\mathbf{W_{h_i}}) \\
& {f^t} = \sigma(F^{t} \mathbf{W_{x_f}} + {(h^{t-1}_v)}^T\mathbf{W_{h_f}}) \\
& {o^t} = \sigma(F^{t} \mathbf{W_{x_o}} + {(h^{t-1}_v)}^T\mathbf{W_{h_o}}) \\
& {g^t} = {\tanh}(F^{t} \mathbf{W_{x_g}} + {(h^{t-1}_v)}^T\mathbf{W_{h_g}}) \\
& {c^t_v} = {f^t} \odot {c^{t-1}_v} + {i^t} \odot {g^{t}} \\
& {h^t_v} = {o^t}\odot {\tanh(c^t_v)}
\end{align*}
where $W_{h*} \in R^{K \times K}$, $W_{x*} \in R^{D \times K}$, and we define $\Theta_v = \{W_{h*},W_{x*}\}$.
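The following is a minimal sketch of the {\sc tem}'s soft location attention and recurrent update, written in PyTorch. The class and variable names, the single-layer LSTM cell, and the default sizes are illustrative assumptions for exposition, not the released implementation (which, for example, stacks two LSTMs).

\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class TEM(nn.Module):
    """Soft location attention over conv maps, followed by a recurrent update."""
    def __init__(self, feat_dim=512, num_locs=196, hidden_dim=256):
        super().__init__()
        # W_p: scores each of the L locations from the previous hidden state h_v
        self.W_p = nn.Linear(hidden_dim, num_locs, bias=False)
        # f_v: recurrent cell over the attended frame feature F^t (an LSTM here)
        self.cell = nn.LSTMCell(feat_dim, hidden_dim)
        self.hidden_dim = hidden_dim

    def forward(self, frames):
        # frames: (N, L, D) conv5_3 maps for N sampled frames
        N, L, D = frames.shape
        h = frames.new_zeros(self.hidden_dim)
        c = frames.new_zeros(self.hidden_dim)
        states = []
        for t in range(N):
            rho = F.softmax(self.W_p(h), dim=-1)           # (L,) location weights
            F_t = (rho.unsqueeze(-1) * frames[t]).sum(0)   # (D,) attended frame feature
            h, c = self.cell(F_t.unsqueeze(0), (h.unsqueeze(0), c.unsqueeze(0)))
            h, c = h.squeeze(0), c.squeeze(0)
            states.append(h)
        return torch.stack(states)   # (N, hidden_dim) video states passed to the IAM

# Example: 8 sampled frames, each a 14x14x512 conv map flattened to 196x512.
tem = TEM()
video_states = tem(torch.randn(8, 196, 512))
\end{verbatim}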
\subsection{Iterative Attention/Memory ({\sc iam})}\label{sec:HAM}
A main contribution of this work is a global view of the video description task: a memory-based attention mechanism that learns iterative attention relationships in an efficient sequence-to-sequence memory structure. We refer to this as the Iterative Attention/Memory mechanism ({\sc iam}); it aggregates information from previously generated words and all input frames. The {\sc iam} component performs iterative, memorized attention between the input video and the description. More specifically, it learns an iterative attention structure for where to attend in a video given all previously generated words (from the Decoder) and previous states (from the {\sc tem}). This functions as a memory structure, remembering encoded versions of the video together with the corresponding language, and in turn enabling the Decoder to access the full encoded video and previously generated words as it generates new words. This component addresses several key issues in generating a coherent video description. In video description, a single word or phrase often describes action spanning multiple frames within the input video. By employing the {\sc iam}, the model can effectively capture the relationship between a relatively short bit of language and an action that occurs over multiple frames. This also directly addresses the problem of identifying which parts of the video are most relevant for description.

The proposed Iterative Attention/Memory mechanism is formalized with an {\bf Attention} update and a {\bf Memory} update, detailed in Figure \ref{fig:HAM}. Figure \ref{fig:our_model} illustrates where the {\sc iam} sits within the full model, with the Attention module shown in \ref{fig:our_model}a and the Memory module shown in \ref{fig:our_model}b. As formalized in Figure \ref{fig:HAM}, the {\it Attention} update $\hat{F}(\Theta_a)$ computes, at a given time step, the set of attention probabilities over the input video states, the memory state, and the decoder state. The {\it Memory} update stores what has been attended to and described. This serves as the memorization component, combining the previous memory with the current iterative attention $\hat{F}$.
% within the video sequence and the last decoder state.
%$\hat{F}$ and encoded version of the input video with respect to the language model.
We use an LSTM $f_m$ with the equations described above to enable the network to learn multi-layer attention over the input video and its corresponding language. The output of this function is then used as input to the Decoder.

\subsection{Decoder}\label{sec:Dec}
In order to generate a new word conditioned on all previous words and the {\sc iam} states, a recurrent structure is modelled as follows:
\begin{align}
\label{eq:Dec}
&h^{{t^\prime}}_g = f_g(s^{t^\prime}, ~h_m^{t^\prime},~h^{{t^\prime-1}}_g; \mathbf{\Theta_g}) \\
&\hat{s}^{t^\prime} = \mathrm{softmax}((h^{{t^\prime}}_g)^T\mathbf{W_e})
\end{align}
where $h^{t^\prime}_g \in R^K$, $s^{t^\prime}$ is a word vector at time ${t^\prime}$, $W_e\in R^{K \times |V|}$, and $|V|$ is the vocabulary size. The vector $\hat{s}^{t^\prime}$ assigns a probability to each word in the vocabulary. $f_{g}$ is an LSTM in which $s^{t^\prime}$ and $h_m^{t^\prime}$ are inputs and $h^{{t^\prime}}_g$ is the recurrent state.

\subsection{Training and Optimization}\label{sec:training}
The goal of our network is to predict the next word given all previously seen words and an input video. In order to optimize our network parameters $\Theta = \{W_p, \Theta_v, \Theta_a, \Theta_m, \Theta_g, W_e \}$, we minimize a negative log-likelihood loss function:
\begin{equation}\label{eq:NLL}
L(S, X; \mathbf{\Theta}) = -\sum_{t^\prime=1}^{T} \sum_{i=1}^{|V|} s_i^{t^\prime}\log (\hat{s}_i^{t^\prime}) + \lambda \| \Theta \|_2^2
\end{equation}
where $|V|$ is the vocabulary size. We train our network fully \textit{end-to-end} using a first-order stochastic gradient-based optimization method with an adaptive learning rate. More specifically, in order to optimize our network parameters, we use Adam~\citep{KingmaB14} with learning rate $2\times 10^{-5}$ and set $\beta_1$ and $\beta_2$ to $0.8$ and $0.999$, respectively. During training, we use a batch size of $16$. The source code for this paper is available at \url{https://github.com/rasoolfa/videocap}.
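The following is a schematic training step consistent with the loss and optimizer settings above, written in PyTorch. The stand-in model (\texttt{TinyCaptioner}), its interface, and the regularization and clipping constants are illustrative assumptions for exposition; they are not the paper's architecture or the released code.

\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyCaptioner(nn.Module):
    """Stand-in for TEM + IAM + Decoder: mean-pools frame features and decodes
    with a single LSTM. Illustrative only; not the paper's architecture."""
    def __init__(self, feat_dim=512, vocab=1000, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        self.proj = nn.Linear(feat_dim, hidden)
        self.lstm = nn.LSTM(2 * hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab)

    def forward(self, frames, words):
        # frames: (B, N, feat_dim) video features; words: (B, T) previous words
        ctx = self.proj(frames.mean(1, keepdim=True)).expand(-1, words.size(1), -1)
        h, _ = self.lstm(torch.cat([self.embed(words), ctx], dim=-1))
        return self.out(h)   # (B, T, vocab) per-step vocabulary logits

def training_step(model, optimizer, frames, captions, pad_idx=0, lam=1e-4, clip=5.0):
    """One step of the negative log-likelihood objective with L2 regularization."""
    logits = model(frames, captions[:, :-1])          # predict word t' from words < t'
    nll = F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                          captions[:, 1:].reshape(-1), ignore_index=pad_idx)
    l2 = sum(p.pow(2).sum() for p in model.parameters())
    loss = nll + lam * l2                             # lambda * ||Theta||_2^2
    optimizer.zero_grad()
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), clip)   # gradient clipping
    optimizer.step()
    return loss.item()

model = TinyCaptioner()
# Adam settings reported above: lr = 2e-5, beta_1 = 0.8, beta_2 = 0.999, batch size 16.
opt = torch.optim.Adam(model.parameters(), lr=2e-5, betas=(0.8, 0.999))
loss = training_step(model, opt, torch.randn(16, 8, 512),
                     torch.randint(1, 1000, (16, 31)))
\end{verbatim}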
\documentclass{article} % more modern
\newcommand{\theHalgorithm}{\arabic{algorithm}}
\usepackage[accepted]{icml2017}
\hypersetup{
pdfinfo={
Title={Memory-augmented Attention Modelling for Videos},
Author={Rasool Fakoor, Abdel-rahman Mohamed, Margaret Mitchell, Sing Bing Kang, Pushmeet Kohli},
}
}
\pdfoutput=1
\title{Memory-augmented Attention Modelling for Videos}
\date{}
\author{Rasool Fakoor$^{\dag}$\thanks{Corresponding author: Rasool Fakoor (rasool.fakoor@mavs.uta.edu)}, Abdel-rahman Mohamed$^{\dag\dag}$, Margaret Mitchell$^{\ddag\dag}$, Sing Bing Kang$^{\dag\dag}$,\\
Pushmeet Kohli$^{\dag\dag}$\\
$^{\dag\dag}$Microsoft Research ~~ $^{\dag}$University of Texas at Arlington~~ $^{\ddag\dag}$Google\\
$^{\dag}$\small{rasool.fakoor@mavs.uta.edu}, $^{\dag\dag}$\small{\{asamir,~singbing.kang,~pkohli\}@microsoft.com}, $^{\ddag\dag}$\small{mmitchellai@google.com}
}
\cfoot{\thepage}
\makeatletter
\patchcmd\@combinedblfloats{\box\@outputbox}{\unvbox\@outputbox}{}{%
\errmessage{\noexpand\@combinedblfloats could not be patched}%
}%
\makeatother
\begin{document}
\maketitle
\input{Abstract}
\input{Introduction}
\input{RelatedWorks}
\input{Model}
\input{Experiments}
\input{Conclusion}
\bibliographystyle{icml2017}
\end{document}