Dataset fields: paper (string, 24–111 chars), paper_id (string, 10 chars), table_caption (string, 3–663 chars), table_column_names (sequence, 2–14 items), table_content_values (sequence, 1–49 items), text (string, 116–2.01k chars), full_body_text (string, 19.3k–104k chars).
Memory-augmented Attention Modelling for Videos
1611.02261
Table 1: Ablation of the proposed model with and without the iam component on the MSVD test set.
[ "Method", "meteor", "bleu-1", "bleu-2", "bleu-3", "bleu-4", "cider" ]
[ [ "Att + No tem", "31.20", "77.90", "65.10", "55.30", "44.90", "[BOLD] 63.90" ], [ "Att + tem", "31.00", "79.00", "66.50", "56.30", "45.50", "61.00" ], [ "No iam + tem", "30.50", "78.10", "65.20", "55.10", "44.60", "60.50" ], [ "iam + No tem", "31.00", "78.70", "66.90", "[BOLD] 57.40", "[BOLD] 47.00", "62.10" ], [ "iam + tem [40f]", "31.70", "79.00", "66.20", "56.0", "45.60", "62.20" ], [ "iam + tem [8f]", "[BOLD] 31.80", "[BOLD] 79.40", "[BOLD] 67.10", "56.80", "46.10", "62.70" ] ]
The first (Att + No tem) corresponds to a simpler version of our model in which we remove the tem component and instead pass each frame of the video through a CNN, extracting features from the last fully-connected hidden layer. In addition, we replace our iam with a simpler version in which the model only memorizes the current step instead of all previous steps. The next variation (Att + tem) is the same as the first, except that we use the tem instead of fully connected CNN features. In the next ablation (No iam + tem), we remove the iam component from our model and keep the rest of the model as-is. In the next variation (iam + No tem), we remove the tem and calculate features for each frame, similar to Att + No tem. Finally, the last two rows in the table are our proposed model (iam + tem) with all its components, using 40 and 8 sampled frames. The model generates mostly correct descriptions, with naturalistic variation from the ground truth. Errors illustrate a preference to describe items that have a higher likelihood of being mentioned, even if they appear in fewer frames. For example, in the “a dog is on a trampoline” video, our model focuses on the man, who appears in only a few frames, and generates the incorrect description “a man is washing a bath”.
\section{Conclusion} We introduce a general framework for a memory-based sequence learning model, trained end-to-end. We apply this framework to the task of describing an input video with a natural language description. Our model utilizes a deep learning architecture that represents video with an explicit model of the video's temporal structure, and jointly models the video description and the temporal video sequence. This effectively connects the visual video space and the language description space. A memory-based attention mechanism helps guide where to attend and what to reason about as the description is generated. This allows the model to not only reason efficiently about local attention, but also to consider the full sequence of video frames during the generation of each word. Our experiments confirm that the memory components in our architecture, most notably the {\sc iam} module, play a significant role in improving the performance of the entire network. Future work should aim to refine the temporal video frame model, {\sc tem}, and explore how to better capture the ideal frames for each description. \section{Introduction} Deep neural architectures have led to remarkable progress in computer vision and natural language processing problems. Image captioning is one such problem, where the combination of convolutional structures~\citep{alexnet, lecun1998gradient}, %built from image classification, %challenges such as ImageNet \cite{ImageNetCite}, and sequential recurrent structures \citep{s2sIlya} leads to remarkable improvements over previous work \cite{FangEtAl2015,DevlinEtAl2015}. One of the emerging modelling paradigms, shared by models for image captioning as well as related vision-language problems, is the notion of an attention mechanism that guides the model to attend to certain parts of the image while generating the description \cite{icml2015_xuc15}. The attention models used for problems such as image captioning typically depend on the single image under consideration and the partial output generated so far, jointly capturing one region of an image and the words being generated. However, such models cannot directly capture the temporal reasoning necessary to effectively produce words that refer to actions and events taking place over multiple frames in a video. For example, in a video depicting ``someone waving a hand'', the ``waving'' action can start from any frame and can continue for a variable number of following frames. At the same time, videos contain many frames that do not provide additional information over the smaller set of frames necessary to generate a summarizing description. Given these challenges, it is not surprising that even with recent advancements in image captioning~\cite{FangEtAl2015, icml2015_xuc15, densecap, Vinyals_2015_CVPR, lrcn2014}, video captioning has remained challenging. Motivated by these observations, we introduce a memory-based attention mechanism for video captioning and description. Our model utilizes memories of past attention in the video when reasoning about where to attend at the current time step. This allows the model to not only effectively leverage local attention, but also to consider the entire video as it generates each word. This mechanism effectively binds information from both vision and language sources into a coherent structure.
%This mechanism is similar to the proposed {\it central executive} system in human cognition, which is thought to permit human performance on two simultaneous tasks (e.g., seeing and saying) using two separate perceptual domains (e.g., visual and linguistic) by binding information from both sources into coherent structure that enables coordination, selective attention, and inhibition. Our work shares the same goals as recent work on attention mechanisms for sequence-to-sequence architectures, such as \citet{RocktaschelGHKB15} and \citet{YangYWSC16}. %However, there are major differences between these works and our current work. ~\citet{RocktaschelGHKB15} consider the domain of entailment relations, where the goal is to determine entailment given two input sentences. They propose a soft attention model that is not only focused on the current state but on previous states as well. In our model, all previous attentions are explicitly stored in memory, and the system learns to memorize the encoded version of the input videos conditioned on previously seen words. \citet{YangYWSC16} and our work both try to solve the problem of locality of attention in vision-to-language, but while \citet{YangYWSC16} introduce a memory architecture optimized for single image caption generation, we introduce a memory architecture that operates on a streaming video's temporal sequence. % More specifically, they incorporate discriminative supervision into their ''reviewer'' mechanism, which is not the case in our model. Further, their model is applied to image caption generation, which is to some extent simpler than video caption generation because there is no temporal structure to model. The contributions of this work include: \begin{itemize} \item A deep learning architecture that represents video with an explicit model of the video's temporal structure. \vspace{-.5em} \item A method to jointly model the video description and temporal video sequence, connecting the visual video space and the language description space. \vspace{-.5em} \item A memory-based attention mechanism that learns iterative attention relationships in a simple and effective sequence-to-sequence memory structure. \vspace{-.5em} \item Extensive comparison of this work and previous work on the video captioning problem on the MSVD \citep{chencl11} and the Charades \citep{sigurdsson2016hollywood} datasets. \end{itemize} \noindent We focus on the video captioning problem; however, the proposed model is general enough to be applicable to other sequence problems where attention models are used (e.g., machine translation or recognizing entailment relations).\begin{abstract} We present a method to improve video description generation by modeling higher-order interactions between video frames and described concepts. By storing past visual attention in the video associated with previously generated words, the system is able to decide what to look at and describe in light of what it has already looked at and described. This enables not only more effective local attention, but tractable consideration of the video sequence while generating each word. Evaluation on the challenging and popular {\it MSVD} and {\it Charades} datasets demonstrates that the proposed architecture outperforms previous video description approaches without requiring external temporal video features. The source code for this paper is available at \url{https://github.com/rasoolfa/videocap}.
\end{abstract} \section{Experiments} \paragraph*{Dataset} We evaluate the model on the \textit{Charades}~\citep{sigurdsson2016hollywood} dataset and the {\it Microsoft Video Description Corpus (MSVD)}~\citep{chencl11}. Charades contains $9,848$ videos (in total) and provides $27,847$\footnote{Only $16,087$ of the $27,847$ are used as descriptions for our evaluation, since the $27,847$ total includes scripts of the videos as well as descriptions.} video descriptions. We follow the same train/test splits as \citet{sigurdsson2016hollywood}, with $7,569$ train, $1,863$ test, and $400$ validation videos. A main difference between this dataset and others is that it uses a ``Hollywood in Homes'' approach to data collection, where ``actors'' are crowdsourced to act out different actions. This yields a diverse set of videos, with each containing a specific action. % -- useful for evaluating the basics of video description. MSVD is a set of YouTube videos annotated by workers on Mechanical Turk,\footnote{\url{https://www.mturk.com/mturk/welcome}} who were asked to pick a video clip representing an activity. In this dataset, each clip is annotated by multiple workers with a single sentence. The dataset contains $1,970$ videos and about $80,000$ descriptions, where $1,200$ of the videos are training data, $670$ are test data, and the rest ($100$ videos) are used for validation. In order for the results to be comparable to other approaches, we follow the \textit{\textbf{exact}} training/validation/test splits provided by \citet{venugopalannaacl15}. \paragraph*{Evaluation metrics} We report results on the video description generation task. In order to evaluate descriptions generated by our model, we use model-free automatic evaluation metrics. We adopt the \textsc{meteor}, \textsc{bleu-n}, and \textsc{cide}r metrics available from the Microsoft COCO Caption Evaluation code\footnote{\url{https://github.com/tylin/coco-caption}} to score the system. \paragraph*{Video and Caption preprocessing} We preprocess the captions for both datasets using the Natural Language Toolkit (NLTK)\footnote{\url{http://www.nltk.org/}} and clip each description to at most $30$ words, since the majority have fewer. %Beyond this, no other type of preprocessing is used. We extract sample frames from each video and pass each frame through VGGnet~\citep{Simonyan14c} without fine-tuning. For the experiments in this paper, we use the feature maps from the \textit{conv5\_3} layer after applying \textit{ReLU}. The feature map in this layer is $14\times 14 \times 512$. Our {\sc tem} component operates on the flattened $196\times 512$ version of this feature cube. For the ablation studies, features from the fully connected layer with $4096$ dimensions are used as well. \paragraph*{Hyper-parameter optimization} We use random search~\citep{Bergstra2012} on the validation set to select hyper-parameters on both datasets. The word-embedding size, hidden layer size (for both the {\sc tem} and the Decoder), and memory size of the best model on Charades are $237$, $1316$, and $437$, respectively. These values are $402$, $1479$, and $797$ for the model on the MSVD dataset. A stack of two LSTMs is used in the Decoder and the {\sc tem}. The number of frame samples is a hyper-parameter selected from among $4$, $8$, $16$, and $40$ on the validation set. \textsc{att + No {\sc tem}} and \textsc{No iam + {\sc tem}} achieve the best results on the validation set with $40$ frames, and we use this as the number of frames for all models in the ablation study.
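As a concrete illustration of the frame preprocessing described above, the following sketch extracts the post-ReLU \textit{conv5\_3} maps from VGG-16 and flattens them to $196 \times 512$ per frame. This is our own illustrative code, not the authors' pipeline; it assumes PyTorch and torchvision, and the layer index and input normalization are assumptions:

```python
# Sketch of the frame feature extraction described above (assumptions: PyTorch +
# torchvision >= 0.13, ImageNet-pretrained VGG-16, 224x224 inputs). Each sampled
# frame is passed through VGG-16 without fine-tuning, and the post-ReLU conv5_3
# map (512 x 14 x 14) is flattened to L x D = 196 x 512 for the TEM.
import torch
import torchvision.models as models
import torchvision.transforms as T

vgg = models.vgg16(weights="IMAGENET1K_V1").eval()
# features[:30] ends right after the ReLU that follows conv5_3 (before pool5).
conv5_3 = vgg.features[:30]

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def frame_features(frames):
    """frames: list of PIL images (sampled video frames) -> tensor (N, 196, 512)."""
    batch = torch.stack([preprocess(f) for f in frames])   # (N, 3, 224, 224)
    fmap = conv5_3(batch)                                   # (N, 512, 14, 14)
    return fmap.flatten(2).transpose(1, 2)                  # (N, 196, 512)
```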
\subsection{Video Caption Generation} We first present an ablation analysis to elucidate the contribution of the different components of our proposed model. Then, we compare the overall performance of our model to other recent models. \subsection*{Ablation Analysis} Ablation results are shown in Table~\ref{tab:abl}, evaluated on the MSVD test set. The first (\textsc{Att + No tem}) corresponds to a simpler version of our model in which we remove the {\sc tem} component and instead pass each frame of the video through a CNN, extracting features from the last fully-connected hidden layer. % (e.g., \textit{fc7}). In addition, we replace our {\sc iam} with a simpler version where the model only memorizes the current step instead of all previous steps. The next variation (\textsc{Att + tem}) is the same as the first, except that we use the {\sc tem} instead of fully connected CNN features. In the next ablation (\textsc{No iam + tem}), we remove the {\sc iam} component from our model and keep the rest of the model as-is. In the next variation (\textsc{iam + No tem}), we remove the {\sc tem} and calculate features for each frame, similar to \textsc{Att + No tem}. Finally, the last two rows in the table are our proposed model (\textsc{iam + tem}) with all its components, using $40$ and $8$ sampled frames. The {\sc iam} plays a significant role in the proposed model, and removing it causes a large drop in performance, as measured by both {\sc bleu} and {\sc meteor}. On the other hand, removing the {\sc tem} by itself does not hurt performance as much as removing the {\sc iam}. Put together, the two complement one another, resulting in overall better {\sc meteor} performance. However, further development on the {\sc tem} component in future work is warranted. In the \textsc{No iam + tem} condition, an entire video must be represented with a fixed-length vector, which may contribute to the lower performance~\citep{BahdanauCB14}. This is in contrast to the other models, which apply single-layer attention or the {\sc iam} to search relevant parts of the video aligned with the description. \subsection*{Performance Comparison} To extensively evaluate the proposed model, we compare with state-of-the-art models and baselines for the video caption generation task on the MSVD dataset. In this experiment, we use $8$ frames per video as the input to the {\sc tem} module. As shown in Table \ref{tab:main_res},\footnote{The symbol $-$ indicates that the score was not reported by the corresponding paper. The horizontal line in Table \ref{tab:main_res} separates models that do/do not use external features for the video representation.} our proposed model achieves state-of-the-art scores in {\sc bleu}-4, and outperforms almost all systems on {\sc meteor}. The closest-scoring comparison system, from \citet{pan2016hierarchical}, shows a trade-off between {\sc meteor} and {\sc bleu}: {\sc bleu} prefers descriptions with short-distance fluency and high lexical overlap with the observed descriptions, while {\sc meteor} permits less direct overlap and longer descriptions. A detailed study of the descriptions generated by the two systems would be needed to better understand these differences.
The improvement over previous work is particularly noteworthy because we do not use external features for the video, such as Optical Flow \citep{Bro04a} (denoted Flow), 3-Dimensional Convolutional Network features~\citep{C3DTan} (denoted C3D), or fine-tuned CNN features (denoted FT), which further enhance aspects such as action recognition by leveraging an external dataset such as UCF-101. The only system using external features that outperforms the model proposed here is from \citet{YuWHYX15}, who use a slightly different version of the same dataset\footnote{\citet{YuWHYX15} use the MSVD dataset reported in \cite{Guadarrama2013}, which has different preprocessing.} along with C3D features for a large improvement in results (compare Table~\ref{tab:main_res} rows 4 and 11); future work may explore the utility of external visual features for this work. Here, we demonstrate that the proposed architecture maps visual space to language space with improved performance over previous work, before the addition of further resources. We additionally report results on the Charades dataset \cite{sigurdsson2016hollywood}, which is challenging to train on because there are only a few ($\approx2$) captions per video. In this experiment, we use $16$ frames per video as the input to the {\sc tem} module. As shown in Table~\ref{tab:main_res_HW}, our method achieves a $10\%$ relative improvement over the \citet{venugopalan15iccv} model reported by \citet{sigurdsson2016hollywood}. It is worth noting that humans reach a {\sc meteor} score of $24$ and a {\sc bleu-4} score of $20$, illustrating the low upper bound in this task.\footnote{For comparison, the upper bound {\sc bleu} score in machine translation for English to French is above 30.} \subsection*{Results Discussion} We show some example descriptions generated by our system in Figure \ref{fig:caps_samples}. The model generates mostly correct descriptions, with naturalistic variation from the ground truth. Errors illustrate a preference to describe items that have a higher likelihood of being mentioned, even if they appear in fewer frames. For example, in the ``a dog is on a trampoline'' video, our model focuses on the man, who appears in only a few frames, and generates the incorrect description ``a man is washing a bath''. The errors, alongside the ablation study shown in Table \ref{tab:abl}, suggest that the {\sc tem} module in particular may be further improved by focusing on how frames in the video sequence are captured and passed to the {\sc iam} module. \section{Related Work} One of the primary challenges in learning a mapping from a visual space (i.e., video or image) to a language space is learning a representation that not only effectively represents each of these modalities, but is also able to translate a representation from one space to the other. \citet{Rohrbachiccv2013} developed a model that generates a semantic representation of visual content that can be used as the source language for the language generation module. \citet{venugopalannaacl15} proposed a deep method to translate a video into a sentence where an entire video is represented with a single vector based on the mean pool of frame features. However, it was recognized that representing a video by an average of its frames loses the temporal structure of the video.
To address this problem, recent work \citep{yao2015capgenvid, pan2016hierarchical, venugopalan15iccv, AndrewShinICP, PanMYLR15, XuXiChAAAI2015, BallasYPC15, YuWHYX15} proposed methods to model the temporal structure of videos as well as language. The majority of these methods are inspired by sequence-to-sequence~\citep{s2sIlya} and attention~\citep{BahdanauCB14} models. Sequence learning was proposed to map the input sequence of a source language to a target language~\citep{s2sIlya}. Applying this method with an additional attention mechanism to the problem of translating a video to a description showed promising initial results; however, it also revealed additional challenges. First, modelling the video content with a fixed-length vector in order to map it to a language space is a more complex problem than mapping from a language to a language, given the complexity of visual content and the difference between the two modalities. Since not all frames in a video are equally salient for a short description, and an event can happen in multiple frames, it is important for a model to identify which frames are most salient. Further, the models need additional work to be able to focus on points of interest within the video frames to select what to talk about. Even representing a video with a variable-length vector using attention \citep{yao2015capgenvid} can still be problematic. More specifically, current attention methods are local~\cite{YangYWSC16}, since the attention mechanism works in a sequential structure, and lack the ability to capture global structure. Moreover, combining a video and a description as a sequence-to-sequence problem motivates using some variant of a recurrent neural network (RNN) \citep{Hochreiter}: Given the limited capacity of a recurrent network to model very long sequences, memory networks~\citep{WestonCB14,SukhbaatarSWF15} have been introduced to help the RNN memorize sequences. However, these memory networks can be difficult to train. The model proposed by \citet{WestonCB14} requires supervision at each layer, which makes training with backpropagation a challenging task. \citet{SukhbaatarSWF15} proposed a memory network that can be trained end-to-end, and the current work follows this research line to tackle the challenging problem of modeling vision and language memories for video description generation. %especially with write operation~\citep{GravesWD14}.\section{Learning to Attend and Memorize} A main challenge in video description is to find a mapping that can capture the connection between the video frames and the video description. Sequence-to-sequence models, which work well at connecting input and output sequences in machine translation~\citep{s2sIlya}, do not perform as well for this task, as there is not the same direct alignment between a full video sequence and its summarizing description. Our goal in the video description problem is to create an architecture that learns which moments to focus on in a video sequence in order to generate a summarizing natural language description. The modelling challenges we set forth for the video description problem are: (1) Processing the temporal structure of the video; (2) Learning to attend to important parts of the video; and (3) Generating a description where each word is relevant to the video. At a high level, this can be understood as having three primary parts: {\it When} moments in the video are particularly salient; {\it what} concepts to focus on; and {\it how} to talk about them.
We directly address these issues in an end-to-end network with three primary corresponding components (Figure \ref{fig:our_model}): a Temporal Model ({\sc tem}), an Iterative Attention/Memory model ({\sc iam}), and a Decoder. In summary: \begin{itemize} \item {\bf When:} Frames within the video sequence - The Temporal Model ({\sc tem}). \item {\bf What:} Language-grounded concepts depicted in the video - The Iterative Attention/Memory mechanism ({\sc iam}). \item {\bf How:} Words that fluently describe the {\it what} and {\it when} - The Decoder. \end{itemize} The Temporal Model is in place to capture the temporal structure of the video: It functions as a {\it when} component. The Iterative Attention/Memory is a main contribution of this work, functioning as a {\it what} component to remember relationships between words and video frames, and storing longer-term memories. The Decoder generates language, and functions as the {\it how} component to create the final description. To train the system end to end, we formulate the problem as sequence learning to maximize the probability of generating a correct description given a video: \begin{equation} \Theta^* = \underset{\Theta}{\arg\max}\sum_{(S, {f_1,\dots, f_N})} \log~p(S|f_1, \dots,f_N ;\mathbf{\Theta}) \end{equation} where $S$ is the description, ${f_1, f_2,\dots,f_N}$ are the input video frames, and $\Theta$ is the model parameter vector. In the next sections, we will describe each component of the model, then explain the details of training and inference. \small \paragraph{\small Notational note:} Numbered equations use bold face to denote multi-dimensional learnable parameters, e.g., ${\mathbf{W^j_p}}$. To distinguish the two different sets of time steps, one for video frames and one for words in the description, we use the notation $t$ for video and $t^\prime$ for language. Throughout, the terms {\it description} and {\it caption} are used interchangeably. \normalsize \subsection{Temporal Model ({\sc tem})}\label{sec:TEM} The first module we introduce encodes the temporal structure of the input video. A natural framework for this is a Recurrent Neural Network (RNN), which has been shown to be effective in modelling the temporal structure of sequential data such as video \citep{BallasYPC15, SharmaKS15, venugopalan15iccv} and speech \citep{graves14}. In order to apply this to video sequences to generate a description, we seek to capture the fact that frame-to-frame temporal variation tends to be local \citep{BroxMalik2011} and critical in modeling motion~\citep{BallasYPC15}. %, it is important to also consider a frame representation that can preserve frame-to-frame temporal variation. Visual features extracted from the last fully connected layers of Convolutional Neural Networks (CNNs) have been shown to produce state-of-the-art results in image classification and recognition \citep{Simonyan14c, He_2016_CVPR}, and thus seem a good choice for modeling visual frames. However, these features tend to discard low-level information useful in modeling the motion in the video \citep{BallasYPC15}. To address these challenges, we implement an RNN we call the Temporal Model ({\sc tem}). % to model the temporal structure of the video. At each time step of the {\sc tem}, a video frame encoding from a CNN %with $D$ dimensions %with size $R^{D}$ serves as input. Rather than extracting video frame features from a fully connected layer %last hidden layers of the pretrained CNN, we extract intermediate convolutional maps.
In detail, for a given video $X$ with $N$ frames $X = [X^1, X^2, \cdots, X^N]$, $N$ convolutional maps of size $R^{L \times D}$ are extracted, where $L$ is the number of locations in the input frame and $D$ is the number of dimensions (see {\sc tem} in Figure \ref{fig:our_model}). To enable the network to attend to the most important of the $L$ locations in each frame, %given the hidden state of RNN, we use a soft location attention mechanism, $f_{\mathbf{Latt}}$ \citep{BahdanauCB14, icml2015_xuc15, SharmaKS15}. %called ``Location Attention (\textbf{Latt})''. More specifically, we first use a softmax to compute $L$ probabilities that specify the importance of different parts in the frame, and this creates an input map for $f_{\mathbf{Latt}}$. Formally, given a video frame at time $t$, $X^t \in R^{L \times D}$, the $f_{\mathbf{Latt}}$ mechanism is defined as follows: \begin{align} \label{eq:Latt} & {\rho^t_j} = \frac{ \exp( \mathbf{ W_p^j} h^{t-1}_v )}{\sum_{k=1}^L \exp(\mathbf{W_p^k} h^{t-1}_v )} \\ & f_{\mathbf{Latt}}({X^t, h^{t-1}_v};\mathbf{W_p}) = \sum_{j=1}^L {\rho_j^t} {X^t_{j}} \end{align} where $h^{t-1}_v \in R^K$ is the hidden state of the {\sc tem} at time $t-1$ with $K$ dimensions, and $W_p \in R^{L \times K}$. %and $F^{t} \in R^{D}$. For each video frame time step, {\sc tem} learns a vector representation by applying location attention over the frame convolution map, conditioned on all previously seen frames: \begin{align} \label{eq:temporal_Latt} & {F^{t}} = f_{\mathbf{Latt}}({X^t, h^{t-1}_v};\mathbf{W_p}) \\ & {h^{t}_v} = f_\mathbf{v}({F^{t}},~{h^{t-1}_v};\mathbf{\Theta_v} ) \end{align} where $f_\mathbf{v}$ can be an RNN/LSTM/GRU cell and {\bf $\Theta_v$} denotes the parameters of $f_\mathbf{v}$. Because vanilla RNNs suffer from vanishing and exploding gradient problems~\citep{pascanu2013difficulty}, we use gradient clipping, and an LSTM with the following flow to handle potential vanishing gradients: \begin{align*} & {i^t} = \sigma(F^{t} \mathbf{W_{x_i}} + {(h^{t-1}_v)}^T\mathbf{W_{h_i}}) \\ & {f^t} = \sigma(F^{t} \mathbf{W_{x_f}} + {(h^{t-1}_v)}^T\mathbf{W_{h_f}}) \\ & {o^t} = \sigma(F^{t} \mathbf{W_{x_o}} + {(h^{t-1}_v)}^T\mathbf{W_{h_o}}) \\ & {g^t} = {\tanh}(F^{t} \mathbf{W_{x_g}} + {(h^{t-1}_v)}^T\mathbf{W_{h_g}}) \\ & {c^t_v} = {f^t} \odot {c^{t-1}_v} + {i^t} \odot {g^{t}} \\ & {h^t_v} = {o^t}\odot {\tanh(c^t_v)} \end{align*} where $W_{h*} \in R^{K \times K}$, $W_{x*} \in R^{D \times K}$, and we define $\Theta_v = \{W_{h*},W_{x*}\}$. \subsection{Iterative Attention/Memory ({\sc iam})}\label{sec:HAM} A main contribution of this work is a global view for the video description task: a memory-based attention mechanism that learns iterative attention relationships in an efficient sequence-to-sequence memory structure. We refer to this as the Iterative Attention/Memory mechanism ({\sc iam}), and it aggregates information from previously generated words and all input frames. The {\sc iam} component is an iterative memorized attention between an input video and a description. More specifically, it learns an iterative attention structure for where to attend in a video given all previously generated words (from the Decoder) and previous states (from the {\sc tem}). This functions as a memory structure, remembering encoded versions of the video with corresponding language, and in turn, enabling the Decoder to access the full encoded video and previously generated words as it generates new words. This component addresses several key issues in generating a coherent video description.
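Before detailing how the {\sc iam} is used, the following minimal sketch illustrates the {\sc tem} equations above: the soft location attention $f_{\mathbf{Latt}}$ over the $L \times D$ map of each frame, followed by the recurrent update $f_\mathbf{v}$. This is an assumed PyTorch implementation, not the authors' code; the hidden size and the use of a single LSTM cell (rather than the two-layer stack used in the experiments) are simplifications:

```python
# Sketch of the TEM (our assumed PyTorch version): soft location attention over
# the L x D convolutional map of each frame, conditioned on the previous TEM
# hidden state, followed by an LSTM update over the attended feature F^t.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TEM(nn.Module):
    def __init__(self, L=196, D=512, K=512):
        super().__init__()
        self.W_p = nn.Linear(K, L, bias=False)  # rho^t = softmax(W_p h_v^{t-1})
        self.f_v = nn.LSTMCell(D, K)            # recurrent update f_v (single cell here)

    def forward(self, video):
        # video: (T, L, D), one conv5_3 map per sampled frame.
        T_steps, L, D = video.shape
        K = self.f_v.hidden_size
        h = video.new_zeros(1, K)
        c = video.new_zeros(1, K)
        states = []
        for t in range(T_steps):
            rho = F.softmax(self.W_p(h), dim=-1)            # (1, L) location weights
            F_t = torch.einsum("bl,ld->bd", rho, video[t])  # attended feature F^t, (1, D)
            h, c = self.f_v(F_t, (h, c))                    # h_v^t
            states.append(h)
        return torch.stack(states)                          # (T, 1, K): inputs for the IAM
```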
In video description, a single word or phrase often describes action spanning multiple frames within the input video. By employing the {\sc iam}, the model can effectively capture the relationship between a relatively short bit of language and an action that occurs over multiple frames. This also functions to directly address the problem of identifying which parts of the video are most relevant for description. The proposed Iterative Attention/Memory mechanism is formalized with an {\bf Attention} update and a {\bf Memory} update, detailed in Figure \ref{fig:HAM}. Figure \ref{fig:our_model} illustrates where the {\sc iam} sits within the full model, with the Attention module shown in \ref{fig:our_model}a and the Memory module shown in \ref{fig:our_model}b. As formalized in Figure \ref{fig:HAM}, the {\it Attention} update $\hat{F}(\Theta_a)$ computes, at a given time step, the attention probabilities over the input video states, conditioned on the memory state and the decoder state. The {\it Memory} update stores what has been attended to and described. This serves as the memorization component, combining the previous memory with the current iterative attention $\hat{F}$. % within the video sequence and the last decoder state. %$\hat{F}$ and encoded version of the input video with respect to the language model. We use an LSTM $f_m$ with the equations described above to enable the network to learn multi-layer attention over the input video and its corresponding language. The output of this function is then used as input to the Decoder. %It is worth noting that $f_\mathbf{_m}$ is an LSTM with the equations described previously. \subsection{Decoder}\label{sec:Dec} In order to generate a new word conditioned on all previous words and {\sc iam} states, a recurrent structure is modelled as follows: \begin{align} \label{eq:Dec} &h^{{t^\prime }}_g = f_g(s^{t^\prime}, ~h_m^{t^\prime},~h^{{t^\prime-1}}_g; \mathbf{\Theta_g}) \\ &\hat{s}^{t^\prime} = \mathrm{softmax}((h^{{t^\prime}}_g)^T\mathbf{W_e}) \end{align} where $h^{t^\prime}_g \in R^K$, $s^{t^\prime}$ is a word vector at time ${t^\prime}$, $W_e\in R^{K \times |V|}$, and $|V|$ is the vocabulary size. In addition, $\hat{s}^{t^\prime}$ assigns a probability to each word in the vocabulary. $f_{g}$ is an LSTM where $s^{t^\prime}$ and $h_m^{t^\prime}$ are inputs and $h^{{t^\prime}}_g$ is the recurrent state. \subsection{Training and Optimization }\label{sec:training} The goal in our network is to predict the next word given all previously seen words and an input video. In order to optimize our network parameters $\Theta = \{W_p, \Theta_v, \Theta_a, \Theta_m, \Theta_g, W_e \} $, we minimize a negative log-likelihood loss function: \begin{equation}\label{eq:NLL} L (S, X; \mathbf{\Theta}) = -\sum_{t^\prime=1}^{T} \sum_{i=1}^{|V|} s_i^{t^\prime}\log (\hat{s}_i^{t^\prime}) + \lambda\parallel\Theta \parallel_2^2 \end{equation} where $|V|$ is the vocabulary size. We fully train our network in an \textit{end-to-end} fashion using a first-order stochastic gradient-based optimization method with an adaptive learning rate. More specifically, in order to optimize our network parameters, we use Adam~\citep{KingmaB14} with learning rate $2\times 10^{-5}$ and set $\beta_1$, $\beta_2$ to $0.8$ and $0.999$, respectively. During training, we use a batch size of $16$. The source code for this paper is available at \url{https://github.com/rasoolfa/videocap}.
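As an illustration of the Decoder recurrence (Eq.~\ref{eq:Dec}) and the training objective (Eq.~\ref{eq:NLL}) described above, the sketch below implements a single decoding step and the negative log-likelihood loss with L2 regularization. This is our own assumed PyTorch code, not the released implementation; the embedding and hidden sizes and the value of $\lambda$ are placeholders, while the Adam settings (learning rate $2\times 10^{-5}$, $\beta_1 = 0.8$, $\beta_2 = 0.999$) are those reported in the text:

```python
# Sketch of the Decoder step and the training loss (assumptions: PyTorch, our own
# module shapes, placeholder lambda for the L2 term).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Decoder(nn.Module):
    def __init__(self, vocab_size, embed_dim, K):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.f_g = nn.LSTMCell(embed_dim + K, K)  # inputs: word vector s^{t'} and IAM output h_m^{t'}
        self.W_e = nn.Linear(K, vocab_size)       # projects h_g^{t'} to vocabulary logits

    def step(self, prev_word, h_m, state):
        x = torch.cat([self.embed(prev_word), h_m], dim=-1)
        h_g, c_g = self.f_g(x, state)
        return self.W_e(h_g), (h_g, c_g)          # softmax of the logits gives s_hat^{t'}

def caption_loss(logits, targets, params, lam=1e-4):
    # Cross-entropy is the negative log-likelihood of the target words under
    # softmax(logits), summed over time steps, plus lambda * ||Theta||_2^2
    # (lam is a placeholder value; the paper does not report lambda).
    nll = F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                          targets.reshape(-1), reduction="sum")
    l2 = sum((p ** 2).sum() for p in params)
    return nll + lam * l2

# Training uses Adam with the settings reported in the paper:
# optimizer = torch.optim.Adam(model.parameters(), lr=2e-5, betas=(0.8, 0.999))
```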
\documentclass{article} % more modern \newcommand{\theHalgorithm}{\arabic{algorithm}} \usepackage[accepted]{icml2017} \hypersetup{ pdfinfo={ Title={Memory-augmented Attention Modelling for Videos}, Author={Rasool Fakoor, Abdel-rahman Mohamed, Margaret Mitchell, Sing Bing Kang, Pushmeet Kohli}, } } \pdfoutput=1 \title{Memory-augmented Attention Modelling for Videos} \date{} \author{Rasool Fakoor$^{\dag}$\thanks{ Corresponding author: Rasool Fakoor (rasool.fakoor@mavs.uta.edu)}, Abdel-rahman Mohamed$^{\dag\dag}$, Margaret Mitchell$^{\ddag\dag}$, Sing Bing Kang$^{\dag\dag}$,\\ Pushmeet Kohli$^{\dag\dag}$\\ $^{\dag\dag}$Microsoft Research ~~ $^{\dag}$University of Texas at Arlington~~ $^{\ddag\dag}$Google\\ $^{\dag}$\small{rasool.fakoor@mavs.uta.edu}, $^{\dag\dag}$\small{\{asamir,~singbing.kang,~pkohli\}@microsoft.com} $^{\ddag\dag}$\small{mmitchellai@google.com}, } \cfoot{\thepage} \makeatletter \patchcmd\@combinedblfloats{\box\@outputbox}{\unvbox\@outputbox}{}{% \errmessage{\noexpand\@combinedblfloats could not be patched}% }% \makeatother \begin{document} \maketitle \input{Abstract} \input{Introduction} \input{RelatedWorks} \input{Model} \input{Experiments} \input{Conclusion} \bibliographystyle{icml2017} \end{document}
Memory-augmented Attention Modelling for Videos
1611.02261
Table 3: Video captioning evaluation on Charades (1,863 test videos). M = METEOR, B = BLEU, C = CIDEr. The Sigurdsson et al. (2016) results use the Venugopalan et al. (2015a) model.

| Method | M | B@1 | B@2 | B@3 | B@4 | C |
| --- | --- | --- | --- | --- | --- | --- |
| Human (Sigurdsson et al., 2016) | 24 | 62 | 43 | 29 | 20 | 53 |
| Sigurdsson et al. (2016) | 16 | 49 | 30 | 18 | 11 | 14 |
| **Our Model** | **17.6** | **50** | **31.1** | **18.8** | **11.5** | **16.7** |
We additionally report results on the Charades dataset. In this experiment, we use 16 frames per video as the input to the tem module. It is worth noting that humans reach a meteor score of 24 and a bleu-4 score of 20, illustrating the low upper bound in this task.
\section{Conclusion} We introduce a general framework for an memory-based sequence learning model, trained end-to-end. We apply this framework to the task of describing an input video with a natural language description. Our model utilizes a deep learning architecture that represents video with an explicit model of the video's temporal structure, and jointly models the video description and the temporal video sequence. This effectively connects the visual video space and the language description space. A memory-based attention mechanism helps guide where to attend and what to reason about as the description is generated. This allows the model to not only reason efficiently about local attention, but also to consider the full sequence of video frames during the generation of each word. Our experiments confirm that the memory components in our architecture, most notably from the {\sc iam} module, play a significant role in improving the performance of the entire network. Future work should raim to refine the temporal video frame model, {\sc tem}, and explore how to improve performance on capturing the ideal frames for each description. \section{Introduction} Deep neural architectures have led to remarkable progress in computer vision and natural language processing problems. Image captioning is one such problem, where the combination of convolutional structures~\citep{alexnet, lecun1998gradient}, %built from image classification, %challenges such as ImageNet \cite{ImageNetCite}, and sequential recurrent structures \citep{s2sIlya} leads to remarkable improvements over previous work \cite{FangEtAl2015,DevlinEtAl2015}. One of the emerging modelling paradigms, shared by models for image captioning as well as related vision-language problems, is the notion of an attention mechanism that guides the model to attend to certain parts of the image while generating \cite{icml2015_xuc15}. The attention models used for problems such as image captioning typically depend on the single image under consideration and the partial output generated so far, jointly capturing one region of an image and the words being generated. However, such models cannot directly capture the temporal reasoning necessary to effectively produce words that refer to actions and events taking place over multiple frames in a video. For example, in a video depicting ``someone waving a hand'', the ``waving'' action can start from any frame and can continue on for a variable number of following frames. At the same time, videos contain many frames that do not provide additional information over the smaller set of frames necessary to generate a summarizing description. Given these challenges, it is not surprising that even with recent advancements in image captioning~\cite{FangEtAl2015, icml2015_xuc15, densecap, Vinyals_2015_CVPR, lrcn2014}, video captioning has remained challenging. Motivated by these observations, we introduce a memory-based attention mechanism for video captioning and description. Our model utilizes memories of past attention in the video when reasoning about where to attend in a current time step. This allows the model to not only effectively leverage local attention, but also to consider the entire video as it generates each word. This mechanism effectively binds information from both vision and language sources into a coherent structure. 
%This mechanism is similar to the proposed {\it central executive} system in human cognition, which is thought to permit human performance on two simultaneous tasks (e.g., seeing and saying) using two separate perceptual domains (e.g., visual and linguistic) by binding information from both sources into coherent structure that enables coordination, selective attention, and inhibition. Our work shares the same goals as recent work on attention mechanisms for sequence-to-sequence architectures, such as \citet{RocktaschelGHKB15} and \citet{YangYWSC16}. %However, there are major differences between these works and our current work. ~\citet{RocktaschelGHKB15} consider the domain of entailment relations, where the goal is to determine entailment given two input sentences. They propose a soft attention model that is not only focused on the current state, but the previous as well. In our model, all previous attentions are explicitly stored into memory, and the system learns to memorize the encoded version of the input videos conditioned on previously seen words. \citet{YangYWSC16} and our work both try to solve the problem of locality of attention in vision-to-language, but while \citet{YangYWSC16} introduce a memory architecture optimized for single image caption generation, we introduce a memory architecture that operates on a streaming video's temporal sequence. % More specifically, they incorporate discriminative supervision into their ''reviewer'' mechanism, which is not the case in our model. Further, their model is applied to image caption generation, which is to some extent simpler than video caption generation because there is no temporal structure to model. The contributions of this work include: \begin{itemize} \item A deep learning architecture that represents video with an explicit model of the video's temporal structure. \vspace{-.5em} \item A method to jointly model the video description and temporal video sequence, connecting the visual video space and the language description space. \vspace{-.5em} \item A memory-based attention mechanism that learns iterative attention relationships in a simple and effective sequence-to-sequence memory structure. \vspace{-.5em} \item Extensive comparison of this work and previous work on the video captioning problem on the MSVD \citep{chencl11} and the Charades \citep{sigurdsson2016hollywood} datasets. \end{itemize} \noindent We focus on the video captioning problem, however, the proposed model is general enough to be applicable in other sequence problems where attention models are used (e.g., machine translation or recognizing entailment relations).\begin{abstract} We present a method to improve video description generation by modeling higher-order interactions between video frames and described concepts. By storing past visual attention in the video associated to previously generated words, the system is able to decide what to look at and describe in light of what it has already looked at and described. This enables not only more effective local attention, but tractable consideration of the video sequence while generating each word. Evaluation on the challenging and popular {\it MSVD} and {\it Charades} datasets demonstrates that the proposed architecture outperforms previous video description approaches without requiring external temporal video features. The source code for this paper is available on \url{https://github.com/rasoolfa/videocap}. 
\end{abstract} \section{Experiments} \paragraph*{Dataset} We evaluate the model on the \textit{Charades}~\citep{sigurdsson2016hollywood} dataset and the {\it Microsoft Video Description Corpus (MSVD)}~\citep{chencl11}. Charades contains $9,848$ videos (in total) and provides $27,847$\footnote{Only $16087$ out of $27,847$ are used as descriptions for our evaluation since the $27,847$ refers to script of the video as well as descriptions.} video descriptions. We follow the same train/test splits as \citet{sigurdsson2016hollywood}, with $7569$ train, $1,863$ test, and $400$ validation. A main difference between this dataset and others is that it uses a ``Hollywood in Homes'' approach to data collection, where ``actors'' are crowdsourced to act out different actions. This yields a diverse set of videos, with each containing a specific action. % -- useful for evaluating the basics of video description. MSVD is a set of YouTube videos annotated by workers on Mechanical Turk,\footnote{\url{https://www.mturk.com/mturk/welcome}} who were asked to pick a video clips representing an activity. In this dataset, each clip is annotated by multiple workers with a single sentence. The dataset contains $1,970$ videos and about $80,000$ descriptions, where $1,200$ of the videos are training data, $670$ test, and the rest ($100$ videos) for validation. In order for the results to be comparable to other approaches, we follow the \textit{\textbf{exact}} training/validation/test splits provided by \citet{venugopalannaacl15}. \paragraph*{Evaluation metrics} We report results on the video description generation task. In order to evaluate descriptions generated by our model, we use model-free automatic evaluation metrics. We adopt \textsc{meteor}, \textsc{bleu-n}, and \textsc{cide}r metrics available from the Microsoft COCO Caption Evaluation code\footnote{\url{https://github.com/tylin/coco-caption}} to score the system. \paragraph*{Video and Caption preprocessing} We preprocess the captions for both datasets using the Natural Language Toolkit (NLTK)\footnote{\url{http://www.nltk.org/}} and clip each description up to $30$ words, since the majority have less. %Beyond this, no other type of preprocessing is used. We extract sample frames from each video and pass each frame through VGGnet~\citep{Simonyan14c} without fine-tuning. For the experiments in this paper, we use the feature maps from \textit{conv5\_3} layer after applying \textit{ReLU}. The feature map in this layer is $14\times 14 \times 512$. Our {\sc tem} component operates on the flattened $196\times 512$ of this feature cubes. For the ablation studies, features from the fully connected layer with $4096$ dimensions are used as well. \paragraph*{Hyper-parameter optimization} We use random search~\citep{Bergstra2012} on the validation set to select hyper-parameters on both datasets. The word-embedding size, hidden layer size (for both the {\sc tem} and the Decoder), and memory size of the best model on Charades are: $237$, $1316$, and $437$, respectively. These values are $402$, $1479$, and $797$ for the model on the MSVD dataset. A stack of two LSTMs are used in the Decoder and {\sc tem}. The number of frame samples is a hyperparameter which is selected among $4$, $8$, $16$, $40$ on the validation set. \textsc{att + No {\sc tem}} and \textsc{No iam + {\sc tem}} get the best results on the validation set with $40$ frames, and we use this as the number of frames for all models in the ablation study. 
\subsection{Video Caption Generation} We first present an ablation analysis to elucidate the contribution of the different components of our proposed model. Then, we compare the overall performance of our model to other recent models. \subsection*{Ablation Analysis} Ablation results are shown in Table~\ref{tab:abl}, evaluating on the MSVD test set. The first (\textsc{Att + No tem}) corresponds to a simpler version of our model in which we remove the {\sc tem} component and instead pass each frame of the video through a CNN, extracting features from the last fully-connected hidden layer. % (e.g., \textit{fc7}). In addition, we replace our {\sc iam} with a simpler version where the model only memorizes the current step instead of all previous steps. In the next variation (\textsc{Att + tem}), it is same as the first one except we use {\sc tem} instead of fully connected CNN features. In the next ablation (\textsc{No iam + tem}), we remove the {\sc iam} component from our model and keep the rest of the model as-is. In the next variation (\textsc{iam + No tem}), we remove the {\sc tem} and calculate features for each frame, similar to \textsc{Att + No tem}. Finally, the last row in the table is our proposed model (\textsc{iam + tem}) with all its components. The {\sc iam} plays a significant role in the proposed model, and removing it causes a large drop in performance, as measured by both {\sc bleu} and {\sc meteor}. On the other hand, removing the {\sc tem} by itself does not drop performance as much as dropping the {\sc iam}. Putting the two together, they complement one another to result in overall better performance for {\sc meteor}. However, further development on the {\sc tem} component in future work is warranted. In the \textsc{No iam + tem} condition, an entire video must be represented with a fixed-length vector, which may contribute to the lower performance~\citep{BahdanauCB14}. This is in contrast to the other models, which apply single layer attention or {\sc iam} to search relevant parts of the video aligned with the description. \subsection*{Performance Comparison} To extensively evaluate the proposed model, we compare with state-of-the-art models and baselines for the video caption generation task on the MSVD dataset. In this experiment, we use $8$ frames per video as the inputs to the {\sc tem} module. As shown in Table \ref{tab:main_res},\footnote{The symbol $-$ indicates that the score was not reported by the corresponding paper. The horizontal line in Table \ref{tab:main_res} separates models that do/do not use external features for the video representation.} our proposed model achieves state-of-the-art scores in {\sc bleu}-4, and outperforms almost all systems on {\sc meteor}. The closest-scoring comparison system, from \citet{pan2016hierarchical}, shows a trade-off between {\sc meteor} and {\sc bleu}: {\sc bleu} prefers descriptions with short-distance fluency and high lexical overlap with the observed descriptions, while {\sc meteor} permits less direct overlap and longer descriptions. A detailed study of the generated descriptions between the two systems would be needed to better understand these differences. 
The improvement over previous work is particularly noteworthy because we do not use external features for the video, such as Optical Flow \citep{Bro04a} (denoted Flow), 3-Dimensional Convolutional Network features~\citep{C3DTan} (denoted C3D), or fine-tuned CNN features (denoted FT), which further enhances aspects such as action recognition by leveraging an external dataset such as UCF-101. The only system using external features that outperforms the model proposed here is from \citet{YuWHYX15}, who uses a slightly different version of the same dataset\footnote{\citet{YuWHYX15} uses the MSVD dataset reported in \cite{Guadarrama2013}, which has different preprocessing.} along with C3D features for a large improvement in results (compare Table~\ref{tab:main_res} rows 4 and 11); future work may explore the utility of external visual features for this work. Here, we demonstrate that the proposed architecture maps visual space to language space with improved performance over previous work, before addition of further resources. We additionally report results on the Charades dataset \cite{sigurdsson2016hollywood}, which is challenging to train on because there are only a few ($\approx2$) captions per video. In this experiment, we use $16$ frames per video as the input to the {\sc tem} module. As shown in Table~\ref{tab:main_res_HW}, our method achieves a $10\%$ relative improvement over the \citet{venugopalan15iccv} model reported by \citet{sigurdsson2016hollywood}. It is worth noting that humans reach a {\sc meteor} score of $24$ and a {\sc bleu-4} score of $20$, illustrating the low upper bound in this task.\footnote{For comparison, the upper bound {\sc bleu} score in machine translation for English to French is above 30.} \subsection*{Results Discussion} We show some example descriptions generated by our system in Figure \ref{fig:caps_samples}. The model generates mostly correct descriptions, with naturalistic variation from the ground truth. Errors illustrate a preference to describe items that have a higher likelihood of being mentioned, even if they appear in less of the frames. For example, in the ``a dog is on a trampoline" video, our model focuses on the man, who appears in only a few frames, and generates the incorrect description ``a man is washing a bath". The errors, alongside the ablation study shown in Table \ref{tab:abl}, suggest that the {\sc tem} module in particular may be further improved by focusing on how frames in the video sequence are captured and passed to the {\sc iam} module. \section{Related Work} One of the primary challenges in learning a mapping from a visual space (i.e., video or image) to a language space is learning a representation that not only effectively represents each of these modalities, but is also able to translate a representation from one space to the other. \citet{Rohrbachiccv2013} developed a model that generates a semantic representation of visual content that can be used as the source language for the language generation module. \citet{venugopalannaacl15} proposed a deep method to translate a video into a sentence where an entire video is represented with a single vector based on the mean pool of frame features. However, it was recognized that representing a video by an average of its frames loses the temporal structure of the video. 
To address this problem, recent work \citep{yao2015capgenvid, pan2016hierarchical, venugopalan15iccv, AndrewShinICP, PanMYLR15, XuXiChAAAI2015, BallasYPC15, YuWHYX15} proposed methods to model the temporal structure of videos as well as language. The majority of these methods are inspired by sequence-to-sequence~\citep{s2sIlya} and attention~\citep{BahdanauCB14} models. Sequence learning was proposed to map the input sequence of a source language to a target language~\citep{s2sIlya}. Applying this method with an additional attention mechanism to the problem of translating a video to a description showed promising initial results, however, revealed additional challenges. First, modelling the video content with a fixed-length vector in order to map it to a language space is a more complex problem than mapping from a language to a language, given the complexity of visual content and the difference between the two modalities. Since not all frames in a video are equally salient for a short description, and an event can happen in multiple frames, it is important for a model to identify which frames are most salient. Further, the models need additional work to be able to focus on points of interest within the video frames to select what to talk about. Even a variable-length vector to represent a video using attention \citep{yao2015capgenvid} can have some problems. More specifically, current attention methods are local~\cite{YangYWSC16}, since the attention mechanism works in a sequential structure, and lack the ability to capture global structure. Moreover, combining a video and a description as a sequence-to-sequence problem motivates using some variant of a recurrent neural network (RNN) \citep{Hochreiter}: Given the limited capacity of a recurrent network to model very long sequences, memory networks~\citep{WestonCB14,SukhbaatarSWF15} have been introduced to help the RNN memorize sequences. However, one problem these memory networks suffer from is difficulty in training the model. The model proposed by \citet{WestonCB14} requires supervision at each layer, which makes training with backpropagation a challenging task. \citet{SukhbaatarSWF15} proposed a memory network that can be trained end-to-end, and the current work follows this research line to tackle the challenging problem of modeling vision and language memories for video description generation. %especially with write operation~\citep{GravesWD14}.\section{Learning to Attend and Memorize} A main challenge in video description is to find a mapping that can capture the connection between the video frames and the video description. Sequence-to-sequence models, which work well at connecting input and output sequences in machine translation~\citep{s2sIlya}, do not perform as well for this task, as there is not the same direct alignment between a full video sequence and its summarizing description. Our goal in the video description problem is to create an architecture that learns which moments to focus on in a video sequence in order to generate a summarizing natural language description. The modelling challenges we set forth for the video description problem are: (1) Processing the temporal structure of the video; (2) Learning to attend to important parts of the video; and (3) Generating a description where each word is relevant to the video. At a high-level, this can be understood as having three primary parts: {\it When} moments in the video are particularly salient; {\it what} concepts to focus on; and {\it how} to talk about them. 
We directly address these issues in an end-to-end network with three primary corresponding components (Figure \ref{fig:our_model}): A Temporal Model ({\sc tem}), An Iterative Attention/Memory Model ({\sc iam}), and a Decoder. In summary: \begin{itemize} \item {\bf When:} Frames within the video sequence - The Temporal Model ({\sc tem}). \item {\bf What:} Language-grounded concepts depicted in the video - The Iterative Attention/Memory mechanism ({\sc iam}). \item {\bf How:} Words that fluently describe the {\it what} and {\it when} - The Decoder. \end{itemize} The Temporal Model is in place to capture the temporal structure of the video: It functions as a {\it when} component. The Iterative Attention/Memory is a main contribution of this work, functioning as a {\it what} component to remember relationships between words and video frames, and storing longer term memories. The Decoder generates language, and functions as the {\it how} component to create the final description. To train the system end to end, we formulate the problem as sequence learning to maximize the probability of generating a correct description given a video: \begin{equation} \Theta^* = \underset{\Theta}{\arg\max}\sum_{(S, {f_1,\dots, f_N})} \log~p(S|f_1, \dots,f_N ;\mathbf{\Theta}) \end{equation} where $S$ is the description, ${f_1, f_2,\dots,f_N}$ are the input video frames, and $\Theta$ is the model parameter vector. In the next sections, we will describe each component of the model, then explain the details of training and inference. \small \paragraph{\small Notational note:} Numbered equations use bold face to denote multi-dimensional learnable parameters, e.g., ${\mathbf{W^j_p}}$. To distinguish the two different sets of time steps, one for video frames and one for words in the description, we use the notation $t$ for video and $t^\prime$ for language. Throughout, the terms {\it description} and {\it caption} are used interchangeably. \normalsize \subsection{Temporal Model ({\sc tem})}\label{sec:TEM} The first module we introduce encodes the temporal structure of the input video. A clear framework to use for this is a Recurrent Neural Network (RNN), which has been shown to be effectual in modelling the temporal structure of sequential data such as video \citep{BallasYPC15, SharmaKS15, venugopalan15iccv} and speech \citep{graves14}. In order to apply this in video sequences to generate a description, we seek to capture the fact that frame-to-frame temporal variation tends to be local \citep{BroxMalik2011} and critical in modeling motion~\citep{BallasYPC15}. %, it is important to also consider a frame representation that can preserve frame-to-frame temporal variation. Visual features extracted from the last fully connected layers of Convolutional Neural Networks (CNNs) have been shown to produce state-of-the-art results in image classification and recognition \citep{Simonyan14c, He_2016_CVPR}, and thus seem a good choice for modeling visual frames. However, these features tend to discard low level information useful in modeling the motion in the video \citep{BallasYPC15}. To address these challenges, we implement an RNN we call the Temporal Model ({\sc tem}). % to model the temporal structure of the video. At each time step of the {\sc tem}, a video frame encoding from a CNN %with $D$ dimensions %with size $R^{D}$ serves as input. Rather than extracting video frame features from a fully connected layer %last hidden layers of the pretrained CNN, we extract intermediate convolutional maps. 
In detail, for a given video $X$ with $N$ frames $X = [X^1, X^2, \cdots, X^N]$, $N$ convolutional maps of size $R^{L \times D}$ are extracted, where $L$ is the number of locations in the input frame and $D$ is the number of dimensions (see {\sc tem} in Figure \ref{fig:our_model}). To enable the network to store the most important $L$ locations of each frame, %given the hidden state of RNN, we use a soft location attention mechanism, $f_{\mathbf{Latt}}$ \citep{BahdanauCB14, icml2015_xuc15, SharmaKS15}. %called ``Location Attention (\textbf{Latt})''. More specifically, we first use a softmax to compute $L$ probabilities that specify the importance of different parts in the frame, and this creates an input map for $f_{\mathbf{Latt}}$. Formally, given a video frame at time $t$, $X^t \in R^{L \times D}$, the $f_{\mathbf{Latt}}$ mechanism is defined as follows: \begin{align} \label{eq:Latt} & {\rho^t_j} = \frac{\exp(\mathbf{W_p^j} h^{t-1}_v)}{\sum_{k=1}^L \exp(\mathbf{W_p^k} h^{t-1}_v)} \\ & f_{\mathbf{Latt}}({X^t, h^{t-1}_v};\mathbf{W_p}) = \sum_{j=1}^L {\rho_j^t} {X^t_{j}} \end{align} where $h^{t-1}_v \in R^K$ is the hidden state of the {\sc tem} at time $t-1$ with $K$ dimensions, and $W_p \in R^{L \times K}$. %and $F^{t} \in R^{D}$. For each video frame time step, {\sc tem} learns a vector representation by applying location attention on the frame convolution map, conditioned on all previously seen frames: \begin{align} \label{eq:temporal_Latt} & {F^{t}} = f_{\mathbf{Latt}}({X^t, h^{t-1}_v};\mathbf{W_p}) \\ & {h^{t}_v} = f_\mathbf{v}({F^{t}},~{h^{t-1}_v};\mathbf{\Theta_v}) \end{align} where $f_\mathbf{v}$ can be an RNN/LSTM/GRU cell and $\mathbf{\Theta_v}$ denotes the parameters of $f_\mathbf{v}$. Because vanilla RNNs suffer from vanishing and exploding gradients~\citep{pascanu2013difficulty}, we use gradient clipping, and an LSTM with the following flow to handle potential vanishing gradients: \begin{align*} & {i^t} = \sigma(F^{t} \mathbf{W_{x_i}} + {(h^{t-1}_v)}^T\mathbf{W_{h_i}}) \\ & {f^t} = \sigma(F^{t} \mathbf{W_{x_f}} + {(h^{t-1}_v)}^T\mathbf{W_{h_f}}) \\ & {o^t} = \sigma(F^{t} \mathbf{W_{x_o}} + {(h^{t-1}_v)}^T\mathbf{W_{h_o}}) \\ & {g^t} = {\tanh}(F^{t} \mathbf{W_{x_g}} + {(h^{t-1}_v)}^T\mathbf{W_{h_g}}) \\ & {c^t_v} = {f^t} \odot {c^{t-1}_v} + {i^t} \odot {g^{t}} \\ & {h^t_v} = {o^t}\odot {\tanh(c^t_v)} \end{align*} where $W_{h*} \in R^{K \times K}$, $W_{x*} \in R^{D \times K}$, and we define $\Theta_v = \{W_{h*},W_{x*}\}$. \subsection{Iterative Attention/Memory ({\sc iam})}\label{sec:HAM} A main contribution of this work is a global view for the video description task: a memory-based attention mechanism that learns iterative attention relationships in an efficient sequence-to-sequence memory structure. We refer to this as the Iterative Attention/Memory mechanism ({\sc iam}), and it aggregates information from previously generated words and all input frames. The {\sc iam} component is an iterative memorized attention between an input video and a description. More specifically, it learns an iterative attention structure for where to attend in a video given all previously generated words (from the Decoder) and previous states (from the {\sc tem}). This functions as a memory structure, remembering encoded versions of the video with corresponding language, and in turn, enabling the Decoder to access the full encoded video and previously generated words as it generates new words. This component addresses several key issues in generating a coherent video description. 
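Before turning to the {\sc iam} in detail, the following is a minimal numpy sketch of the soft location attention $f_{\mathbf{Latt}}$ and a single {\sc tem} step defined above. It is an illustration only, not the released implementation: the shapes, the random inputs, and the simple $\tanh$ recurrence standing in for the LSTM cell $f_\mathbf{v}$ are all assumptions made for the example.
\begin{verbatim}
# Minimal sketch of the soft location attention f_Latt and one TEM step:
# attention weights over the L frame locations are computed from the previous
# TEM hidden state, and the attended feature F^t is their weighted sum.
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def location_attention(X_t, h_prev, W_p):
    """X_t: (L, D) conv map of one frame; h_prev: (K,); W_p: (L, K)."""
    rho = softmax(W_p @ h_prev)          # (L,) location probabilities
    return rho @ X_t                     # (D,) attended frame feature F^t

L, D, K = 49, 512, 256
rng = np.random.default_rng(0)
W_p = rng.normal(scale=0.01, size=(L, K))
W_f = rng.normal(scale=0.01, size=(D, K))
W_h = rng.normal(scale=0.01, size=(K, K))

h_v = np.zeros(K)
for X_t in rng.normal(size=(8, L, D)):    # 8 frames of conv maps
    F_t = location_attention(X_t, h_v, W_p)
    h_v = np.tanh(F_t @ W_f + h_v @ W_h)  # stand-in for the LSTM update f_v
print(h_v.shape)                          # (256,)
\end{verbatim}
The attended feature $F^t$ is a convex combination of the $L$ location vectors, so each frame contributes a single $D$-dimensional summary to the recurrent update.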
In video description, a single word or phrase often describes action spanning multiple frames within the input video. By employing the {\sc iam}, the model can effectively capture the relationship between a relatively short bit of language and an action that occurs over multiple frames. This also functions to directly address the problem of identifying which parts of the video are most relevant for description. The proposed Iterative Attention/Memory mechanism is formalized with an {\bf Attention} update and a {\bf Memory} update, detailed in Figure \ref{fig:HAM}. Figure \ref{fig:our_model} illustrates where the {\sc iam} sits within the full model, with the Attention module shown in \ref{fig:our_model}a and the Memory module shown in \ref{fig:our_model}b. As formalized in Figure \ref{fig:HAM}, the {\it Attention} update $\hat{F}(\Theta_a)$ computes the set of probabilities in a given time step for attention within the input video states, the memory state, and the decoder state. The {\it Memory} update stores what has been attended to and described. This serves as the memorization component, combining the previous memory with the current iterative attention $\hat{F}$. % within the video sequence and the last decoder state. %$\hat{F}$ and encoded version of the input video with respect to the language model. We use an LSTM $f_m$ with the equations described above to enable the network to learn multi-layer attention over the input video and its corresponding language. The output of this function is then used as input to the Decoder. %It is worth noting that $f_\mathbf{_m}$ is an LSTM with the equations described previously. \subsection{Decoder}\label{sec:Dec} In order to generate a new word conditioned on all previous words and {\sc iam} states, a recurrent structure is modelled as follows: \begin{align} \label{eq:Dec} &h^{t^\prime}_g = f_g(s^{t^\prime}, ~h_m^{t^\prime},~h^{t^\prime-1}_g; \mathbf{\Theta_g}) \\ &\hat{s}^{t^\prime} = \mathrm{softmax}((h^{t^\prime}_g)^T\mathbf{W_e}) \end{align} where $h^{t^\prime}_g \in R^K$, $s^{t^\prime}$ is a word vector at time ${t^\prime}$, $W_e\in R^{K \times |V|}$, and $|V|$ is the vocabulary size. In addition, $\hat{s}^{t^\prime}$ assigns a probability to each word in the language. $f_{g}$ is an LSTM where $s^{t^\prime}$ and $h_m^{t^\prime}$ are inputs and $h^{t^\prime}_g$ is the recurrent state. \subsection{Training and Optimization}\label{sec:training} The goal in our network is to predict the next word given all previously seen words and an input video. In order to optimize our network parameters $\Theta = \{W_p, \Theta_v, \Theta_a, \Theta_m, \Theta_g, W_e\}$, we minimize a negative log likelihood loss function: \begin{equation}\label{eq:NLL} L(S, X; \mathbf{\Theta}) = -\sum_{t^\prime=1}^{T} \sum_{i=1}^{|V|} s_i^{t^\prime}\log (\hat{s}_i^{t^\prime}) + \lambda\parallel\Theta \parallel_2^2 \end{equation} where $|V|$ is the vocabulary size. We fully train our network in an \textit{end-to-end} fashion using a first-order stochastic gradient-based optimization method with an adaptive learning rate. More specifically, in order to optimize our network parameters, we use Adam~\citep{KingmaB14} with learning rate $2\times 10^{-5}$ and set $\beta_1$, $\beta_2$ to $0.8$ and $0.999$, respectively. During training, we use a batch size of $16$. The source code for this paper is available at \url{https://github.com/rasoolfa/videocap}. 
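As a concrete illustration of the Decoder output and the loss $L(S, X; \mathbf{\Theta})$ above, here is a minimal numpy sketch of the softmax over the vocabulary and the negative log-likelihood with an $\ell_2$ penalty. All names, shapes, and toy inputs are illustrative assumptions rather than the released implementation.
\begin{verbatim}
# Minimal sketch of the decoder softmax and the NLL + L2 training loss.
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def nll_loss(h_g, W_e, targets, params, lam=1e-4):
    """h_g: (T', K) decoder states; W_e: (K, |V|); targets: (T',) word ids."""
    probs = softmax(h_g @ W_e)                             # (T', |V|)
    nll = -np.log(probs[np.arange(len(targets)), targets]).sum()
    l2 = lam * sum((p ** 2).sum() for p in params)         # L2 penalty
    return nll + l2

K, V, T = 256, 1000, 5
rng = np.random.default_rng(1)
W_e = rng.normal(scale=0.01, size=(K, V))
h_g = rng.normal(size=(T, K))
print(nll_loss(h_g, W_e, rng.integers(V, size=T), params=[W_e]))
\end{verbatim}
In practice this scalar would be minimized with Adam using the hyperparameters stated above.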
\documentclass{article} % more modern \newcommand{\theHalgorithm}{\arabic{algorithm}} \usepackage[accepted]{icml2017} \hypersetup{ pdfinfo={ Title={Memory-augmented Attention Modelling for Videos}, Author={Rasool Fakoor, Abdel-rahman Mohamed, Margaret Mitchell, Sing Bing Kang, Pushmeet Kohli}, } } \pdfoutput=1 \title{Memory-augmented Attention Modelling for Videos} \date{} \author{Rasool Fakoor$^{\dag}$\thanks{ Corresponding author: Rasool Fakoor ( rasool.fakoor@mavs.uta.edu)}, Abdel-rahman Mohamed,$^{\dag\dag}$, Margaret Mitchell$^{\ddag\dag}$, Sing Bing Kang$^{\dag\dag}$,\\ Pushmeet Kohli$^{\dag\dag}$\\ $^{\dag\dag}$Microsoft Research ~~ $^{\dag}$University of Texas at Arlington~~ $^{\ddag\dag}$Google\\ $^{\dag}$\small{rasool.fakoor@mavs.uta.edu}, $^{\dag\dag}$\small{\{asamir,~singbing.kang,~pkohli\}@microsoft.com} $^{\ddag\dag}$\small{mmitchellai@google.com}, } \cfoot{\thepage} \makeatletter \patchcmd\@combinedblfloats{\box\@outputbox}{\unvbox\@outputbox}{}{% \errmessage{\noexpand\@combinedblfloats could not be patched}% }% \makeatother \begin{document} \maketitle \input{Abstract} \input{Introduction} \input{RelatedWorks} \input{Model} \input{Experiments} \input{Conclusion} \bibliographystyle{icml2017} \end{document}
Recurrent Batch Normalization
1603.09025
Table 3: Bits-per-character on the text8 test sequence.
[ "[BOLD] Model", "[BOLD] text8" ]
[ [ "[ITALIC] td-LSTM (Zhang et al., 2016 )", "1.63" ], [ "HF-MRNN (Mikolov et al., 2012 )", "1.54" ], [ "skipping RNN (Pachitariu & Sahani, 2013 )", "1.48" ], [ "LSTM (ours)", "1.43" ], [ "BN-LSTM (ours)", "1.36" ], [ "HM-LSTM (Chung et al., 2016 )", "[BOLD] 1.29" ] ]
We observe that BN-LSTM obtains a significant performance improvement over the LSTM baseline. Chung et al. has since improved on our performance.
\documentclass{article} % For LaTeX2e \pdfoutput=1 % for arxiv to accept pdf figures \usepackage[utf8]{inputenc} \usepackage[T1]{fontenc} \usepackage[hidelinks]{hyperref} \title{Recurrent Batch Normalization} \author{ Tim Cooijmans, Nicolas Ballas, C\'esar Laurent, Çağlar Gülçehre \& Aaron Courville \\ MILA - Universit\'e de Montr\'eal \\ \texttt{firstname.lastname@umontreal.ca} \\ } \newcommand{\fix}{\marginpar{FIX}} \newcommand{\new}{\marginpar{NEW}} \newcommand{\vect}[1]{\mathbf{#1}} \newcommand{\mat}[1]{\mathbf{#1}} \newcommand{\act}{f} \newcommand{\ewprod}{\odot} \newcommand{\reals}{\mathbb{R}} \newcommand{\given}{\vert} \iclrfinalcopy \begin{document} \maketitle \begin{abstract} We propose a reparameterization of LSTM that brings the benefits of batch normalization to recurrent neural networks. Whereas previous works only apply batch normalization to the input-to-hidden transformation of RNNs, we demonstrate that it is both possible and beneficial to batch-normalize the hidden-to-hidden transition, thereby reducing internal covariate shift between time steps. We evaluate our proposal on various sequential problems such as sequence classification, language modeling and question answering. Our empirical results show that our batch-normalized LSTM consistently leads to faster convergence and improved generalization. \end{abstract} \section{Introduction} Recurrent neural network architectures such as LSTM~\citep{lstm} and GRU~\citep{cho2014learning} have recently exhibited state-of-the-art performance on a wide range of complex sequential problems including speech recognition~\cite{baidu}, machine translation~\citep{bahdanau2014neural} and image and video captioning~\citep{xu2015show,yao2015describing}. Top-performing models, however, are based on very high-capacity networks that are computationally intensive and costly to train. Effective optimization of recurrent neural networks is thus an active area of study~\citep{pascanudifficulty,hessianfree,ollivier}. It is well-known that for deep feed-forward neural networks, covariate shift~\citep{shimodaira2000improving,batchnorm} degrades the efficiency of training. Covariate shift is a change in the distribution of the inputs to a model. This occurs continuously during training of feed-forward neural networks, where changing the parameters of a layer affects the distribution of the inputs to all layers above it. As a result, the upper layers are continually adapting to the shifting input distribution and unable to learn effectively. This \emph{internal} covariate shift~\citep{batchnorm} may play an especially important role in recurrent neural networks, which resemble very deep feed-forward networks. Batch normalization~\citep{batchnorm} is a recently proposed technique for controlling the distributions of feed-forward neural network activations, thereby reducing internal covariate shift. It involves standardizing the activations going into each layer, enforcing their means and variances to be invariant to changes in the parameters of the underlying layers. This effectively decouples each layer's parameters from those of other layers, leading to a better-conditioned optimization problem. Indeed, deep neural networks trained with batch normalization converge significantly faster and generalize better. Although batch normalization has demonstrated significant training speed-ups and generalization benefits in feed-forward networks, it is proven to be difficult to apply in recurrent architectures~\citep{cesar,baidu}. 
It has found limited use in stacked RNNs, where the normalization is applied ``vertically'', i.e. to the input of each RNN, but not ``horizontally'' between timesteps. RNNs are deeper in the time direction, and as such batch normalization would be most beneficial when applied horizontally. However,~\citet{cesar} hypothesized that applying batch normalization in this way hurts training because of exploding gradients due to repeated rescaling. Our findings run counter to this hypothesis. We show that it is both possible and highly beneficial to apply batch normalization in the hidden-to-hidden transition of recurrent models. In particular, we describe a reparameterization of LSTM (Section~\ref{sec:recurrent-batch-normalization}) that involves batch normalization and demonstrate that it is easier to optimize and generalizes better. In addition, we empirically analyze the gradient backpropagation and show that proper initialization of the batch normalization parameters is crucial to avoiding vanishing gradient (Section~\ref{sec:activation-variance}). We evaluate our proposal on several sequential problems and show (Section~\ref{sec:experiments}) that our LSTM reparameterization consistently outperforms the LSTM baseline across tasks, in terms of both time to convergence and performance. \citet{liao2016bridging} simultaneously investigated batch normalization in recurrent neural networks, albeit only for very short sequences (10 steps). \citet{ba2016layer} independently developed a variant of batch normalization that is also applicable to recurrent neural networks and delivers similar improvements as our method. \section{Prerequisites} \label{sec:prerequisites} \subsection{LSTM} Long Short-Term Memory (LSTM) networks are an instance of a more general class of recurrent neural networks (RNNs), which we review briefly in this paper. Given an input sequence $\mat{X} = ( \vect{x}_1, \vect{x}_2, \ldots, \vect{x}_T )$, an RNN defines a sequence of hidden states $\vect{h}_t$ according to \begin{eqnarray} \vect{h}_t = \phi(\mat{W}_h \vect{h}_{t-1} + \mat{W}_x \vect{x}_t + \vect{b}), \end{eqnarray} where $\mat{W}_h \in \reals^{d_h \times d_h}, \mat{W}_x \in \reals^{d_x \times d_h}, \vect{b} \in \reals^{d_h}$ and the initial state $\vect{h}_0 \in \reals^{d_h}$ % I've put this back because I believe we do need to say something about the initial state -- tim are model parameters. A popular choice for the activation function $\phi(\ \cdot\ )$ is $\tanh$. RNNs are popular in sequence modeling thanks to their natural ability to process variable-length sequences. However, training RNNs using first-order stochastic gradient descent (SGD) is notoriously difficult due to the well-known problem of exploding/vanishing gradients~\citep{bengio1994learning,hochreiter1991untersuchungen,pascanudifficulty}. Gradient vanishing occurs when states $\vect{h}_t$ are not influenced by small changes in much earlier states $\vect{h}_{\tau}$, $t \ll \tau$, preventing learning of long-term dependencies in the input data. Although learning long-term dependencies is fundamentally difficult~\citep{bengio1994learning}, its effects can be mitigated through architectural variations such as LSTM~\citep{lstm}, GRU~\citep{cho2014learning} and $i$RNN/$u$RNN~\citep{le2015simple,urnn}. 
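For concreteness, a minimal numpy sketch of the simple RNN transition defined above, $\vect{h}_t = \phi(\mat{W}_h \vect{h}_{t-1} + \mat{W}_x \vect{x}_t + \vect{b})$ with $\phi = \tanh$; the sizes and random inputs are illustrative assumptions, not taken from any experiment in this paper.
\begin{verbatim}
# Minimal sketch of the vanilla RNN recurrence h_t = tanh(W_h h_{t-1} + W_x x_t + b).
import numpy as np

def rnn_step(h_prev, x_t, W_h, W_x, b):
    return np.tanh(h_prev @ W_h + x_t @ W_x + b)

d_x, d_h, T = 32, 64, 10
rng = np.random.default_rng(0)
W_h = rng.normal(scale=0.1, size=(d_h, d_h))
W_x = rng.normal(scale=0.1, size=(d_x, d_h))
b = np.zeros(d_h)

h = np.zeros(d_h)
for x_t in rng.normal(size=(T, d_x)):   # unroll over T input steps
    h = rnn_step(h, x_t, W_h, W_x, b)
print(h.shape)                          # (64,)
\end{verbatim}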
In what follows, we focus on the LSTM architecture~\citep{lstm} with recurrent transition given by \begin{eqnarray} \left(\begin{array}{ccc} \tilde{\vect{f}}_t \\ \tilde{\vect{i}}_t \\ \tilde{\vect{o}}_t \\ \tilde{\vect{g}}_t \end{array}\right) &=& \mat{W}_h \vect{h}_{t-1} + \mat{W}_x \vect{x}_t + \vect{b} \\ \vect{c}_t &=& \sigma(\tilde{\vect{f}}_t) \ewprod \vect{c}_{t-1} + \sigma(\tilde{\vect{i}}_t) \ewprod \tanh(\tilde{\vect{g}_t}) \\ \vect{h}_t &=& \sigma(\tilde{\vect{o}}_t) \ewprod \tanh(\vect{c}_t), \end{eqnarray} where $\vect{W}_h \in \reals^{d_h \times 4 d_h}, \vect{W}_x \in \reals^{d_x \times 4 d_h}, \vect{b} \in \reals^{4 d_h}$ and the initial states $\vect{h}_0 \in \reals^{d_h}, \vect{c}_0 \in \reals^{d_h}$ % I've put this back because I believe we do need to say something about the initial state -- tim are model parameters. $\sigma$ is the logistic sigmoid function, and the $\ewprod$ operator denotes the Hadamard product. The LSTM differs from simple RNNs in that it has an additional memory \emph{cell} $\vect{c}_t$ whose update is nearly linear, which allows the gradient to flow back through time more easily. In addition, unlike the RNN which overwrites its content at each timestep, the update of the LSTM cell is regulated by a set of gates. The forget gate $\vect{f}_t$ determines the extent to which information is carried over from the previous timestep, and the input gate $\vect{i}_t$ controls the flow of information from the current input $\vect{x}_t$. The output gate $\vect{o}_t$ allows the model to read from the cell. This carefully controlled interaction with the cell is what allows the LSTM to robustly retain information for long periods of time. \subsection{Batch Normalization} \emph{Covariate shift}~\citep{shimodaira2000improving} is a phenomenon in machine learning where the features presented to a model change in distribution. In order for learning to succeed in the presence of covariate shift, the model's parameters must be adjusted not just to learn the concept at hand but also to adapt to the changing distribution of the inputs. In deep neural networks, this problem manifests as \emph{internal covariate shift}~\citep{batchnorm}, where changing the parameters of a layer affects the distribution of the inputs to all layers above it. Batch Normalization~\citep{batchnorm} is a recently proposed network reparameterization which aims to reduce internal covariate shift. It does so by standardizing the activations using empirical estimates of their means and standard deviations. However, it does not decorrelate the activations, as doing so would require a computationally costly matrix inversion. The batch normalizing transform is as follows: \begin{align} \mathrm{BN}(\vect{h}; \gamma, \beta) = \beta + \gamma \ewprod \frac{\vect{h} - \widehat{\mathbb{E}}[\vect{h}]} { \sqrt{\widehat{\mathrm{Var}}[\vect{h}] + \epsilon}} \end{align} where $\vect{h} \in \reals^d$ is the vector of (pre)activations to be normalized, $\gamma \in \reals^d, \beta \in \reals^d$ are model parameters that determine the mean and standard deviation of the normalized activation, and $\epsilon \in \reals$ is a regularization hyperparameter. The division should be understood to proceed elementwise. At training time, the statistics $\mathbb{E}[\vect{h}]$ and $\mathrm{Var}[\vect{h}]$ are estimated by the sample mean and sample variance of the current minibatch. This allows for backpropagation through the statistics, preserving the convergence properties of stochastic gradient descent. 
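A minimal numpy sketch of the batch-normalizing transform defined above, using minibatch statistics; the toy minibatch and the choice $\gamma = 0.1$, $\beta = 0$ are illustrative assumptions for the example.
\begin{verbatim}
# Minimal sketch of BN(h; gamma, beta) with minibatch mean and variance.
import numpy as np

def batch_norm(h, gamma, beta, eps=1e-5):
    """h: (batch, d) pre-activations; gamma, beta: (d,) learned scale/shift."""
    mean = h.mean(axis=0)
    var = h.var(axis=0)
    return beta + gamma * (h - mean) / np.sqrt(var + eps)

rng = np.random.default_rng(0)
h = rng.normal(loc=3.0, scale=2.0, size=(16, 8))   # minibatch of 16, 8 units
out = batch_norm(h, gamma=np.full(8, 0.1), beta=np.zeros(8))
print(out.mean(axis=0).round(3), out.std(axis=0).round(3))  # ~0 mean, ~0.1 std
\end{verbatim}
With $\gamma = 0.1$ the normalized activations have standard deviation close to $0.1$, matching the initialization recommended later in the paper.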
During inference, the statistics are typically estimated based on the entire training set, so as to produce a deterministic prediction. \section{Batch-Normalized LSTM} \label{sec:recurrent-batch-normalization} This section introduces a reparameterization of LSTM that takes advantage of batch normalization. Contrary to~\citet{cesar, baidu}, we leverage batch normalization in both the input-to-hidden \emph{and} the hidden-to-hidden transformations. We introduce the batch-normalizing transform $\mathrm{BN}(\ \cdot\ ; \gamma, \beta)$ into the LSTM as follows: \begin{eqnarray} \left(\begin{array}{ccc} \tilde{\vect{f}}_t \\ \tilde{\vect{i}}_t \\ \tilde{\vect{o}}_t \\ \tilde{\vect{g}}_t \end{array}\right) &=& \mathrm{BN} (\mat{W}_h \vect{h}_{t-1}; \gamma_h, \beta_h) + \mathrm{BN} (\mat{W}_x \vect{x}_t ; \gamma_x, \beta_x) + \vect{b} \\ \vect{c}_t &=& \sigma(\tilde{\vect{f}}_t) \ewprod \vect{c}_{t-1} + \sigma(\tilde{\vect{i}}_t) \ewprod \tanh(\tilde{\vect{g}_t}) \\ \vect{h}_t &=& \sigma(\tilde{\vect{o}}_t) \ewprod \tanh( \mathrm{BN} (\vect{c}_t; \gamma_c, \beta_c) ) \end{eqnarray} In our formulation, we normalize the recurrent term $\mat{W}_h \vect{h}_{t-1}$ and the input term $\mat{W}_x \vect{x}_t$ separately. Normalizing these terms individually gives the model better control over the relative contribution of the terms using the $\gamma_h$ and $\gamma_x$ parameters. We set $\beta_h = \beta_x = \vect{0}$ to avoid unnecessary redundancy, instead relying on the pre-existing parameter vector $\vect{b}$ to account for both biases. In order to leave the LSTM dynamics intact and preserve the gradient flow through $\vect{c}_t$, we do not apply batch normalization in the cell update. The batch normalization transform relies on batch statistics to standardize the LSTM activations. It would seem natural to share the statistics that are used for normalization across time, just as recurrent neural networks share their parameters over time. However, we find that simply averaging statistics over time severely degrades performance. Although LSTM activations do converge to a stationary distribution, we observe that their statistics during the initial transient differ significantly (see Figure~\ref{fig:popstat_stationarity} in Appendix~\ref{sec:popstat_stationarity}). Consequently, we recommend using separate statistics for each timestep to preserve information of the initial transient phase in the activations.\footnote{ Note that we separate \emph{only} the statistics over time and not the $\gamma$ and $\beta$ parameters.} Generalizing the model to sequences longer than those seen during training is straightforward thanks to the rapid convergence of the activations to their steady-state distributions (cf. Figure~\ref{fig:popstat_stationarity}). For our experiments we estimate the population statistics separately for each timestep $1, \ldots, T_{max}$ where $T_{max}$ is the length of the longest training sequence. When at test time we need to generalize beyond $T_{max}$, we use the population statistic of time $T_{max}$ for all time steps beyond it. During training we estimate the statistics across the minibatch, independently for each timestep. At test time we use estimates obtained by averaging the minibatch estimates over the training set. \section{Initializing $\gamma$ for Gradient Flow} \label{sec:activation-variance} Although batch normalization allows for easy control of the pre-activation variance through the $\gamma$ parameters, common practice is to normalize to unit variance. 
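For reference, here is a minimal numpy sketch of a single BN-LSTM step as defined in Section~\ref{sec:recurrent-batch-normalization} above. It is a simplified illustration, not the authors' code: batch statistics are computed on the fly (keeping separate estimates per timestep is left to the caller), and all sizes and names are assumptions, while the gate ordering and the placement of the three BN transforms follow the equations above.
\begin{verbatim}
# Minimal sketch of one BN-LSTM step: recurrent and input terms are
# batch-normalized separately (beta_h = beta_x = 0), the cell update is left
# un-normalized, and BN is applied to c_t before the output tanh.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def batch_norm(h, gamma, beta=0.0, eps=1e-5):
    return beta + gamma * (h - h.mean(axis=0)) / np.sqrt(h.var(axis=0) + eps)

def bn_lstm_step(x_t, h_prev, c_prev, W_x, W_h, b,
                 gamma_x, gamma_h, gamma_c, beta_c):
    pre = batch_norm(h_prev @ W_h, gamma_h) + batch_norm(x_t @ W_x, gamma_x) + b
    f, i, o, g = np.split(pre, 4, axis=1)          # gates in the order f, i, o, g
    c = sigmoid(f) * c_prev + sigmoid(i) * np.tanh(g)
    h = sigmoid(o) * np.tanh(batch_norm(c, gamma_c, beta_c))
    return h, c

d_x, d_h, batch = 32, 64, 16
rng = np.random.default_rng(0)
W_x = rng.normal(scale=0.1, size=(d_x, 4 * d_h))
W_h = rng.normal(scale=0.1, size=(d_h, 4 * d_h))
b = np.zeros(4 * d_h)
h = np.zeros((batch, d_h))
c = np.zeros((batch, d_h))
x_t = rng.normal(size=(batch, d_x))
h, c = bn_lstm_step(x_t, h, c, W_x, W_h, b,
                    gamma_x=np.full(4 * d_h, 0.1), gamma_h=np.full(4 * d_h, 0.1),
                    gamma_c=np.full(d_h, 0.1), beta_c=np.zeros(d_h))
print(h.shape, c.shape)
\end{verbatim}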
We suspect that the previous difficulties with recurrent batch normalization reported in~\citet{cesar,baidu} are largely due to improper initialization of the batch normalization parameters, and $\gamma$ in particular. In this section we demonstrate the impact of $\gamma$ on gradient flow. In Figure~\ref{fig:rnn_grad_prop}, we show how the pre-activation variance impacts gradient propagation in a simple RNN on the sequential MNIST task described in Section~\ref{sec:seqmnist}. Since backpropagation operates in reverse, the plot is best read from right to left. The quantity plotted is the norm of the gradient of the loss with respect to the hidden state at different time steps. For large values of $\gamma$, the norm quickly goes to zero as gradient is propagated back in time. For small values of $\gamma$ the norm is nearly constant. To demonstrate what we think is the cause of this vanishing, we drew samples $x$ from a set of centered Gaussian distributions with standard deviation ranging from 0 to 1, and computed the derivative $\tanh'(x) = 1 - \tanh^2(x) \in [0, 1]$ for each. Figure~\ref{fig:tanh_grad} shows the empirical distribution of the derivative as a function of standard deviation. When the input standard deviation is low, the input tends to be close to the origin where the derivative is close to 1. As the standard deviation increases, the expected derivative decreases as the input is more likely to be in the saturation regime. At unit standard deviation, the expected derivative is much smaller than 1. We conjecture that this is what causes the gradient to vanish, and recommend initializing $\gamma$ to a small value. In our trials we found that values of 0.01 or lower caused instabilities during training. Our choice of 0.1 seems to work well across different tasks. \section{Experiments} \label{sec:experiments} This section presents an empirical evaluation of the proposed batch-normalized LSTM on four different tasks. Note that for all the experiments, we initialize the batch normalization scale and shift parameters $\gamma$ and $\beta$ to $0.1$ and $0$ respectively. \subsection{Sequential MNIST} \label{sec:seqmnist} We evaluate our batch-normalized LSTM on a sequential version of the MNIST classification task~\citep{le2015simple}. The model processes each image one pixel at a time and finally predicts the label. We consider both sequential MNIST tasks, MNIST and permuted MNIST ($p$MNIST). In MNIST, the pixels are processed in scanline order. In $p$MNIST the pixels are processed in a fixed random order. Our baseline consists of an LSTM with 100 hidden units, with a softmax classifier to produce a prediction from the final hidden state. We use orthogonal initialization for all weight matrices, except for the hidden-to-hidden weight matrix which we initialize to be the identity matrix, as this yields better generalization performance on this task for both models. The model is trained using RMSProp~\citep{rmsprop} with learning rate of $10^{-3}$ and $0.9$ momentum. We apply gradient clipping at 1 to avoid exploding gradients. The in-order MNIST task poses a unique problem for our model: the input for the first hundred or so timesteps is constant across examples since the upper pixels are almost always black. This causes the variance of the hidden states to be exactly zero for a long period of time. 
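As a brief aside on Section~\ref{sec:activation-variance}, a minimal sketch of the $\tanh$-derivative estimate described there: centered Gaussian inputs are drawn at several standard deviations and $\mathbb{E}[\tanh'(x)]$ is estimated, which shrinks as the inputs reach the saturation regime. The sample size and the specific standard deviations are illustrative choices.
\begin{verbatim}
# Minimal sketch: expected derivative tanh'(x) = 1 - tanh(x)^2 under centered
# Gaussian inputs of increasing standard deviation.
import numpy as np

rng = np.random.default_rng(0)
for std in (0.1, 0.25, 0.5, 0.75, 1.0):
    x = rng.normal(scale=std, size=100_000)
    print(f"std={std:4.2f}  E[tanh'(x)] ~ {np.mean(1 - np.tanh(x) ** 2):.3f}")
\end{verbatim}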
Normalizing these zero-variance activations involves dividing zero by a small number at many timesteps, which does not affect the forward-propagated activations but causes the back-propagated gradient to explode. We work around this by adding Gaussian noise to the initial hidden states. Although the normalization amplifies the noise to signal level, we find that it does not hurt performance compared to data-dependent ways of initializing the hidden states. In Figure~\ref{fig:seqmnist_valid} we show the validation accuracy while training for both LSTM and batch-normalized LSTM (BN-LSTM). BN-LSTM converges faster than LSTM on both tasks. Additionally, we observe that BN-LSTM generalizes significantly better on $p$MNIST. It has been highlighted in~\cite{urnn} that $p$MNIST contains many longer-term dependencies across pixels than in the original pixel ordering, where a lot of structure is local. A recurrent network therefore needs to characterize dependencies across varying time scales in order to solve this task. Our results suggest that BN-LSTM is better able to capture these long-term dependencies. Table~\ref{tab:seqmnist_test} reports the test set accuracy of the early-stopped model for LSTM and BN-LSTM using the population statistics. Recurrent batch normalization leads to a better test score, especially for $p$MNIST where models have to leverage long-term temporal dependencies. In addition, Table~\ref{tab:seqmnist_test} shows that our batch-normalized LSTM achieves state of the art on both MNIST and $p$MNIST. \subsection{Character-level Penn Treebank} We evaluate our model on the task of character-level language modeling on the Penn Treebank corpus~\citep{penntreebank} according to the train/valid/test partition of~\citet{mikolov2012subword}. For training, we segment the training sequence into examples of length 100. The training sequence does not cleanly divide by 100, so for each epoch we randomly crop a subsequence that does and segment that instead. Our baseline is an LSTM with 1000 units, trained to predict the next character using a softmax classifier on the hidden state $\vect{h}_t$. We use stochastic gradient descent on minibatches of size 64, with gradient clipping at 1.0 and step rule determined by Adam~\citep{kingma2014adam} with learning rate 0.002. We use orthogonal initialization for all weight matrices. The setup for the batch-normalized LSTM is the same in all respects except for the introduction of batch normalization as detailed in Section~\ref{sec:recurrent-batch-normalization}. We show the learning curves in Figure~\ref{fig:ptb_valid}. BN-LSTM converges faster and generalizes better than the LSTM baseline. Figure~\ref{fig:ptb_lengths} shows the generalization of our model to longer sequences. We observe that using the population statistics improves generalization performance, which confirms that repeating the last population statistic (cf. Section~\ref{sec:recurrent-batch-normalization}) is a viable strategy. In Table~\ref{tab:ptb_test} we report the performance of our best models (early-stopped on validation performance) on the Penn Treebank test sequence. Follow-up works have since improved the state of the art~\citep{krueger2016zoneout,chung2016hierarchical,ha2016hypernetworks}. \subsection{Text8} We evaluate our model on a second character-level language modeling task on the much larger text8 dataset~\citep{mahoney2009large}. This dataset is derived from Wikipedia and consists of a sequence of 100M characters including only alphabetical characters and spaces. 
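Referring back to the Penn Treebank training setup above, a minimal sketch of the per-epoch cropping: since the training sequence does not divide evenly into length-100 examples, a random sub-sequence whose length is a multiple of 100 is cropped each epoch and then segmented. The function name and the toy sequence are illustrative assumptions.
\begin{verbatim}
# Minimal sketch: crop a random sub-sequence whose length is a multiple of
# example_len, then segment it into fixed-length training examples.
import numpy as np

def epoch_segments(sequence, example_len=100, rng=None):
    rng = rng or np.random.default_rng()
    n_examples = len(sequence) // example_len
    usable = n_examples * example_len
    start = rng.integers(0, len(sequence) - usable + 1)
    crop = sequence[start:start + usable]
    return crop.reshape(n_examples, example_len)

toy_sequence = np.arange(1037)             # stands in for the character stream
print(epoch_segments(toy_sequence).shape)  # (10, 100)
\end{verbatim}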
We follow~\citet{mikolov2012subword,zhang2016architectural} and use the first 90M characters for training, the next 5M for validation and the final 5M characters for testing. We train on nonoverlapping sequences of length 180. Both our baseline and batch-normalized models are LSTMs with 2000 units, trained to predict the next character using a softmax classifier on the hidden state $\vect{h}_t$. We use stochastic gradient descent on minibatches of size 128, with gradient clipping at 1.0 and step rule determined by Adam~\citep{kingma2014adam} with learning rate 0.001. All weight matrices were initialized to be orthogonal. We early-stop on validation performance and report the test performance of the resulting model in table~\ref{tab:text8_test}. We observe that BN-LSTM obtains a significant performance improvement over the LSTM baseline. \citet{chung2016hierarchical} has since improved on our performance. \subsection{Teaching Machines to Read and Comprehend} \label{sec:less-attr} Recently,~\citet{attentivereader} introduced a set of challenging benchmarks for natural language processing, along with neural network architectures to address them. The tasks involve reading real news articles and answering questions about their content. Their principal model, the Attentive Reader, is a recurrent neural network that invokes an attention mechanism to locate relevant information in the document. Such models are notoriously hard to optimize and yet increasingly popular. To demonstrate the generality and practical applicability of our proposal, we apply batch normalization in the Attentive Reader model and show that this drastically improves training. We evaluate several variants. The first variant, referred to as BN-LSTM, consists of the vanilla Attentive Reader model with the LSTM simply replaced by our BN-LSTM reparameterization. The second variant, termed BN-everywhere, is exactly like the first, except that we also introduce batch normalization into the attention computations, normalizing each term going into the $\tanh$ nonlinearities. Our third variant, BN-e*, is like BN-everywhere, but improved to more carefully handle variable-length sequences. Throughout this experiment we followed the common practice of padding each batch of variable-length data with zeros. However, this biases the batch mean and variance of $\vect{x}_t$ toward zero. We address this effect using \emph{sequencewise} normalization of the inputs as proposed by~\citet{cesar,baidu}. That is, we share statistics over time for normalization of the input terms $\mat{W}_x \vect{x}_t$, but \emph{not} for the recurrent terms $\mat{W}_h \vect{h}_t$ or the cell output $\vect{c}_t$. Doing so avoids many issues involving degenerate statistics due to input sequence padding. Our fourth and final variant BN-e** is like BN-e* but bidirectional. The main difficulty in adapting to bidirectional models also involves padding. Padding poses no problem as long as it is properly ignored (by not updating the hidden states based on padded regions of the input). However to perform the reverse application of a bidirectional model, it is common to simply reverse the padded sequences, thus moving the padding to the front. This causes similar problems as were observed on the sequential MNIST task (Section~\ref{sec:seqmnist}): the hidden states will not diverge during the initial timesteps and hence their variance will be severely underestimated. To get around this, we reverse only the unpadded portion of the input sequences and leave the padding in place. 
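A minimal sketch of that padding-aware reversal for the backward direction of BN-e**: only the unpadded prefix of each sequence is reversed, so the zero padding stays at the end and the early hidden states still vary across the batch. The toy batch and function name are illustrative assumptions.
\begin{verbatim}
# Minimal sketch: reverse only the unpadded prefix of each padded sequence.
import numpy as np

def reverse_unpadded(batch, lengths):
    """batch: (B, T) zero-padded sequences; lengths: (B,) true lengths."""
    out = batch.copy()
    for i, n in enumerate(lengths):
        out[i, :n] = batch[i, :n][::-1]
    return out

batch = np.array([[1, 2, 3, 0, 0],
                  [4, 5, 6, 7, 8]])
print(reverse_unpadded(batch, lengths=[3, 5]))
# [[3 2 1 0 0]
#  [8 7 6 5 4]]
\end{verbatim}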
See Appendix~\ref{sec:more-attr} for hyperparameters and task details. Figure~\ref{fig:attr_valid2} shows the learning curves for the different variants of the attentive reader. BN-LSTM trains dramatically faster than the LSTM baseline. BN-everywhere in turn shows a significant improvement over BN-LSTM. In addition, both BN-LSTM and BN-everywhere show a generalization benefit over the baseline. The validation curves have minima of 50.3\%, 49.5\% and 50.0\% for the baseline, BN-LSTM and BN-everywhere respectively. We emphasize that these results were obtained without any tweaking -- all we did was to introduce batch normalization. BN-e* and BN-e** converge faster yet, and reach lower minima: 47.1\% and 43.9\% respectively. We train and evaluate our best model, BN-e**, on the full task from~\citep{attentivereader}. On this dataset we had to reduce the number of hidden units to 120 to avoid severe overfitting. Training curves for BN-e** and a vanilla LSTM are shown in Figure~\ref{fig:attr_full_valid}. Table~\ref{tab:attr_full} reports performances of the early-stopped models. \section{Conclusion} Contrary to previous findings by~\citet{cesar,baidu}, we have demonstrated that batch-normalizing the hidden states of recurrent neural networks greatly improves optimization. Indeed, doing so yields benefits similar to those of batch normalization in feed-forward neural networks: our proposed BN-LSTM trains faster and generalizes better on a variety of tasks including language modeling and question-answering. We have argued that proper initialization of the batch normalization parameters is crucial, and suggest that previous difficulties~\citep{cesar, baidu} were due in large part to improper initialization. Finally, we have shown our model to apply to complex settings involving variable-length data, bidirectionality and highly nonlinear attention mechanisms. \section*{Acknowledgements} The authors would like to acknowledge the following agencies for research funding and computing support: the Nuance Foundation, Samsung, NSERC, Calcul Qu\'{e}bec, Compute Canada, the Canada Research Chairs and CIFAR. Experiments were carried out using the Theano~\citep{theano} and the Blocks and Fuel~\citep{blocks} libraries for scientific computing. We thank David Krueger, Saizheng Zhang, Ishmael Belghazi and Yoshua Bengio for discussions and suggestions. \bibliographystyle{iclr2017_conference} \newpage \appendix \section{Convergence of population statistics} \label{sec:popstat_stationarity} \section{Sensitivity to initialization of $\gamma$} In Section~\ref{sec:activation-variance} we investigated the effect of initial $\gamma$ on gradient flow. To show the practical implications of this, we performed several experiments on the $p$MNIST and Penn Treebank benchmarks. The resulting performances are shown in Figure~\ref{fig:gammas}. The $p$MNIST training curves confirm that higher initial values of $\gamma$ are detrimental to the optimization of the model. For the Penn Treebank task however, the effect is gone. We believe this is explained by the difference in the nature of the two tasks. For $p$MNIST, the model absorbs the input sequence and only at the end of the sequence does it make a prediction on which it receives feedback. Learning from this feedback requires propagating the gradient all the way back through the sequence. In the Penn Treebank task on the other hand, the model makes a prediction at each timestep. At each step of the backward pass, a fresh learning signal is added to the backpropagated gradient. 
Essentially, the model is able to get off the ground by picking up short-term dependencies. This fails on $p$MNIST, which is dominated by long-term dependencies~\citep{urnn}. \section{Teaching Machines to Read and Comprehend: Task setup} \label{sec:more-attr} We evaluate the models on the question answering task using the CNN corpus~\citep{attentivereader}, with placeholders for the named entities. We follow a similar preprocessing pipeline as~\citet{attentivereader}. During training, we randomly sample the examples with replacement and shuffle the order of the placeholders in each text inside the minibatch. We use a vocabulary of 65829 words. We deviate from~\citet{attentivereader} in order to save computation: we use only the 4 most relevant sentences from the description, as identified by a string-matching procedure. Both the training and validation sets are preprocessed in this way. Due to imprecision, this heuristic sometimes strips the answers from the passage, putting an upper bound of 57\% on the validation accuracy that can be achieved. For the reported performances, the first three models (LSTM, BN-LSTM and BN-everywhere) are trained using the exact same hyperparameters, which were chosen because they work well for the baseline. The hidden state is composed of 240 units. We use stochastic gradient descent on minibatches of size 64, with gradient clipping at 10 and step rule determined by Adam~\citep{kingma2014adam} with learning rate $8 \times 10^{-5}$. For BN-e* and BN-e**, we use the same hyperparameters except that we reduce the learning rate to $8 \times 10^{-4}$ and the minibatch size to 40. \section{Hyperparameter Searches} Table~\ref{tab:hyperparams} reports hyperparameter values that were tried in the experiments. For MNIST and $p$MNIST, the hyperparameters were varied independently. For Penn Treebank, we performed a full grid search on learning rate and hidden state size, and later performed a sensitivity analysis on the batch size and initial $\gamma$. For the text8 task and the experiments with the Attentive Reader, we carried out a grid search on the learning rate and hidden state size. The same values were tried for both the baseline and our BN-LSTM. In each case, our reported results are those of the model with the best validation performance. \end{document}
Recurrent Batch Normalization
1603.09025
Table 1: Accuracy obtained on the test set for the pixel by pixel MNIST classification tasks
[ "[BOLD] Model", "[BOLD] MNIST", "[ITALIC] p [BOLD] MNIST" ]
[ [ "TANH-RNN (Le et al., 2015 )", "35.0", "35.0" ], [ "[ITALIC] iRNN (Le et al., 2015 )", "97.0", "82.0" ], [ "[ITALIC] uRNN (Arjovsky et al., 2015 )", "95.1", "91.4" ], [ "[ITALIC] sTANH-RNN (Zhang et al., 2016 )", "98.1", "94.0" ], [ "LSTM (ours)", "98.9", "90.2" ], [ "BN-LSTM (ours)", "[BOLD] 99.0", "[BOLD] 95.4" ] ]
Recurrent batch normalization leads to a better test score, especially for pMNIST where models have to leverage long-term temporal dependencies.
\documentclass{article} % For LaTeX2e \pdfoutput=1 % for arxiv to accept pdf figures \usepackage[utf8]{inputenc} \usepackage[T1]{fontenc} \usepackage[hidelinks]{hyperref} \title{Recurrent Batch Normalization} \author{ Tim Cooijmans, Nicolas Ballas, C\'esar Laurent, Çağlar Gülçehre \& Aaron Courville \\ MILA - Universit\'e de Montr\'eal \\ \texttt{firstname.lastname@umontreal.ca} \\ } \newcommand{\fix}{\marginpar{FIX}} \newcommand{\new}{\marginpar{NEW}} \newcommand{\vect}[1]{\mathbf{#1}} \newcommand{\mat}[1]{\mathbf{#1}} \newcommand{\act}{f} \newcommand{\ewprod}{\odot} \newcommand{\reals}{\mathbb{R}} \newcommand{\given}{\vert} \iclrfinalcopy \begin{document} \maketitle \begin{abstract} We propose a reparameterization of LSTM that brings the benefits of batch normalization to recurrent neural networks. Whereas previous works only apply batch normalization to the input-to-hidden transformation of RNNs, we demonstrate that it is both possible and beneficial to batch-normalize the hidden-to-hidden transition, thereby reducing internal covariate shift between time steps. We evaluate our proposal on various sequential problems such as sequence classification, language modeling and question answering. Our empirical results show that our batch-normalized LSTM consistently leads to faster convergence and improved generalization. \end{abstract} \section{Introduction} Recurrent neural network architectures such as LSTM~\citep{lstm} and GRU~\citep{cho2014learning} have recently exhibited state-of-the-art performance on a wide range of complex sequential problems including speech recognition~\cite{baidu}, machine translation~\citep{bahdanau2014neural} and image and video captioning~\citep{xu2015show,yao2015describing}. Top-performing models, however, are based on very high-capacity networks that are computationally intensive and costly to train. Effective optimization of recurrent neural networks is thus an active area of study~\citep{pascanudifficulty,hessianfree,ollivier}. It is well-known that for deep feed-forward neural networks, covariate shift~\citep{shimodaira2000improving,batchnorm} degrades the efficiency of training. Covariate shift is a change in the distribution of the inputs to a model. This occurs continuously during training of feed-forward neural networks, where changing the parameters of a layer affects the distribution of the inputs to all layers above it. As a result, the upper layers are continually adapting to the shifting input distribution and unable to learn effectively. This \emph{internal} covariate shift~\citep{batchnorm} may play an especially important role in recurrent neural networks, which resemble very deep feed-forward networks. Batch normalization~\citep{batchnorm} is a recently proposed technique for controlling the distributions of feed-forward neural network activations, thereby reducing internal covariate shift. It involves standardizing the activations going into each layer, enforcing their means and variances to be invariant to changes in the parameters of the underlying layers. This effectively decouples each layer's parameters from those of other layers, leading to a better-conditioned optimization problem. Indeed, deep neural networks trained with batch normalization converge significantly faster and generalize better. Although batch normalization has demonstrated significant training speed-ups and generalization benefits in feed-forward networks, it is proven to be difficult to apply in recurrent architectures~\citep{cesar,baidu}. 
It has found limited use in stacked RNNs, where the normalization is applied ``vertically'', i.e. to the input of each RNN, but not ``horizontally'' between timesteps. RNNs are deeper in the time direction, and as such batch normalization would be most beneficial when applied horizontally. However,~\citet{cesar} hypothesized that applying batch normalization in this way hurts training because of exploding gradients due to repeated rescaling. Our findings run counter to this hypothesis. We show that it is both possible and highly beneficial to apply batch normalization in the hidden-to-hidden transition of recurrent models. In particular, we describe a reparameterization of LSTM (Section~\ref{sec:recurrent-batch-normalization}) that involves batch normalization and demonstrate that it is easier to optimize and generalizes better. In addition, we empirically analyze the gradient backpropagation and show that proper initialization of the batch normalization parameters is crucial to avoiding vanishing gradient (Section~\ref{sec:activation-variance}). We evaluate our proposal on several sequential problems and show (Section~\ref{sec:experiments}) that our LSTM reparameterization consistently outperforms the LSTM baseline across tasks, in terms of both time to convergence and performance. \citet{liao2016bridging} simultaneously investigated batch normalization in recurrent neural networks, albeit only for very short sequences (10 steps). \citet{ba2016layer} independently developed a variant of batch normalization that is also applicable to recurrent neural networks and delivers similar improvements as our method. \section{Prerequisites} \label{sec:prerequisites} \subsection{LSTM} Long Short-Term Memory (LSTM) networks are an instance of a more general class of recurrent neural networks (RNNs), which we review briefly in this paper. Given an input sequence $\mat{X} = ( \vect{x}_1, \vect{x}_2, \ldots, \vect{x}_T )$, an RNN defines a sequence of hidden states $\vect{h}_t$ according to \begin{eqnarray} \vect{h}_t = \phi(\mat{W}_h \vect{h}_{t-1} + \mat{W}_x \vect{x}_t + \vect{b}), \end{eqnarray} where $\mat{W}_h \in \reals^{d_h \times d_h}, \mat{W}_x \in \reals^{d_x \times d_h}, \vect{b} \in \reals^{d_h}$ and the initial state $\vect{h}_0 \in \reals^{d_h}$ % I've put this back because I believe we do need to say something about the initial state -- tim are model parameters. A popular choice for the activation function $\phi(\ \cdot\ )$ is $\tanh$. RNNs are popular in sequence modeling thanks to their natural ability to process variable-length sequences. However, training RNNs using first-order stochastic gradient descent (SGD) is notoriously difficult due to the well-known problem of exploding/vanishing gradients~\citep{bengio1994learning,hochreiter1991untersuchungen,pascanudifficulty}. Gradient vanishing occurs when states $\vect{h}_t$ are not influenced by small changes in much earlier states $\vect{h}_{\tau}$, $t \ll \tau$, preventing learning of long-term dependencies in the input data. Although learning long-term dependencies is fundamentally difficult~\citep{bengio1994learning}, its effects can be mitigated through architectural variations such as LSTM~\citep{lstm}, GRU~\citep{cho2014learning} and $i$RNN/$u$RNN~\citep{le2015simple,urnn}. 
In what follows, we focus on the LSTM architecture~\citep{lstm} with recurrent transition given by \begin{eqnarray} \left(\begin{array}{ccc} \tilde{\vect{f}}_t \\ \tilde{\vect{i}}_t \\ \tilde{\vect{o}}_t \\ \tilde{\vect{g}}_t \end{array}\right) &=& \mat{W}_h \vect{h}_{t-1} + \mat{W}_x \vect{x}_t + \vect{b} \\ \vect{c}_t &=& \sigma(\tilde{\vect{f}}_t) \ewprod \vect{c}_{t-1} + \sigma(\tilde{\vect{i}}_t) \ewprod \tanh(\tilde{\vect{g}_t}) \\ \vect{h}_t &=& \sigma(\tilde{\vect{o}}_t) \ewprod \tanh(\vect{c}_t), \end{eqnarray} where $\vect{W}_h \in \reals^{d_h \times 4 d_h}, \vect{W}_x \in \reals^{d_x \times 4 d_h}, \vect{b} \in \reals^{4 d_h}$ and the initial states $\vect{h}_0 \in \reals^{d_h}, \vect{c}_0 \in \reals^{d_h}$ % I've put this back because I believe we do need to say something about the initial state -- tim are model parameters. $\sigma$ is the logistic sigmoid function, and the $\ewprod$ operator denotes the Hadamard product. The LSTM differs from simple RNNs in that it has an additional memory \emph{cell} $\vect{c}_t$ whose update is nearly linear, which allows the gradient to flow back through time more easily. In addition, unlike the RNN which overwrites its content at each timestep, the update of the LSTM cell is regulated by a set of gates. The forget gate $\vect{f}_t$ determines the extent to which information is carried over from the previous timestep, and the input gate $\vect{i}_t$ controls the flow of information from the current input $\vect{x}_t$. The output gate $\vect{o}_t$ allows the model to read from the cell. This carefully controlled interaction with the cell is what allows the LSTM to robustly retain information for long periods of time. \subsection{Batch Normalization} \emph{Covariate shift}~\citep{shimodaira2000improving} is a phenomenon in machine learning where the features presented to a model change in distribution. In order for learning to succeed in the presence of covariate shift, the model's parameters must be adjusted not just to learn the concept at hand but also to adapt to the changing distribution of the inputs. In deep neural networks, this problem manifests as \emph{internal covariate shift}~\citep{batchnorm}, where changing the parameters of a layer affects the distribution of the inputs to all layers above it. Batch Normalization~\citep{batchnorm} is a recently proposed network reparameterization which aims to reduce internal covariate shift. It does so by standardizing the activations using empirical estimates of their means and standard deviations. However, it does not decorrelate the activations, as doing so would require a computationally costly matrix inversion. The batch normalizing transform is as follows: \begin{align} \mathrm{BN}(\vect{h}; \gamma, \beta) = \beta + \gamma \ewprod \frac{\vect{h} - \widehat{\mathbb{E}}[\vect{h}]} { \sqrt{\widehat{\mathrm{Var}}[\vect{h}] + \epsilon}} \end{align} where $\vect{h} \in \reals^d$ is the vector of (pre)activations to be normalized, $\gamma \in \reals^d, \beta \in \reals^d$ are model parameters that determine the mean and standard deviation of the normalized activation, and $\epsilon \in \reals$ is a regularization hyperparameter. The division should be understood to proceed elementwise. At training time, the statistics $\mathbb{E}[\vect{h}]$ and $\mathrm{Var}[\vect{h}]$ are estimated by the sample mean and sample variance of the current minibatch. This allows for backpropagation through the statistics, preserving the convergence properties of stochastic gradient descent. 
During inference, the statistics are typically estimated based on the entire training set, so as to produce a deterministic prediction. \section{Batch-Normalized LSTM} \label{sec:recurrent-batch-normalization} This section introduces a reparameterization of LSTM that takes advantage of batch normalization. Contrary to~\citet{cesar, baidu}, we leverage batch normalization in both the input-to-hidden \emph{and} the hidden-to-hidden transformations. We introduce the batch-normalizing transform $\mathrm{BN}(\ \cdot\ ; \gamma, \beta)$ into the LSTM as follows: \begin{eqnarray} \left(\begin{array}{ccc} \tilde{\vect{f}}_t \\ \tilde{\vect{i}}_t \\ \tilde{\vect{o}}_t \\ \tilde{\vect{g}}_t \end{array}\right) &=& \mathrm{BN} (\mat{W}_h \vect{h}_{t-1}; \gamma_h, \beta_h) + \mathrm{BN} (\mat{W}_x \vect{x}_t ; \gamma_x, \beta_x) + \vect{b} \\ \vect{c}_t &=& \sigma(\tilde{\vect{f}}_t) \ewprod \vect{c}_{t-1} + \sigma(\tilde{\vect{i}}_t) \ewprod \tanh(\tilde{\vect{g}_t}) \\ \vect{h}_t &=& \sigma(\tilde{\vect{o}}_t) \ewprod \tanh( \mathrm{BN} (\vect{c}_t; \gamma_c, \beta_c) ) \end{eqnarray} In our formulation, we normalize the recurrent term $\mat{W}_h \vect{h}_{t-1}$ and the input term $\mat{W}_x \vect{x}_t$ separately. Normalizing these terms individually gives the model better control over the relative contribution of the terms using the $\gamma_h$ and $\gamma_x$ parameters. We set $\beta_h = \beta_x = \vect{0}$ to avoid unnecessary redundancy, instead relying on the pre-existing parameter vector $\vect{b}$ to account for both biases. In order to leave the LSTM dynamics intact and preserve the gradient flow through $\vect{c}_t$, we do not apply batch normalization in the cell update. The batch normalization transform relies on batch statistics to standardize the LSTM activations. It would seem natural to share the statistics that are used for normalization across time, just as recurrent neural networks share their parameters over time. However, we find that simply averaging statistics over time severely degrades performance. Although LSTM activations do converge to a stationary distribution, we observe that their statistics during the initial transient differ significantly (see Figure~\ref{fig:popstat_stationarity} in Appendix~\ref{sec:popstat_stationarity}). Consequently, we recommend using separate statistics for each timestep to preserve information of the initial transient phase in the activations.\footnote{ Note that we separate \emph{only} the statistics over time and not the $\gamma$ and $\beta$ parameters.} Generalizing the model to sequences longer than those seen during training is straightforward thanks to the rapid convergence of the activations to their steady-state distributions (cf. Figure~\ref{fig:popstat_stationarity}). For our experiments we estimate the population statistics separately for each timestep $1, \ldots, T_{max}$ where $T_{max}$ is the length of the longest training sequence. When at test time we need to generalize beyond $T_{max}$, we use the population statistic of time $T_{max}$ for all time steps beyond it. During training we estimate the statistics across the minibatch, independently for each timestep. At test time we use estimates obtained by averaging the minibatch estimates over the training set. \section{Initializing $\gamma$ for Gradient Flow} \label{sec:activation-variance} Although batch normalization allows for easy control of the pre-activation variance through the $\gamma$ parameters, common practice is to normalize to unit variance. 
We suspect that the previous difficulties with recurrent batch normalization reported in~\citet{cesar,baidu} are largely due to improper initialization of the batch normalization parameters, and $\gamma$ in particular. In this section we demonstrate the impact of $\gamma$ on gradient flow. In Figure~\ref{fig:rnn_grad_prop}, we show how the pre-activation variance impacts gradient propagation in a simple RNN on the sequential MNIST task described in Section~\ref{sec:seqmnist}. Since backpropagation operates in reverse, the plot is best read from right to left. The quantity plotted is the norm of the gradient of the loss with respect to the hidden state at different time steps. For large values of $\gamma$, the norm quickly goes to zero as gradient is propagated back in time. For small values of $\gamma$ the norm is nearly constant. To demonstrate what we think is the cause of this vanishing, we drew samples $x$ from a set of centered Gaussian distributions with standard deviation ranging from 0 to 1, and computed the derivative $\tanh'(x) = 1 - \tanh^2(x) \in [0, 1]$ for each. Figure~\ref{fig:tanh_grad} shows the empirical distribution of the derivative as a function of standard deviation. When the input standard deviation is low, the input tends to be close to the origin where the derivative is close to 1. As the standard deviation increases, the expected derivative decreases as the input is more likely to be in the saturation regime. At unit standard deviation, the expected derivative is much smaller than 1. We conjecture that this is what causes the gradient to vanish, and recommend initializing $\gamma$ to a small value. In our trials we found that values of 0.01 or lower caused instabilities during training. Our choice of 0.1 seems to work well across different tasks. \section{Experiments} \label{sec:experiments} This section presents an empirical evaluation of the proposed batch-normalized LSTM on four different tasks. Note that for all the experiments, we initialize the batch normalization scale and shift parameters $\gamma$ and $\beta$ to $0.1$ and $0$ respectively. \subsection{Sequential MNIST} \label{sec:seqmnist} We evaluate our batch-normalized LSTM on a sequential version of the MNIST classification task~\citep{le2015simple}. The model processes each image one pixel at a time and finally predicts the label. We consider both sequential MNIST tasks, MNIST and permuted MNIST ($p$MNIST). In MNIST, the pixels are processed in scanline order. In $p$MNIST the pixels are processed in a fixed random order. Our baseline consists of an LSTM with 100 hidden units, with a softmax classifier to produce a prediction from the final hidden state. We use orthogonal initialization for all weight matrices, except for the hidden-to-hidden weight matrix which we initialize to be the identity matrix, as this yields better generalization performance on this task for both models. The model is trained using RMSProp~\citep{rmsprop} with learning rate of $10^{-3}$ and $0.9$ momentum. We apply gradient clipping at 1 to avoid exploding gradients. The in-order MNIST task poses a unique problem for our model: the input for the first hundred or so timesteps is constant across examples since the upper pixels are almost always black. This causes the variance of the hidden states to be exactly zero for a long period of time. 
Normalizing these zero-variance activations involves dividing zero by a small number at many timesteps, which does not affect the forward-propagated activations but causes the back-propagated gradient to explode. We work around this by adding Gaussian noise to the initial hidden states. Although the normalization amplifies the noise to signal level, we find that it does not hurt performance compared to data-dependent ways of initializing the hidden states. In Figure~\ref{fig:seqmnist_valid} we show the validation accuracy while training for both LSTM and batch-normalized LSTM (BN-LSTM). BN-LSTM converges faster than LSTM on both tasks. Additionally, we observe that BN-LSTM generalizes significantly better on $p$MNIST. It has been highlighted in~\cite{urnn} that $p$MNIST contains many longer-term dependencies across pixels than in the original pixel ordering, where a lot of structure is local. A recurrent network therefore needs to characterize dependencies across varying time scales in order to solve this task. Our results suggest that BN-LSTM is better able to capture these long-term dependencies. Table~\ref{tab:seqmnist_test} reports the test set accuracy of the early-stopped model for LSTM and BN-LSTM using the population statistics. Recurrent batch normalization leads to a better test score, especially for $p$MNIST where models have to leverage long-term temporal dependencies. In addition, Table~\ref{tab:seqmnist_test} shows that our batch-normalized LSTM achieves state of the art on both MNIST and $p$MNIST. \subsection{Character-level Penn Treebank} We evaluate our model on the task of character-level language modeling on the Penn Treebank corpus~\citep{penntreebank} according to the train/valid/test partition of~\citet{mikolov2012subword}. For training, we segment the training sequence into examples of length 100. The training sequence does not cleanly divide by 100, so for each epoch we randomly crop a subsequence that does and segment that instead. Our baseline is an LSTM with 1000 units, trained to predict the next character using a softmax classifier on the hidden state $\vect{h}_t$. We use stochastic gradient descent on minibatches of size 64, with gradient clipping at 1.0 and step rule determined by Adam~\citep{kingma2014adam} with learning rate 0.002. We use orthogonal initialization for all weight matrices. The setup for the batch-normalized LSTM is the same in all respects except for the introduction of batch normalization as detailed in Section~\ref{sec:recurrent-batch-normalization}. We show the learning curves in Figure~\ref{fig:ptb_valid}. BN-LSTM converges faster and generalizes better than the LSTM baseline. Figure~\ref{fig:ptb_lengths} shows the generalization of our model to longer sequences. We observe that using the population statistics improves generalization performance, which confirms that repeating the last population statistic (cf. Section~\ref{sec:recurrent-batch-normalization}) is a viable strategy. In Table~\ref{tab:ptb_test} we report the performance of our best models (early-stopped on validation performance) on the Penn Treebank test sequence. Follow-up works have since improved the state of the art~\citep{krueger2016zoneout,chung2016hierarchical,ha2016hypernetworks}. \subsection{Text8} We evaluate our model on a second character-level language modeling task on the much larger text8 dataset~\citep{mahoney2009large}. This dataset is derived from Wikipedia and consists of a sequence of 100M characters including only alphabetical characters and spaces. 
We follow~\citet{mikolov2012subword,zhang2016architectural} and use the first 90M characters for training, the next 5M for validation and the final 5M characters for testing. We train on nonoverlapping sequences of length 180. Both our baseline and batch-normalized models are LSTMs with 2000 units, trained to predict the next character using a softmax classifier on the hidden state $\vect{h}_t$. We use stochastic gradient descent on minibatches of size 128, with gradient clipping at 1.0 and step rule determined by Adam~\citep{kingma2014adam} with learning rate 0.001. All weight matrices were initialized to be orthogonal. We early-stop on validation performance and report the test performance of the resulting model in table~\ref{tab:text8_test}. We observe that BN-LSTM obtains a significant performance improvement over the LSTM baseline. \citet{chung2016hierarchical} has since improved on our performance. \subsection{Teaching Machines to Read and Comprehend} \label{sec:less-attr} Recently,~\citet{attentivereader} introduced a set of challenging benchmarks for natural language processing, along with neural network architectures to address them. The tasks involve reading real news articles and answering questions about their content. Their principal model, the Attentive Reader, is a recurrent neural network that invokes an attention mechanism to locate relevant information in the document. Such models are notoriously hard to optimize and yet increasingly popular. To demonstrate the generality and practical applicability of our proposal, we apply batch normalization in the Attentive Reader model and show that this drastically improves training. We evaluate several variants. The first variant, referred to as BN-LSTM, consists of the vanilla Attentive Reader model with the LSTM simply replaced by our BN-LSTM reparameterization. The second variant, termed BN-everywhere, is exactly like the first, except that we also introduce batch normalization into the attention computations, normalizing each term going into the $\tanh$ nonlinearities. Our third variant, BN-e*, is like BN-everywhere, but improved to more carefully handle variable-length sequences. Throughout this experiment we followed the common practice of padding each batch of variable-length data with zeros. However, this biases the batch mean and variance of $\vect{x}_t$ toward zero. We address this effect using \emph{sequencewise} normalization of the inputs as proposed by~\citet{cesar,baidu}. That is, we share statistics over time for normalization of the input terms $\mat{W}_x \vect{x}_t$, but \emph{not} for the recurrent terms $\mat{W}_h \vect{h}_t$ or the cell output $\vect{c}_t$. Doing so avoids many issues involving degenerate statistics due to input sequence padding. Our fourth and final variant BN-e** is like BN-e* but bidirectional. The main difficulty in adapting to bidirectional models also involves padding. Padding poses no problem as long as it is properly ignored (by not updating the hidden states based on padded regions of the input). However to perform the reverse application of a bidirectional model, it is common to simply reverse the padded sequences, thus moving the padding to the front. This causes similar problems as were observed on the sequential MNIST task (Section~\ref{sec:seqmnist}): the hidden states will not diverge during the initial timesteps and hence their variance will be severely underestimated. To get around this, we reverse only the unpadded portion of the input sequences and leave the padding in place. 
See Appendix~\ref{sec:more-attr} for hyperparameters and task details. Figure~\ref{fig:attr_valid2} shows the learning curves for the different variants of the attentive reader. BN-LSTM trains dramatically faster than the LSTM baseline. BN-everywhere in turn shows a significant improvement over BN-LSTM. In addition, both BN-LSTM and BN-everywhere show a generalization benefit over the baseline. The validation curves have minima of 50.3\%, 49.5\% and 50.0\% for the baseline, BN-LSTM and BN-everywhere respectively. We emphasize that these results were obtained without any tweaking -- all we did was to introduce batch normalization. BN-e* and BN-e** converge faster yet, and reach lower minima: 47.1\% and 43.9\% respectively. We train and evaluate our best model, BN-e**, on the full task from~\citep{attentivereader}. On this dataset we had to reduce the number of hidden units to 120 to avoid severe overfitting. Training curves for BN-e** and a vanilla LSTM are shown in Figure~\ref{fig:attr_full_valid}. Table~\ref{tab:attr_full} reports performances of the early-stopped models. \section{Conclusion} Contrary to previous findings by~\citet{cesar,baidu}, we have demonstrated that batch-normalizing the hidden states of recurrent neural networks greatly improves optimization. Indeed, doing so yields benefits similar to those of batch normalization in feed-forward neural networks: our proposed BN-LSTM trains faster and generalizes better on a variety of tasks including language modeling and question-answering. We have argued that proper initialization of the batch normalization parameters is crucial, and suggest that previous difficulties~\citep{cesar, baidu} were due in large part to improper initialization. Finally, we have shown our model to apply to complex settings involving variable-length data, bidirectionality and highly nonlinear attention mechanisms. \section*{Acknowledgements} The authors would like to acknowledge the following agencies for research funding and computing support: the Nuance Foundation, Samsung, NSERC, Calcul Qu\'{e}bec, Compute Canada, the Canada Research Chairs and CIFAR. Experiments were carried out using the Theano~\citep{theano} and the Blocks and Fuel~\citep{blocks} libraries for scientific computing. We thank David Krueger, Saizheng Zhang, Ishmael Belghazi and Yoshua Bengio for discussions and suggestions. \bibliographystyle{iclr2017_conference} \newpage \appendix \section{Convergence of population statistics} \label{sec:popstat_stationarity} \section{Sensitivity to initialization of $\gamma$} In Section~\ref{sec:activation-variance} we investigated the effect of initial $\gamma$ on gradient flow. To show the practical implications of this, we performed several experiments on the $p$MNIST and Penn Treebank benchmarks. The resulting performances are shown in Figure~\ref{fig:gammas}. The $p$MNIST training curves confirm that higher initial values of $\gamma$ are detrimental to the optimization of the model. For the Penn Treebank task however, the effect is gone. We believe this is explained by the difference in the nature of the two tasks. For $p$MNIST, the model absorbs the input sequence and only at the end of the sequence does it make a prediction on which it receives feedback. Learning from this feedback requires propagating the gradient all the way back through the sequence. In the Penn Treebank task on the other hand, the model makes a prediction at each timestep. At each step of the backward pass, a fresh learning signal is added to the backpropagated gradient. 
Essentially, the model is able to get off the ground by picking up short-term dependencies. This fails on $p$MNIST, which is dominated by long-term dependencies~\citep{urnn}. \section{Teaching Machines to Read and Comprehend: Task setup} \label{sec:more-attr} We evaluate the models on the question answering task using the CNN corpus~\citep{attentivereader}, with placeholders for the named entities. We follow a similar preprocessing pipeline as~\citet{attentivereader}. During training, we randomly sample the examples with replacement and shuffle the order of the placeholders in each text inside the minibatch. We use a vocabulary of 65829 words. We deviate from~\citet{attentivereader} in order to save computation: we use only the 4 most relevant sentences from the description, as identified by a string matching procedure. Both the training and validation sets are preprocessed in this way. Due to imprecision, this heuristic sometimes strips the answers from the passage, putting an upper bound of 57\% on the validation accuracy that can be achieved. For the reported performances, the first three models (LSTM, BN-LSTM and BN-everywhere) are trained using the exact same hyperparameters, which were chosen because they work well for the baseline. The hidden state is composed of 240 units. We use stochastic gradient descent on minibatches of size 64, with gradient clipping at 10 and step rule determined by Adam~\citep{kingma2014adam} with learning rate $8 \times 10^{-5}$. For BN-e* and BN-e**, we use the same hyperparameters except that we reduce the learning rate to $8 \times 10^{-4}$ and the minibatch size to 40. \section{Hyperparameter Searches} Table~\ref{tab:hyperparams} reports hyperparameter values that were tried in the experiments. For MNIST and $p$MNIST, the hyperparameters were varied independently. For Penn Treebank, we performed a full grid search on learning rate and hidden state size, and later performed a sensitivity analysis on the batch size and initial $\gamma$. For the text8 task and the experiments with the Attentive Reader, we carried out a grid search on the learning rate and hidden state size. The same values were tried for both the baseline and our BN-LSTM. In each case, our reported results are those of the model with the best validation performance. \end{document}
Recurrent Batch Normalization
1603.09025
Table 2: Bits-per-character on the Penn Treebank test sequence.
[ "[BOLD] Model", "[BOLD] Penn Treebank" ]
[ [ "LSTM (Graves, 2013 )", "1.26" ], [ "HF-MRNN (Mikolov et al., 2012 )", "1.41" ], [ "Norm-stabilized LSTM (Krueger & Memisevic, 2016 )", "1.39" ], [ "ME n-gram (Mikolov et al., 2012 )", "1.37" ], [ "LSTM (ours)", "1.38" ], [ "BN-LSTM (ours)", "1.32" ], [ "Zoneout (Krueger et al., 2016 )", "1.27" ], [ "HM-LSTM (Chung et al., 2016 )", "1.24" ], [ "HyperNetworks (Ha et al., 2016 )", "[BOLD] 1.22" ] ]
We show the learning curves; BN-LSTM converges faster and generalizes better than the LSTM baseline. We observe that using the population statistics improves generalization performance, which confirms that repeating the last population statistic is a viable strategy. Follow-up works have since improved the state of the art.
\documentclass{article} % For LaTeX2e \pdfoutput=1 % for arxiv to accept pdf figures \usepackage[utf8]{inputenc} \usepackage[T1]{fontenc} \usepackage[hidelinks]{hyperref} \title{Recurrent Batch Normalization} \author{ Tim Cooijmans, Nicolas Ballas, C\'esar Laurent, Çağlar Gülçehre \& Aaron Courville \\ MILA - Universit\'e de Montr\'eal \\ \texttt{firstname.lastname@umontreal.ca} \\ } \newcommand{\fix}{\marginpar{FIX}} \newcommand{\new}{\marginpar{NEW}} \newcommand{\vect}[1]{\mathbf{#1}} \newcommand{\mat}[1]{\mathbf{#1}} \newcommand{\act}{f} \newcommand{\ewprod}{\odot} \newcommand{\reals}{\mathbb{R}} \newcommand{\given}{\vert} \iclrfinalcopy \begin{document} \maketitle \begin{abstract} We propose a reparameterization of LSTM that brings the benefits of batch normalization to recurrent neural networks. Whereas previous works only apply batch normalization to the input-to-hidden transformation of RNNs, we demonstrate that it is both possible and beneficial to batch-normalize the hidden-to-hidden transition, thereby reducing internal covariate shift between time steps. We evaluate our proposal on various sequential problems such as sequence classification, language modeling and question answering. Our empirical results show that our batch-normalized LSTM consistently leads to faster convergence and improved generalization. \end{abstract} \section{Introduction} Recurrent neural network architectures such as LSTM~\citep{lstm} and GRU~\citep{cho2014learning} have recently exhibited state-of-the-art performance on a wide range of complex sequential problems including speech recognition~\cite{baidu}, machine translation~\citep{bahdanau2014neural} and image and video captioning~\citep{xu2015show,yao2015describing}. Top-performing models, however, are based on very high-capacity networks that are computationally intensive and costly to train. Effective optimization of recurrent neural networks is thus an active area of study~\citep{pascanudifficulty,hessianfree,ollivier}. It is well-known that for deep feed-forward neural networks, covariate shift~\citep{shimodaira2000improving,batchnorm} degrades the efficiency of training. Covariate shift is a change in the distribution of the inputs to a model. This occurs continuously during training of feed-forward neural networks, where changing the parameters of a layer affects the distribution of the inputs to all layers above it. As a result, the upper layers are continually adapting to the shifting input distribution and unable to learn effectively. This \emph{internal} covariate shift~\citep{batchnorm} may play an especially important role in recurrent neural networks, which resemble very deep feed-forward networks. Batch normalization~\citep{batchnorm} is a recently proposed technique for controlling the distributions of feed-forward neural network activations, thereby reducing internal covariate shift. It involves standardizing the activations going into each layer, enforcing their means and variances to be invariant to changes in the parameters of the underlying layers. This effectively decouples each layer's parameters from those of other layers, leading to a better-conditioned optimization problem. Indeed, deep neural networks trained with batch normalization converge significantly faster and generalize better. Although batch normalization has demonstrated significant training speed-ups and generalization benefits in feed-forward networks, it is proven to be difficult to apply in recurrent architectures~\citep{cesar,baidu}. 
It has found limited use in stacked RNNs, where the normalization is applied ``vertically'', i.e. to the input of each RNN, but not ``horizontally'' between timesteps. RNNs are deeper in the time direction, and as such batch normalization would be most beneficial when applied horizontally. However,~\citet{cesar} hypothesized that applying batch normalization in this way hurts training because of exploding gradients due to repeated rescaling. Our findings run counter to this hypothesis. We show that it is both possible and highly beneficial to apply batch normalization in the hidden-to-hidden transition of recurrent models. In particular, we describe a reparameterization of LSTM (Section~\ref{sec:recurrent-batch-normalization}) that involves batch normalization and demonstrate that it is easier to optimize and generalizes better. In addition, we empirically analyze the gradient backpropagation and show that proper initialization of the batch normalization parameters is crucial to avoiding vanishing gradient (Section~\ref{sec:activation-variance}). We evaluate our proposal on several sequential problems and show (Section~\ref{sec:experiments}) that our LSTM reparameterization consistently outperforms the LSTM baseline across tasks, in terms of both time to convergence and performance. \citet{liao2016bridging} simultaneously investigated batch normalization in recurrent neural networks, albeit only for very short sequences (10 steps). \citet{ba2016layer} independently developed a variant of batch normalization that is also applicable to recurrent neural networks and delivers similar improvements as our method. \section{Prerequisites} \label{sec:prerequisites} \subsection{LSTM} Long Short-Term Memory (LSTM) networks are an instance of a more general class of recurrent neural networks (RNNs), which we review briefly in this paper. Given an input sequence $\mat{X} = ( \vect{x}_1, \vect{x}_2, \ldots, \vect{x}_T )$, an RNN defines a sequence of hidden states $\vect{h}_t$ according to \begin{eqnarray} \vect{h}_t = \phi(\mat{W}_h \vect{h}_{t-1} + \mat{W}_x \vect{x}_t + \vect{b}), \end{eqnarray} where $\mat{W}_h \in \reals^{d_h \times d_h}, \mat{W}_x \in \reals^{d_x \times d_h}, \vect{b} \in \reals^{d_h}$ and the initial state $\vect{h}_0 \in \reals^{d_h}$ % I've put this back because I believe we do need to say something about the initial state -- tim are model parameters. A popular choice for the activation function $\phi(\ \cdot\ )$ is $\tanh$. RNNs are popular in sequence modeling thanks to their natural ability to process variable-length sequences. However, training RNNs using first-order stochastic gradient descent (SGD) is notoriously difficult due to the well-known problem of exploding/vanishing gradients~\citep{bengio1994learning,hochreiter1991untersuchungen,pascanudifficulty}. Gradient vanishing occurs when states $\vect{h}_t$ are not influenced by small changes in much earlier states $\vect{h}_{\tau}$, $\tau \ll t$, preventing learning of long-term dependencies in the input data. Although learning long-term dependencies is fundamentally difficult~\citep{bengio1994learning}, its effects can be mitigated through architectural variations such as LSTM~\citep{lstm}, GRU~\citep{cho2014learning} and $i$RNN/$u$RNN~\citep{le2015simple,urnn}. 
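As a point of reference before the LSTM, the simple RNN transition above is just an affine map followed by a pointwise nonlinearity. A minimal numpy sketch (shapes follow the text; the identity hidden-to-hidden initialization mirrors the sequential MNIST setup used later, and all names are illustrative):
\begin{verbatim}
import numpy as np

def rnn_step(x_t, h_prev, W_x, W_h, b):
    """One simple-RNN step: h_t = tanh(W_h h_{t-1} + W_x x_t + b).
    W_h: (d_h, d_h), W_x: (d_x, d_h), b: (d_h,)."""
    return np.tanh(h_prev @ W_h + x_t @ W_x + b)

# Unrolling over a sequence just threads the hidden state through time.
d_x, d_h, T = 1, 100, 784                      # e.g. one pixel per step
rng = np.random.default_rng(0)
W_x = rng.normal(0.0, 0.1, (d_x, d_h))
W_h = np.eye(d_h)                              # identity init for hidden-to-hidden
b, h = np.zeros(d_h), np.zeros(d_h)
for x_t in rng.normal(size=(T, d_x)):
    h = rnn_step(x_t, h, W_x, W_h, b)
\end{verbatim}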
In what follows, we focus on the LSTM architecture~\citep{lstm} with recurrent transition given by \begin{eqnarray} \left(\begin{array}{ccc} \tilde{\vect{f}}_t \\ \tilde{\vect{i}}_t \\ \tilde{\vect{o}}_t \\ \tilde{\vect{g}}_t \end{array}\right) &=& \mat{W}_h \vect{h}_{t-1} + \mat{W}_x \vect{x}_t + \vect{b} \\ \vect{c}_t &= &\sigma(\tilde{\vect{f}}_t) \ewprod \vect{c}_{t-1} + \sigma(\tilde{\vect{i}}_t) \ewprod \tanh(\tilde{\vect{g}}_t) \\ \vect{h}_t &= &\sigma(\tilde{\vect{o}}_t) \ewprod \tanh(\vect{c}_t), \end{eqnarray} where $\vect{W}_h \in \reals^{d_h \times 4 d_h}, \vect{W}_x \in \reals^{d_x \times 4 d_h}, \vect{b} \in \reals^{4 d_h}$ and the initial states $\vect{h}_0 \in \reals^{d_h}, \vect{c}_0 \in \reals^{d_h}$ % I've put this back because I believe we do need to say something about the initial state -- tim are model parameters. $\sigma$ is the logistic sigmoid function, and the $\ewprod$ operator denotes the Hadamard product. The LSTM differs from simple RNNs in that it has an additional memory \emph{cell} $\vect{c}_t$ whose update is nearly linear, which allows the gradient to flow back through time more easily. In addition, unlike the RNN which overwrites its content at each timestep, the update of the LSTM cell is regulated by a set of gates. The forget gate $\vect{f}_t$ determines the extent to which information is carried over from the previous timestep, and the input gate $\vect{i}_t$ controls the flow of information from the current input $\vect{x}_t$. The output gate $\vect{o}_t$ allows the model to read from the cell. This carefully controlled interaction with the cell is what allows the LSTM to robustly retain information for long periods of time. \subsection{Batch Normalization} \emph{Covariate shift}~\citep{shimodaira2000improving} is a phenomenon in machine learning where the features presented to a model change in distribution. In order for learning to succeed in the presence of covariate shift, the model's parameters must be adjusted not just to learn the concept at hand but also to adapt to the changing distribution of the inputs. In deep neural networks, this problem manifests as \emph{internal covariate shift}~\citep{batchnorm}, where changing the parameters of a layer affects the distribution of the inputs to all layers above it. Batch Normalization~\citep{batchnorm} is a recently proposed network reparameterization which aims to reduce internal covariate shift. It does so by standardizing the activations using empirical estimates of their means and standard deviations. However, it does not decorrelate the activations due to the computationally costly matrix inversion. The batch normalizing transform is as follows: \begin{align} \mathrm{BN}(\vect{h}; \gamma, \beta) = \beta + \gamma \ewprod \frac{\vect{h} - \widehat{\mathbb{E}}[\vect{h}]}{\sqrt{\widehat{\mathrm{Var}}[\vect{h}] + \epsilon}} \end{align} where $\vect{h} \in \reals^d$ is the vector of (pre)activations to be normalized, $\gamma \in \reals^d, \beta \in \reals^d$ are model parameters that determine the mean and standard deviation of the normalized activation, and $\epsilon \in \reals$ is a regularization hyperparameter. The division should be understood to proceed elementwise. At training time, the statistics $\mathbb{E}[\vect{h}]$ and $\mathrm{Var}[\vect{h}]$ are estimated by the sample mean and sample variance of the current minibatch. This allows for backpropagation through the statistics, preserving the convergence properties of stochastic gradient descent. 
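A minimal numpy rendering of the batch-normalizing transform above, using minibatch statistics as at training time; this is an illustrative sketch, and a real implementation would additionally track population statistics for inference.
\begin{verbatim}
import numpy as np

def batch_norm(h, gamma, beta, eps=1e-5):
    """BN(h; gamma, beta) for a minibatch h of shape (batch, d)."""
    mean = h.mean(axis=0)                       # empirical E[h], per feature
    var = h.var(axis=0)                         # empirical Var[h], per feature
    h_hat = (h - mean) / np.sqrt(var + eps)     # elementwise standardization
    return beta + gamma * h_hat                 # gamma/beta set the output std/mean

h = np.random.randn(64, 8) * 5.0 + 3.0
out = batch_norm(h, gamma=np.full(8, 0.1), beta=np.zeros(8))
print(out.mean(axis=0).round(3), out.std(axis=0).round(3))   # ~0 and ~0.1
\end{verbatim}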
During inference, the statistics are typically estimated based on the entire training set, so as to produce a deterministic prediction. \section{Batch-Normalized LSTM} \label{sec:recurrent-batch-normalization} This section introduces a reparameterization of LSTM that takes advantage of batch normalization. Contrary to~\citet{cesar, baidu}, we leverage batch normalization in both the input-to-hidden \emph{and} the hidden-to-hidden transformations. We introduce the batch-normalizing transform $\mathrm{BN}(\ \cdot\ ; \gamma, \beta)$ into the LSTM as follows: \begin{eqnarray} \left(\begin{array}{ccc} \tilde{\vect{f}}_t \\ \tilde{\vect{i}}_t \\ \tilde{\vect{o}}_t \\ \tilde{\vect{g}}_t \end{array}\right) &=& \mathrm{BN} (\mat{W}_h \vect{h}_{t-1}; \gamma_h, \beta_h) + \mathrm{BN} (\mat{W}_x \vect{x}_t ; \gamma_x, \beta_x) + \vect{b} \\ \vect{c}_t &=& \sigma(\tilde{\vect{f}}_t) \ewprod \vect{c}_{t-1} + \sigma(\tilde{\vect{i}}_t) \ewprod \tanh(\tilde{\vect{g}_t}) \\ \vect{h}_t &=& \sigma(\tilde{\vect{o}}_t) \ewprod \tanh( \mathrm{BN} (\vect{c}_t; \gamma_c, \beta_c) ) \end{eqnarray} In our formulation, we normalize the recurrent term $\mat{W}_h \vect{h}_{t-1}$ and the input term $\mat{W}_x \vect{x}_t$ separately. Normalizing these terms individually gives the model better control over the relative contribution of the terms using the $\gamma_h$ and $\gamma_x$ parameters. We set $\beta_h = \beta_x = \vect{0}$ to avoid unnecessary redundancy, instead relying on the pre-existing parameter vector $\vect{b}$ to account for both biases. In order to leave the LSTM dynamics intact and preserve the gradient flow through $\vect{c}_t$, we do not apply batch normalization in the cell update. The batch normalization transform relies on batch statistics to standardize the LSTM activations. It would seem natural to share the statistics that are used for normalization across time, just as recurrent neural networks share their parameters over time. However, we find that simply averaging statistics over time severely degrades performance. Although LSTM activations do converge to a stationary distribution, we observe that their statistics during the initial transient differ significantly (see Figure~\ref{fig:popstat_stationarity} in Appendix~\ref{sec:popstat_stationarity}). Consequently, we recommend using separate statistics for each timestep to preserve information of the initial transient phase in the activations.\footnote{ Note that we separate \emph{only} the statistics over time and not the $\gamma$ and $\beta$ parameters.} Generalizing the model to sequences longer than those seen during training is straightforward thanks to the rapid convergence of the activations to their steady-state distributions (cf. Figure~\ref{fig:popstat_stationarity}). For our experiments we estimate the population statistics separately for each timestep $1, \ldots, T_{max}$ where $T_{max}$ is the length of the longest training sequence. When at test time we need to generalize beyond $T_{max}$, we use the population statistic of time $T_{max}$ for all time steps beyond it. During training we estimate the statistics across the minibatch, independently for each timestep. At test time we use estimates obtained by averaging the minibatch estimates over the training set. \section{Initializing $\gamma$ for Gradient Flow} \label{sec:activation-variance} Although batch normalization allows for easy control of the pre-activation variance through the $\gamma$ parameters, common practice is to normalize to unit variance. 
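Putting the reparameterization above into code, the following is a minimal single-step sketch rather than the released implementation: the recurrent and input terms are normalized separately with their own $\gamma$ vectors, $\beta_h = \beta_x = 0$ so the bias $b$ absorbs both offsets, the cell update itself is left unnormalized, and the cell is normalized only before the output $\tanh$. Statistics are recomputed from the current minibatch at every call (training mode); an inference-time version would substitute the stored per-timestep population statistics, reusing those of $T_{max}$ beyond it. Scalar $\gamma$ values stand in for per-feature vectors and follow the recommended 0.1 initialization; the gate ordering inside the stacked weight matrix is an assumption of this illustration.
\begin{verbatim}
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bn(v, gamma, eps=1e-5):
    # Batch-normalizing transform with beta = 0 (the LSTM bias b plays that role).
    mean, var = v.mean(axis=0), v.var(axis=0)
    return gamma * (v - mean) / np.sqrt(var + eps)

def bn_lstm_step(x_t, h_prev, c_prev, W_x, W_h, b,
                 gamma_x=0.1, gamma_h=0.1, gamma_c=0.1, beta_c=0.0):
    """One BN-LSTM step. W_h: (d_h, 4*d_h), W_x: (d_x, 4*d_h), b: (4*d_h,)."""
    pre = bn(h_prev @ W_h, gamma_h) + bn(x_t @ W_x, gamma_x) + b
    f_t, i_t, o_t, g_t = np.split(pre, 4, axis=-1)             # assumed gate order
    c_t = sigmoid(f_t) * c_prev + sigmoid(i_t) * np.tanh(g_t)  # no BN in the cell update
    h_t = sigmoid(o_t) * np.tanh(bn(c_t, gamma_c) + beta_c)    # BN only before the output tanh
    return h_t, c_t
\end{verbatim}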
We suspect that the previous difficulties with recurrent batch normalization reported in~\citet{cesar,baidu} are largely due to improper initialization of the batch normalization parameters, and $\gamma$ in particular. In this section we demonstrate the impact of $\gamma$ on gradient flow. In Figure~\ref{fig:rnn_grad_prop}, we show how the pre-activation variance impacts gradient propagation in a simple RNN on the sequential MNIST task described in Section~\ref{sec:seqmnist}. Since backpropagation operates in reverse, the plot is best read from right to left. The quantity plotted is the norm of the gradient of the loss with respect to the hidden state at different time steps. For large values of $\gamma$, the norm quickly goes to zero as gradient is propagated back in time. For small values of $\gamma$ the norm is nearly constant. To demonstrate what we think is the cause of this vanishing, we drew samples $x$ from a set of centered Gaussian distributions with standard deviation ranging from 0 to 1, and computed the derivative $\tanh'(x) = 1 - \tanh^2(x) \in [0, 1]$ for each. Figure~\ref{fig:tanh_grad} shows the empirical distribution of the derivative as a function of standard deviation. When the input standard deviation is low, the input tends to be close to the origin where the derivative is close to 1. As the standard deviation increases, the expected derivative decreases as the input is more likely to be in the saturation regime. At unit standard deviation, the expected derivative is much smaller than 1. We conjecture that this is what causes the gradient to vanish, and recommend initializing $\gamma$ to a small value. In our trials we found that values of 0.01 or lower caused instabilities during training. Our choice of 0.1 seems to work well across different tasks. \section{Experiments} \label{sec:experiments} This section presents an empirical evaluation of the proposed batch-normalized LSTM on four different tasks. Note that for all the experiments, we initialize the batch normalization scale and shift parameters $\gamma$ and $\beta$ to $0.1$ and $0$ respectively. \subsection{Sequential MNIST} \label{sec:seqmnist} We evaluate our batch-normalized LSTM on a sequential version of the MNIST classification task~\citep{le2015simple}. The model processes each image one pixel at a time and finally predicts the label. We consider both sequential MNIST tasks, MNIST and permuted MNIST ($p$MNIST). In MNIST, the pixels are processed in scanline order. In $p$MNIST the pixels are processed in a fixed random order. Our baseline consists of an LSTM with 100 hidden units, with a softmax classifier to produce a prediction from the final hidden state. We use orthogonal initialization for all weight matrices, except for the hidden-to-hidden weight matrix which we initialize to be the identity matrix, as this yields better generalization performance on this task for both models. The model is trained using RMSProp~\citep{rmsprop} with learning rate of $10^{-3}$ and $0.9$ momentum. We apply gradient clipping at 1 to avoid exploding gradients. The in-order MNIST task poses a unique problem for our model: the input for the first hundred or so timesteps is constant across examples since the upper pixels are almost always black. This causes the variance of the hidden states to be exactly zero for a long period of time. 
Normalizing these zero-variance activations involves dividing zero by a small number at many timesteps, which does not affect the forward-propagated activations but causes the back-propagated gradient to explode. We work around this by adding Gaussian noise to the initial hidden states. Although the normalization amplifies the noise to signal level, we find that it does not hurt performance compared to data-dependent ways of initializing the hidden states. In Figure~\ref{fig:seqmnist_valid} we show the validation accuracy while training for both LSTM and batch-normalized LSTM (BN-LSTM). BN-LSTM converges faster than LSTM on both tasks. Additionally, we observe that BN-LSTM generalizes significantly better on $p$MNIST. It has been highlighted in~\cite{urnn} that $p$MNIST contains many longer term dependencies across pixels than in the original pixel ordering, where a lot of structure is local. A recurrent network therefore needs to characterize dependencies across varying time scales in order to solve this task. Our results suggest that BN-LSTM is better able to capture these long-term dependencies. Table~\ref{tab:seqmnist_test} reports the test set accuracy of the early-stopped model for LSTM and BN-LSTM using the population statistics. Recurrent batch normalization leads to a better test score, especially for $p$MNIST where models have to leverage long-term temporal dependencies. In addition, Table~\ref{tab:seqmnist_test} shows that our batch-normalized LSTM achieves state of the art on both MNIST and $p$MNIST. \subsection{Character-level Penn Treebank} We evaluate our model on the task of character-level language modeling on the Penn Treebank corpus~\citep{penntreebank} according to the train/valid/test partition of~\citet{mikolov2012subword}. For training, we segment the training sequence into examples of length 100. The training sequence does not cleanly divide by 100, so for each epoch we randomly crop a subsequence that does and segment that instead. Our baseline is an LSTM with 1000 units, trained to predict the next character using a softmax classifier on the hidden state $\vect{h}_t$. We use stochastic gradient descent on minibatches of size 64, with gradient clipping at 1.0 and step rule determined by Adam~\citep{kingma2014adam} with learning rate 0.002. We use orthogonal initialization for all weight matrices. The setup for the batch-normalized LSTM is the same in all respects except for the introduction of batch normalization as detailed in Section~\ref{sec:recurrent-batch-normalization}. We show the learning curves in Figure~\ref{fig:ptb_valid}. BN-LSTM converges faster and generalizes better than the LSTM baseline. Figure~\ref{fig:ptb_lengths} shows the generalization of our model to longer sequences. We observe that using the population statistics improves generalization performance, which confirms that repeating the last population statistic (cf. Section~\ref{sec:recurrent-batch-normalization}) is a viable strategy. In Table~\ref{tab:ptb_test} we report the performance of our best models (early-stopped on validation performance) on the Penn Treebank test sequence. Follow-up works have since improved the state of the art~\citep{krueger2016zoneout,chung2016hierarchical,ha2016hypernetworks}. \subsection{Text8} We evaluate our model on a second character-level language modeling task on the much larger text8 dataset~\citep{mahoney2009large}. This dataset is derived from Wikipedia and consists of a sequence of 100M characters including only alphabetical characters and spaces. 
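As an aside on data preparation, the per-epoch random crop used above for Penn Treebank amounts to a few lines; the sketch below is one illustrative reading of that procedure (the character-to-id encoding is assumed), and text8 is segmented analogously into non-overlapping length-180 examples.
\begin{verbatim}
import numpy as np

def segment_epoch(char_ids, seq_len=100, rng=np.random.default_rng()):
    """Randomly crop the training stream so it divides evenly by seq_len,
    then split it into non-overlapping examples of length seq_len."""
    n_examples = (len(char_ids) - 1) // seq_len   # leave slack so the crop can move
    usable = n_examples * seq_len
    start = rng.integers(0, len(char_ids) - usable + 1)
    return char_ids[start:start + usable].reshape(n_examples, seq_len)

toy = np.arange(1003)                  # a toy stream of 1003 character ids
print(segment_epoch(toy).shape)        # (10, 100)
\end{verbatim}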
We follow~\citet{mikolov2012subword,zhang2016architectural} and use the first 90M characters for training, the next 5M for validation and the final 5M characters for testing. We train on nonoverlapping sequences of length 180. Both our baseline and batch-normalized models are LSTMs with 2000 units, trained to predict the next character using a softmax classifier on the hidden state $\vect{h}_t$. We use stochastic gradient descent on minibatches of size 128, with gradient clipping at 1.0 and step rule determined by Adam~\citep{kingma2014adam} with learning rate 0.001. All weight matrices were initialized to be orthogonal. We early-stop on validation performance and report the test performance of the resulting model in table~\ref{tab:text8_test}. We observe that BN-LSTM obtains a significant performance improvement over the LSTM baseline. \citet{chung2016hierarchical} has since improved on our performance. \subsection{Teaching Machines to Read and Comprehend} \label{sec:less-attr} Recently,~\citet{attentivereader} introduced a set of challenging benchmarks for natural language processing, along with neural network architectures to address them. The tasks involve reading real news articles and answering questions about their content. Their principal model, the Attentive Reader, is a recurrent neural network that invokes an attention mechanism to locate relevant information in the document. Such models are notoriously hard to optimize and yet increasingly popular. To demonstrate the generality and practical applicability of our proposal, we apply batch normalization in the Attentive Reader model and show that this drastically improves training. We evaluate several variants. The first variant, referred to as BN-LSTM, consists of the vanilla Attentive Reader model with the LSTM simply replaced by our BN-LSTM reparameterization. The second variant, termed BN-everywhere, is exactly like the first, except that we also introduce batch normalization into the attention computations, normalizing each term going into the $\tanh$ nonlinearities. Our third variant, BN-e*, is like BN-everywhere, but improved to more carefully handle variable-length sequences. Throughout this experiment we followed the common practice of padding each batch of variable-length data with zeros. However, this biases the batch mean and variance of $\vect{x}_t$ toward zero. We address this effect using \emph{sequencewise} normalization of the inputs as proposed by~\citet{cesar,baidu}. That is, we share statistics over time for normalization of the input terms $\mat{W}_x \vect{x}_t$, but \emph{not} for the recurrent terms $\mat{W}_h \vect{h}_t$ or the cell output $\vect{c}_t$. Doing so avoids many issues involving degenerate statistics due to input sequence padding. Our fourth and final variant BN-e** is like BN-e* but bidirectional. The main difficulty in adapting to bidirectional models also involves padding. Padding poses no problem as long as it is properly ignored (by not updating the hidden states based on padded regions of the input). However to perform the reverse application of a bidirectional model, it is common to simply reverse the padded sequences, thus moving the padding to the front. This causes similar problems as were observed on the sequential MNIST task (Section~\ref{sec:seqmnist}): the hidden states will not diverge during the initial timesteps and hence their variance will be severely underestimated. To get around this, we reverse only the unpadded portion of the input sequences and leave the padding in place. 
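The padding-aware reversal is compact to express. Below is an illustrative numpy sketch (the zero-padding-at-the-end layout and all names are assumptions) that reverses only the first $\ell_i$ tokens of each row and leaves the trailing padding where it is.
\begin{verbatim}
import numpy as np

def reverse_unpadded(batch, lengths):
    """batch: (B, T) token ids, zero-padded at the end; lengths: true lengths."""
    out = batch.copy()
    for i, n in enumerate(lengths):
        out[i, :n] = batch[i, :n][::-1]   # reverse the content, keep the padding in place
    return out

x = np.array([[1, 2, 3, 0, 0],
              [4, 5, 6, 7, 8]])
print(reverse_unpadded(x, [3, 5]))
# [[3 2 1 0 0]
#  [8 7 6 5 4]]
\end{verbatim}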
See Appendix~\ref{sec:more-attr} for hyperparameters and task details. Figure~\ref{fig:attr_valid2} shows the learning curves for the different variants of the attentive reader. BN-LSTM trains dramatically faster than the LSTM baseline. BN-everywhere in turn shows a significant improvement over BN-LSTM. In addition, both BN-LSTM and BN-everywhere show a generalization benefit over the baseline. The validation curves have minima of 50.3\%, 49.5\% and 50.0\% for the baseline, BN-LSTM and BN-everywhere respectively. We emphasize that these results were obtained without any tweaking -- all we did was to introduce batch normalization. BN-e* and BN-e** converge faster yet, and reach lower minima: 47.1\% and 43.9\% respectively. We train and evaluate our best model, BN-e**, on the full task from~\citep{attentivereader}. On this dataset we had to reduce the number of hidden units to 120 to avoid severe overfitting. Training curves for BN-e** and a vanilla LSTM are shown in Figure~\ref{fig:attr_full_valid}. Table~\ref{tab:attr_full} reports performances of the early-stopped models. \section{Conclusion} Contrary to previous findings by~\citet{cesar,baidu}, we have demonstrated that batch-normalizing the hidden states of recurrent neural networks greatly improves optimization. Indeed, doing so yields benefits similar to those of batch normalization in feed-forward neural networks: our proposed BN-LSTM trains faster and generalizes better on a variety of tasks including language modeling and question-answering. We have argued that proper initialization of the batch normalization parameters is crucial, and suggest that previous difficulties~\citep{cesar, baidu} were due in large part to improper initialization. Finally, we have shown our model to apply to complex settings involving variable-length data, bidirectionality and highly nonlinear attention mechanisms. \section*{Acknowledgements} The authors would like to acknowledge the following agencies for research funding and computing support: the Nuance Foundation, Samsung, NSERC, Calcul Qu\'{e}bec, Compute Canada, the Canada Research Chairs and CIFAR. Experiments were carried out using the Theano~\citep{theano} and the Blocks and Fuel~\citep{blocks} libraries for scientific computing. We thank David Krueger, Saizheng Zhang, Ishmael Belghazi and Yoshua Bengio for discussions and suggestions. \bibliographystyle{iclr2017_conference} \newpage \appendix \section{Convergence of population statistics} \label{sec:popstat_stationarity} \section{Sensitivity to initialization of $\gamma$} In Section~\ref{sec:activation-variance} we investigated the effect of initial $\gamma$ on gradient flow. To show the practical implications of this, we performed several experiments on the $p$MNIST and Penn Treebank benchmarks. The resulting performances are shown in Figure~\ref{fig:gammas}. The $p$MNIST training curves confirm that higher initial values of $\gamma$ are detrimental to the optimization of the model. For the Penn Treebank task however, the effect is gone. We believe this is explained by the difference in the nature of the two tasks. For $p$MNIST, the model absorbs the input sequence and only at the end of the sequence does it make a prediction on which it receives feedback. Learning from this feedback requires propagating the gradient all the way back through the sequence. In the Penn Treebank task on the other hand, the model makes a prediction at each timestep. At each step of the backward pass, a fresh learning signal is added to the backpropagated gradient. 
Essentially, the model is able to get off the ground by picking up short-term dependencies. This fails on $p$MNIST, which is dominated by long-term dependencies~\citep{urnn}. \section{Teaching Machines to Read and Comprehend: Task setup} \label{sec:more-attr} We evaluate the models on the question answering task using the CNN corpus~\citep{attentivereader}, with placeholders for the named entities. We follow a similar preprocessing pipeline as~\citet{attentivereader}. During training, we randomly sample the examples with replacement and shuffle the order of the placeholders in each text inside the minibatch. We use a vocabulary of 65829 words. We deviate from~\citet{attentivereader} in order to save computation: we use only the 4 most relevant sentences from the description, as identified by a string matching procedure. Both the training and validation sets are preprocessed in this way. Due to imprecision, this heuristic sometimes strips the answers from the passage, putting an upper bound of 57\% on the validation accuracy that can be achieved. For the reported performances, the first three models (LSTM, BN-LSTM and BN-everywhere) are trained using the exact same hyperparameters, which were chosen because they work well for the baseline. The hidden state is composed of 240 units. We use stochastic gradient descent on minibatches of size 64, with gradient clipping at 10 and step rule determined by Adam~\citep{kingma2014adam} with learning rate $8 \times 10^{-5}$. For BN-e* and BN-e**, we use the same hyperparameters except that we reduce the learning rate to $8 \times 10^{-4}$ and the minibatch size to 40. \section{Hyperparameter Searches} Table~\ref{tab:hyperparams} reports hyperparameter values that were tried in the experiments. For MNIST and $p$MNIST, the hyperparameters were varied independently. For Penn Treebank, we performed a full grid search on learning rate and hidden state size, and later performed a sensitivity analysis on the batch size and initial $\gamma$. For the text8 task and the experiments with the Attentive Reader, we carried out a grid search on the learning rate and hidden state size. The same values were tried for both the baseline and our BN-LSTM. In each case, our reported results are those of the model with the best validation performance. \end{document}
Hierarchical Memory Networks
1605.07427
Figure 1: Accuracy on the SQ test set and average size of memory used. 10-softmax achieves high performance while using only a small amount of memory.
[ "Model", "Test Acc.", "Avg. Softmax Size" ]
[ [ "Full-softmax", "59.5", "108442" ], [ "10-MIPS", "[BOLD] 62.2", "[BOLD] 1290" ], [ "50-MIPS", "61.2", "6180" ], [ "100-MIPS", "60.6", "11928" ], [ "1000-MIPS", "59.6", "70941" ], [ "Clustering", "51.5", "20006" ], [ "PCA-Tree", "32.4", "21108" ], [ "WTA-Hash", "40.2", "20008" ] ]
In this section, we compare the performance of the full soft attention reader and exact K-MIPS attention readers. Our goal is to verify that K-MIPS attention is in fact a valid and useful attention mechanism and see how it fares when compared to full soft attention. For K-MIPS attention, we tried K ∈ {10, 50, 100, 1000}. We would like to emphasize that, at training time, along with the K candidates for a particular question, we also add the K candidates for each question in the mini-batch. So the exact size of the softmax layer would be higher than K during training. We also report the average softmax size during training. From the table, it is clear that the K-MIPS attention readers improve the performance of the network compared to the soft attention reader. In fact, the smaller the value of K, the better the performance. This result suggests that it is better to use a K-MIPS layer instead of a softmax layer whenever possible. For a fair comparison, we varied the hyper-parameters of each algorithm in such a way that the average speedup is approximately the same. From the table, it is clear that the clustering-based method performs significantly better than the other two methods. However, their performance is lower than that of the full softmax.
\documentclass{article} \PassOptionsToPackage{numbers, compress}{natbib} \usepackage[final]{nips_2016} \usepackage[utf8]{inputenc} % allow utf-8 input \usepackage[T1]{fontenc} % use 8-bit T1 fonts % hyperlinks % simple URL typesetting % professional-quality tables % blackboard math symbols % compact symbols for 1/2, etc. % microtypography \newfloatcommand{capbtabbox}{table}[][\FBwidth] \title{Hierarchical Memory Networks} \author{ Sarath Chandar\thanks{Corresponding author: apsarathchandar@gmail.com}~~$^{1}$, Sungjin Ahn$^{1}$, Hugo Larochelle$^{2,4}$, Pascal Vincent$^{1,4}$,\\ \textbf{Gerald Tesauro$^{3}$, Yoshua Bengio$^{1,4}$}\\~\\ $^{1}$~Universit{\'e} de Montr{\'e}al, Canada.\\ $^{2}$~Twitter Cortex, USA.\\ $^{3}$~IBM Watson Research Center, USA.\\ $^{4}$~CIFAR, Canada. } \newcommand{\argmin}{\mathrm{argmin}} \newcommand{\argmax}{\mathrm{argmax}} \newcommand{\cmt}[1]{\text{\textcolor{blue}{[#1]}}} \begin{document} \maketitle \begin{abstract} Memory networks are neural networks with an explicit memory component that can be both read and written to by the network. The memory is often addressed in a soft way using a softmax function, making end-to-end training with backpropagation possible. However, this is not computationally scalable for applications which require the network to read from extremely large memories. On the other hand, it is well known that hard attention mechanisms based on reinforcement learning are challenging to train successfully. In this paper, we explore a form of hierarchical memory network, which can be considered as a hybrid between hard and soft attention memory networks. The memory is organized in a hierarchical structure such that reading from it is done with less computation than soft attention over a flat memory, while also being easier to train than hard attention over a flat memory. Specifically, we propose to incorporate Maximum Inner Product Search (MIPS) in the training and inference procedures for our hierarchical memory network. We explore the use of various state-of-the art approximate MIPS techniques and report results on SimpleQuestions, a challenging large scale factoid question answering task. \end{abstract} \section{Introduction} Until recently, traditional machine learning approaches for challenging tasks such as image captioning, object detection, or machine translation have consisted in complex pipelines of algorithms, each being separately tuned for better performance. With the recent success of neural networks and deep learning research, it has now become possible to train a single model end-to-end, using backpropagation. Such end-to-end systems often outperform traditional approaches, since the entire model is directly optimized with respect to the final task at hand. However, simple encode-decode style neural networks often underperform on knowledge-based reasoning tasks like question-answering or dialog systems. Indeed, in such cases it is nearly impossible for regular neural networks to store all the necessary knowledge in their parameters. Neural networks with memory~\cite{graves2014neural,weston2014memory} can deal with knowledge bases by having an external memory component which can be used to explicitly store knowledge. The memory is accessed by reader and writer functions, which are both made differentiable so that the entire architecture (neural network, reader, writer and memory components) can be trained end-to-end using backpropagation. 
Memory-based architectures can also be considered as generalizations of RNNs and LSTMs, where the memory is analogous to recurrent hidden states. However they are much richer in structure and can handle very long-term dependencies because once a vector (i.e., a memory) is stored, it is copied from time step to time step and can thus stay there for a very long time (and gradients correspondingly flow back time unhampered). There exists several variants of neural networks with a memory component: Memory Networks~\cite{weston2014memory}, Neural Turing Machines (NTM)~\cite{graves2014neural}, Dynamic Memory Networks (DMN)~\cite{dmn1}. They all share five major components: memory, input module, reader, writer, and output module. \textbf{Memory:} The memory is an array of cells, each capable of storing a vector. The memory is often initialized with external data (e.g.\ a database of facts), by filling in its cells with a pre-trained vector representations of that data. \textbf{Input module:} The input module is to compute a representation of the input that can be used by other modules. \textbf{Writer:} The writer takes the input representation and updates the memory based on it. The writer can be as simple as filling the slots in the memory with input vectors in a sequential way (as often done in memory networks). If the memory is bounded, instead of sequential writing, the writer has to decide where to write and when to rewrite cells (as often done in NTMs). \textbf{Reader:} Given an input and the current state of the memory, the reader retrieves content from the memory, which will then be used by an output module. This often requires comparing the input's representation or a function of the recurrent state with memory cells using some scoring function such as a dot product. \textbf{Output module:} Given the content retrieved by the reader, the output module generates a prediction, which often takes the form of a conditional distribution over multiple labels for the output. For the rest of the paper, we will use the name \textit{memory network} to describe any model which has any form of these five components. We would like to highlight that all the components except the memory are learnable. Depending on the application, any of these components can also be fixed. In this paper, we will focus on the situation where a network does not write and only reads from the memory. In this paper, we focus on the application of memory networks to large-scale tasks. Specifically, we focus on large scale factoid question answering. For this problem, given a large set of facts and a natural language question, the goal of the system is to answer the question by retrieving the supporting fact for that question, from which the answer can be derived. Application of memory networks to this task has been studied in \cite{bordes2015large}. However, \cite{bordes2015large} depended on keyword based heuristics to filter the facts to a smaller set which is manageable for training. However heuristics are invariably dataset dependent and we are interested in a more general solution which can be used when the facts are of any structure. One can design soft attention retrieval mechanisms, where a convex combination of all the cells is retrieved or design hard attention retrieval mechanisms where one or few cells from the memory are retrieved. Soft attention is achieved by using softmax over the memory which makes the reader differentiable and hence learning can be done using gradient descent. 
Hard attention is achieved by using methods like REINFORCE~\cite{williams92}, which provides a noisy gradient estimate when discrete stochastic decisions are made by a model. Both soft attention and hard attention have limitations. As the size of the memory grows, soft attention using softmax weighting is not scalable. It is computationally very expensive, since its complexity is linear in the size of the memory. Also, at initialization, gradients are dispersed so much that it can reduce the effectiveness of gradient descent. These problems can be alleviated by a hard attention mechanism, for which the training method of choice is REINFORCE. However, REINFORCE can be brittle due to its high variance and existing variance reduction techniques are complex. Thus, it is rarely used in memory networks (even in cases of a small memory). In this paper, we propose a new memory selection mechanism based on Maximum Inner Product Search (MIPS) which is both scalable and easy to train. This can be considered as a hybrid of soft and hard attention mechanisms. The key idea is to structure the memory in a hierarchical way such that it is easy to perform MIPS, hence the name Hierarchical Memory Network (HMN). HMNs are scalable at both training and inference time. The main contributions of the paper are as follows: \begin{itemize} \item We explore hierarchical memory networks, where the memory is organized in a hierarchical fashion, which allows the reader to efficiently access only a subset of the memory. \item While there are several ways to decide which subset to access, we propose to pose memory access as a maximum inner product search (MIPS) problem. \item We empirically show that exact MIPS-based algorithms not only enjoy similar convergence as soft attention models, but can even improve the performance of the memory network. \item Since exact MIPS is as computationally expensive as a full soft attention model, we propose to train the memory networks using approximate MIPS techniques for scalable memory access. \item We empirically show that unlike exact MIPS, approximate MIPS algorithms provide a speedup and scalability of training, though at the cost of some performance. \end{itemize} \section{Hierarchical Memory Networks} In this section, we describe the proposed Hierarchical Memory Network (HMN). In this paper, HMNs only differ from regular memory networks in two of its components: the memory and the reader. \textbf{Memory:} Instead of a flat array of cells for the memory structure, HMNs leverages a hierarchical memory structure. Memory cells are organized into groups and the groups can further be organized into higher level groups. The choice for the memory structure is tightly coupled with the choice of reader, which is essential for fast memory access. We consider three classes of approaches for the memory's structure: hashing-based approaches, tree-based approaches, and clustering-based approaches. This is explained in detail in the next section. \textbf{Reader:} The reader in the HMN is different from the readers in flat memory networks. Flat memory-based readers use either soft attention over the entire memory or hard attention that retrieves a single cell. While these mechanisms might work with small memories, with HMNs we are more interested in achieving scalability towards very large memories. So instead, HMN readers use soft attention only over a selected subset of the memory. 
Selecting memory subsets is guided by a maximum inner product search algorithm, which can exploit the hierarchical structure of the organized memory to retrieve the most relevant facts in sub-linear time. The MIPS-based reader is explained in more detail in the next section. In HMNs, the reader is thus trained to create MIPS queries such that it can retrieve a sufficient set of facts. While most of the standard applications of MIPS~\cite{Ram:2012,xbox,Shrivastava014} so far have focused on settings where both query vector and database (memory) vectors are precomputed and fixed, memory readers in HMNs are learning to do MIPS by updating the input representation such that the result of MIPS retrieval contains the correct fact(s). \section{Memory Reader with $K$-MIPS attention} In this section, we describe how the HMN memory reader uses Maximum Inner Product Search (MIPS) during learning and inference. We begin with a formal definition of $K$-MIPS. Given a set of points $\mathcal{X}=\{x_1, \dots, x_n\}$ and a query vector $q$, our goal is to find \begin{equation} \argmax^{(K)}_{i \in \mathcal{X}} ~~ q^\top x_i \end{equation} where the $\argmax^{(K)}$ returns the indices of the top-$K$ maximum values. In the case of HMNs, $\mathcal{X}$ corresponds to the memory and $q$ corresponds to the vector computed by the input module. A simple but inefficient solution for $K$-MIPS involves a linear search over the cells in memory by performing the dot product of $q$ with all the memory cells. While this will return the exact result for $K$-MIPS, it is too costly to perform when we deal with a large-scale memory. However, in many practical applications, it is often sufficient to have an approximate result for $K$-MIPS, trading speed-up at the cost of the accuracy. There exist several approximate $K$-MIPS solutions in the literature~\cite{Shrivastava014,shrivastava2014improved,xbox,symmetric}. All the approximate $K$-MIPS solutions add a form of hierarchical structure to the memory and visit only a subset of the memory cells to find the maximum inner product for a given query. Hashing-based approaches~\cite{Shrivastava014, shrivastava2014improved, symmetric} hash cells into multiple bins, and given a query they search for $K$-MIPS cell vectors only in bins that are close to the bin associated with the query. Tree-based approaches~\cite{Ram:2012,xbox} create search trees with cells in the leaves of the tree. Given a query, a path in the tree is followed and MIPS is performed only for the leaf for the chosen path. Clustering-based approaches~\cite{cluMIPS} cluster cells into multiple clusters (or a hierarchy of clusters) and given a query, they perform MIPS on the centroids of the top few clusters. We refer the readers to \cite{cluMIPS} for an extensive comparison of various state-of-the-art approaches for approximate $K$-MIPS. Our proposal is to exploit this rich approximate $K$-MIPS literature to achieve scalable training and inference in HMNs. Instead of filtering the memory with heuristics, we propose to organize the memory based on approximate $K$-MIPS algorithms and then train the reader to learn to perform MIPS. Specifically, consider the following softmax over the memory which the reader has to perform for every reading step to retrieve a set of relevant candidates: \begin{equation} R_{out} = \mathrm{softmax}(h(q)M^T) \end{equation} where $h(q) \in \mathbb{R}^d$ is the representation of the query, $M \in \mathbb{R}^{N\times d}$ is the memory with $N$ being the total number of cells in the memory. 
We propose to replace this softmax with $\mathrm{softmax}^{(K)}$ which is defined as follows: \begin{equation} C = \argmax^{(K)} ~~ h(q)M ^T \label{q1} \end{equation} \begin{equation} R_{out} = \mathrm{softmax}^{(K)}(h(q)M^T) = \mathrm{softmax}(h(q)M[C]^T) \label{q2} \end{equation} where $C$ is the indices of top-$K$ MIP candidate cells and $M[C]$ is a sub-matrix of $M$ where the rows are indexed by $C$. One advantage of using the $\mathrm{softmax}^{(K)}$ is that it naturally focuses on cells that would normally receive the strongest gradients during learning. That is, in a full softmax, the gradients are otherwise more dispersed across cells, given the large number of cells and despite many contributing a small gradient. As our experiments will show, this results in slower training. One problematic situation when learning with the $\mathrm{softmax}^{(K)}$ is when we are at the initial stages of training and the $K$-MIPS reader is not including the correct fact candidate. To avoid this issue, we always include the correct candidate to the top-$K$ candidates retrieved by the $K$-MIPS algorithm, effectively performing a fully supervised form of learning. During training, the reader is updated by backpropagation from the output module, through the subset of memory cells. Additionally, the log-likelihood of the correct fact computed using $K$-softmax is also maximized. This second supervision helps the reader learn to modify the query such that the maximum inner product of the query with respect to the memory will yield the correct supporting fact in the top $K$ candidate set. Until now, we described the exact $K$-MIPS-based learning framework, which still requires a linear look-up over all memory cells and would be prohibitive for large-scale memories. In such scenarios, we can replace the exact $K$-MIPS in the training procedure with the approximate $K$-MIPS. This is achieved by deploying a suitable memory hierarchical structure. The same approximate $K$-MIPS-based reader can be used during inference stage as well. Of course, approximate $K$-MIPS algorithms might not return the exact MIPS candidates and will likely to hurt performance, but at the benefit of achieving scalability. While the memory representation is fixed in this paper, updating the memory along with the query representation should improve the likelihood of choosing the correct fact. However, updating the memory will reduce the precision of the approximate $K$-MIPS algorithms, since all of them assume that the vectors in the memory are static. Designing efficient dynamic $K$-MIPS should improve the performance of HMNs even further, a challenge that we hope to address in future work. \subsection{Reader with Clustering-based approximate $K$-MIPS} \label{sec:clusterkmips} Clustering-based approximate $K$-MIPS was proposed in \cite{cluMIPS} and it has been shown to outperform various other state-of-the-art data dependent and data independent approximate $K$-MIPS approaches for inference tasks. As we will show in the experiments section, clustering-based MIPS also performs better when used to training HMNs. Hence, we focus our presentation on the clustering-based approach and propose changes that were found to be helpful for learning HMNs. 
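Before moving on to the approximate readers, the exact $K$-MIPS attention ($\mathrm{softmax}^{(K)}$) defined above can be sketched in a few lines. This is an illustration rather than the authors' code; the index of the correct supporting fact is assumed to be available at training time and is forced into the candidate set, as described above.
\begin{verbatim}
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def k_mips_read(q, M, K, correct_idx=None):
    """q: (d,) query h(q); M: (N, d) memory. Returns candidate indices C and
    the softmax^(K) attention weights restricted to those candidates."""
    scores = M @ q                             # exact inner products with all cells
    C = np.argpartition(-scores, K)[:K]        # top-K MIPS candidates (unordered)
    if correct_idx is not None and correct_idx not in C:
        C[-1] = correct_idx                    # always keep the true supporting fact
    return C, softmax(scores[C])
\end{verbatim}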
Following most of the other approximate $K$-MIPS algorithms, \cite{cluMIPS} converts MIPS to Maximum Cosine Similarity Search (MCSS) problem: \begin{equation} \argmax^{(K)}_{i \in \mathcal{X}} ~~ \frac{q^T x_i}{||q||~~ ||x_i||} = \argmax^{(K)}_{i \in \mathcal{X}} ~~ \frac{q^T x_i}{||x_i||} \end{equation} When all the data vectors $x_i$ have the same norm, then MCSS is equivalent to MIPS. However, it is often restrictive to have this additional constraint. Instead, \cite{cluMIPS} appends additional dimensions to both query and data vectors to convert MIPS to MCSS. In HMN terminology, this would correspond to adding a few more dimensions to the memory cells and input representations. The algorithm introduces two hyper-parameters, $U < 1$ and $m \in \mathbb{N}^{*}$. The first step is to scale all the vectors in the memory by the same factor, such that $\max_i ||x_i||_2 = U$. We then apply two mappings, $P$ and $Q$, on the memory cells and on the input vector, respectively. These two mappings simply concatenate $m$ new components to the vectors and make the norms of the data points all roughly the same~\cite{shrivastava2014improved}. The mappings are defined as follows: \begin{eqnarray} P(x) & = & [x, 1/2 - ||x||_2^2, 1/2 - ||x||_2^4, \dots, 1/2 - ||x||_2^{2^m}] \\ Q(x) & = & [x, 0, 0, \dots, 0] \end{eqnarray} We thus have the following approximation of MIPS by MCSS for any query vector $q$: \begin{eqnarray} \argmax^{(K)}_i q^\top x_i & \simeq & \argmax^{(K)}_i \frac{Q(q)^\top P(x_i)}{||Q(q)||_2 \cdot ||P(x_i)||_2} \end{eqnarray} Once we convert MIPS to MCSS, we can use spherical $K$-means \cite{zhong2005efficient} or its hierarchical version to approximate and speedup the cosine similarity search. Once the memory is clustered, then every read operation requires only $K$ dot-products, where $K$ is the number of cluster centroids. Since this is an approximation, it is error-prone. As we are using this approximation for the learning process, this introduces some bias in gradients, which can affect the overall performance of HMN. To alleviate this bias, we propose three simple strategies. \begin{itemize} \item Instead of using only the top-$K$ candidates for a single read query, we also add top-$K$ candidates retrieved for every other read query in the mini-batch. This serves two purposes. First, we can do efficient matrix multiplications by leveraging GPUs since all the $K$-softmax in a minibatch are over the same set of elements. Second, this also helps to decrease the bias introduced by the approximation error. \item For every read access, instead of only using the top few clusters which has a maximum product with the read query, we also sample some clusters from the rest, based on a probability distribution log-proportional to the dot product with the cluster centroids. This also decreases the bias. \item We can also sample random blocks of memory and add it to top-$K$ candidates. \end{itemize} We empirically investigate the effect of these variations in Section~\ref{sec:expapproxkmips}. \section{Related Work} Memory networks have been introduced in \cite{weston2014memory} and have been so far applied to comprehension-based question answering \cite{weston2015towards,sukhbaatarend}, large scale question answering \cite{bordes2015large} and dialogue systems \cite{dodge2015}. 
While \cite{weston2014memory} considered supervised memory networks in which the correct supporting fact is given during the training stage, \cite{sukhbaatarend} introduced semi-supervised memory networks that can learn the supporting fact by themselves. \cite{dmn1,dmn2} introduced Dynamic Memory Networks (DMNs), which can be considered as memory networks with two types of memory: a regular large memory and an episodic memory. Another related class of models is the Neural Turing Machine~\cite{graves2014neural}, which uses softmax-based soft attention. Later, \cite{rlntm} extended the NTM to hard attention using reinforcement learning. \cite{dodge2015,bordes2015large} alleviate the problem of the scalability of soft attention by having an initial keyword-based filtering stage, which reduces the number of facts being considered. Our work generalizes this filtering step by using MIPS. This is desirable because MIPS can be applied to any modality of data, or even when there is no overlap between the words in a question and the words in facts. The softmax arises in various situations, and most relevant to this work are scaling methods for large-vocabulary neural language modeling. In neural language modeling, the final layer is a softmax distribution over the next word, and there exist several approaches to achieve scalability. \cite{Morin+al-2005} proposes a hierarchical softmax based on a prior clustering of the words into a binary, or more generally $n$-ary, tree that serves as a fixed structure for the learning process of the model. The complexity of training is reduced from $O(n)$ to $O(\log n)$. Due to its clustering and tree structure, it resembles the clustering-based MIPS techniques we explore in this paper. However, the approaches differ at a fundamental level. Hierarchical softmax defines the probability of a leaf node as the product of all the probabilities computed by the intermediate softmaxes on the way to that leaf node. By contrast, an approximate MIPS search imposes no such constraining structure on the probabilistic model, and is better thought of as efficiently searching for the top winners of what amounts to a large ordinary flat softmax. Other methods, such as Noise Contrastive Estimation~\cite{mnih2014neural} and Negative Sampling~\cite{MikolovT2013}, avoid an expensive normalization constant by sampling negative examples from some marginal distribution. By contrast, our approach approximates the softmax by explicitly including in its negative samples the candidates that would likely have a large softmax value. \cite{largev} introduces an importance sampling approach that considers all the words in a mini-batch as the candidate set. This in general might also not include the MIPS candidates with the highest softmax values. \cite{dlhash} is the only work we know of that proposes to use MIPS during learning. It proposes hashing-based MIPS to sort the hidden layer activations and reduce the computation in every layer. However, only a small-scale application was considered, and data-independent methods like hashing will likely suffer as dimensionality increases. \section{Experiments} In this section, we report experiments on factoid question answering using hierarchical memory networks. Specifically, we use the SimpleQuestions dataset~\cite{bordes2015large}. The aim of these experiments is not to achieve state-of-the-art results on this dataset.
Rather, we aim to propose and analyze various approaches to make memory networks more scalable, and to explore the resulting tradeoffs between speed and accuracy. \subsection{Dataset} We use SimpleQuestions~\cite{bordes2015large}, a large-scale factoid question answering dataset. SimpleQuestions consists of 108,442 natural language questions, each paired with a corresponding fact from Freebase. Each fact is a triple (subject, relation, object) and the answer to the question is always the object. The dataset is divided into training (75,910), validation (10,845), and test (21,687) sets. Unlike \cite{bordes2015large}, who additionally considered FB2M (10M facts) or FB5M (12M facts) with keyword-based heuristics for filtering most of the facts for each question, we only use SimpleQuestions, with no keyword-based heuristics. This allows us to do a direct comparison with the full softmax approach in a reasonable amount of time. Moreover, we would like to highlight that for this dataset, keyword-based filtering is a very efficient heuristic, since all questions have an appropriate source entity with a matching word. Nevertheless, our goal is to design a general-purpose architecture without such strong assumptions on the nature of the data. \subsection{Model} Let $V_q$ be the vocabulary of all words in the natural language questions. Let $W_q$ be a $|V_q| \times m$ matrix where each row is an $m$-dimensional embedding of a word in the question vocabulary. This matrix is initialized with random values and learned during training. Given any question, we represent it with a bag-of-words representation by summing the vector representations of each word in the question. Let $q = \{w_i\}_{i=1}^{p}$, \[ h(q) = \sum_{i=1}^{p} W_q[w_i] \] Then, to find the relevant fact in the memory $M$, we call the $K$-MIPS-based reader module with $h(q)$ as the query. This uses Equations \ref{q1} and \ref{q2} to compute the output of the reader, $R_{out}$. The reader is trained by minimizing the Negative Log Likelihood (NLL) of the correct fact, \[ \mathcal{J}_{\theta} = \sum_{i=1}^N -\textrm{log}(R_{out}[f_i]) \] where $f_i$ is the index of the correct fact in $W_m$. We fix the memory embeddings to the TransE \cite{transe} embeddings and learn only the question embeddings. This model is simpler than the one reported in \cite{bordes2015large}, so that it is easy to analyze the effect of the various memory reading strategies. \subsection{Training Details} We trained the model with the Adam optimizer~\cite{adam}, with a fixed learning rate of 0.001. We used mini-batches of size 128. We used 200-dimensional embeddings for the TransE entities, yielding 600-dimensional embeddings for facts by concatenating the embeddings of the subject, relation and object. We also experimented with summing the entities in the triple instead of concatenating them, but we found that it was difficult for the model to differentiate facts this way. The only parameters learned by the HMN model are the question word embeddings. The entity distribution in SimpleQuestions is extremely sparse and hence, following \cite{bordes2015large}, we also add artificial questions for all the facts for which we do not have natural language questions. Unlike \cite{bordes2015large}, we do not add any additional tasks like paraphrase detection to the model, mainly to study the effect of the reader. We stopped training for all the models when the validation accuracy consistently decreased for 3 epochs.
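As an illustration of this setup, a minimal sketch (Python, assuming \texttt{numpy}; \texttt{W\_q} is the question-embedding matrix, \texttt{M} holds the fixed TransE fact embeddings, and the embedding dimensions are assumed to be chosen so that queries and facts match) of the question encoder and the per-question loss is given below; it is a simplification for exposition, not the training code used here.
\begin{verbatim}
import numpy as np

def encode_question(word_ids, W_q):
    """Bag-of-words question representation: the sum of its word embeddings."""
    return W_q[word_ids].sum(axis=0)

def question_nll(word_ids, fact_idx, W_q, M, K=100):
    """NLL of the correct fact under the K-softmax read (exact K-MIPS)."""
    hq = encode_question(word_ids, W_q)
    scores = M @ hq
    cand = np.argpartition(-scores, K - 1)[:K]
    if fact_idx not in cand:
        cand = np.append(cand, fact_idx)         # keep the supporting fact
    s = scores[cand] - scores[cand].max()
    p = np.exp(s) / np.exp(s).sum()
    return -np.log(p[np.where(cand == fact_idx)[0][0]])
\end{verbatim}
The total objective $\mathcal{J}_{\theta}$ is the sum of this quantity over the training questions, minimized with Adam while keeping the memory embeddings fixed.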
\subsection{Exact $K$-MIPS improves accuracy} In this section, we compare the performance of the full soft attention reader and of exact $K$-MIPS attention readers. Our goal is to verify that $K$-MIPS attention is in fact a valid and useful attention mechanism, and to see how it fares when compared to full soft attention. For $K$-MIPS attention, we tried $K \in \{10, 50, 100, 1000\}$. We would like to emphasize that, at training time, along with the $K$ candidates for a particular question, we also add the $K$ candidates retrieved for every other question in the mini-batch. So the exact size of the softmax layer is higher than $K$ during training. In Table~\ref{tab:exactmips}, we report the test performance of memory networks using the soft attention reader and the $K$-MIPS attention readers. We also report the average softmax size during training. From the table, it is clear that the $K$-MIPS attention readers improve the performance of the network compared to the soft attention reader. In fact, the smaller the value of $K$, the better the performance. This result suggests that it is better to use a $K$-MIPS layer instead of a full softmax layer whenever possible. It is interesting to see that the convergence of the model is not slowed down by this change in the softmax computation (as shown in Figure~\ref{fig:exactmips}). This experiment confirms the usefulness of $K$-MIPS attention. However, exact $K$-MIPS has the same complexity as a full softmax. Hence, to scale up the training, we need more efficient forms of $K$-MIPS attention, which is the focus of the next experiment. \subsection{Approximate $K$-MIPS based learning} \label{sec:expapproxkmips} As mentioned previously, designing faster algorithms for $K$-MIPS is an active area of research. \cite{cluMIPS} compared several state-of-the-art data-dependent and data-independent methods for faster approximate $K$-MIPS, and it was found that clustering-based MIPS performs significantly better than the other approaches. However, the focus of that comparison was on performance during the inference stage. In HMNs, $K$-MIPS must be used at both the training and inference stages. To verify whether the same trend can be seen during the learning stage as well, we compared three different approaches: \textbf{Clustering:} This was explained in detail in Section 3. \textbf{WTA-Hash:} Winner Takes All hashing~\cite{wta} is a hashing-based $K$-MIPS algorithm which also converts MIPS to MCSS by augmenting additional dimensions to the vectors. This method uses $n$ hash functions, and each hash function performs $p$ different random permutations of the vector. The prefix constituted by the first $k$ elements of each permuted vector is then used to construct the hash for the vector. \textbf{PCA-Tree:} PCA-Tree \cite{xbox} is the state-of-the-art tree-based method, which converts MIPS to NNS by vector augmentation. It uses the principal components of the data to construct a balanced binary tree with the data residing in the leaves. For a fair comparison, we varied the hyper-parameters of each algorithm in such a way that the average speedup is approximately the same. Table~\ref{tab:exactmips} shows the performance of all three methods, compared to a full softmax. From the table, it is clear that the clustering-based method performs significantly better than the other two methods. However, the performance of all three methods is lower than that of the full softmax.
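To make the clustering-based reader concrete before analyzing its candidate-selection strategies, a rough sketch follows (Python, assuming \texttt{numpy}; \texttt{centroids} and \texttt{cluster\_members} are assumed to come from spherical $K$-means run offline on the augmented memory, with \texttt{cluster\_members[c]} holding the memory-cell indices of cluster \texttt{c}, and all hyper-parameter values are illustrative only). It combines the MIPS-to-MCSS augmentation of Section~\ref{sec:clusterkmips} with the Top-$K$ and Sample-$K$ selection analyzed next.
\begin{verbatim}
import numpy as np

def augment_memory(M, U=0.83, m=3):
    """MIPS -> MCSS mapping P(.) applied row-wise to the memory (U < 1)."""
    X = M * (U / np.linalg.norm(M, axis=1).max())
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    extra = [0.5 - norms ** (2 ** (i + 1)) for i in range(m)]
    return np.hstack([X] + extra)

def augment_query(hq, m=3):
    """Mapping Q(.): pad the query with m zeros."""
    return np.concatenate([hq, np.zeros(m)])

def candidate_cells(hq, centroids, cluster_members, top=2, sample=1, rng=None):
    """Cells from the `top` best clusters plus `sample` clusters drawn with
    probability log-proportional to the centroid dot products (Sample-K)."""
    rng = rng if rng is not None else np.random.default_rng()
    dots = centroids @ augment_query(hq)
    order = np.argsort(-dots)
    chosen = list(order[:top])                      # Top-K clusters
    rest = order[top:]
    w = np.exp(dots[rest] - dots[rest].max())       # prob. proportional to exp(dot)
    chosen += list(rng.choice(rest, size=sample, replace=False, p=w / w.sum()))
    return np.concatenate([cluster_members[c] for c in chosen])
\end{verbatim}
In training, the candidate sets of all queries in a mini-batch would additionally be merged, as described in Section~\ref{sec:clusterkmips}.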
As a next experiment, we analyze the various strategies proposed in Section~\ref{sec:clusterkmips} to reduce the approximation bias of clustering-based $K$-MIPS: \textbf{Top-K:} This strategy picks the vectors in the top $K$ clusters as candidates. \textbf{Sample-K:} This strategy samples $K$ clusters, without replacement, from a probability distribution log-proportional to the dot product of the query with the cluster centroids. When combined with the Top-$K$ strategy, we exclude the clusters already selected by the Top-$K$ strategy from the sampling. \textbf{Rand-block:} This strategy divides the memory into several blocks and uniformly samples a random block as the candidate set. We experimented with 1000 clusters and 2000 clusters. While comparing the various training strategies, we made sure that the effective speedup is approximately the same. The number of memory cells accessed per query is approximately 20,000 for all the models, yielding a 5X speedup. Results are given in Table~\ref{tab:approxkmips}. We observe that the best approach is to combine the Top-K and Sample-K strategies, with Rand-block not being beneficial. Interestingly, the worst performances correspond to the cases where the Sample-K strategy is ignored. \section{Conclusion} In this paper, we proposed a hierarchical memory network that exploits $K$-MIPS for its attention-based reader. Unlike soft attention readers, the $K$-MIPS attention reader scales easily to larger memories. This is achieved by organizing the memory in a hierarchical way. Experiments on the SimpleQuestions dataset demonstrate that exact $K$-MIPS attention is better than soft attention. However, existing state-of-the-art approximate $K$-MIPS techniques provide a speedup at the cost of some accuracy. Future research will investigate the design of efficient dynamic $K$-MIPS algorithms, where the memory can be dynamically updated during training. This should reduce the approximation bias and hence improve the overall performance. {\small \bibliographystyle{unsrtnat} } \end{document}
A Neural Stochastic Volatility Model
1712.00504
Table 1: The performance of the proposed model and the baselines in terms of negative log-likelihood (NLL) evaluated on the test samples of real-world stock price time series: each row from 1 to 10 lists the average NLL for a specific individual stock; the last row summarises the average NLL of the entire test samples of all 162 stocks.
[ "Stock", "NSVM-corr", "NSVM-diag", "ARCH", "GARCH", "GJR", "AVARCH", "AVGCH", "TARCH", "EARCH", "EGARCH", "stochvol", "GP-Vol" ]
[ [ "1", "[BOLD] 1.11341", "1.42816", "1.36733", "1.60087", "1.60262", "1.34792", "1.57115", "1.58156", "1.33528", "1.53651", "1.39638", "1.56260" ], [ "2", "[BOLD] 1.04058", "1.28639", "1.35682", "1.63586", "1.59978", "1.32049", "1.46016", "1.45951", "1.35758", "1.52856", "1.37080", "1.47025" ], [ "3", "[BOLD] 1.03159", "1.32285", "1.37576", "1.44640", "1.45826", "1.34921", "1.44437", "1.45838", "1.33821", "1.41331", "1.25928", "1.48203" ], [ "4", "[BOLD] 1.06467", "1.32964", "1.38872", "1.45215", "1.43133", "1.37418", "1.44565", "1.44371", "1.35542", "1.40754", "1.36199", "1.32451" ], [ "5", "[BOLD] 0.96804", "1.22451", "1.39470", "1.31141", "1.30394", "1.37545", "1.28204", "1.27847", "1.37697", "1.28191", "1.16348", "1.41417" ], [ "6", "[BOLD] 0.96835", "1.23537", "1.44126", "1.55520", "1.57794", "1.39190", "1.47442", "1.47438", "1.36163", "1.48209", "1.15107", "1.24458" ], [ "7", "[BOLD] 1.13580", "1.43244", "1.36829", "1.65549", "1.71652", "1.32314", "1.50407", "1.50899", "1.29369", "1.64631", "1.42043", "1.19983" ], [ "8", "[BOLD] 1.03752", "1.26901", "1.39010", "1.47522", "1.51466", "1.35704", "1.44956", "1.45029", "1.34560", "1.42528", "1.26289", "1.47421" ], [ "9", "[BOLD] 0.95157", "1.15896", "1.42636", "1.32367", "1.24404", "1.42047", "1.35427", "1.34465", "1.42143", "1.32895", "1.12615", "1.35478" ], [ "10", "[BOLD] 0.99105", "1.13143", "1.36919", "1.55220", "1.29989", "1.24032", "1.06932", "1.04675", "23.35983", "1.20704", "1.32947", "1.18123" ], [ "AVG", "[BOLD] 1.18354", "1.23521", "1.27062", "1.27051", "1.28809", "1.28827", "1.27754", "1.29010", "1.33450", "1.36465", "1.27098", "1.34751" ] ]
The result shows that NSVM has achieved higher accuracy over the baselines on the task of volatility modelling and forecasting on NLL, which validates the high flexibility and rich expressive power of NSVM for volatility modelling and forecasting. Although the improvement comes at the cost of longer training time before convergence, it can be mitigated by applying parallel computing techniques as well as more advanced network architecture or training methods. Apart from the higher accuracy NSVM obtained, it provides us with a rather general framework to generalise univariate time series models of any specific functional form to the corresponding multivariate cases by extending network dimensions and manipulating the covariance matrices. A case study on real-world financial datasets is illustrated in Fig.
\def\year{2018}\relax \documentclass[letterpaper]{article} %DO NOT CHANGE THIS %Required %Required %Required %Required \frenchspacing %Required \setlength{\pdfpagewidth}{8.5in} %Required \setlength{\pdfpageheight}{11in} %Required \renewcommand*{\familydefault}{\rmdefault} \usepackage[scaled=.86]{helvet} \usepackage[colorlinks=true,citecolor=blue,urlcolor=black,linkcolor=red]{hyperref} \usepackage[pdftex]{graphicx} \captionsetup[subfigure]{width=0.95\columnwidth} \usepackage[noend]{algpseudocode} \renewcommand{\algorithmicrequire}{\textbf{Input:}} \renewcommand{\algorithmicensure}{\textbf{Output:}} \algnewcommand{\IfInline}[2]{\State \algorithmicif\ #1 \algorithmicthen\ ~#2~} \newtheorem{theorem}{Theorem} \DeclareMathOperator{\Tr}{Tr} \DeclareMathOperator{\dom}{dom} \DeclareMathOperator{\diag}{diag} \DeclareMathOperator{\sigm}{sigm} \DeclareMathOperator{\sign}{sign} \DeclareMathOperator{\const}{const} \DeclareMathOperator{\MLP}{MLP} \DeclareMathOperator{\RNN}{RNN} \newcommand*\diff{\mathop{}\!\mathrm{d}} \newcommand*\diffn[1]{\mathop{}\!\mathrm{d^#1}} \newcommand*\vs[1]{\bm{\mathit{#1}}} % with {isomath}, \newcommand*\vs[1]{\vectorsym{#1}} \newcommand*\ts[1]{\mathbf{#1}} % with {isomath}, \newcommand*\ts[1]{\mathbf{#1}} \newcommand*\ps[1]{\mathrm{#1}} % parameters \usepackage[noamssymbols,slantedGreek]{newtxmath} \usepackage[cal=euler,calscaled=1.0,scr=boondoxo,scrscaled=1.0,bb=px]{mathalfa} \newcommand\Tstrut{\rule{0pt}{2.6ex}} % = `top' strut \newcommand\Bstrut{\rule[-0.9ex]{0pt}{0pt}} % = `bottom' strut \newcommand{\rui}[1]{{\bf \color{violet} [[Rui ``#1'']]}} \newcommand{\yaodong}[1]{{\bf \color{purple} [[Yaodong ``#1'']]}} \newcommand{\jun}[1]{{\bf \color{red} [[Jun ``#1'']]}} \newcommand{\zhanxing}[1]{{\bf \color{orange} [[Zhanxing ``#1'']]}} \pdfinfo{ /Title (A Neural Stochastic Volatility Model) /Author (Rui Luo, Weinan Zhang, Xiaojun Xu, and Jun Wang)} \setcounter{secnumdepth}{0} \begin{document} \title{A Neural Stochastic Volatility Model} \author{Rui Luo\textsuperscript{\dag}, Weinan Zhang\textsuperscript{\ddag}, Xiaojun Xu\textsuperscript{\ddag}, and Jun Wang\textsuperscript{\dag}\\ \textsuperscript{\dag}University College London and \textsuperscript{\ddag}Shanghai Jiao Tong University\\ \texttt{\{r.luo,j.wang\}@cs.ucl.ac.uk,~\{wnzhang,xuxj\}@apex.sjtu.edu.cn} } \maketitle \begin{abstract} In this paper, we show that the recent integration of statistical models with deep recurrent neural networks provides a new way of formulating volatility (the degree of variation of time series) models that have been widely used in time series analysis and prediction in finance. The model comprises a pair of complementary stochastic recurrent neural networks: the generative network models the joint distribution of the stochastic volatility process; the inference network approximates the conditional distribution of the latent variables given the observables. Our focus here is on the formulation of temporal dynamics of volatility over time under a stochastic recurrent neural network framework. Experiments on real-world stock price datasets demonstrate that the proposed model generates a better volatility estimation and prediction that outperforms mainstream methods, e.g., deterministic models such as GARCH and its variants, and stochastic models namely the MCMC-based model \emph{stochvol} as well as the Gaussian process volatility model \emph{GPVol}, on average negative log-likelihood. 
\end{abstract} \section{Introduction} The volatility of price movements reflects the ubiquitous uncertainty within financial markets. It is critical that the level of risk (i.e., the degree of variation), indicated by volatility, is taken into consideration before investment decisions are made and portfolios are optimised \cite{hull2006options}; volatility is, moreover, a key variable in the pricing of derivative securities. Hence, estimating and forecasting volatility is of great importance in many branches of financial studies, including investment, risk management, security valuation and monetary policy making \cite{poon2003forecasting}. Volatility is typically measured by the standard deviation of the price change in a fixed time interval, such as a day, a month or a year. The higher the volatility, the riskier the asset. One of the primary challenges in designing volatility models is to identify the existence of latent stochastic processes and to characterise the underlying dependencies or interactions between variables within a certain time span. A classic approach has been to handcraft the characteristic features of volatility models by imposing assumptions and constraints, given prior knowledge and observations. Notable examples include the autoregressive conditional heteroscedasticity (ARCH) model \cite{engle1982autoregressive} and its extension, generalised ARCH (GARCH) \cite{bollerslev1986generalized}, which makes use of autoregression to capture the properties of time-varying volatility within many time series. As an alternative to the GARCH model family, the class of stochastic volatility (SV) models specifies the variance to follow some latent stochastic process \cite{hull1987pricing}. Heston \cite{heston1993closed} proposed a continuous-time model with the volatility following an Ornstein-Uhlenbeck process and derived a closed-form solution for option pricing. Since the temporal discretisation of continuous-time dynamics sometimes leads to a deviation from the original trajectory of the system, those continuous-time models are seldom applied in forecasting. For the practical purpose of forecasting, the canonical model \cite{jacquier2002bayesian,kim1998stochastic}, formulated in a discrete-time fashion for regularly spaced data such as daily stock prices, is of great interest. While theoretically sound, those approaches require strong assumptions which might involve detailed insight into the target sequences and are difficult to determine without a thorough inspection. In this paper, we take a fully data-driven approach and determine the configurations with as few exogenous inputs as possible, or even purely from the historical data. We propose a neural network re-formulation of stochastic volatility by leveraging stochastic models and recurrent neural networks (RNNs). Inspired by the work of Chung et al. \cite{DBLP:conf/nips/ChungKDGCB15} and Fraccaro et al. \cite{DBLP:conf/nips/FraccaroSPW16}, the proposed model is rooted in variational inference and equipped with the latest advances in stochastic neural networks. The model inherits the fundamentals of the SV model and provides a general framework for volatility modelling; it extends previous sequential frameworks with an autoregressive and bidirectional architecture and provides a more systematic and volatility-specific formulation of stochastic volatility modelling for financial time series.
We presume that the latent variables follow a Gaussian autoregressive process, which is then utilised to model the variance process. Our neural network formulation is essentially a general framework for volatility modelling, which covers two major classes of volatility models in financial study as the special cases with specific weights and activations on neurons. Experiments with real-world stock price datasets are performed. The result shows that the proposed model produces more accurate estimation and prediction, outperforming various widely-used deterministic models in the GARCH family and several recently proposed stochastic models on average negative log-likelihood; the high flexibility and rich expressive power are validated. \section{Related Work} A notable framework for volatility is autoregressive conditional heteroscedasticity (ARCH) model \cite{engle1982autoregressive}: it can accurately identify the characteristics of time-varying volatility within many types of time series. Inspired by ARCH model, a large body of diverse work based on stochastic process for volatility modelling has emerged \cite{bollerslev1994arch}. Bollerslev \cite{bollerslev1986generalized} generalised ARCH model to the generalised autoregressive conditional heteroscedasticity (GARCH) model in a manner analogous to the extension from autoregressive (AR) model to autoregressive moving average (ARMA) model by introducing the past conditional variances in the current conditional variance estimation. Engle and Kroner \cite{engle1995multivariate} presented theoretical results on the formulation and estimation of multivariate GARCH model within simultaneous equations systems. The extension to multivariate model allows the covariance to present and depend on the historical information, which are particularly useful in multivariate financial models. An alternative to the conditionally deterministic GARCH model family is the class of stochastic volatility (SV) models, which first appeared in the theoretical finance literature on option pricing \cite{hull1987pricing}. The SV models specify the variance to follow some latent stochastic process such that the current volatility is no longer a deterministic function even if the historical information is provided. As an example, Heston's model \cite{heston1993closed} characterises the variance process as a Cox-Ingersoll-Ross process driven by a latent Wiener process. While theoretically sound, those approaches require strong assumptions which might involve complex probability distributions and non-linear dynamics that drive the process. Nevertheless, empirical evidences have confirmed that volatility models provide accurate prediction \cite{andersen1998answering} and models such as ARCH and its descendants/variants have become indispensable tools in asset pricing and risk evaluation. Notably, several models have been recently proposed for practical forecasting tasks: Kastner et al. \cite{DBLP:journals/csda/KastnerF14} implemented the MCMC-based framework \emph{stochvol} where the ancillarity-sufficiency interweaving strategy (ASIS) is applied for boosting MCMC estimation of stochastic volatility; Wu et al. \cite{DBLP:conf/nips/WuHG14} designed the \emph{GP-Vol}, a non-parametric model which utilises Gaussian processes to characterise the dynamics and jointly learns the process and hidden states via online inference algorithm. 
Although these models provide a practical approach to stochastic volatility forecasting, both require a relatively large number of samples to ensure accuracy, which entails a very expensive sampling routine at each time step. Another drawback is that those models are incapable of handling the forecasting task for multivariate time series. On the other hand, deep learning \cite{DBLP:journals/nature/LeCunBH15,DBLP:journals/nn/Schmidhuber15}, which utilises nonlinear structures known as deep neural networks, powers various applications. It has triumphed in pattern recognition challenges, such as image recognition \cite{DBLP:conf/nips/KrizhevskySH12}, speech recognition \cite{DBLP:conf/nips/ChorowskiBSCB15} and machine translation \cite{DBLP:journals/corr/BahdanauCB14}, to name a few. Time-dependent neural network models include RNNs with neuron structures such as long short-term memory (LSTM) \cite{DBLP:journals/neco/HochreiterS97}, the bidirectional RNN (BRNN) \cite{DBLP:journals/tsp/SchusterP97}, the gated recurrent unit (GRU) \cite{DBLP:conf/emnlp/ChoMGBBSB14} and the attention mechanism \cite{DBLP:conf/icml/XuBKCCSZB15}. Recent results show that RNNs excel at sequence modelling and generation in various applications \cite{DBLP:conf/icml/OordKK16,DBLP:conf/emnlp/ChoMGBBSB14,DBLP:conf/icml/XuBKCCSZB15}. However, despite their capability as non-linear universal approximators, one of the drawbacks of neural networks is their deterministic nature. Adding latent variables and their processes into neural networks would easily make the posterior computationally intractable. Recent work shows that efficient inference can be achieved by variational inference when continuous latent variables are embedded into the neural network structure \cite{DBLP:journals/corr/KingmaW13,DBLP:conf/icml/RezendeMW14}. Some early work has started to explore the use of variational inference to make RNNs stochastic: Chung et al. \cite{DBLP:conf/nips/ChungKDGCB15} defined a sequential framework with complex interacting dynamics of coupled observable and latent variables, whereas Fraccaro et al. \cite{DBLP:conf/nips/FraccaroSPW16} utilised heterogeneous backward-propagating layers in the inference network according to its Markovian properties. In this paper, we apply stochastic neural networks to solve the volatility modelling problem. In other words, we model the dynamics and stochastic nature of the degree of variation, not only the mean itself. Our neural network treatment of volatility modelling is a general one, and existing volatility models (e.g., the Heston and GARCH models) are special cases in our formulation. \section{Preliminaries: Volatility Models} Volatility models characterise the dynamics of volatility processes, and help estimate and forecast the fluctuation within time series. As one typically seeks predictions of a quantity of interest given a collection of historical information, we presume the conditional variance to depend -- either deterministically or stochastically -- on history, which results in two categories of volatility models. \subsection{Deterministic Volatility Models: the GARCH Model Family} The GARCH model family comprises various linear models that formulate the conditional variance at present as a linear function of observations and variances from the past.
Bollerslev's extension \cite{bollerslev1986generalized} of Engle's original ARCH model \cite{engle1982autoregressive}, referred to as the generalised ARCH (GARCH) model, is one of the most well-studied and widely-used volatility models: \begin{align} \label{eq:garch} \sigma^2_t &= \alpha_0 + \sum_{i=1}^p \alpha_i x^2_{t-i} + \sum_{j=1}^q \beta_j \sigma^2_{t-j}, \\ \label{eq:assumption} x_t &\sim \mathscr{N}(0, \sigma^2_t), \end{align} where Eq. \eqref{eq:assumption} represents the assumption that the observation $x_t$ follows a Gaussian distribution with mean 0 and variance $\sigma^2_t$; the (conditional) variance $\sigma^2_t$ is fully determined by a linear function (Eq. \eqref{eq:garch}) of the previous observations $\{x_{<t}\}$ and variances $\{\sigma^2_{<t}\}$. Note that if $q=0$ in Eq. \eqref{eq:garch}, the GARCH model degenerates to the ARCH model. Various variants have been proposed ever since. Glosten, Jagannathan and Runkle \cite{glosten1993relation} extended the GARCH model with additional terms to account for asymmetries in the volatility and proposed the GJR-GARCH model; Zakoian \cite{zakoian1994threshold} replaced the quadratic operators with absolute values, leading to the threshold ARCH/GARCH (TARCH) models. The general functional form is formulated as \begin{align} \label{eq:tarch} \sigma^d_t = \alpha_0 &+ \sum_{i=1}^p \alpha_i |x_{t-i}|^d + \sum_{j=1}^q \beta_j \sigma^d_{t-j}\notag \\ &+ \sum_{k=1}^o \gamma_k |x_{t-k}|^d I[x_{t-k} < 0], \end{align} where $I[x_{t-k} < 0]$ is the indicator function: $I[x_{t-k} < 0] = 1$ if $x_{t-k} < 0$, and 0 otherwise, which allows for asymmetric reactions of the volatility to the sign of previous $\{x_{<t}\}$. Various variants of the GARCH model can be expressed by assigning values to the parameters $p,o,q,d$ in Eq. \eqref{eq:tarch}: { \begin{enumerate} \setlength\itemsep{0em} \item{ARCH($p$): $p \in \mathbb{N}^+$; $q \equiv 0$; $o \equiv 0$; $d \equiv 2$} \item{GARCH($p,q$): $p \in \mathbb{N}^+$; $q \in \mathbb{N}^+$; $o \equiv 0$; $d \equiv 2$} \item{GJR-GARCH($p,o,q$): $p \in \mathbb{N}^+$; $q \in \mathbb{N}^+$; $o \in \mathbb{N}^+$; $d \equiv 2$} \item{AVARCH($p$): $p \in \mathbb{N}^+$; $q \equiv 0$; $o \equiv 0$; $d \equiv 1$} \item{AVGARCH($p,q$): $p \in \mathbb{N}^+$; $q \in \mathbb{N}^+$; $o \equiv 0$; $d \equiv 1$} \item{TARCH($p,o,q$): $p \in \mathbb{N}^+$; $q \in \mathbb{N}^+$; $o \in \mathbb{N}^+$; $d \equiv 1$} \end{enumerate} }% Another fruitful specification is Nelson's exponential GARCH (EGARCH) model \cite{nelson1991conditional}, which instead formulates the dependencies in the log-variance $\log(\sigma^2_t)$: \begin{align} \label{eq:egarch} \log(\sigma^2_t) &= \alpha_0 + \sum_{i=1}^p \alpha_i g(x_{t-i}) + \sum_{j=1}^q \beta_j \log(\sigma^2_{t-j}), \\ \label{eq:g} g(x_t) &= \theta x_t + \gamma (|x_t| - \mathbb{E}[|x_t|]), \end{align} where $g(x_t)$ (Eq. \eqref{eq:g}) accommodates the asymmetric relation between observations and volatility changes. Setting $q \equiv 0$ in Eq. \eqref{eq:egarch} makes EGARCH($p,q$) degenerate to EARCH($p$). \subsection{Stochastic Volatility Models} An alternative to the (conditionally) deterministic volatility models is the class of stochastic volatility (SV) models. First introduced in the theoretical finance literature, the earliest SV models, such as Hull and White's \cite{hull1987pricing} and the Heston model \cite{heston1993closed}, are formulated as stochastic differential equations in a continuous-time fashion for convenience of analysis.
In particular, Heston model instantiates a continuous-time stochastic volatility model for univariate processes: \begin{align} \label{eq:heston_z} \diff{\sigma}(t) &= -\beta \sigma(t) \diff{t} + \delta \diff{w^{\sigma}}(t), \\ \diff{x}(t) &= (\mu - 0.5\sigma^2(t)) \diff{t} + \sigma(t) \diff{w^{x}}(t). \end{align} where $x(t)=\log{s}(t)$ is the logarithm of stock price $s_t$ at time $t$, $w^{x}(t)$ and $w^{\sigma}(t)$ represent two correlated Wiener processes and the correlation between $\diff{w^{x}}(t)$ and $\diff{w^{\sigma}}(t)$ is expressed as $\mathbb{E}[\diff{w^{x}}(t) \cdot \diff{w^{\sigma}}(t)] = \rho \diff{t}$. For practical use, empirical versions of the SV model, typically formulated in a discrete-time fashion, are of great interest. The canonical model \cite{jacquier2002bayesian,kim1998stochastic} for regularly spaced data is formulated as \begin{align} \label{eq:sv1} \log(\sigma^2_t) &= \eta + \phi (\log(\sigma^2_{t-1})-\eta) + z_t, \\ \label{eq:sv2} z_t &\sim \mathscr{N}(0, \sigma^2_{z}), \quad x_t \sim \mathscr{N}(0, \sigma^2_t). \end{align} Equation~\eqref{eq:sv1} indicates that the (conditional) log-variance $\log(\sigma^2_t)$ depends on not only the historical log-variances $\{\log(\sigma^2_t)\}$ but a latent stochastic process $\{z_t\}$. The latent process $\{z_t\}$ is, according to Eq. \eqref{eq:sv2}, white noise process with i.i.d. Gaussian variables. Notably, the volatility $\sigma^2_t$ is no longer conditionally deterministic (i.e. deterministic given the complete history $\{\sigma^2_{<t}\}$) but to some extent stochastic in the setting of SV models: Heston model involves two correlated continuous-time Wiener processes while the canonical model is driven by a discrete-time Gaussian white-noise process. \subsection{Volatility Models in a General Form} Hereafter we denote the sequence of observations as $\{x_t\}$ and the latent stochastic process as $\{z_t\}$. As seen in previous sections, the dynamics of the volatility process $\{\sigma^2_t\}$ can be abstracted in the form of \begin{align} \label{eq:f} \sigma^2_t = f(\sigma^2_{<t}, x_{<t}, z_{\le t}) = \Sigma^x(x_{<t}, z_{\le t}). \end{align} The latter equality holds when we recurrently substitute $\sigma^2_\tau$ with $f(\sigma^2_{<\tau}, x_{<\tau}, z_{\le \tau})$ for all $\tau<t$. For models within the GARCH family, we discard $z_{\le t}$ in $\Sigma^x(x_{<t}, z_{\le t})$ (Eq. \eqref{eq:f}); on the other hand, for the primitive SV model, $x_{<t}$ is ignored instead. We can loosen the constraint that $x_t$ is zero-mean to a time-varying mean $\mu^x(x_{<t}, z_{\le t})$ for more flexibility. Recall that the latent stochastic process $\{z_t\}$ (Eq. \eqref{eq:sv2}) in the SV model is by definition an i.i.d. Gaussian white noise process. We may extend this process to one with an inherent autoregressive dynamics and more flexibility that the mean $\mu^z(z_{<t})$ and variance $\Sigma^z(z_{<t})$ are functions of autoregressive structure on historical values. Hence, the generalised model can be formulated in the following framework: \begin{align} \label{eq:z} z_t | z_{<t} &\sim \mathscr{N}(\mu^z(z_{<t}), \Sigma^z(z_{<t})),\\ \label{eq:x} x_t | x_{<t}, z_{\le t} &\sim \mathscr{N}(\mu^x(x_{<t}, z_{\le t}), \Sigma^x(x_{<t}, z_{\le t})), \end{align} where we have presumed that both the observation $x_t$ and the latent variable $z_t$ are normally distributed. Note that the autoregressive process degenerates to i.i.d. white noise process when $\mu^z(z_{<t}) \equiv 0$ and $\Sigma^z(z_{<t}) \equiv \sigma^2_z$. 
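To make this general form concrete, the following small simulation sketch (Python, assuming \texttt{numpy}; all parameter values are purely illustrative) instantiates its two special cases discussed above: the GARCH(1,1) recursion, which discards $z_{\le t}$, and the canonical discrete-time SV model, which keeps $\{z_t\}$ as i.i.d. Gaussian noise and ignores $x_{<t}$ in the variance function.
\begin{verbatim}
import numpy as np

def simulate_garch11(T, alpha0=0.05, alpha1=0.10, beta1=0.85, seed=0):
    """GARCH(1,1): sigma2_t = a0 + a1*x_{t-1}^2 + b1*sigma2_{t-1}."""
    rng = np.random.default_rng(seed)
    x, var = np.zeros(T), np.zeros(T)
    var[0] = alpha0 / (1.0 - alpha1 - beta1)       # unconditional variance
    x[0] = rng.normal(0.0, np.sqrt(var[0]))
    for t in range(1, T):
        var[t] = alpha0 + alpha1 * x[t - 1] ** 2 + beta1 * var[t - 1]
        x[t] = rng.normal(0.0, np.sqrt(var[t]))    # x_t | x_<t ~ N(0, sigma2_t)
    return x, var

def simulate_canonical_sv(T, eta=-1.0, phi=0.95, sigma_z=0.2, seed=0):
    """Canonical SV: log(sigma2_t) follows a latent Gaussian AR(1) process."""
    rng = np.random.default_rng(seed)
    x, logvar = np.zeros(T), np.full(T, eta)
    x[0] = rng.normal(0.0, np.exp(0.5 * eta))
    for t in range(1, T):
        z_t = rng.normal(0.0, sigma_z)             # latent white-noise driver
        logvar[t] = eta + phi * (logvar[t - 1] - eta) + z_t
        x[t] = rng.normal(0.0, np.exp(0.5 * logvar[t]))
    return x, np.exp(logvar)
\end{verbatim}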
It should be emphasised that the purpose of reinforcing an autoregressive structure \eqref{eq:z} of the latent variable $z_t$ is that we believe such formulation fits better to real scenarios from financial aspect compared with the i.i.d. convention: the price fluctuation of a certain stock is the consequence of not only its own history but also the influence from the environment, e.g. its competitors, up/downstream industries, relevant companies in the market, etc. Such external influence is ever-changing and may preserve memory and hence hard to characterise if restricted to i.i.d. noise. The latent variable $z_t$ with an autoregressive structure provides a possibility of decoupling the internal influential factors from the external ones, which we believe is the essence of introducing $z_t$. \section{Neural Stochastic Volatility Models} In this section, we establish the \emph{neural stochastic volatility model} (NSVM) for volatility estimation and prediction. \subsection{Generating Observable Sequence} Recall that the observable variable $x_t$ (Eq. \eqref{eq:x}) and the latent variable $z_t$ (Eq. \eqref{eq:z}) are described by autoregressive models (as $x_t$ also involves an exogenous input $z_{\le t}$). Let $p_{\ps{\Phi}}(x_t | x_{<t}, z_{\le t})$ and $p_{\ps{\Phi}}(z_t | z_{<t})$ denote the probability distributions of $x_t$ and $z_t$ at time $t$. The factorisation on the joint distributions of sequences $\{x_t\}$ and $\{z_t\}$ applies as follow: { \begin{align} \label{eq:pz} p_{\ps{\Phi}}(Z) &= \prod_t p_{\ps{\Phi}}(z_t | z_{<t})\notag \\ &= \prod_t \mathscr{N}(z_{t} ; \mu^z_{\ps{\Phi}}(z_{<t}), \Sigma^z_{\ps{\Phi}}(z_{<t})),\\ \label{eq:px|z} p_{\ps{\Phi}}(X|Z) &= \prod_t p_{\ps{\Phi}}(x_t | x_{<t}, z_{\le t})\notag \\ &= \prod_t \mathscr{N}(x_{t} ; \mu^x_{\ps{\Phi}}(x_{<t}, z_{\le t}), \Sigma^x_{\ps{\Phi}}(x_{<t}, z_{\le t})), \end{align} }% where $X = \{x_{1:T}\}$ and $Z = \{z_{1:T}\}$ represents the sequences of observable and latent variables, respectively, whereas $\ps{\Phi}$ stands for the collection of parameters of generative model. The unconditional generative model is defined as the joint distribution w.r.t. the latent variable $Z$ and observable $X$: \begin{align} \label{eq:pxz} p_{\ps{\Phi}}(X, Z) =& \prod_t p_{\ps{\Phi}}(x_t | x_{<t}, z_{\le t}) p_{\ps{\Phi}}(z_t|z_{<t}). \end{align} It can be observed that the mean and variance are conditionally deterministic: given the historical information $\{z_{<t}\}$, the current mean $\mu^z_t = \mu^z_{\ps{\Phi}}(z_{<t})$ and variance $\Sigma^z_t = \Sigma^z_{\ps{\Phi}}(z_{<t})$ of $z_t$ is obtained and hence the distribution $\mathscr{N}(z_t; \mu^z_t, \Sigma^z_t)$ of $z_t$ is specified; after sampling $z_t$ from the specified distribution, we incorporate $\{x_{<t}\}$ and calculate the current mean $\mu^x_t = \mu^x_{\ps{\Phi}}(x_{<t}, z_{\le t})$ and variance $\Sigma^x_t = \Sigma^x_{\ps{\Phi}}(x_{<t}, z_{\le t})$ of $x_t$ and determine its distribution $\mathscr{N}(x_t; \mu^x_t, \Sigma^x_t)$ of $x_t$. It is natural and convenient to present such a procedure in a recurrent fashion because of its autoregressive nature. Since RNNs can essentially approximate arbitrary function of recurrent form, the means and variances, which may be driven by complex non-linear dynamics, can be efficiently computed using RNNs. The unconditional generative model consists of two pairs of RNN and multi-layer perceptron (MLP), namely $\RNN^z_G$/$\MLP^z_G$ for the latent variable and $\RNN^x_G$/$\MLP^x_G$ for the observable. 
We stack those two RNN/MLP pairs together according to the causal dependency between variables. The unconditional generative model is implemented as the \emph{generative network} abstracted as follows: \begin{align} \label{eq:mlp_zg} \{\mu^z_t, \Sigma^z_t\} &= \MLP^z_G(h^z_t; \ps{\Phi}),\\ \label{eq:rnn_zg} h^z_t &= \RNN^z_G(h^z_{t-1}, z_{t-1}; \ps{\Phi}),\\ \label{eq:zg_t} z_t &\sim \mathscr{N}(\mu^z_t, \Sigma^z_t),\\ \label{eq:mlp_xg} \{\mu^x_t, \Sigma^x_t\} &= \MLP^x_G(h^x_t; \ps{\Phi}),\\ \label{eq:rnn_xg} h^x_t &= \RNN^x_G(h^x_{t-1}, x_{t-1}, z_t; \ps{\Phi}),\\ \label{eq:xg_t} x_t &\sim \mathscr{N}(\mu^x_t, \Sigma^x_t), \end{align} where $h^z_t$ and $h^x_t$ denote the hidden states of the corresponding RNNs. The MLPs map the hidden states of RNNs into the means and deviations of variables of interest. The collection of parameters $\ps{\Phi}$ is comprised of the weights of RNNs and MLPs. NSVM relaxes the conventional constraint that the latent variable $z_t$ is $\mathscr{N}(0,1)$ in a way that $z_t$ is no longer i.i.d noise but a time-varying signal from external process with self-evolving nature. As discussed above, this relaxation will benefit the effectiveness in real scenarios. One should notice that when the latent variable $z_t$ is obtained, e.g. by inference (see details in the next subsection), the conditional distribution $p_{\ps{\Phi}}(X|Z)$ (Eq. \eqref{eq:px|z}) will be involved in generating the observable $x_t$ instead of the joint distribution $p_{\ps{\Phi}}(X, Z)$ (Eq. \eqref{eq:pxz}). This is essentially the scenario of predicting future values of the observable variable given its history. We will use the term ``generative model'' and will not discriminate the unconditional generative model or the conditional one as it can be inferred in context. \subsection{Inferencing the Latent Process} As the generative model involves the latent variable $z_t$, of which the true values are inaccessible even we have observed $x_t$, the marginal distribution $p_{\ps{\Phi}}(X)$ becomes the key that bridges the model and the data. However, the calculation of $p_{\ps{\Phi}}(X)$ itself or its complement, the posterior distribution $p_{\ps{\Phi}}(Z | X)$, is often intractable as complex integrals are involved. We are unable to learn the parameters by differentiating the marginal log-likelihood $\log p_{\ps{\Phi}}(X)$ or to infer the latent variables through the true posterior. Therefore, we consider instead a restricted family of tractable distributions $q_{\ps{\Psi}}(Z | X)$, referred to as the approximate posterior family, as approximations to the true posterior $p_{\ps{\Phi}}(Z | X)$ such that the family is sufficiently rich and of high capacity to provide good approximations. It is straightforward to verify that given a sequence of observations $X=\{x_{1:T}\}$, for any $1\le t\le T$, $z_t$ is dependent on the entire observation sequences. 
Hence, we define the inference model with the spirit of mean-field approximation where the approximate posterior is Gaussian and the following factorisation applies: { \begin{align} \label{eq:qz|x} q_{\ps{\Psi}}(Z | X) &= \prod^T_{t=1} q_{\ps{\Psi}}(z_t | z_{<t}, x_{1:T})\notag \\ &= \prod_t \mathscr{N}(z_t ; \tilde{\mu}^z_{\ps{\Psi}}(z_{<t}, x_{1:T}), \tilde{\Sigma}^z_{\ps{\Psi}}(z_{<t}, x_{1:T})), \end{align} }% where $\tilde{\mu}^z_{\ps{\Psi}}(z_{t-1}, x_{1:T})$ and $\tilde{\Sigma}^z_{\ps{\Psi}}(z_{t-1}, x_{1:T})$ are functions of the given observation sequence $\{x_{1:T}\}$, representing the approximated mean and variance of the latent variable $z_t$; $\ps{\Psi}$ denotes the collection of parameters of inference model. The neural network implementation of the model, referred to as the \emph{inference network}, is designed to equip a cascaded architecture with an autoregressive RNN and a bidirectional RNN, where the bidirectional RNN incorporates both the forward and backward dependencies on the entire observations whereas the autoregressive RNN models the temporal dependencies on the latent variables: \begin{align} \label{eq:mlp_zi} \{\tilde{\mu}^z_t, \tilde{\Sigma}^z_t\} &= \MLP^z_I(\tilde{h}^z_{t}; \ps{\Psi}),\\ \label{eq:rnn_zi} \tilde{h}^z_{t} &= \RNN^z_I(\tilde{h}^z_{t-1}, z_{t-1}, [\tilde{h}^{\rightarrow}_t, \tilde{h}^{\leftarrow}_t]; \ps{\Psi}),\\ \label{eq:rnn_ri} \tilde{h}^{\rightarrow}_t &= \RNN^{\rightarrow}_I(\tilde{h}^{\rightarrow}_{t-1}, x_t; \ps{\Psi}),\\ \label{eq:rnn_li} \tilde{h}^{\leftarrow}_t &= \RNN^{\leftarrow}_I(\tilde{h}^{\leftarrow}_{t+1}, x_t; \ps{\Psi}),\\ \label{eq:zi_t} z_t &\sim \mathscr{N}(\tilde{\mu}^z_t, \tilde{\Sigma}^z_t; \ps{\Psi}), \end{align} where $\tilde{h}^{\rightarrow}_t$ and $\tilde{h}^{\leftarrow}_t$ represent the hidden states of the forward and backward directions of the bidirectional RNN. The autoregressive RNN with hidden state $\tilde{h}^z_{t}$ takes the joint state $[\tilde{h}^{\rightarrow}_t, \tilde{h}^{\leftarrow}_t]$ of the bidirectional RNN and the previous value of $z_{t-1}$ as input. The inference mean $\tilde{\mu}^z_t$ and variance $\tilde{\Sigma}^z_t$ is computed by an MLP from the hidden state $\tilde{h}^z_t$ of the autoregressive RNN. We use the subscript $I$ instead of $G$ to distinguish the architecture used in inference model in contrast to that of the generative model. It should be emphasised that the inference network will collaborates with the generative network on conditional generating procedure. \begin{algorithm} \centering \caption{Recursive Forecasting}\label{alg:recursiveforecast} \begin{algorithmic}[1] \Loop \State $\{z^{\langle 1:S \rangle}_{1:t}\}$ $\gets$ draw $S$ paths from $q(z_{1:t} | x_{1:t})$ \State $\{z^{\langle 1:S \rangle}_{1:t+1}\}$ $\gets$ extend $\{z^{\langle 1:S \rangle}_{1:t}\}$ for 1 step via $p(z_{t+1} | z_{1:t})$ \State $\hat{p}(x_{t+1} | x_{1:t})$ $\gets$ $1/S\times \sum_{s} p(x_{t+1} | z^{\langle s \rangle}_{1:t+1}, x_{1:t})$ \State $\hat{\sigma}^2_{t+1} \gets \mathtt{var}\{\hat{x}^{1:S}_{t+1}\}$, $\{\hat{x}^{1:S}_{t+1}\} \sim \hat{p}(x_{\tau+1} | x_{1:\tau})$ \State $\{x_{1:t+1}\} \gets$ extend $\{x_{1:t}\}$ with new observation $x_{t+1}$ \State $t \gets t+1$, (optionally) retrain the model \EndLoop \end{algorithmic} \end{algorithm} \subsection{Forecasting Observations in Future} For a volatility model to be practically applicable in forecasting, the generating procedure conditioning on the history is of essential interest. 
We start with 1-step-ahead prediction, which serves as the building block of multi-step forecasting. Given the historical observations $\{x_{1:T}\}$ up to time step $T$, 1-step-ahead prediction of either $\Sigma^x_{T+1}$ or $x_{T+1}$ is fully characterised by the conditional predictive distribution: \begin{align} \label{eq:1-step-ahead_exact} p(x_{T+1} | x_{1:T}) &= \int_{z} p(x_{T+1} | z_{1:T+1}, x_{1:T})\notag \\ &\qquad \cdot p(z_{T+1} | z_{1:T}) p(z_{1:T} | x_{1:T})~\diff{z}, \end{align} where the distributions on the right-hand side refer to those in the generative model, with the generative parameters $\ps{\Phi}$ omitted. As the true posterior $p(z_{1:T} | x_{1:T})$ involved in Eq. \eqref{eq:1-step-ahead_exact} is intractable, the exact evaluation of the conditional predictive distribution $p(x_{T+1} | x_{1:T})$ is difficult. A straightforward solution is to substitute the true posterior $p(z_{1:T} | x_{1:T})$ with the approximation $q(z_{1:T} | x_{1:T})$ (see Eq. \eqref{eq:qz|x}) and leverage $q(z_{1:T} | x_{1:T})$ to infer $S$ sample paths $\{z^{\langle 1:S \rangle}_{1:T}\}$ of the latent variables according to the historical observations $\{x_{1:T}\}$. The approximate posterior from a well-trained model is presumed to be a good approximation to the truth; hence the sample paths should mimic the true but unobservable path. We then extend the sample paths one step further from $T$ to $T+1$ using the autoregressive generative distribution $p(z_{T+1} | z_{1:T})$ (see Eq. \eqref{eq:pz}). The conditional predictive distribution is thus approximated as \begin{align} \label{eq:1-step-ahead_approx} \hat{p}(x_{T+1} | x_{1:T}) &\approx \frac{1}{S} \sum_{s} p(x_{T+1} | z^{\langle s \rangle}_{1:T+1}, x_{1:T}), \end{align} which is essentially a mixture of $S$ Gaussians. In the case of multi-step forecasting, a common solution in practice is to perform a recursive 1-step-ahead forecasting routine with the model updated as new observations come in; the very same procedure can be applied, except that more sample paths should be evaluated due to the accumulation of uncertainty. Algorithm~\ref{alg:recursiveforecast} gives the detailed rolling scheme. \section{Experiment} In this section, we present experiments on real-world stock price time series to validate the effectiveness and evaluate the performance of the proposed model. \subsection{Dataset and Pre-processing} The raw dataset comprises 162 univariate time series of daily closing stock prices, chosen from China's A-shares and collected from 3 institutions. The choice was made by selecting stocks with an earlier listing date of trading (2006 or earlier) and fewer suspension days (at most 50 suspension days within the entire period of observation), such that the undesired noise introduced by insufficient observations or missing values -- highly influential on the performance but essentially irrelevant to the purpose of volatility modelling -- is reduced to a minimum. The raw price series were cleaned by aligning and removing abnormalities: we manually aligned the mismatched parts and interpolated the missing values by stochastic regression imputation \cite{little2014statistical}, where the imputed value is drawn from a Gaussian distribution with mean and variance calculated by regression on the empirical values within a short interval of the 20 most recent days. Each series is then transformed from actual prices $s_t$ into log-returns $x_t = \log(s_t/s_{t-1})$ and normalised.
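For reference, a minimal sketch of this final transformation is given below (Python, assuming \texttt{numpy}); the normalisation shown here is plain standardisation, which is an assumption on our part, as the exact normalisation scheme is not spelled out above.
\begin{verbatim}
import numpy as np

def to_normalised_log_returns(prices):
    """Convert a raw price series s_t into standardised log-returns x_t."""
    s = np.asarray(prices, dtype=float)
    x = np.log(s[1:] / s[:-1])            # x_t = log(s_t / s_{t-1})
    return (x - x.mean()) / x.std()       # zero mean, unit variance (assumed)
\end{verbatim}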
Moreover, we combinatorically choose a predefined number $d$ out of 162 univariate log-return series and aggregate the selected series at each time step to form a $d$-dimensional multivariate time series, the choice of $d$ is in accordance with the rank of correlation, e.g. $d=6$ in our experiments. Theoretically, it leads to a much larger volume of data as ${{162}\choose{6}} > 2\times 10^{10}$. Specifically, the actual dataset for training and evaluation comprises a collection of 2000 series of $d$-dimensional normalised log-return vectors of length $2570$ ($\sim$ 7 years) with no missing values. We divide the whole dataset into two subsets for training and testing along the time axis: the first 2000 time steps of each series have been used as training samples whereas the rest 570 steps of each series as the test samples. \subsection{Baselines} We select several deterministic volatility models from the GARCH family as baselines: { \begin{enumerate} \setlength\itemsep{0em} \item{Quadratic models} \begin{itemize} \setlength\itemsep{0em} \item{ARCH(1); GARCH(1,1); GJR-GARCH(1,1,1);} \end{itemize} \setlength\itemsep{0em} \item{Absolute value models} \begin{itemize} \setlength\itemsep{0em} \item{AVARCH(1); AVGARCH(1,1); TARCH(1,1,1);} \end{itemize} \setlength\itemsep{0em} \item{Exponential models.} \begin{itemize} \setlength\itemsep{0em} \item{EARCH(1); EGARCH(1,1);} \end{itemize} \end{enumerate} }% \noindent Moreover, two stochastic volatility models are compared: \begin{enumerate} \setlength\itemsep{0em} \item{MCMC volatility model: \emph{stochvol};} \item{Gaussian process volatility model \emph{GP-Vol}.} \end{enumerate} For the listed models, we retrieve the authors' implementations or tools: \emph{stochvol}\footnote{\scriptsize\url{https://cran.r-project.org/web/packages/stochvol}}, \emph{GP-Vol}\footnote{\scriptsize\url{http://jmhl.org}} (the hyperparameters are chosen as suggested in \cite{DBLP:conf/nips/WuHG14}) and implement the models, such as GARCH, EGARCH, GJR-GARCH, etc., based on several widely-used packages\footnote{\scriptsize\url{https://pypi.python.org/pypi/arch/4.0}}\footnote{\scriptsize\url{https://www.kevinsheppard.com/MFE_Toolbox}}\footnote{\scriptsize\url{https://cran.r-project.org/web/packages/fGarch}} for time series analysis. All baselines are evaluated in terms of the negative log-likelihood on the test samples, where 1-step-ahead forecasting is carried out in a recursive fashion similar to Algorithm \ref{alg:recursiveforecast}. \subsection{Model Implementation} In our experiments, we predefine the dimensions of observable variables to be $\dim{x_t} = 6$ and the latent variables $\dim{z_t} = 4$. Note that the dimension of the latent variable is smaller than that of the observable, which allows us to extract a compact representation. The NSVM implementation in our experiments is composed of two neural networks, namely the generative network (see Eq. \eqref{eq:mlp_zg}-\eqref{eq:xg_t}) and inference network (see Eq. \eqref{eq:mlp_zi}-\eqref{eq:zi_t}). Each RNN module contains one hidden layer of size $10$ with GRU cells; MLP modules are 2-layered fully-connected feedforward networks, where the hidden layer is also of size $10$ whereas the output layer splits into two equal-sized sublayers with different activation functions: one applies exponential function to ensure the non-negativity for variance while the other uses linear function to calculate mean estimates. 
Thus $\MLP^z_I$'s output layer is of size $4+4$ for $\{\tilde{\mu}^z,\tilde{\Sigma}^z\}$, whereas the size of $\MLP^x_G$'s output layer is $6+6$ for $\{\mu^x,\Sigma^x\}$. During the training phase, the inference network is connected with the conditional generative network (see Eq. \eqref{eq:mlp_zg}-\eqref{eq:zg_t}) to establish a bottleneck structure; the latent variable $z_t$, inferred by variational inference \cite{DBLP:journals/corr/KingmaW13,DBLP:conf/icml/RezendeMW14}, follows a Gaussian approximate posterior, and the number of sample paths is set to $S=100$. The parameters of both networks are jointly learned, including those for the prior. We introduce Dropout \cite{DBLP:journals/jmlr/SrivastavaHKSS14} into each RNN module and impose an $L2$-norm penalty on the weights of the MLP modules as regularisation to prevent overfitting; the Adam optimiser \cite{DBLP:journals/corr/KingmaB14} is exploited for fast convergence; exponential learning rate decay is adopted to stabilise convergence as training proceeds. Two covariance configurations are adopted: 1. we stick with the diagonal covariance matrix configuration; 2. we start with a diagonal covariance and then apply rank-1 perturbation \cite{DBLP:conf/icml/RezendeMW14} during fine-tuning until training is finished. The recursive 1-step-ahead forecasting routine illustrated in Algorithm \ref{alg:recursiveforecast} is applied in the experiment for both the training and test phases: during the training phase, a single NSVM is trained, at each time step, on the entire set of training samples to learn a holistic dynamics, where the latent variable should reflect the evolution of the environment; in the test phase, on the other hand, the model is optionally retrained, every 20 time steps, on each particular input series of the test samples to keep track of the specific trend of that series. In other words, the trained NSVM predicts 20 consecutive steps before it is retrained using all time steps of the input series observed so far. Correspondingly, all baselines are trained and tested at every time step of each univariate series using standard calibration procedures. The negative log-likelihood on the test samples has been collected for performance assessment. We train the model on a single-GPU (Titan X Pascal) server for roughly two hours before it converges to a certain degree of accuracy on the training samples. Empirically, the training phase can also be processed on a CPU in reasonable time, as the complexity of the model as well as the size of its parameter set is moderate. \subsection{Result and Discussion} The performance of NSVM and the baselines is listed for comparison in Table \ref{tbl:performance}: the performance on the first 10 individual stocks (chosen in alphabetical order but anonymised here) and the average score on all 162 stocks are reported in terms of the negative log-likelihood (NLL) measure. The result shows that NSVM achieves higher accuracy than the baselines on the task of volatility modelling and forecasting in terms of NLL, which validates the high flexibility and rich expressive power of NSVM for volatility modelling and forecasting. In particular, NSVM with rank-1 perturbation (referred to as NSVM-corr in Table \ref{tbl:performance}) beats all other models in terms of NLL, while NSVM with a diagonal covariance matrix (i.e. NSVM-diag) outperforms GARCH(1,1) on 142 out of 162 stocks.
Although the improvement comes at the cost of a longer training time before convergence, this can be mitigated by applying parallel computing techniques as well as more advanced network architectures or training methods. Apart from the higher accuracy NSVM obtained, it provides us with a rather general framework to generalise univariate time series models of any specific functional form to the corresponding multivariate cases by extending network dimensions and manipulating the covariance matrices. A case study on real-world financial datasets is illustrated in Fig.~\ref{fig:casestudy}. NSVM shows higher sensitivity to drastic changes and better stability under moderate fluctuations: the response of NSVM in Fig.~\ref{fig:case1} is more stable in $t\in [1600, 2250]$, the period of moderate price fluctuation; while for the drastic price change at $t=2250$, the model responds with a sharper spike compared with the quadratic GARCH model. Furthermore, NSVM demonstrates its inherent non-linearity in both Fig.~\ref{fig:case1} and \ref{fig:case2}: at each time step within $t\in [1000, 2000]$, the model quickly adapts to the current fluctuation level whereas GARCH suffers from a relatively slow decay of previous influences. The cyan vertical line at $t=2000$ splits the training samples and test samples. We show only one instance from our dataset due to the page limit; the performance on other instances is similar. \section{Conclusion} In this paper, we proposed a new volatility model, referred to as NSVM, for volatility estimation and prediction. We integrated statistical models with deep neural networks, leveraged the characteristics of each model, organised the dependencies between random variables in the form of graphical models, implemented the mappings among variables and parameters through RNNs and MLPs, and finally established a powerful stochastic recurrent model with universal approximation capability. The proposed architecture comprises a pair of complementary stochastic neural networks: the generative network and the inference network. The former models the joint distribution of the stochastic volatility process with both observable and latent variables of interest; the latter provides the approximate posterior, i.e. an analytical approximation to the (intractable) conditional distribution of the latent variables given the observable ones. The parameters (and consequently the underlying distributions) are learned (and inferred) via variational inference, which maximises the lower bound on the marginal log-likelihood of the observable variables. NSVM has demonstrated higher accuracy on the task of volatility modelling and forecasting on real-world financial datasets, compared with various widely-used models, such as GARCH, EGARCH, GJR-GARCH, TARCH in the GARCH family, the MCMC volatility model \emph{stochvol} as well as the Gaussian process volatility model \emph{GP-Vol}. Future work on NSVM would be to investigate the modelling of time series with non-Gaussian residual distributions, in particular heavy-tailed distributions, e.g. the LogNormal $\log\mathscr{N}$ and Student's $t$-distribution. \clearpage {\small \bibliographystyle{aaai} }% \clearpage \appendix \section{Learning Parameters / Calibration} Given the observations $X=\{x_{1:T}\}$, we aim to maximise the marginal log-likelihood $\log p_{\ps{\Phi}}(X)$ w.r.t. $\ps{\Phi}$, where the actual posterior $p_{\ps{\Phi}}(Z|X)$ is involved.
Because of the intractability of $p_{\ps{\Phi}}(Z|X)$, exact inference is not applicable; we have to seek approximate solutions instead. We factorise the marginal log-likelihood $\log{p_{\ps{\Phi}}(X)}$ as \begin{align} \log{p_{\ps{\Phi}}(X)} &= \mathbb{E}_{q_{\ps{\Psi}}(Z|X)} \bigg[ \log{\frac{p_{\ps{\Phi}}(X,Z)}{p_{\ps{\Phi}}(Z|X)}} \bigg]\notag \\ &= \mathbb{E}_{q_{\ps{\Psi}}(Z|X)} \bigg[ \log{\bigg( \frac{p_{\ps{\Phi}}(X,Z)}{q_{\ps{\Psi}}(Z|X)} \cdot \frac{q_{\ps{\Psi}}(Z|X)}{p_{\ps{\Phi}}(Z|X)} \bigg)} \bigg]\notag \\ &= \mathcal{L}[q; X, \ps{\Phi}, \ps{\Psi}] + KL[ q_{\ps{\Psi}}(Z | X) \| p_{\ps{\Phi}}(Z | X) ]\notag \\ &\ge \mathcal{L}[q; X, \ps{\Phi}, \ps{\Psi}],\notag \\ \label{eq:elbo} \mbox{where}~~\mathcal{L}[&q; X, \ps{\Phi}, \ps{\Psi}] = \mathbb{E}_{q_{\ps{\Psi}}(Z | X)} \bigg[ \log{\frac{ p_{\ps{\Phi}}(X, Z)}{q_{\ps{\Psi}}(Z | X)}} \bigg]. \end{align} Note that we have introduced a tractable, $\ps{\Psi}$-parameterised distribution $q_{\ps{\Psi}}(Z|X)$ from a flexible family of distributions to approximate the actual posterior $p_{\ps{\Phi}}(Z|X)$. The evidence lower bound (ELBO) $\mathcal{L}[q; X, \ps{\Phi}, \ps{\Psi}]$ in Eq. \eqref{eq:elbo} is essentially a functional w.r.t. $q$, conditioned on the observations $X$ and parameterised by the parameter sets $\ps{\Phi}, \ps{\Psi}$ of the generative and inference models. The ELBO is a lower bound on the marginal log-likelihood and can be maximised via gradient-based optimisers. It is usually the case that Eq. \eqref{eq:elbo} lacks a closed-form expression; we therefore estimate the ELBO using Monte Carlo integration over the latent variable $z_t$. Given $S$ sample paths drawn by the inference model defined in Eq. \eqref{eq:qz|x}, the estimator of the ELBO can be calculated as the path average: {\small \vskip -1.0em \begin{align*} \widehat{\mathcal{L}}\big[q; \{z^{\langle 1:S \rangle}_{1:T}\}, X, \ps{\Phi}, \ps{\Psi}\big] = \frac{1}{S} \sum_{s,t} \Bigg[ \log\frac{p_{\ps{\Phi}}(x_t | x_{<t}, z^{\langle s \rangle}_{\le t}) p_{\ps{\Phi}}(z^{\langle s \rangle}_t|z^{\langle s \rangle}_{<t})}{q_{\ps{\Psi}}(z^{\langle s \rangle}_t | z^{\langle s \rangle}_{<t}, x_{1:T})} \Bigg], \end{align*} \vskip -.5em\noindent }% where $\{z^{\langle 1:S \rangle}_{1:T}\}$ denotes the collection of $S$ sample paths. Assuming the latent variable $z_t$ is Gaussian, we can readily apply the reparameterisation technique \cite{DBLP:journals/corr/KingmaW13} to $z_t$ to form an unbiased gradient estimator: { \begin{align} \label{eq:deriv_est} \nabla_{\ps{\Phi}, \ps{\Psi}} \widehat{\mathcal{L}}\big[\{z^{\langle s \rangle}_t = \tilde{\mu}^{z}_t + \tilde{A}^{z}_t \epsilon^{\langle s \rangle}_t\}^{\langle 1:S \rangle}_{1:T}\big], \end{align} }% where $\epsilon^{\langle s \rangle}_t \sim \mathscr{N}(0, I_z)$ is a standard Gaussian variable. The reparameterisation extracts the randomness out of the latent variable $z^{\langle s \rangle}_t$ via $\epsilon^{\langle s \rangle}_t$, leaving $\tilde{\mu}^{z}_t$ and $\tilde{A}^{z}_t$, i.e. the mean and standard deviation of $z^{\langle s \rangle}_t$, as deterministic functions. This guarantees that gradient-based optimisation techniques remain applicable, by isolating the model parameters (through $\tilde{\mu}^{z}_t$ and $\tilde{A}^{z}_t$) from the sampling procedure (involving $\epsilon^{\langle s \rangle}_t$).
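The following minimal sketch (Python/NumPy; an illustrative fragment assuming generic arrays of per-step log-densities, not our released code) shows how the path-averaged ELBO estimator and the reparameterisation $z = \tilde{\mu}^z + \tilde{A}^z \epsilon$ fit together:
\begin{verbatim}
import numpy as np

def reparameterise(mu, A, rng):
    # Draw z = mu + A @ eps with eps ~ N(0, I); the randomness lives in eps,
    # so mu and A remain deterministic functions of the network parameters.
    eps = rng.standard_normal(mu.shape)
    return mu + A @ eps, eps

def elbo_estimate(log_p_x, log_p_z, log_q_z):
    # Path-averaged Monte Carlo estimator of the ELBO: the arguments are
    # (S, T) arrays holding, for each sample path s and time step t,
    # log p(x_t | x_<t, z_<=t), log p(z_t | z_<t) and log q(z_t | z_<t, x_1:T).
    S = log_p_x.shape[0]
    return float(np.sum(log_p_x + log_p_z - log_q_z) / S)
\end{verbatim}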
\section{Illustrations of stochastic volatility modelling, training and forecasting} Recall Eq. \eqref{eq:z} and \eqref{eq:x}, the generalised formulation for stochastic volatility modelling reads \vskip -1.35em \begin{align*} z_t | z_{<t} &\sim \mathscr{N}(\mu^z(z_{<t}), \Sigma^z(z_{<t})),\\ x_t | x_{<t}, z_{\le t} &\sim \mathscr{N}(\mu^x(x_{<t}, z_{\le t}), \Sigma^x(x_{<t}, z_{\le t})). \end{align*} \vskip -0.35em By introducing hidden states $h^z_t$ and $h^x_t$ as memory for historical information integration, the formulation is essentially equivalent to the recurrent model illustrated in Fig. \ref{fig:gennet}. We decompose the recurrent model into two components, in a similar way to factorising $p_{\ps{\Phi}}(X, Z)$ into the marginal distribution $p_{\ps{\Phi}}(Z)$ in Eq. \eqref{eq:pz} and the conditional distribution $p_{\ps{\Phi}}(X|Z)$ in Eq. \eqref{eq:px|z}. The marginal $p_{\ps{\Phi}}(Z)$ is implemented by \begin{align*} \{\mu^z_t, \Sigma^z_t\} &= \MLP^z_G(h^z_t; \ps{\Phi}),\\ h^z_t &= \RNN^z_G(h^z_{t-1}, z_{t-1}; \ps{\Phi}),\\ z_t &\sim \mathscr{N}(\mu^z_t, \Sigma^z_t), \end{align*} which represents an autoregressive network for the latent $z_t$ as illustrated in Fig. \ref{fig:gennet-component1}. The conditional $p_{\ps{\Phi}}(X|Z)$ is built as \begin{align*} \{\mu^x_t, \Sigma^x_t\} &= \MLP^x_G(h^x_t; \ps{\Phi}),\\ h^x_t &= \RNN^x_G(h^x_{t-1}, x_{t-1}, z_t; \ps{\Phi}),\\ x_t &\sim \mathscr{N}(\mu^x_t, \Sigma^x_t), \end{align*} which corresponds to a conditional generative network for the observable $x_t$ as in Fig. \ref{fig:gennet-component2}. On the other hand, the inference network is implemented in a similar recurrent fashion, as an autoregressive network with bidirectional dependencies: \begin{align*} \{\tilde{\mu}^z_t, \tilde{\Sigma}^z_t\} &= \MLP^z_I(\tilde{h}^z_{t}; \ps{\Psi}),\\ \tilde{h}^z_{t} &= \RNN^z_I(\tilde{h}^z_{t-1}, z_{t-1}, [\tilde{h}^{\rightarrow}_t, \tilde{h}^{\leftarrow}_t]; \ps{\Psi}),\\ \tilde{h}^{\rightarrow}_t &= \RNN^{\rightarrow}_I(\tilde{h}^{\rightarrow}_{t-1}, x_t; \ps{\Psi}),\\ \tilde{h}^{\leftarrow}_t &= \RNN^{\leftarrow}_I(\tilde{h}^{\leftarrow}_{t+1}, x_t; \ps{\Psi}),\\ z_t &\sim \mathscr{N}(\tilde{\mu}^z_t, \tilde{\Sigma}^z_t). \end{align*} The architecture of the inference network is illustrated in Fig. \ref{fig:infnet}. The training procedure involves the inference network (in Fig. \ref{fig:infnet}) and the conditional generative network (in Fig. \ref{fig:gennet-component2}); the autoregressive network (in Fig. \ref{fig:gennet-component1}) is not utilised. The historical observations $\{x_{<t}\}$ are fed into the inference network, which outputs the inferred sequence $\{z_{<t}\}$ of the underlying latent variables. The latent sequence $\{z_{<t}\}$ is then put into the conditional generative network (in Fig. \ref{fig:gennet-component2}) to generate the predictions $\{\hat{x}_{<t}\}$. The likelihood of the predictions $\{\hat{x}_{<t}\}$ with respect to the actual observations $\{x_{<t}\}$ is calculated, and the networks are optimised via gradient-based methods. Figure \ref{fig:training} illustrates the setting of the networks during training. The procedure of forecasting contains three steps: 1. feeding the historical data as the observations $\{x_{<t}\}$ into the inference network (in Fig. \ref{fig:infnet}) to infer the latent variables $\{z_{<t}\}$ that might have caused those observations, 2. evolving the latent dynamics using the autoregressive network (in Fig. \ref{fig:gennet-component1}) with the inferred sequence $\{z_{<t}\}$ to produce the latent variable $z_t$ for the next time step, and 3.
invoking the conditional generative network (in Fig. \ref{fig:gennet-component2}) with $\{z_{1:t}\}$ and $\{x_{<t}\}$ to predict the next possible $x_t$. The procedure is shown in Fig. \ref{fig:forecasting}. \begin{algorithm} \centering \caption{Computation scheme for rank-$K$ perturbation}\label{alg:perturbation} \begin{algorithmic}[1] \Require{full-rank diagonal $D$; rank-$K$ matrix $V = [v_k]_1^K$} \Ensure{factor matrix $A$ s.t. $AA^\top = (D + VV^\top)^{-1} = \Sigma$} \State $A \gets D^{-\frac{1}{2}}$ \For{$k\gets 1, K$} \State $\gamma \gets v_k^\top AA^\top v_k$ \State $\eta \gets (1+\gamma)^{-1}$ \State $A \gets A - [(1-\sqrt{\eta})/\gamma] AA^\top v_k v_k^\top A$ \EndFor \end{algorithmic} \end{algorithm} \section{Covariance Parameterisation} So far we have kept the covariance matrix $\Sigma$ in our formulas to allow for multivariate forecasting. Maintaining and updating the full-size covariance matrix $\Sigma$ of $M$ dimensions entails a computational complexity of order $\mathcal{O}(M^3)$; for cases in high dimensions, the computational cost for the full-size $\Sigma$ becomes unaffordable. Thus, we have to seek alternatives that are more economical in terms of computation. A practical approach is to leverage low-rank perturbations of diagonal matrices $D$ such that $\Sigma^{-1} = D + VV^\top$, where $V = [v_{1:K}]$ is the rank-$K$ perturbation with each $v_k$ being an independent $M$-dimensional column vector. The corresponding covariance matrix and its determinant can be readily calculated using the \emph{Woodbury identity} and the \emph{matrix determinant lemma}: \begin{align} \Sigma = D^{-1} &- D^{-1}V(I+V^\top D^{-1}V)^{-1}V^\top D^{-1}, \\ \ln\det{\Sigma} &= -\ln\det{(D + VV^\top)}\notag \\ &= -\ln\det{D} - \ln\det{(I+V^\top D^{-1}V)}. \end{align} To solve for the standard-deviation factor $A$ in the factorisation $\Sigma = AA^\top$, we start with the rank-$1$ perturbation. Given $K=1$, the matrix $V=[v]$ is simply a column vector, hence $I+V^\top D^{-1}V = 1+v^\top D^{-1}v$ is a real number. A solution for $A$ reads \begin{align*} A = D^{-\frac{1}{2}} - [(1-\sqrt{\eta})/\gamma]D^{-1}vv^\top D^{-\frac{1}{2}}, \end{align*} where $\gamma = v^\top D^{-1}v$ and $\eta = (1+\gamma)^{-1}$. The complexity of computation is just of order $\mathcal{O}(M)$. Observe that $VV^\top = \sum_{k=1}^K v_k v_k^\top$; hence the perturbation of rank $K$ is merely the superposition of $K$ rank-$1$ perturbations. Thus, we can calculate $A$ in a recurrent fashion, where the complexity of computation for the rank-$K$ perturbation remains $\mathcal{O}(M)$ when $K\ll M$ holds. Algorithm~\ref{alg:perturbation} describes the detailed scheme of computation. \end{document}
Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data
1610.05755
Figure 4: Utility and privacy of the semi-supervised students: each row is a variant of the student model trained with generative adversarial networks in a semi-supervised way, with a different number of label queries made to the teachers through the noisy aggregation mechanism. The last column reports the accuracy of the student and the second and third column the bound ε and failure probability δ of the (ε,δ) differential privacy guarantee.
[ "[BOLD] Dataset", "[ITALIC] ε", "[ITALIC] δ", "[BOLD] Queries", "[BOLD] Non-Private Baseline", "[BOLD] Student Accuracy" ]
[ [ "MNIST", "2.04", "10−5", "100", "99.18%", "98.00%" ], [ "MNIST", "8.03", "10−5", "1000", "99.18%", "98.10%" ], [ "SVHN", "5.04", "10−6", "500", "92.80%", "82.72%" ], [ "SVHN", "8.19", "10−6", "1000", "92.80%", "90.66%" ] ]
The MNIST student is able to learn a 98% accurate model, which is shy of 1% when compared to the accuracy of a model learned with the entire training set, with only 100 label queries. This results in a strict differentially private bound of ε=2.04 for a failure probability fixed at 10−5. The SVHN student achieves 90.66% accuracy, which is also comparable to the 92.80% accuracy of one teacher learned with the entire training set. The corresponding privacy bound is ε=8.19, which is higher than for the MNIST dataset, likely because of the larger number of queries made to the aggregation mechanism.
\section{Appendix: Training the student with minimal teacher queries} \label{ap:student-learning} In this appendix, we describe approaches that were considered to reduce the number of queries made to the teacher ensemble by the student during its training. As pointed out in Sections~\ref{sec:dp-analysis} and~\ref{sec:evaluation}, this effort is motivated by the direct impact of querying on the total privacy cost associated with student training. The first approach is based on \emph{distillation}, a technique used for knowledge transfer and model compression~\citep{hinton2015distilling}. The three other techniques considered were proposed in the context of \emph{active learning}, with the intent of identifying training examples most useful for learning. In Sections~\ref{sec:approach} and~\ref{sec:evaluation}, we described semi-supervised learning, which yielded the best results. The student models in this appendix differ from those in Sections~\ref{sec:approach} and~\ref{sec:evaluation}, which were trained using GANs. In contrast, all students in this appendix were learned in a fully supervised fashion from a subset of public, labeled examples. Thus, the learning goal was to identify the subset of labels yielding the best learning performance. \subsection{Training Students using Distillation} Distillation is a knowledge transfer technique introduced as a means of compressing large models into smaller ones, while retaining their accuracy~\citep{bucilua2006model,hinton2015distilling}. This is for instance useful to train models in data centers before deploying compressed variants in phones. The transfer is accomplished by training the smaller model on data that is labeled with probability vectors produced by the first model, which encode the knowledge extracted from training data. Distillation is parameterized by a \emph{temperature} parameter $T$, which controls the smoothness of probabilities output by the larger model: when produced at small temperatures, the vectors are close to one-hot, whereas at high temperatures, all classes are assigned non-negligible values. Distillation is a natural candidate to compress the knowledge acquired by the ensemble of teachers, acting as the large model, into a student, which is much smaller, with $n$ times fewer trainable parameters than the $n$ teachers. To evaluate the applicability of distillation, we consider the ensemble of $n=50$ teachers for SVHN. In this experiment, we do not add noise to the vote counts when aggregating the teacher predictions. We compare the accuracy of three student models: the first is a baseline trained with labels obtained by plurality, while the second and third are trained with distillation at $T\in\{1,5\}$. We use the first $10\mbox{,}000$ samples from the test set as unlabeled data. Figure~\ref{fig:svhn-student-distillation} reports the accuracy of the student model on the last $16\mbox{,}032$ samples from the test set, which were not accessible to the model during training. It is plotted with respect to the number of samples used to train the student (and hence the number of queries made to the teacher ensemble). Although applying distillation yields classifiers that perform more accurately, the increase in accuracy is too limited to justify the increased privacy cost of revealing the entire probability vector output by the ensemble instead of simply the class assigned the largest number of votes. Thus, we turn to an investigation of active learning.
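Before doing so, we note for completeness how temperature-softened targets of the kind discussed above can be formed. The following minimal sketch (Python/NumPy) averages the per-teacher probability vectors and re-softens them at temperature $T$; averaging is one possible aggregation choice and is not prescribed by our experiments:
\begin{verbatim}
import numpy as np

def distillation_targets(teacher_probs, T=5.0):
    # teacher_probs: (n_teachers, n_classes) probability vectors.
    # Average the ensemble's outputs, then re-soften them at temperature T:
    # small T approaches a one-hot label, larger T spreads mass over classes.
    logits = np.log(teacher_probs.mean(axis=0) + 1e-12)
    scaled = logits / T
    scaled -= scaled.max()          # numerical stability
    exp = np.exp(scaled)
    return exp / exp.sum()
\end{verbatim}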
\subsection{Active Learning of the Student} Active learning is a class of techniques that aims to identify and prioritize points in the student's training set that have a high potential to contribute to learning~\citep{angluin1988queries,baum1991neural}. If the label of an input in the student's training set can be predicted confidently from what we have learned so far by querying the teachers, it is intuitive that querying it is not worth the privacy budget spent. In our experiments, we made several attempts before converging to a simpler final formulation. \boldpara{Siamese networks} Our first attempt was to train a pair of siamese networks, introduced by~\citet{bromley1993signature} in the context of one-shot learning and later improved by~\citet{kochsiamese}. The siamese networks take two images as input and return $1$ if the images are equal and $0$ otherwise. They are two identical networks trained with shared parameters to force them to produce similar representations of the inputs, which are then compared using a distance metric to determine if the images are identical or not. Once the siamese models are trained, we feed them a pair of images where the first is unlabeled and the second labeled. If the unlabeled image is confidently matched with a known labeled image, we can infer the class of the unknown image from the labeled image. In our experiments, the siamese networks were able to say whether two images are identical or not, but did not generalize well: two images of the same class did not receive sufficiently confident matches. We also tried a variant of this approach where we trained the siamese networks to output $1$ when the two images are of the same class and $0$ otherwise, but the learning task proved too complicated to be an effective means for reducing the number of queries made to teachers. \boldpara{Collection of binary experts} Our second attempt was to train a collection of binary experts, one per class. An expert for class $j$ is trained to output $1$ if the sample is in class $j$ and $0$ otherwise. We first trained the binary experts by making an initial batch of queries to the teachers. Using the experts, we then selected available unlabeled student training points that had a candidate label score below $0.9$ and at least $4$ other experts assigning a score above $0.1$. This gave us about $500$ unconfident points for $1700$ initial label queries. After labeling these unconfident points using the ensemble of teachers, we trained the student. Using binary experts improved the student's accuracy when compared to the student trained on arbitrary data with the same number of teacher queries. The absolute increases in accuracy were however too limited---between $1.5\%$ and $2.5\%$. \boldpara{Identifying unconfident points using the student} This last attempt was the simplest yet the most effective. Instead of using binary experts to identify student training points that should be labeled by the teachers, we used the student itself. We asked the student to make predictions on each unlabeled training point available. We then sorted these samples by increasing values of the maximum probability assigned to a class for each sample. We queried the teachers to label these unconfident inputs first and trained the student again on this larger labeled training set. This improved the accuracy of the student when compared to the student trained on arbitrary data. 
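A minimal sketch of this confidence-based selection (Python/NumPy; \texttt{student\_probs} is assumed to hold the student's predicted class probabilities on the unlabeled pool):
\begin{verbatim}
import numpy as np

def least_confident_indices(student_probs, n_queries):
    # Rank unlabeled samples by the maximum probability the student assigns
    # to any class (ascending) and return the n_queries least confident ones,
    # i.e. the next candidates for labeling by the teacher ensemble.
    confidence = student_probs.max(axis=1)
    return np.argsort(confidence)[:n_queries]
\end{verbatim}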
For the same number of teacher queries, the absolute increases in accuracy of the student trained on unconfident inputs first when compared to the student trained on arbitrary data were in the order of $4\%-10\%$. \documentclass{article} % For LaTeX2e \usepackage[utf8]{inputenc} \usepackage[mmddyy,hhmmss]{datetime} \newtheorem{thm}{Theorem} \newtheorem{lem}[thm]{Lemma} \newtheorem{definition}{Definition} \newcommand{\pate}{PATE-G} \newcommand{\boldpara}[1] {\vspace*{0.075in}\noindent\textbf{#1:}} \title{Semi-supervised Knowledge Transfer \\ for Deep Learning from Private Training Data} \author{Nicolas Papernot\thanks{Work done while the author was at Google.} \\ Pennsylvania State University\\ \texttt{ngp5056@cse.psu.edu} \And Martín Abadi \\ Google Brain \\ \texttt{abadi@google.com} \And Úlfar Erlingsson \\ Google \\ \texttt{ulfar@google.com} \And Ian Goodfellow \\ Google Brain\thanks{Work done both at Google Brain and at OpenAI.}\\ \texttt{goodfellow@google.com} \And Kunal Talwar \\ Google Brain\\ \texttt{kunal@google.com} \And \hspace*{1.15in} \\ ~ \\ ~ } \newcommand{\fix}{\marginpar{FIX}} \newcommand{\new}{\marginpar{NEW}} \iclrfinalcopy % Uncomment for camera-ready version \begin{document} \maketitle \begin{abstract} \input{abstract} \end{abstract} \input{introduction} \input{approach} \input{analysis-merged} \input{evaluation} \input{related-work} \input{conclusions} \subsubsection*{Acknowledgments} Nicolas Papernot is supported by a Google PhD Fellowship in Security. The authors would like to thank Ilya Mironov and Li Zhang for insightful discussions about early drafts of this document. \begin{thebibliography}{40} \providecommand{\natexlab}[1]{#1} \providecommand{\url}[1]{\texttt{#1}} \expandafter\ifx\csname urlstyle\endcsname\relax \providecommand{\doi}[1]{doi: #1}\else \providecommand{\doi}{doi: \begingroup \urlstyle{rm}\Url}\fi \bibitem[Abadi et~al.(2016)Abadi, Chu, Goodfellow, McMahan, Mironov, Talwar, and Zhang]{abadi2016deep} Martin Abadi, Andy Chu, Ian Goodfellow, H.~Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li~Zhang. \newblock Deep learning with differential privacy. \newblock In \emph{Proceedings of the 2016 ACM SIGSAC {Conference on Computer and Communications Security}}. ACM, 2016. \bibitem[Aggarwal(2005)]{aggarwal2005k} Charu~C Aggarwal. \newblock On k-anonymity and the curse of dimensionality. \newblock In \emph{Proceedings of the 31st {International Conference on Very Large Data Bases}}, pp.\ 901--909. VLDB Endowment, 2005. \bibitem[Alipanahi et~al.(2015)Alipanahi, Delong, Weirauch, and Frey]{alipanahi2015predicting} Babak Alipanahi, Andrew Delong, Matthew~T Weirauch, and Brendan~J Frey. \newblock Predicting the sequence specificities of {DNA}-and {RNA}-binding proteins by deep learning. \newblock \emph{Nature biotechnology}, 2015. \bibitem[Angluin(1988)]{angluin1988queries} Dana Angluin. \newblock Queries and concept learning. \newblock \emph{Machine learning}, 2\penalty0 (4):\penalty0 319--342, 1988. \bibitem[Bassily et~al.(2014)Bassily, Smith, and Thakurta]{bassily2014differentially} Raef Bassily, Adam Smith, and Abhradeep Thakurta. \newblock Differentially private empirical risk minimization: efficient algorithms and tight error bounds. \newblock \emph{arXiv preprint arXiv:1405.7085}, 2014. \bibitem[Baum(1991)]{baum1991neural} Eric~B Baum. \newblock Neural net algorithms that learn in polynomial time from examples and queries. \newblock \emph{IEEE Transactions on Neural Networks}, 2\penalty0 (1):\penalty0 5--19, 1991. 
\bibitem[Breiman(1994)]{ML:Breiman:bagging} Leo Breiman. \newblock Bagging predictors. \newblock \emph{Machine Learning}, 24\penalty0 (2):\penalty0 123--140, 1994. \bibitem[Bromley et~al.(1993)Bromley, Bentz, Bottou, Guyon, LeCun, Moore, S{\"a}ckinger, and Shah]{bromley1993signature} Jane Bromley, James~W Bentz, L{\'e}on Bottou, Isabelle Guyon, Yann LeCun, Cliff Moore, Eduard S{\"a}ckinger, and Roopak Shah. \newblock Signature verification using a “{Siamese}” time delay neural network. \newblock \emph{International Journal of Pattern Recognition and Artificial Intelligence}, 7\penalty0 (04):\penalty0 669--688, 1993. \bibitem[Bucilua et~al.(2006)Bucilua, Caruana, and Niculescu-Mizil]{bucilua2006model} Cristian Bucilua, Rich Caruana, and Alexandru Niculescu-Mizil. \newblock Model compression. \newblock In \emph{Proceedings of the 12th ACM {International Conference on Knowledge Discovery and Data mining}}, pp.\ 535--541. ACM, 2006. \bibitem[Bun \& Steinke(2016)Bun and Steinke]{BunS16} Mark Bun and Thomas Steinke. \newblock Concentrated differential privacy: simplifications, extensions, and lower bounds. \newblock In \emph{Proceedings of TCC}, 2016. \bibitem[Chaudhuri \& Monteleoni(2009)Chaudhuri and Monteleoni]{chaudhuri2009privacy} Kamalika Chaudhuri and Claire Monteleoni. \newblock Privacy-preserving logistic regression. \newblock In \emph{Advances in Neural Information Processing Systems}, pp.\ 289--296, 2009. \bibitem[Chaudhuri et~al.(2011)Chaudhuri, Monteleoni, and Sarwate]{chaudhuri2011differentially} Kamalika Chaudhuri, Claire Monteleoni, and Anand~D Sarwate. \newblock Differentially private empirical risk minimization. \newblock \emph{Journal of Machine Learning Research}, 12\penalty0 (Mar):\penalty0 1069--1109, 2011. \bibitem[Dietterich(2000)]{dietterich2000ensemble} Thomas~G Dietterich. \newblock Ensemble methods in machine learning. \newblock In \emph{International workshop on multiple classifier systems}, pp.\ 1--15. Springer, 2000. \bibitem[Dwork(2011)]{dwork2011firm} Cynthia Dwork. \newblock A firm foundation for private data analysis. \newblock \emph{Communications of the ACM}, 54\penalty0 (1):\penalty0 86--95, 2011. \bibitem[Dwork \& Roth(2014)Dwork and Roth]{dwork2014algorithmic} Cynthia Dwork and Aaron Roth. \newblock The algorithmic foundations of differential privacy. \newblock \emph{Foundations and Trends in Theoretical Computer Science}, 9\penalty0 (3-4):\penalty0 211--407, 2014. \bibitem[Dwork \& Rothblum(2016)Dwork and Rothblum]{DworkR16} Cynthia Dwork and Guy~N Rothblum. \newblock Concentrated differential privacy. \newblock \emph{arXiv preprint arXiv:1603.01887}, 2016. \bibitem[Dwork et~al.(2006{\natexlab{a}})Dwork, Kenthapadi, McSherry, Mironov, and Naor]{ODO} Cynthia Dwork, Krishnaram Kenthapadi, Frank McSherry, Ilya Mironov, and Moni Naor. \newblock Our data, ourselves: privacy via distributed noise generation. \newblock In \emph{Advances in Cryptology-EUROCRYPT 2006}, pp.\ 486--503. Springer, 2006{\natexlab{a}}. \bibitem[Dwork et~al.(2006{\natexlab{b}})Dwork, McSherry, Nissim, and Smith]{dwork2006calibrating} Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam Smith. \newblock Calibrating noise to sensitivity in private data analysis. \newblock In \emph{Theory of Cryptography}, pp.\ 265--284. Springer, 2006{\natexlab{b}}. \bibitem[Dwork et~al.(2010)Dwork, Rothblum, and Vadhan]{DRV10} Cynthia Dwork, Guy~N Rothblum, and Salil Vadhan. \newblock Boosting and differential privacy. 
\newblock In \emph{Proceedings of the 51st IEEE Symposium on Foundations of Computer Science}, pp.\ 51--60. IEEE, 2010. \bibitem[Erlingsson et~al.(2014)Erlingsson, Pihur, and Korolova]{erlingsson2014rappor} {\'U}lfar Erlingsson, Vasyl Pihur, and Aleksandra Korolova. \newblock {RAPPOR}: Randomized aggregatable privacy-preserving ordinal response. \newblock In \emph{Proceedings of the 2014 ACM SIGSAC {Conference on Computer and Communications Security}}, pp.\ 1054--1067. ACM, 2014. \bibitem[Fredrikson et~al.(2015)Fredrikson, Jha, and Ristenpart]{fredrikson2015model} Matt Fredrikson, Somesh Jha, and Thomas Ristenpart. \newblock Model inversion attacks that exploit confidence information and basic countermeasures. \newblock In \emph{Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security}, pp.\ 1322--1333. ACM, 2015. \bibitem[Goodfellow et~al.(2014)Goodfellow, Pouget-Abadie, Mirza, Xu, Warde-Farley, Ozair, Courville, and Bengio]{goodfellow2014generative} Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. \newblock Generative adversarial nets. \newblock In \emph{Advances in Neural Information Processing Systems}, pp.\ 2672--2680, 2014. \bibitem[Hamm et~al.(2016)Hamm, Cao, and Belkin]{hamm2016learning} Jihun Hamm, Paul Cao, and Mikhail Belkin. \newblock Learning privately from multiparty data. \newblock \emph{arXiv preprint arXiv:1602.03552}, 2016. \bibitem[Hinton et~al.(2015)Hinton, Vinyals, and Dean]{hinton2015distilling} Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. \newblock Distilling the knowledge in a neural network. \newblock \emph{arXiv preprint arXiv:1503.02531}, 2015. \bibitem[Jagannathan et~al.(2013)Jagannathan, Monteleoni, and Pillaipakkamnatt]{jagannathan2013semi} Geetha Jagannathan, Claire Monteleoni, and Krishnan Pillaipakkamnatt. \newblock A semi-supervised learning approach to differential privacy. \newblock In \emph{2013 IEEE 13th International Conference on Data Mining Workshops}, pp.\ 841--848. IEEE, 2013. \bibitem[Kannan et~al.(2016)Kannan, Kurach, Ravi, Kaufmann, Tomkins, Miklos, Corrado, et~al.]{kannan2016smart} Anjuli Kannan, Karol Kurach, Sujith Ravi, Tobias Kaufmann, Andrew Tomkins, Balint Miklos, Greg Corrado, et~al. \newblock Smart reply: Automated response suggestion for email. \newblock In \emph{Proceedings of the ACM SIGKDD Conference on Knowledge Discovery and Data mining}, volume~36, pp.\ 495--503, 2016. \bibitem[Koch(2015)]{kochsiamese} Gregory Koch. \newblock \emph{Siamese neural networks for one-shot image recognition}. \newblock PhD thesis, University of Toronto, 2015. \bibitem[Kononenko(2001)]{kononenko2001machine} Igor Kononenko. \newblock Machine learning for medical diagnosis: history, state of the art and perspective. \newblock \emph{Artificial Intelligence in medicine}, 23\penalty0 (1):\penalty0 89--109, 2001. \bibitem[{L. Sweeney}(2002)]{sweeney2002k} {L. Sweeney}. \newblock k-anonymity: A model for protecting privacy. \newblock volume~10, pp.\ 557--570. World Scientific, 2002. \bibitem[Mironov(2016)]{Mironov16} Ilya Mironov. \newblock Renyi differential privacy. \newblock manuscript, 2016. \bibitem[Nissim et~al.(2007)Nissim, Raskhodnikova, and Smith]{nissim2007smooth} Kobbi Nissim, Sofya Raskhodnikova, and Adam Smith. \newblock Smooth sensitivity and sampling in private data analysis. \newblock In \emph{Proceedings of the 39th annual ACM Symposium on Theory of Computing}, pp.\ 75--84. ACM, 2007. 
\bibitem[Pathak et~al.(2010)Pathak, Rane, and Raj]{pathak2010multiparty} Manas Pathak, Shantanu Rane, and Bhiksha Raj. \newblock Multiparty differential privacy via aggregation of locally trained classifiers. \newblock In \emph{Advances in Neural Information Processing Systems}, pp.\ 1876--1884, 2010. \bibitem[Pathak et~al.(2011)Pathak, Rane, Sun, and Raj]{pathak2011privacy} Manas Pathak, Shantanu Rane, Wei Sun, and Bhiksha Raj. \newblock Privacy preserving probabilistic inference with hidden markov models. \newblock In \emph{2011 IEEE International Conference on Acoustics, Speech and Signal Processing}, pp.\ 5868--5871. IEEE, 2011. \bibitem[Poulos \& Valle(2016)Poulos and Valle]{poulos2016missing} Jason Poulos and Rafael Valle. \newblock Missing data imputation for supervised learning. \newblock \emph{arXiv preprint arXiv:1610.09075}, 2016. \bibitem[Salimans et~al.(2016)Salimans, Goodfellow, Zaremba, Cheung, Radford, and Chen]{salimans2016improved} Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi~Chen. \newblock Improved techniques for training {GAN}s. \newblock \emph{arXiv preprint arXiv:1606.03498}, 2016. \bibitem[Shokri \& Shmatikov(2015)Shokri and Shmatikov]{shokri2015privacy} Reza Shokri and Vitaly Shmatikov. \newblock Privacy-preserving deep learning. \newblock In \emph{Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security}. ACM, 2015. \bibitem[Song et~al.(2013)Song, Chaudhuri, and Sarwate]{song2013stochastic} Shuang Song, Kamalika Chaudhuri, and Anand~D Sarwate. \newblock Stochastic gradient descent with differentially private updates. \newblock In \emph{Global Conference on Signal and Information Processing}, pp.\ 245--248. IEEE, 2013. \bibitem[Sweeney(1997)]{sweeney1997weaving} Latanya Sweeney. \newblock Weaving technology and policy together to maintain confidentiality. \newblock \emph{The Journal of Law, Medicine \& Ethics}, 25\penalty0 (2-3):\penalty0 98--110, 1997. \bibitem[Wainwright et~al.(2012)Wainwright, Jordan, and Duchi]{wainwright2012privacy} Martin~J Wainwright, Michael~I Jordan, and John~C Duchi. \newblock Privacy aware learning. \newblock In \emph{Advances in Neural Information Processing Systems}, pp.\ 1430--1438, 2012. \bibitem[Warner(1965)]{warner1965randomized} Stanley~L Warner. \newblock Randomized response: A survey technique for eliminating evasive answer bias. \newblock \emph{Journal of the American Statistical Association}, 60\penalty0 (309):\penalty0 63--69, 1965. \end{thebibliography} \newpage \appendix \newpage \input{ap-privacy-analysis-short} \newpage \input{ap-learning-student} \newpage \input{ap-uci} \end{document} \section{Privacy analysis of the approach} \label{sec:dp-analysis} \newcommand{\ifnoisymax}[2]{#1} \newcommand{\eps}{\varepsilon} \newcommand{\M}{\mathcal{M}} \newcommand{\Domain}{\mathcal{D}} \newcommand{\Range}{\mathcal{R}} \newcommand{\outcome}{o} \newcommand{\eqdef}{\stackrel{\Delta}=} \newcommand{\E}{\mathbb{E}} \newcommand{\comment}[1]{} We now analyze the differential privacy guarantees of our PATE approach. Namely, we keep track of the privacy budget throughout the student's training using the moments accountant~\citep{abadi2016deep}. When teachers reach a strong quorum, this allows us to bound privacy costs more strictly. \subsection{Differential Privacy Preliminaries and a Simple Analysis of \mbox{PATE}} Differential privacy~\citep{dwork2006calibrating,dwork2011firm} has established itself as a strong standard. 
It provides privacy guarantees for algorithms analyzing databases, which in our case is a machine learning training algorithm processing a training dataset. Differential privacy is defined using pairs of adjacent databases: in the present work, these are datasets that only differ by one training example. Recall the following variant of differential privacy introduced in~\cite{ODO}. \begin{definition} A randomized mechanism $\M$ with domain $\Domain$ and range $\mathcal{R}$ satisfies $(\eps,\delta)$-differential privacy if for any two adjacent inputs $d, d'\in \Domain$ and for any subset of outputs $S\subseteq\Range$ it holds that: \begin{equation} \label{eq:dp} \Pr[\M(d)\in S]\leq e^{\eps}\Pr[\M(d')\in S]+\delta. \end{equation} \end{definition} It will be useful to define the \emph{privacy loss} and the \emph{privacy loss random variable}. They capture the differences in the probability distribution resulting from running $\M$ on $d$ and $d'$. \begin{definition} Let $\M \colon \Domain \rightarrow \Range$ be a randomized mechanism and $d, d'$ a pair of adjacent databases. Let \textsf{aux} denote an auxiliary input. For an outcome $\outcome \in \Range$, the privacy loss at $\outcome$ is defined as: \begin{equation} c(\outcome; \M, \textsf{aux}, d, d') \eqdef \log \frac{\Pr[\M( \textsf{aux}, d) = \outcome]}{\Pr[\M( \textsf{aux}, d') = \outcome]}. \end{equation} The privacy loss random variable $C(\M, \textsf{aux}, d, d')$ is defined as $c(\M(d); \M, \textsf{aux}, d, d')$, i.e. the random variable defined by evaluating the privacy loss at an outcome sampled from $\M(d)$. \end{definition} A natural way to bound our approach's privacy loss is to first bound the privacy cost of each label queried by the student, and then use the strong composition theorem~\citep{DRV10} to derive the total cost of training the student. For neighboring databases $d, d'$, each teacher gets the same training data partition (that is, the same for the teacher with $d$ and with $d'$, not the same across teachers), with the exception of one teacher whose corresponding training data partition differs. Therefore, the label counts $n_j(\vec{x})$ for any example $\vec{x}$, on $d$ and $d'$ differ by at most $1$ in at most two locations. In the next subsection, we show that this yields loose guarantees. \subsection{The moments accountant: A building block for better analysis} To better keep track of the privacy cost, we use recent advances in privacy cost accounting. The moments accountant was introduced by~\cite{abadi2016deep}, building on previous work~\citep{BunS16, DworkR16, Mironov16}. \begin{definition} Let $\M \colon \Domain \rightarrow \Range$ be a randomized mechanism and $d, d'$ a pair of adjacent databases. Let \textsf{aux} denote an auxiliary input. The moments accountant is defined as: \begin{equation}\label{eq:moments-accountant} \alpha_\M(\lambda) \eqdef \max_{\textsf{aux}, d, d'} \alpha_\M(\lambda;\textsf{aux},d,d') \end{equation} where $\alpha_\M(\lambda;\textsf{aux},d,d') \eqdef \log \E[\exp(\lambda C(\M, \textsf{aux}, d, d'))]$ is the moment generating function of the privacy loss random variable. \end{definition} The following properties of the moments accountant are proved in~\cite{abadi2016deep}. \begin{thm}\label{thm:property} 1. \textbf{[Composability]} Suppose that a mechanism $\M$ consists of a sequence of adaptive mechanisms $\M_1, \ldots, \M_k$ where $\M_i\colon \prod_{j=1}^{i-1}\Range_j\times \Domain \to\Range_i$. 
Then, for any output sequence $\outcome_1,\dots,\outcome_{k-1}$ and any $\lambda$ \[ \alpha_\M(\lambda;d,d') = \sum_{i=1}^k \alpha_{\M_i}(\lambda;\outcome_1,\dots,\outcome_{i-1},d,d')\,,\] where $\alpha_\M$ is conditioned on $\M_i$'s output being $\outcome_i$ for $i<k$. 2. \textbf{[Tail bound]} For any $\eps>0$, the mechanism $\M$ is $(\eps, \delta)$-differentially private for \[\delta=\min_{\lambda} \exp(\alpha_\M(\lambda) -\lambda \eps)\,.\] \end{thm} We write down two important properties of the aggregation mechanism from Section~\ref{sec:approach}. The first property is proved in~\cite{dwork2014algorithmic}, and the second follows from~\cite{BunS16}. \begin{thm} \label{thm:noisymax} Suppose that on neighboring databases $d, d'$, the label counts $n_j$ differ by at most 1 in each coordinate. Let $\M$ be the mechanism that reports $\arg\max_{j} \left\{n_j + Lap(\frac 1 \gamma) \right\}$. Then $\M$ satisfies $(2\gamma, 0)$-differential privacy. Moreover, for any $l$, \textsf{aux}, $d$ and $d'$, \begin{align} \alpha(l; \textsf{aux}, d, d') \leq 2\gamma^2 l(l+1) \label{eqn:lmgf_basic} \end{align} \end{thm} At each step, we use the aggregation mechanism with noise $Lap(\frac 1 \gamma)$, which is $(2\gamma,0)$-DP. Thus over $T$ steps, we get $(4T\gamma^2 + 2\gamma\sqrt{2T\ln \frac 1 \delta}, \delta)$-differential privacy. This can be rather large: plugging in values that correspond to our SVHN result, $\gamma=0.05, T=1000, \delta= 1\mathrm{e}{-6}$, gives us $\eps \approx 26$; alternatively, plugging in values that correspond to our MNIST result, $\gamma=0.05, T=100, \delta= 1\mathrm{e}{-5}$, gives us $\eps \approx 5.80$. \subsection{A precise, data-dependent privacy analysis of \mbox{PATE}} Our data-dependent privacy analysis takes advantage of the fact that when the quorum among the teachers is very strong, the majority outcome has overwhelming likelihood, in which case the privacy cost is small whenever this outcome occurs. The moments accountant allows us to analyze the composition of such mechanisms in a unified framework. The following theorem, proved in Appendix~\ref{ap:privacy-analysis}, provides a data-dependent bound on the moments of any differentially private mechanism where some specific outcome is very likely. \begin{thm} \label{thm:lmgf_data_dep} Let $\M$ be $(2\gamma, 0)$-differentially private and $q \geq \Pr[\M(d) \neq \outcome^*]$ for some outcome $\outcome^*$. Let $l,\gamma \geq 0$ and $q <\frac{e^{2\gamma}-1}{e^{4\gamma}-1}$. Then for any $\textsf{aux}$ and any neighbor $d'$ of $d$, $\M$ satisfies \begin{align*} \alpha(l; \textsf{aux}, d, d') \leq \log ((1-q)\Big(\frac{1-q}{1-e^{2\gamma}q}\Big)^l + q\exp(2\gamma l)). \end{align*} \end{thm} To upper bound $q$ for our aggregation mechanism, we use the following simple lemma, also proved in Appendix~\ref{ap:privacy-analysis}. \begin{lem} \label{lem:bound_on_q} Let $\mathbf{n}$ be the label score vector for a database $d$ with $n_{j^*} \geq n_j$ for all $j$. Then \begin{align*} \Pr[\M(d) \neq j^*] \leq \sum_{j \neq j^*} \frac{2 + \gamma(n_{j^*} - n_j)}{4 \exp(\gamma(n_{j^*} - n_j))} \end{align*} \end{lem} This allows us to upper bound $q$ for a specific score vector $\mathbf{n}$, and hence bound specific moments. We take the smaller of the bounds we get from Theorems~\ref{thm:noisymax} and~\ref{thm:lmgf_data_dep}. We compute these moments for a few values of $\lambda$ (integers up to 8).
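For illustration, the following sketch (Python; a simplified stand-in for the released script, omitting the smooth-sensitivity treatment discussed below) combines the data-independent bound, the data-dependent bound, and the bound on $q$ derived from the vote counts, then composes the per-query moments and applies the tail bound:
\begin{verbatim}
import math

def per_query_moment(counts, gamma, l):
    # Data-independent bound on the l-th log moment: 2 * gamma^2 * l * (l + 1).
    bound = 2.0 * gamma ** 2 * l * (l + 1)
    # Bound q = Pr[noisy max != plurality label] from the vote counts.
    counts = sorted(counts, reverse=True)
    q = sum((2.0 + gamma * (counts[0] - c)) /
            (4.0 * math.exp(gamma * (counts[0] - c))) for c in counts[1:])
    # Data-dependent bound, valid only when q is small enough.
    if q < (math.exp(2 * gamma) - 1) / (math.exp(4 * gamma) - 1):
        dep = math.log((1 - q) * ((1 - q) / (1 - math.exp(2 * gamma) * q)) ** l
                       + q * math.exp(2 * gamma * l))
        bound = min(bound, dep)
    return bound

def epsilon_bound(all_counts, gamma, delta, lambdas=range(1, 9)):
    # Sum the per-query moments (composability), then take the best tail bound.
    return min((sum(per_query_moment(c, gamma, l) for c in all_counts)
                + math.log(1.0 / delta)) / l for l in lambdas)
\end{verbatim}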
Theorem~\ref{thm:property} allows us to add these bounds over successive steps, and derive an $(\eps,\delta)$ guarantee from the final $\alpha$. Interested readers are referred to the script that we used to empirically compute these bounds, which is released along with our code: {\small\url{https://github.com/tensorflow/models/tree/master/differential_privacy/multiple_teachers}} Since the privacy moments are themselves now data dependent, the final $\eps$ is itself data-dependent and should not be revealed. To get around this, we bound the {\em smooth sensitivity}~\citep{nissim2007smooth} of the moments and add noise proportional to it to the moments themselves. This gives us a differentially private estimate of the privacy cost. Our evaluation in Section~\ref{sec:evaluation} ignores this overhead and reports the un-noised values of $\eps$. Indeed, in our experiments on MNIST and SVHN, the scale of the noise one needs to add to the released $\varepsilon$ is smaller than 0.5 and 1.0 respectively. How does the number of teachers affect the privacy cost? Recall that the student uses a noisy label computed in (\ref{eq:noisy-max}) which has a parameter $\gamma$. To ensure that the noisy label is likely to be the correct one, the noise scale $\frac 1 \gamma$ should be small compared to the additive gap between the two largest values of $n_j$. While the exact dependence of $\gamma$ on the privacy cost in Theorem~\ref{thm:lmgf_data_dep} is subtle, as a general principle, a smaller $\gamma$ leads to a smaller privacy cost. Thus, a larger gap translates to a smaller privacy cost. Since the gap itself increases with the number of teachers, having more teachers would lower the privacy cost. This is true up to a point. With $n$ teachers, each teacher only trains on a $\frac 1 n$ fraction of the training data. For large enough $n$, each teacher will have too little training data to be accurate. To conclude, we note that our analysis is rather conservative in that it pessimistically assumes that, even if just one example in the training set for one teacher changes, the classifier produced by that teacher may change arbitrarily. One advantage of our approach, which enables its wide applicability, is that our analysis does not require any assumptions about the workings of the teachers. Nevertheless, we expect that stronger privacy guarantees may perhaps be established in specific settings---when assumptions can be made on the learning algorithm used to train the teachers. \section{Missing details on the analysis} \label{ap:privacy-analysis} We provide missing proofs from Section~\ref{sec:dp-analysis}. \newtheorem*{thm:lmgfdatadep}{Theorem \ref{thm:lmgf_data_dep}} \begin{thm:lmgfdatadep} Let $\M$ be $(2\gamma, 0)$-differentially private and $q \geq \Pr[\M(d) \neq \outcome^*]$ for some outcome $\outcome^*$. Let $l,\gamma \geq 0$ and $q <\frac{e^{2\gamma}-1}{e^{4\gamma}-1}$. Then for any $\textsf{aux}$ and any neighbor $d'$ of $d$, $\M$ satisfies \begin{align*} \alpha(l; \textsf{aux}, d, d') \leq \log ((1-q)\Big(\frac{1-q}{1-e^{2\gamma}q}\Big)^l + q\exp(2\gamma l)). \end{align*} \end{thm:lmgfdatadep} \begin{proof} Since $\M$ is $2\gamma$-differentially private, for every outcome $o$, $\frac{Pr[M(d)=\outcome]}{Pr[M(d')=\outcome]} \leq \exp(2\gamma)$. Let $q' = Pr[M(d) \neq \outcome^*]$. Then $Pr[M(d') \neq \outcome^*] \leq \exp(2\gamma) q'$.
Thus \begin{align*} \exp(\alpha(l; \textsf{aux},d,d')) &= \sum_{\outcome} \Pr[M(d)=\outcome] \Big(\frac{\Pr[M(d)=\outcome]}{\Pr[M(d')=\outcome]}\Big)^l\\ &= \Pr[M(d)=\outcome^*] \Big(\frac{\Pr[M(d)=\outcome^*]}{\Pr[M(d')=\outcome^*]}\Big)^l + \sum_{\outcome \neq \outcome^*} \Pr[M(d)=\outcome] \Big(\frac{\Pr[M(d)=\outcome]}{\Pr[M(d')=\outcome]}\Big)^l\\ &\leq (1-q')\Big(\frac{1-q'}{1-e^{2\gamma}q'}\Big)^l + \sum_{\outcome \neq \outcome^*} \Pr[M(d)=\outcome] (e^{2\gamma})^l\\ &\leq (1-q')\Big(\frac{1-q'}{1-e^{2\gamma}q'}\Big)^l + q'e^{2\gamma l}. \end{align*} Now consider the function \begin{align*} f(z) &= (1-z)\Big(\frac{1-z}{1-e^{2\gamma}z}\Big)^l + z e^{2\gamma l}. \end{align*} We next argue that this function is non-decreasing in $(0,\frac{e^{2\gamma}-1}{e^{4\gamma}-1})$ under the conditions of the theorem. Towards this goal, define \begin{align*} g(z,w) &= (1-z)\Big(\frac{1-w}{1-e^{2\gamma}w}\Big)^l + z e^{2\gamma l}, \end{align*} and observe that $f(z) = g(z,z)$. We can easily verify by differentiation that $g(z,w)$ is increasing individually in $z$ and in $w$ in the range of interest. This implies that $f(q') \leq f(q)$, completing the proof. \end{proof} \newtheorem*{lem:boundonq}{Lemma \ref{lem:bound_on_q}} \begin{lem:boundonq} Let $\mathbf{n}$ be the label score vector for a database $d$ with $n_{j^*} \geq n_j$ for all $j$. Then \begin{align*} \Pr[\M(d) \neq j^*] \leq \sum_{j \neq j^*} \frac{2 + \gamma(n_{j^*} - n_j)}{4 \exp(\gamma(n_{j^*} - n_j))} \end{align*} \end{lem:boundonq} \begin{proof} The probability that $n_{j^*} + Lap(\frac 1 \gamma) < n_j + Lap(\frac 1 \gamma)$ is equal to the probability that the sum of two independent $Lap(1)$ random variables exceeds $\gamma(n_{j^*} - n_j)$. The sum of two independent $Lap(1)$ variables has the same distribution as the difference of two $Gamma(2, 1)$ random variables. Recalling that the $Gamma(2,1)$ distribution has pdf $xe^{-x}$, we can compute the pdf of the difference via convolution as \begin{align*} \int_{y=0}^\infty (y+|x|)e^{-y-|x|} y e^{-y}~dy = \frac{1}{e^{|x|}}\int_{y=0}^\infty (y^2+y|x|) e^{-2y}~dy = \frac{1+|x|}{4e^{|x|}}. \end{align*} The probability mass in the tail can then be computed by integration as $\frac{2 + \gamma(n_{j^*} - n_j)}{4 \exp(\gamma(n_{j^*} - n_j))}$. Taking a union bound over the various candidate $j$'s gives the claimed bound. \end{proof} \section{Evaluation} \label{sec:evaluation} In our evaluation of PATE and its generative variant PATE-G, we first train a teacher ensemble for each dataset. The trade-off between the accuracy and privacy of labels predicted by the ensemble is greatly dependent on the number of teachers in the ensemble: being able to train a large set of teachers is essential to support the injection of noise yielding strong privacy guarantees while having a limited impact on accuracy. Second, we minimize the privacy budget spent on learning the student by training it with as few queries to the ensemble as possible. Our experiments use MNIST and the extended SVHN datasets. Our MNIST model stacks two convolutional layers with max-pooling and one fully connected layer with ReLUs. When trained on the entire dataset, the non-private model has a $99.18\%$ test accuracy. For SVHN, we add two hidden layers.\footnote{The model is adapted from \scriptsize{\url{https://www.tensorflow.org/tutorials/deep_cnn}}} The non-private model achieves a $92.8\%$ test accuracy, which is shy of the state-of-the-art.
However, we are primarily interested in comparing the private student's accuracy with the one of a non-private model trained on the entire dataset, for different privacy guarantees. The source code for reproducing the results in this section is available on GitHub.\footnote{\scriptsize{\url{https://github.com/tensorflow/models/tree/master/differential_privacy/multiple_teachers}}} \subsection{Training an ensemble of teachers producing private labels} \label{ssec:eval-teacher} As mentioned above, compensating for the noise introduced by the Laplacian mechanism presented in Equation~\ref{eq:noisy-max} requires large ensembles. We evaluate the extent to which the two datasets considered can be partitioned with a reasonable impact on the performance of individual teachers. Specifically, we show that for MNIST and SVHN, we are able to train ensembles of $250$ teachers. Their aggregated predictions are accurate despite the injection of large amounts of random noise to ensure privacy. The aggregation mechanism output has an accuracy of $93.18\%$ for MNIST and $87.79\%$ for SVHN, when evaluated on their respective test sets, while each query has a low privacy budget of $\varepsilon=0.05$. \boldpara{Prediction accuracy} All other things being equal, the number $n$ of teachers is limited by a trade-off between the classification task's complexity and the available data. We train $n$ teachers by partitioning the training data $n$-way. Larger values of $n$ lead to larger absolute gaps, hence potentially allowing for a larger noise level and stronger privacy guarantees. At the same time, a larger $n$ implies a smaller training dataset for each teacher, potentially reducing the teacher accuracy. We empirically find appropriate values of $n$ for the MNIST and SVHN datasets by measuring the test set accuracy of each teacher trained on one of the $n$ partitions of the training data. We find that even for $n=250$, the average test accuracy of individual teachers is $83.86\%$ for MNIST and $83.18\%$ for SVHN. The larger size of SVHN compensates for its increased task complexity. \boldpara{Prediction confidence} As outlined in Section~\ref{sec:dp-analysis}, the privacy of predictions made by an ensemble of teachers intuitively requires that a quorum of teachers generalizing well agree on identical labels. This observation is reflected by our data-dependent privacy analysis, which provides stricter privacy bounds when the quorum is strong. We study the disparity of labels assigned by teachers. In other words, we count the number of votes for each possible label, and measure the difference in votes between the most popular label and the second most popular label, i.e., the \emph{gap}. If the gap is small, introducing noise during aggregation might change the label assigned from the first to the second. Figure~\ref{fig:nb-teachers-gaps} shows the gap normalized by the total number of teachers $n$. As $n$ increases, the gap remains larger than $60\%$ of the teachers, allowing for aggregation mechanisms to output the correct label in the presence of noise. \boldpara{Noisy aggregation} For MNIST and SVHN, we consider three ensembles of teachers with varying number of teachers $n\in\{10,100,250\}$. For each of them, we perturb the vote counts with Laplacian noise of inverse scale $\gamma$ ranging between $0.01$ and $1$. This choice is justified below in Section~\ref{ssec:eval-student}.
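For reference, a minimal sketch of the noisy aggregation used throughout this section (Python/NumPy; \texttt{votes} is assumed to hold the per-label teacher vote counts for a single query):
\begin{verbatim}
import numpy as np

def noisy_aggregate(votes, gamma, rng=None):
    # Perturb each label's vote count with Laplacian noise of scale 1/gamma
    # and return the label with the largest noisy count.
    if rng is None:
        rng = np.random.default_rng()
    noisy = votes + rng.laplace(scale=1.0 / gamma, size=len(votes))
    return int(np.argmax(noisy))
\end{verbatim}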
We report in Figure~\ref{fig:lap-scale-accuracy} the accuracy of test set labels inferred by the noisy aggregation mechanism for these values of $\varepsilon$. Notice that the number of teachers needs to be large to compensate for the impact of noise injection on the accuracy. \subsection{Semi-supervised training of the student with privacy} \label{ssec:eval-student} The noisy aggregation mechanism labels the student's unlabeled training set in a privacy-preserving fashion. To reduce the privacy budget spent on student training, we are interested in making as few label queries to the teachers as possible. We therefore use the semi-supervised training approach described previously. Our MNIST and SVHN students with $(\varepsilon,\delta)$ differential privacy of $(2.04, 10^{-5})$ and $(8.19, 10^{-6})$ achieve accuracies of $98.00\%$ and $90.66\%$. These results improve the differential privacy state-of-the-art for these datasets. \citet{abadi2016deep} previously obtained $97\%$ accuracy with a $(8, 10^{-5})$ bound on MNIST, starting from an inferior baseline model without privacy. \citet{shokri2015privacy} reported about $92\%$ accuracy on SVHN with $\varepsilon>2$ per model parameter and a model with over $300\mbox{,}000$ parameters. Naively, this corresponds to a total $\varepsilon>600\mbox{,}000$. We apply semi-supervised learning with GANs to our problem using the following setup for each dataset. In the case of MNIST, the student has access to $9\mbox{,}000$ samples, among which a subset of either $100$, $500$, or $1\mbox{,}000$ samples are labeled using the noisy aggregation mechanism discussed in Section~\ref{ssec:teacher}. Its performance is evaluated on the $1\mbox{,}000$ remaining samples of the test set. Note that this may increase the variance of our test set accuracy measurements, when compared to those computed over the entire test data. For the MNIST dataset, we randomly shuffle the test set to ensure that the different classes are balanced when selecting the (small) subset labeled to train the student. For SVHN, the student has access to $10\mbox{,}000$ training inputs, among which it labels $500$ or $1\mbox{,}000$ samples using the noisy aggregation mechanism. Its performance is evaluated on the remaining $16\mbox{,}032$ samples. For both datasets, the ensemble is made up of $250$ teachers. We use Laplacian scale of $20$ to guarantee an individual query privacy bound of $\varepsilon=0.05$. These parameter choices are motivated by the results from Section~\ref{ssec:eval-teacher}. In Figure~\ref{fig:gans}, we report the values of the $(\varepsilon, \delta)$ differential privacy guarantees provided and the corresponding student accuracy, as well as the number of queries made by each student. The MNIST student is able to learn a $98\%$ accurate model, which is shy of $1\%$ when compared to the accuracy of a model learned with the entire training set, with only $100$ label queries. This results in a strict differentially private bound of $\varepsilon=2.04$ for a failure probability fixed at $10^{-5}$. The SVHN student achieves $90.66\%$ accuracy, which is also comparable to the $92.80\%$ accuracy of one teacher learned with the entire training set. The corresponding privacy bound is $\varepsilon=8.19$, which is higher than for the MNIST dataset, likely because of the larger number of queries made to the aggregation mechanism. We observe that our private student outperforms the aggregation's output in terms of accuracy, with or without the injection of Laplacian noise. 
While this shows the power of semi-supervised learning, the student may not learn as well on different kinds of data (e.g., medical data), where categories are not explicitly designed by humans to be salient in the input space. Encouragingly, as Appendix~\ref{ap:uci} illustrates, the PATE approach can be successfully applied to at least some examples of such data. \section{Introduction} \label{sec:introduction} Some machine learning applications with great benefits are enabled only through the analysis of sensitive data, such as users' personal contacts, private photographs or correspondence, or even medical records or genetic sequences~\citep{alipanahi2015predicting,kannan2016smart,kononenko2001machine,sweeney1997weaving}. Ideally, in those cases, the learning algorithms would protect the privacy of users' training data, e.g., by guaranteeing that the output model generalizes away from the specifics of any individual user. % Unfortunately, established machine learning algorithms make no such guarantee; indeed, though state-of-the-art algorithms generalize well to the test set, they continue to overfit on specific training examples in the sense that some of these examples are implicitly memorized. Recent attacks exploiting this implicit memorization in machine learning have demonstrated that private, sensitive training data can be recovered from models. Such attacks can proceed directly, by analyzing internal model parameters, but also indirectly, by repeatedly querying opaque models to gather data for the attack's analysis. For example, \citet{fredrikson2015model} used hill-climbing on the output probabilities of a computer-vision classifier to reveal individual faces from the training data. Because of those demonstrations---and because privacy guarantees must apply to worst-case outliers, not only the average---any strategy for protecting the privacy of training data should prudently assume that attackers have unfettered access to internal model parameters. To protect the privacy of training data, this paper improves upon a specific, structured application of the techniques of knowledge aggregation and transfer~\citep{ML:Breiman:bagging}, previously explored by~\citet{nissim2007smooth},~\citet{pathak2010multiparty}, and particularly~\citet{hamm2016learning}. In this strategy, first, an ensemble~\citep{dietterich2000ensemble} of teacher models is trained on disjoint subsets of the sensitive data. Then, using auxiliary, unlabeled non-sensitive data, a student model is trained on the aggregate output of the ensemble, such that the student learns to accurately mimic the ensemble. Intuitively, this strategy ensures that the student does not depend on the details of any single sensitive training data point (e.g., of any single user), and, thereby, the privacy of the training data is protected even if attackers can observe the student's internal model parameters. This paper shows how this strategy's privacy guarantees can be strengthened by restricting student training to a limited number of teacher votes, and by revealing only the topmost vote after carefully adding random noise. % We call this strengthened strategy PATE, for \emph{Private Aggregation of Teacher Ensembles}. Furthermore, we introduce an improved privacy analysis that makes the strategy generally applicable to machine learning algorithms with high utility and meaningful privacy guarantees---in particular, when combined with semi-supervised learning. 
To establish strong privacy guarantees, it is important to limit the student's access to its teachers, so that the student's exposure to teachers' knowledge can be meaningfully quantified and bounded. Fortunately, there are many techniques for speeding up knowledge transfer that can reduce the rate of student/teacher consultation during learning. We describe several techniques in this paper, the most effective of which makes use of generative adversarial networks (GANs)~\citep{goodfellow2014generative} applied to semi-supervised learning, using the implementation proposed by~\citet{salimans2016improved}. For clarity, we use the term PATE-G when our approach is combined with generative, semi-supervised methods. Like all semi-supervised learning methods, PATE-G assumes the student has access to additional, unlabeled data, which, in this context, must be public or non-sensitive. This assumption should not greatly restrict our method's applicability: even when learning on sensitive data, a non-overlapping, unlabeled set of data often exists, from which semi-supervised methods can extract distribution priors. For instance, public datasets exist for text and images, and for medical data. It seems intuitive, or even obvious, that a student machine learning model will provide good privacy when trained without access to sensitive training data, apart from a few, noisy votes from a teacher quorum. However, intuition is not sufficient because privacy properties can be surprisingly hard to reason about; for example, even a single data item can greatly impact machine learning models trained on a large corpus~\citep{chaudhuri2011differentially}. Therefore, to limit the effect of any single sensitive data item on the student's learning, precisely and formally, we apply the well-established, rigorous standard of differential privacy~\citep{dwork2014algorithmic}. Like all differentially private algorithms, our learning strategy carefully adds noise, so that the privacy impact of each data item can be analyzed and bounded. In particular, we dynamically analyze the sensitivity of the teachers' noisy votes; for this purpose, we use the state-of-the-art moments accountant technique from~\citet{abadi2016deep}, which tightens the privacy bound when the topmost vote has a large quorum. As a result, for MNIST and similar benchmark learning tasks, our methods allow students to provide excellent utility, while our analysis provides meaningful worst-case guarantees. In particular, we can bound the metric for privacy loss (the differential-privacy $\varepsilon$) to a range similar to that of existing, real-world privacy-protection mechanisms, such as Google's RAPPOR~\citep{erlingsson2014rappor}. Finally, it is an important advantage that our learning strategy and our privacy analysis do not depend on the details of the machine learning techniques used to train either the teachers or their student. Therefore, the techniques in this paper apply equally well for deep learning methods, or any such learning methods with large numbers of parameters, as they do for shallow, simple techniques. In comparison, \citet{hamm2016learning} guarantee privacy only conditionally, for a restricted class of student classifiers---in effect, limiting applicability to logistic regression with convex loss. 
Also, unlike the methods of~\citet{abadi2016deep}, which represent the state-of-the-art in differentially-private deep learning, our techniques make no assumptions about details such as batch selection, the loss function, or the choice of the optimization algorithm. Even so, as we show in experiments on MNIST and SVHN, our techniques provide a privacy/utility tradeoff that equals or improves upon bespoke learning methods such as those of~\citet{abadi2016deep}. Section~\ref{sec:related-work} further discusses the related work. Building on this related work, our contributions are as follows: \begin{itemize}[itemsep=1pt, topsep=0pt, partopsep=0pt] \item We demonstrate a general machine learning strategy, the PATE approach, that provides differential privacy for training data in a ``black-box'' manner, i.e., independent of the learning algorithm, as demonstrated by Section~\ref{sec:evaluation} and Appendix~\ref{ap:uci}. \item We improve upon the strategy outlined in~\citet{hamm2016learning} for learning machine models that protect training data privacy. In particular, our student only accesses the teachers' top vote and the model does not need to be trained with a restricted class of convex losses. \item We explore four different approaches for reducing the student's dependence on its teachers, and show how the application of GANs to semi-supervised learning of~\citet{salimans2016improved} can greatly reduce the privacy loss by radically reducing the need for supervision. %Thus, we distinguish as PATE-G \item We present a new application of the moments accountant technique from \citet{abadi2016deep} for improving the differential-privacy analysis of knowledge transfer, which allows the training of students with meaningful privacy bounds. \item We evaluate our framework on MNIST and SVHN, allowing for a comparison of our results with previous differentially private machine learning methods. Our classifiers achieve an $(\varepsilon,\delta)$ differential-privacy bound of $(2.04,10^{-5})$ for MNIST and $(8.19,10^{-6})$ for SVHN, with accuracies of $98.00\%$ and $90.66\%$, respectively. In comparison, for MNIST,~\citet{abadi2016deep} obtain a looser $(8, 10^{-5})$ privacy bound and $97\%$ accuracy. For SVHN,~\citet{shokri2015privacy} report approx.\ $92\%$ accuracy with $\varepsilon>2$ per each of $\mathrm{300\mbox{,}000}$ model parameters, naively making the total $\varepsilon>\mathrm{600\mbox{,}000}$, which guarantees no meaningful privacy. \item Finally, we show that the PATE approach can be successfully applied to other model structures and to datasets with different characteristics. In particular, in Appendix~\ref{ap:uci} PATE protects the privacy of medical data used to train a model based on random forests. \end{itemize} Our results are encouraging, and highlight the benefits of combining a learning strategy based on semi-supervised knowledge transfer with a precise, data-dependent privacy analysis. However, the most appealing aspect of this work is probably that its guarantees can be compelling to both an expert and a non-expert audience. % In combination, our techniques simultaneously provide both an intuitive and a rigorous guarantee of training data privacy, without sacrificing the utility of the targeted model. This gives hope that users will increasingly be able to confidently and safely benefit from machine learning models built from their sensitive data. 
Some machine learning applications involve training data that is sensitive, such as the medical histories of patients in a clinical trial. A model may inadvertently and implicitly store some of its training data; careful analysis of the model may therefore reveal sensitive information. To address this problem, we demonstrate a generally applicable approach to providing strong privacy guarantees for training data: \emph{Private Aggregation of Teacher Ensembles} (PATE). The approach combines, in a black-box fashion, multiple models trained with disjoint datasets, such as records from different subsets of users. Because they rely directly on sensitive data, these models are not published, but instead used as ``teachers'' for a ``student'' model. The student learns to predict an output chosen by noisy voting among all of the teachers, and cannot directly access an individual teacher or the underlying data or parameters. The student's privacy properties can be understood both intuitively (since no single teacher and thus no single dataset dictates the student's training) and formally, in terms of differential privacy. These properties hold even if an adversary can not only query the student but also inspect its internal workings. Compared with previous work, the approach imposes only weak assumptions on how teachers are trained: it applies to any model, including non-convex models like DNNs. We achieve state-of-the-art privacy/utility trade-offs on MNIST and SVHN thanks to an improved privacy analysis and semi-supervised learning. \section{Appendix: Additional experiments on the UCI Adult and Diabetes datasets} \label{ap:uci} In order to further demonstrate the general applicability of our approach, we performed experiments on two additional datasets. While our experiments on MNIST and SVHN in Section~\ref{sec:evaluation} used convolutional neural networks and GANs, here we use random forests to train our teacher and student models for both of the datasets. Our new results on these datasets show that, despite the differing data types and architectures, we are able to provide meaningful privacy guarantees. \boldpara{UCI Adult dataset} The UCI Adult dataset is made up of census data, and the task is to predict when individuals make over \$50k per year. Each input consists of 13 features (which include the age, workplace, education, occupation---see the UCI website for a full list\footnote{\scriptsize{\url{https://archive.ics.uci.edu/ml/datasets/Adult}}}). The only pre-processing we apply to these features is to map all categorical features to numerical values by assigning an integer value to each possible category. The model is a random forest provided by the \texttt{scikit-learn} Python package. When training both our teachers and student, we keep all the default parameter values, except for the number of estimators, which we set to $100$. The data is split between a training set of $32\mbox{,}562$ examples, and a test set of $16\mbox{,}282$ inputs. \boldpara{UCI Diabetes dataset} The UCI Diabetes dataset includes de-identified records of diabetic patients and corresponding hospital outcomes, which we use to predict whether diabetic patients were readmitted less than 30 days after their hospital release. To the best of our knowledge, no particular classification task is considered to be a standard benchmark for this dataset. 
Even so, it is valuable to consider whether our approach is applicable to the likely classification tasks, such as readmission, since this dataset is collected in a medical environment---a setting where privacy concerns arise frequently. We select a subset of $18$ input features from the $55$ available in the dataset (to avoid features with missing values) and form a dataset balanced between the two output classes (see the UCI website for more details\footnote{\scriptsize{\url{https://archive.ics.uci.edu/ml/datasets/Diabetes+130-US+hospitals+for+years+1999-2008}}}). In class $0$, we include all patients that were readmitted in a 30-day window, while class $1$ includes all patients that were readmitted after 30 days or never readmitted at all. Our balanced dataset contains $34\mbox{,}104$ training samples and $12\mbox{,}702$ evaluation samples. We use a random forest model identical to the one described above in the presentation of the Adult dataset. \boldpara{Experimental results} We apply our approach described in Section~\ref{sec:approach}. For both datasets, we train ensembles of $n=250$ random forests on partitions of the training data. We then use the noisy aggregation mechanism, where vote counts are perturbed with Laplacian noise of scale $0.05$ to privately label the first $500$ test set inputs. We train the student random forest on these $500$ test set inputs and evaluate it on the last $11\mbox{,}282$ test set inputs for the Adult dataset, and $6\mbox{,}352$ test set inputs for the Diabetes dataset. These numbers deliberately leave out some of the test set, which allowed us to observe how the student performance-privacy trade-off was impacted by varying the number of private labels, as well as the Laplacian scale used when computing these labels. For the Adult dataset, we find that our student model achieves an $83\%$ accuracy for an $(\varepsilon, \delta) = (2.66, 10^{-5})$ differential privacy bound. Our non-private model on the dataset achieves $85\%$ accuracy, which is comparable to the state-of-the-art accuracy of $86\%$ on this dataset~\citep{poulos2016missing}. For the Diabetes dataset, we find that our privacy-preserving student model achieves a $93.94\%$ accuracy for a $(\varepsilon, \delta) = (1.44, 10^{-5})$ differential privacy bound. Our non-private model on the dataset achieves $93.81\%$ accuracy. \section{Private learning with ensembles of teachers} \label{sec:approach} In this section, we introduce the specifics of the PATE approach, which is illustrated in Figure~\ref{fig:approach-overview}. We describe how the data is partitioned to train an ensemble of teachers, and how the predictions made by this ensemble are noisily aggregated. In addition, we discuss how GANs can be used in training the student, and distinguish PATE-G variants that improve our approach using generative, semi-supervised methods. \subsection{Training the ensemble of teachers} \label{ssec:teacher} \boldpara{Data partitioning and teachers} Instead of training a single model to solve the task associated with dataset $(X,Y)$, where $X$ denotes the set of inputs, and $Y$ the set of labels, we partition the data in $n$ disjoint sets $(X_n, Y_n)$ and train a model separately on each set. As evaluated in Section~\ref{ssec:eval-teacher}, assuming that $n$ is not too large with respect to the dataset size and task complexity, we obtain $n$ classifiers $f_i$ called teachers. 
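As a rough illustration of this partitioning step, the sketch below trains disjoint random-forest teachers with scikit-learn, as in the experiments of Appendix~\ref{ap:uci}; the helper name and defaults are illustrative rather than part of any released code.
\begin{lstlisting}[language=Python]
# Hedged sketch: train n disjoint teachers on shards of the sensitive data (X, Y).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_teachers(X, Y, n_teachers=250, n_estimators=100, seed=0):
    order = np.random.RandomState(seed).permutation(len(X))
    teachers = []
    for shard in np.array_split(order, n_teachers):  # disjoint index sets
        clf = RandomForestClassifier(n_estimators=n_estimators)
        clf.fit(X[shard], Y[shard])
        teachers.append(clf)
    return teachers
\end{lstlisting}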
We then deploy them as an ensemble making predictions on unseen inputs $x$ by querying each teacher for a prediction $f_i(x)$ and aggregating these into a single prediction. \boldpara{Aggregation} The privacy guarantees of this teacher ensemble stem from its aggregation. Let $m$ be the number of classes in our task. The label count for a given class $j\in [m]$ and an input $\vec{x}$ is the number of teachers that assigned class $j$ to input $\vec{x}$: $n_j(\vec{x}) = \left| \{ i: i\in [n], f_i(\vec{x}) = j \} \right|$. If we simply apply \emph{plurality}---use the label with the largest count---the ensemble's decision may depend on a single teacher's vote. Indeed, when two labels have a vote count differing by at most one, there is a tie: the aggregated output changes if one teacher makes a different prediction. We add random noise to the vote counts $n_j$ to introduce ambiguity: \begin{equation} \label{eq:noisy-max} f(x) = \arg\max_j \left\{n_j(\vec{x}) + Lap\left(\frac{1}{\gamma}\right)\right\} \end{equation} In this equation, $\gamma$ is a privacy parameter and $Lap(b)$ the Laplacian distribution with location $0$ and scale $b$. The parameter $\gamma$ influences the privacy guarantee we can prove. Intuitively, a small $\gamma$ (i.e., larger noise) leads to a strong privacy guarantee, but can degrade the accuracy of the labels, as the noisy maximum $f$ above can differ from the true plurality. While we could use an $f$ such as above to make predictions, the noise required would increase as we make more predictions, making the model useless after a bounded number of queries. Furthermore, privacy guarantees do not hold when an adversary has access to the model parameters. Indeed, as each teacher $f_i$ was trained without taking into account privacy, it is conceivable that they have sufficient capacity to retain details of the training data. To address these limitations, we train another model, the student, using a fixed number of labels predicted by the teacher ensemble. \subsection{Semi-supervised transfer of the knowledge from an ensemble to a student} We train a student on nonsensitive and unlabeled data, some of which we label using the aggregation mechanism. This student model is the one deployed, in lieu of the teacher ensemble, so as to fix the privacy loss to a value that does not grow with the number of user queries made to the student model. Indeed, the privacy loss is now determined by the number of queries made to the teacher ensemble during student training and does not increase as end-users query the deployed student model. Thus, the privacy of users who contributed to the original training dataset is preserved even if the student's architecture and parameters are public or reverse-engineered by an adversary. We considered several techniques to trade off the student model's quality with the number of labels it needs to access: distillation, active learning, semi-supervised learning (see Appendix~\ref{ap:student-learning}). Here, we only describe the most successful one, used in PATE-G: semi-supervised learning with GANs. \boldpara{Training the student with GANs} The GAN framework involves two machine learning models, a \emph{generator} and a \emph{discriminator}. They are trained in a competing fashion, in what can be viewed as a two-player game~\citep{goodfellow2014generative}. The generator produces samples from the data distribution by transforming vectors sampled from a Gaussian distribution. 
The discriminator is trained to distinguish samples artificially produced by the generator from samples part of the real data distribution. Models are trained via simultaneous gradient descent steps on both players' costs. In practice, these dynamics are often difficult to control when the strategy set is non-convex (e.g., a DNN). In their application of GANs to semi-supervised learning, \citet{salimans2016improved} made the following modifications. The discriminator is extended from a binary classifier (data vs.~generator sample) to a multi-class classifier (one of $k$ classes of data samples, plus a class for generated samples). This classifier is then trained to classify labeled real samples in the correct class, unlabeled real samples in any of the $k$ classes, and the generated samples in the additional class. Although no formal results currently explain why yet, the technique was empirically demonstrated to greatly improve semi-supervised learning of classifiers on several datasets, especially when the classifier is trained with {\em feature matching} loss~\citep{salimans2016improved}. Training the student in a semi-supervised fashion makes better use of the entire data available to the student, while still only labeling a subset of it. Unlabeled inputs are used in unsupervised learning to estimate a good prior for the distribution. Labeled inputs are then used for supervised learning. \section{Conclusions} To protect the privacy of sensitive training data, this paper has advanced a learning strategy and a corresponding privacy analysis. The PATE approach is based on knowledge aggregation and transfer from ``teacher'' models, trained on disjoint data, to a ``student'' model whose attributes may be made public. In combination, the paper's techniques demonstrably achieve excellent utility on the MNIST and SVHN benchmark tasks, while simultaneously providing a formal, state-of-the-art bound on users' privacy loss. While our results are not without limits---e.g., they require disjoint training data for a large number of teachers (whose number is likely to increase for tasks with many output classes)---they are encouraging, and highlight the advantages of combining semi-supervised learning with precise, data-dependent privacy analysis, which will hopefully trigger further work. In particular, such future work may further investigate whether or not our semi-supervised approach will also reduce teacher queries for tasks other than MNIST and SVHN, for example when the discrete output categories are not as distinctly defined by the salient input space features. A key advantage is that this paper's techniques establish a precise guarantee of training data privacy in a manner that is both intuitive and rigorous. Therefore, they can be appealing, and easily explained, to both an expert and non-expert audience. However, perhaps equally compelling are the techniques' wide applicability. Both our learning approach and our analysis methods are ``black-box,'' i.e., independent of the learning algorithm for either teachers or students, and therefore apply, in general, to non-convex, deep learning, and other learning methods. Also, because our techniques do not constrain the selection or partitioning of training data, they apply when training data is naturally and non-randomly partitioned---e.g., because of privacy, regulatory, or competitive concerns---or when each teacher is trained in isolation, with a different method. 
We look forward to such further applications, for example on RNNs and other sequence-based models. \section{Discussion and related work} \label{sec:related-work} Several privacy definitions are found in the literature. For instance, \emph{k-anonymity} requires information about an individual to be indistinguishable from at least $k-1$ other individuals in the dataset~\citep{sweeney2002k}. However, its lack of randomization gives rise to caveats~\citep{dwork2014algorithmic}, and attackers can infer properties of the dataset~\citep{aggarwal2005k}. An alternative definition, \emph{differential privacy}, established itself as a rigorous standard for providing privacy guarantees~\citep{dwork2006calibrating}. In contrast to $k$-anonymity, differential privacy is a property of the randomized algorithm and not the dataset itself. A variety of approaches and mechanisms can guarantee differential privacy. \citet{erlingsson2014rappor} showed that randomized response, introduced by~\citet{warner1965randomized}, can protect crowd-sourced data collected from software users to compute statistics about user behaviors. Attempts to provide differential privacy for machine learning models led to a series of efforts on shallow machine learning models, including work by~\citet{bassily2014differentially,chaudhuri2009privacy,pathak2011privacy,song2013stochastic}, and~\citet{wainwright2012privacy}. A privacy-preserving distributed SGD algorithm was introduced by~\citet{shokri2015privacy}. It applies to non-convex models. However, its privacy bounds are given per-parameter, and the large number of parameters prevents the technique from providing a meaningful privacy guarantee. \citet{abadi2016deep} provided stricter bounds on the privacy loss induced by a noisy SGD by introducing the moments accountant. In comparison with these efforts, our work increases the accuracy of a private MNIST model from $97\%$ to $98\%$ while improving the privacy bound $\eps$ from $8$ to $1.9$. Furthermore, the PATE approach is independent of the learning algorithm, unlike this previous work. Support for a wide range of architecture and training algorithms allows us to obtain good privacy bounds on an accurate and private SVHN model. However, this comes at the cost of assuming that non-private unlabeled data is available, an assumption that is not shared by~\citep{abadi2016deep,shokri2015privacy}. \citet{pathak2010multiparty} first discussed secure multi-party aggregation of locally trained classifiers for a global classifier hosted by a trusted third-party. \citet{hamm2016learning} proposed the use of knowledge transfer between a collection of models trained on individual devices into a single model guaranteeing differential privacy. Their work studied linear student models with convex and continuously differentiable losses, bounded and $c$-Lipschitz derivatives, and bounded features. The PATE approach of this paper is not constrained to such applications, but is more generally applicable. Previous work also studied semi-supervised knowledge transfer from private models. For instance, \citet{jagannathan2013semi} learned privacy-preserving random forests. A key difference is that their approach is tailored to decision trees. PATE works well for the specific case of decision trees, as demonstrated in Appendix~\ref{ap:uci}, and is also applicable to other machine learning algorithms, including more complex ones. 
Another key difference is that \citet{jagannathan2013semi} modified the classic model of a decision tree to include the Laplacian mechanism. Thus, the privacy guarantee does not come from the disjoint sets of training data analyzed by different decision trees in the random forest, but rather from the modified architecture. In contrast, partitioning is essential to the privacy guarantees of the PATE approach.
Deep Probabilistic Programming
1701.03757
Table 2: HMC benchmark for large-scale logistic regression. Edward (GPU) is significantly faster than other systems. In addition, Edward has no overhead: it is as fast as handwritten TensorFlow.
[ "Probabilistic programming system", "Runtime (s)" ]
[ [ "Handwritten NumPy (1 CPU)", "534" ], [ "Stan (1 CPU) (Carpenter et al., 2016 )", "171" ], [ "PyMC3 (12 CPU) (Salvatier et al., 2015 )", "30.0" ], [ "[BOLD] Edward (12 CPU)", "[BOLD] 8.2" ], [ "Handwritten TensorFlow (GPU)", "5.0" ], [ "[BOLD] Edward (GPU)", "[BOLD] 4.9" ] ]
This showcases the value of building a \gls{PPL} on top of computational graphs. The speedup stems from fast matrix multiplication when calculating the model's log-likelihood; GPUs can efficiently parallelize this computation. We expect similar speedups for models whose bottleneck is also matrix multiplication, such as deep neural networks.
\vspace{-1.0ex} \section{Introduction} \label{sec:introduction} \vspace{-0.5ex} The nature of deep neural networks is compositional. Users can connect layers in creative ways, without having to worry about how to perform testing (forward propagation) or inference (gradient-based optimization, with back propagation and automatic differentiation). In this paper, we design compositional representations for probabilistic programming. Probabilistic programming lets users specify generative probabilistic models as programs and then ``compile'' those models down into inference procedures. Probabilistic models are also compositional in nature, and much work has enabled rich probabilistic programs via compositions of random variables \citep{goodman2012church,ghahramani2015probabilistic,lake2016building}.\\[0.75ex] Less work, however, has considered an analogous compositionality for inference. Rather, many existing \glsreset{PPL}\acrlongpl{PPL} treat the inference engine as a black box, abstracted away from the model. These cannot capture probabilistic inferences that reuse the model's representation---a key idea in recent advances in variational inference~\citep{kingma2014autoencoding,rezende2015variational,tran2016variational}, \glsreset{GAN}\acrlongpl{GAN}~\citep{goodfellow2014generative}, and also in more classic inferences \citep{dayan1995helmholtz,gutmann2010noise}.\\[0.75ex] We propose Edward\footnote{% See \citet{tran2016edward} for details of the API. A companion webpage for this paper is available at \url{http://edwardlib.org/iclr2017}. It contains more complete examples with runnable code.}, a Turing-complete \acrlong{PPL} which builds on two compositional representations---one for random variables and one for inference. By treating inference as a first class citizen, on a par with modeling, we show that probabilistic programming can be as flexible and computationally efficient as traditional deep learning. For flexibility, we show how Edward makes it easy to fit the same model using a variety of composable inference methods, ranging from point estimation to variational inference to \acrshort{MCMC}. For efficiency, we show how to integrate Edward into existing computational graph frameworks such as TensorFlow \citep{abadi2016tensorflow}. Frameworks like TensorFlow provide computational benefits like distributed training, parallelism, vectorization, and \glsunset{GPU}\gls{GPU} support ``for free.'' For example, we show on a benchmark task that Edward's \acrlong{HMC} is many times faster than existing software. Further, Edward incurs no runtime overhead: it is as fast as handwritten TensorFlow. \section{Compositional Representations for Inference} \label{sec:inference} We described random variables as a representation for building rich probabilistic programs over computational graphs. We now describe a compositional representation for inference. We desire two criteria: (a) support for many classes of inference, where the form of the inferred posterior depends on the algorithm; and (b) invariance of inference under the computational graph, that is, the posterior can be further composed as part of another model. To explain our approach, we will use a simple hierarchical model as a running example. \Cref{fig:hierarchical_model_example} displays a joint distribution $p(\mbx, \mbz, \beta)$ of data $\mbx$, local variables $\mbz$, and global variables $\beta$. The ideas here extend to more expressive programs. 
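For reference, the running example can be written as the following Edward program; this sketch mirrors the mixture-model code given in \Cref{appendix:svi}, with import statements added and illustrative constants.
\begin{lstlisting}[language=Python]
# Sketch of the joint p(x, z, beta): N data points in D dimensions, K components.
from edward.models import Categorical, Normal
import tensorflow as tf

N, D, K = 10000, 2, 5  # illustrative sizes
beta = Normal(mu=tf.zeros([K, D]), sigma=tf.ones([K, D]))
z = Categorical(logits=tf.zeros([N, K]))
x = Normal(mu=tf.gather(beta, z), sigma=tf.ones([N, D]))
\end{lstlisting}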
\subsection{Inference as Stochastic Graph Optimization} \label{sub:inference} The goal of inference is to calculate the posterior distribution $p(\mathbf{z}, \beta\mid \mathbf{x}_{\text{train}}; \mbtheta)$ given data $\mbx_{\text{train}}$, where $\mbtheta$ are any model parameters that we will compute point estimates for.\footnote{% For example, we could replace \texttt{x}'s \texttt{sigma} argument with \texttt{tf.exp(tf.Variable(0.0))*tf.ones([N, D])}. This defines a model parameter initialized at 0 and positive-constrained.} We formalize this as the following optimization problem: \begin{align} \label{eq:inference-optimization} \min_{\mblambda,\mbtheta} \mathcal{L}( p(\mathbf{z}, \beta\mid \mathbf{x}_{\text{train}}; \mbtheta),~ q(\mathbf{z}, \beta; \mblambda) ), \end{align} where $q(\mathbf{z}, \beta; \mblambda)$ is an approximation to the posterior $p(\mathbf{z}, \beta\g \mbx_{\text{train}};\mbtheta)$, and $\mathcal{L}$ is a loss function with respect to $p$ and $q$. The choice of approximation $q$, loss $\mathcal{L}$, and rules to update parameters $\{\mbtheta,\mblambda\}$ are specified by an inference algorithm. (Note $q$ can be nonparametric, such as a point or a collection of samples.) In Edward, we write this problem as follows: \begin{lstlisting}[language=python] inference = ed.Inference({beta: qbeta, z: qz}, data={x: x_train}) \end{lstlisting} \texttt{Inference} is an abstract class which takes two inputs. The first is a collection of latent random variables \texttt{beta} and \texttt{z}, associated to their ``posterior variables'' \texttt{qbeta} and \texttt{qz} respectively. The second is a collection of observed random variables \texttt{x}, which is associated to their realizations \texttt{x_train}. The idea is that \texttt{Inference} defines and solves the optimization in \Cref{eq:inference-optimization}. It adjusts parameters of the distribution of \texttt{qbeta} and \texttt{qz} (and any model parameters) to be close to the posterior. Class methods are available to finely control the inference. Calling \texttt{inference.initialize()} builds a computational graph to update $\{\mbtheta,\mblambda\}$. Calling \texttt{inference.update()} runs this computation once to update $\{\mbtheta,\mblambda\}$; we call the method in a loop until convergence. Importantly, no efficiency is lost in Edward's language: the computational graph is the same as if it were handwritten for a specific model. This means the runtime is the same; also see our experiments in \Cref{sub:gpu}. A key concept in Edward is that there is no distinct ``model'' or ``inference'' block. A model is simply a collection of random variables, and inference is a way of modifying parameters in that collection subject to another. This reductionism offers significant flexibility. For example, we can infer only parts of a model (e.g., layer-wise training \citep{hinton2006fast}), infer parts used in multiple models (e.g., multi-task learning), or plug in a posterior into a new model (e.g., Bayesian updating). \subsection{Classes of Inference} The design of \texttt{Inference} is very general. We describe subclasses to represent many algorithms below: variational inference, Monte Carlo, and \acrlongpl{GAN}. Variational inference posits a family of approximating distributions and finds the closest member in the family to the posterior \citep{jordan1999introduction}. In Edward, we build the variational family in the graph; see \Cref{fig:inference} (left). 
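For example, a sketch of such a family for the running example, adapted from the corresponding code in \Cref{appendix:svi} and continuing the model snippet above:
\begin{lstlisting}[language=Python]
# Sketch: variational factors whose parameters are mutable TensorFlow variables.
qbeta = Normal(mu=tf.Variable(tf.zeros([K, D])),
               sigma=tf.nn.softplus(tf.Variable(tf.zeros([K, D]))))
qz = Categorical(logits=tf.Variable(tf.zeros([N, K])))
\end{lstlisting}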
For our running example, the family has mutable variables as parameters $\mblambda=\{\pi,\mu,\sigma\}$, where $q(\beta;\mu,\sigma) = \operatorname{Normal}(\beta; \mu,\sigma)$ and $q(\mbz;\pi) = \operatorname{Categorical}(\mbz;\pi)$. Specific variational algorithms inherit from the \texttt{VariationalInference} class. Each defines its own methods, such as a loss function and gradient. For example, we represent \gls{MAP} estimation with an approximating family (\texttt{qbeta} and \texttt{qz}) of \texttt{PointMass} random variables, i.e., with all probability mass concentrated at a point. \texttt{MAP} inherits from \texttt{VariationalInference} and defines the negative log joint density as the loss function; it uses existing optimizers inside TensorFlow. In \Cref{sub:recent}, we experiment with multiple gradient estimators for black box variational inference \citep{ranganath:2014}. Each estimator implements the same loss (an objective proportional to the divergence $\operatorname{KL}(q\gg p)$) and a different update rule (stochastic gradient). Monte Carlo approximates the posterior using samples \citep{robert1999monte}. Monte Carlo is an inference where the approximating family is an empirical distribution, $q(\beta; \{\beta^{(t)}\}) = \frac{1}{T}\sum_{t=1}^T \delta(\beta, \beta^{(t)})$ and $q(\mbz; \{\mbz^{(t)}\}) = \frac{1}{T}\sum_{t=1}^T \delta(\mbz, \mbz^{(t)})$. The parameters are $\mblambda=\{\beta^{(t)},\mbz^{(t)}\}$. See \Cref{fig:inference} (right). Monte Carlo algorithms proceed by updating one sample $\beta^{(t)},\mbz^{(t)}$ at a time in the empirical approximation. Specific \glsunset{MC}\gls{MC} samplers determine the update rules: they can use gradients such as in Hamiltonian Monte Carlo \citep{neal2011mcmc} and graph structure such as in sequential Monte Carlo \citep{doucet2001introduction}. Edward also supports non-Bayesian methods such as \glspl{GAN} \citep{goodfellow2014generative}. See \Cref{fig:gan}. The model posits random noise \texttt{eps} over $N$ data points, each with $d$ dimensions; this random noise feeds into a \texttt{generative_network} function, a neural network that outputs real-valued data \texttt{x}. In addition, there is a \texttt{discriminative_network} which takes data as input and outputs the probability that the data is real (in logit parameterization). We build \texttt{GANInference}; running it optimizes parameters inside the two neural network functions. This approach extends to many advances in \glspl{GAN} (e.g., \citet{denton2015deep,li2015generative}). Finally, one can design algorithms that would otherwise require tedious algebraic manipulation. With symbolic algebra on nodes of the computational graph, we can uncover conjugacy relationships between random variables. Users can then integrate out variables to automatically derive classical Gibbs \citep{gelfand1990sampling}, mean-field updates \citep{bishop2006pattern}, and exact inference. These algorithms are being currently developed in Edward. \subsection{Composing Inferences} Core to Edward's design is that inference can be written as a collection of separate inference programs. Below we demonstrate variational EM, with an (approximate) E-step over local variables and an M-step over global variables. 
We instantiate two algorithms, each of which conditions on inferences from the other, and we alternate with one update of each \citep{neal1993new}, \begin{lstlisting}[language=Python] qbeta = PointMass(params=tf.Variable(tf.zeros([K, D]))) qz = Categorical(logits=tf.Variable(tf.zeros([N, K]))) inference_e = ed.VariationalInference({z: qz}, data={x: x_train, beta: qbeta}) inference_m = ed.MAP({beta: qbeta}, data={x: x_train, z: qz}) ... for _ in range(10000): inference_e.update() inference_m.update() \end{lstlisting} This extends to many other cases such as exact EM for exponential families, contrastive divergence \citep{hinton2002training}, pseudo-marginal methods \citep{andrieu2009pseudo}, and Gibbs sampling within variational inference \citep{wang2012truncation,Hoffman:2015}. We can also write message passing algorithms, which solve a collection of local inference problems \citep{koller2009probabilistic}. For example, classical message passing uses exact local inference and expectation propagation locally minimizes the Kullback-Leibler divergence, $\text{KL}(p\gg q)$ \citep{minka2001expectation}. \subsection{Data Subsampling} \label{sub:batch_training} Stochastic optimization \citep{bottou2010large} scales inference to massive data and is key to algorithms such as stochastic gradient Langevin dynamics \citep{welling2011bayesian} and stochastic variational inference \citep{hoffman2013stochastic}. The idea is to cheaply estimate the model's log joint density in an unbiased way. At each step, one subsamples a data set $\{x_m\}$ of size $M$ and then scales densities with respect to local variables, \begin{align*} \log p(\mbx, \mbz, \beta) & = \log p(\beta) + \sum_{n=1}^N\Big[\log p(x_n \g z_n, \beta) + \log p(z_n \g \beta)\Big] \\ & \approx \log p(\beta) + \frac{N}{M}\sum_{m=1}^M\Big[\log p(x_m \g z_m, \beta) + \log p(z_m \g \beta)\Big]. \end{align*} To support stochastic optimization, we represent only a subgraph of the full model. This prevents reifying the full model, which can lead to unreasonable memory consumption \citep{tristan2014augur}. During initialization, we pass in a dictionary to properly scale the arguments. See \Cref{fig:hierachical_model_batch}. Conceptually, the scale argument represents scaling for each random variable's plate, as if we had seen that random variable $N / M$ as many times. As an example, \Cref{appendix:svi} shows how to implement stochastic variational inference in Edward. The approach extends naturally to streaming data \citep{doucet2000on,broderick2013streaming,mcinerney2015population}, dynamic batch sizes, and data structures in which working on a subgraph does not immediately apply \citep{binder1997space,johnson2014stochastic,foti2014stochastic}. \begin{abstract} We propose Edward, a Turing-complete \acrlong{PPL}. Edward defines two compositional representations---random variables and inference. By treating inference as a first class citizen, on a par with modeling, we show that probabilistic programming can be as flexible and computationally efficient as traditional deep learning. For flexibility, Edward makes it easy to fit the same model using a variety of composable inference methods, ranging from point estimation to variational inference to \acrshort{MCMC}. In addition, Edward can reuse the modeling representation as part of inference, facilitating the design of rich variational models and \acrlongpl{GAN}. For efficiency, Edward is integrated into TensorFlow, providing significant speedups over existing probabilistic systems. 
For example, we show on a benchmark logistic regression task that Edward is at least 35x faster than Stan and 6x faster than PyMC3. Further, Edward incurs no runtime overhead: it is as fast as handwritten TensorFlow. \end{abstract} \section{Discussion: Challenges \& Extensions} \label{sec:discussion} We described Edward, a Turing-complete \gls{PPL} with compositional representations for probabilistic models and inference. Edward expands the scope of probabilistic programming to be as flexible and computationally efficient as traditional deep learning. For flexibility, we showed how Edward can use a variety of composable inference methods, capture recent advances in variational inference and \acrlongpl{GAN}, and finely control the inference algorithms. For efficiency, we showed how Edward leverages computational graphs to achieve fast, parallelizable computation, scales to massive data, and incurs no runtime overhead over handwritten code. In present work, we are applying Edward as a research platform for developing new probabilistic models \citep{rudolph2016exponential,tran2017deep} and new inference algorithms \citep{dieng2016chi}. As with any language design, Edward makes tradeoffs in pursuit of its flexibility and speed for research. For example, an open challenge in Edward is to better facilitate programs with complex control flow and recursion. While possible to represent, it is unknown how to enable their flexible inference strategies. In addition, it is open how to expand Edward's design to dynamic computational graph frameworks---which provide more flexibility in their programming paradigm---but may sacrifice performance. A crucial next step for probabilistic programming is to leverage dynamic computational graphs while maintaining the flexibility and efficiency that Edward offers. \documentclass{article} % For LaTeX2e \input{preamble/preamble} \input{preamble/preamble_acronyms} \input{preamble/preamble_math} \input{preamble/preamble_tikz} \setlength{\marginparwidth}{1in} % for margins for sidenotes \newcommand{\eat}[1]{} \title{Deep Probabilistic Programming} \author{% Dustin Tran \\ Columbia University \And Matthew D. Hoffman \hspace{-0.7em} \\ Adobe Research \And Rif A. Saurous \hspace{8.65em} \\ Google Research \AND Eugene Brevdo \\ Google Brain \And Kevin Murphy \\ Google Research \And David M. Blei \\ Columbia University } \newcommand{\fix}{\marginpar{FIX}} \newcommand{\new}{\marginpar{NEW}} \iclrfinalcopy \begin{document} \maketitle \input{abstract} \vspace{-1ex} \input{intro} \input{related} \input{model} \input{inference} \input{experiments} \input{discussion} \subsubsection*{Acknowledgements} We thank the probabilistic programming community---for sharing our enthusiasm and motivating further work---including developers of Church, Venture, Gamalon, Hakaru, and WebPPL. We also thank Stan developers for providing extensive feedback as we developed the language, as well as Thomas Wiecki for experimental details. We thank the Google BayesFlow team---Joshua Dillon, Ian Langmore, Ryan Sepassi, and Srinivas Vasudevan---as well as Amr Ahmed, Matthew Johnson, Hung Bui, Rajesh Ranganath, Maja Rudolph, and Francisco Ruiz for their helpful feedback. This work is supported by NSF IIS-1247664, ONR N00014-11-1-0651, DARPA FA8750-14-2-0009, DARPA N66001-15-C-4032, Adobe, Google, NSERC PGS-D, and the Sloan Foundation. 
\bibliographystyle{iclr2017_conference} \clearpage \appendix \input{sec_appendix} \end{document} \usetikzlibrary{shapes} \usetikzlibrary{fit} \usetikzlibrary{chains} \usetikzlibrary{arrows} \tikzstyle{latent} = [circle,fill=white,draw=black,inner sep=1pt, minimum size=20pt, font=\fontsize{10}{10}\selectfont, node distance=1] \tikzstyle{obs} = [latent,fill=gray!25] \tikzstyle{const} = [rectangle, inner sep=0pt, node distance=1] \tikzstyle{factor} = [rectangle, fill=black,minimum size=5pt, inner sep=0pt, node distance=0.4] \tikzstyle{det} = [latent, diamond] \tikzstyle{plate} = [draw, rectangle, rounded corners, fit=#1] \tikzstyle{wrap} = [inner sep=0pt, fit=#1] \tikzstyle{gate} = [draw, rectangle, dashed, fit=#1] \tikzstyle{caption} = [font=\footnotesize, node distance=0] % \tikzstyle{plate caption} = [caption, node distance=0, inner sep=0pt, below left=5pt and 0pt of #1.south east] % \tikzstyle{factor caption} = [caption] % \tikzstyle{every label} += [caption] % \tikzset{>={triangle 45}} \newcommand{\factoredge}[4][]{ % \foreach \f in {#3} { % \foreach \x in {#2} { % \path (\x) edge[-,#1] (\f) ; % } ; \foreach \y in {#4} { % \path (\f) edge[->,#1] (\y) ; % } ; } ; } \newcommand{\edge}[3][]{ % \foreach \x in {#2} { % \foreach \y in {#3} { % \path (\x) edge [->,#1] (\y) ;% } ; } ; } \newcommand{\factor}[5][]{ % \node[factor, label={[name=#2-caption]#3}, name=#2, #1, alias=#2-alias] {} ; % \factoredge {#4} {#2-alias} {#5} ; % } \newcommand{\plate}[4][]{ % \node[wrap=#3] (#2-wrap) {}; % \node[plate caption=#2-wrap] (#2-caption) {#4}; % \node[plate=(#2-wrap)(#2-caption), #1] (#2) {}; % } \newcommand{\gate}[4][]{ % \node[gate=#3, name=#2, #1, alias=#2-alias] {}; % \foreach \x in {#4} { % \draw [-*,thick] (\x) -- (#2-alias); % } ;% } \newcommand{\vgate}[6]{ % \node[wrap=#2] (#1-left) {}; % \node[wrap=#4] (#1-right) {}; % \node[gate=(#1-left)(#1-right)] (#1) {}; % \node[caption, below left=of #1.north ] (#1-left-caption) {#3}; % \node[caption, below right=of #1.north ] (#1-right-caption) {#5}; % \draw [-, dashed] (#1.north) -- (#1.south); % \foreach \x in {#6} { % \draw [-*,thick] (\x) -- (#1); % } ;% } \newcommand{\hgate}[6]{ % \node[wrap=#2] (#1-top) {}; % \node[wrap=#4] (#1-bottom) {}; % \node[gate=(#1-top)(#1-bottom)] (#1) {}; % \node[caption, above right=of #1.west ] (#1-top-caption) {#3}; % \node[caption, below right=of #1.west ] (#1-bottom-caption) {#5}; % \draw [-, dashed] (#1.west) -- (#1.east); % \foreach \x in {#6} { % \draw [-*,thick] (\x) -- (#1); % } ;% } \section{Experiments} \label{sec:experiments} In this section, we illustrate two main benefits of Edward: flexibility and efficiency. For the former, we show how it is easy to compare different inference algorithms on the same model. For the latter, we show how it is easy to get significant speedups by exploiting computational graphs. \subsection{Recent Methods in Variational Inference} \label{sub:recent} We demonstrate Edward's flexibility for experimenting with complex inference algorithms. We consider the \gls{VAE} setup from \Cref{fig:vae} and the binarized MNIST data set \citep{salakhutdinov2008quantitative}. We use $d=50$ latent variables per data point and optimize using ADAM. We study different components of the \gls{VAE} setup using different methods; \Cref{appendix:vae} is a complete script. After training we evaluate held-out log likelihoods, which are lower bounds on the true value. \Cref{table:mnist} shows the results. The first method uses the \gls{VAE} from \Cref{fig:vae}. 
The next three methods use the same \gls{VAE} but apply different gradient estimators: reparameterization gradient without an analytic KL; reparameterization gradient with an analytic entropy; and the score function gradient \citep{paisely2012variational,ranganath:2014}. This typically leads to the same optima but at different convergence rates. The score function gradient was slowest. Gradients with an analytic entropy produced difficulties around convergence: we switched to stochastic estimates of the entropy as it approached an optima. We also use \glspl{HVM} \citep{ranganath2016hierarchical} with a normalizing flow prior; it produced similar results as a normalizing flow on the latent variable space \citep{rezende2015variational}, and better than \glspl{IWAE} \citep{burda2016importance}. We also study novel combinations, such as \glspl{HVM} with the \acrshort{IWAE} objective, \gls{GAN}-based optimization on the decoder (with pixel intensity-valued data), and R\'{e}nyi divergence on the decoder. \gls{GAN}-based optimization does not enable calculation of the log-likelihood; R\'{e}nyi divergence does not directly optimize for log-likelihood so it does not perform well. The key point is that Edward is a convenient research platform: they are all easy modifications of a given script. \subsection{GPU-accelerated Hamiltonian Monte Carlo} \label{sub:gpu} We benchmark runtimes for a fixed number of Hamiltonian Monte Carlo \citep[\gls{HMC};][]{neal2011mcmc} iterations on modern hardware: a 12-core Intel i7-5930K CPU at 3.50GHz and an NVIDIA Titan X (Maxwell) GPU. We apply logistic regression on the Covertype dataset ($N=581012$, $D=54$; responses were binarized) using Edward, Stan (with PyStan) \citep{carpenter2016stan}, and PyMC3 \citep{salvatier2015probabilistic}. We ran 100 \gls{HMC} iterations, with 10 leapfrog updates per iteration, a step size of $0.5 / N$, and single precision. \Cref{fig:logistic_regression} illustrates the program in Edward. \Cref{table:hmc} displays the runtimes.% \footnote{In a previous version of this paper, we reported PyMC3 took 361s. This was caused by a bug preventing PyMC3 from correctly handling single-precision floating point. (PyMC3 with double precision is roughly 14x slower than Edward (GPU).) This has been fixed after discussion with Thomas Wiecki. The reported numbers also exclude compilation time, which is significant for Stan.} Edward (GPU) features a dramatic 35x speedup over Stan (1 CPU) and 6x speedup over PyMC3 (12 CPU). This showcases the value of building a \gls{PPL} on top of computational graphs. The speedup stems from fast matrix multiplication when calculating the model's log-likelihood; GPUs can efficiently parallelize this computation. We expect similar speedups for models whose bottleneck is also matrix multiplication, such as deep neural networks. There are various reasons for the speedup. Stan only used 1 CPU as it leverages multiple cores by running \gls{HMC} chains in parallel. Stan also used double-precision floating point as it does not allow single-precision. For PyMC3, we note Edward's speedup is not a result of PyMC3's Theano backend compared to Edward's TensorFlow. Rather, PyMC3 does not use Theano for all its computation, so it experiences communication overhead with NumPy. (PyMC3 was actually slower when using the GPU.) We predict that porting Edward's design to Theano would feature similar speedups. In addition to these speedups, we highlight that Edward has no runtime overhead: it is as fast as handwritten TensorFlow. 
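For orientation, the benchmarked program is essentially Bayesian logistic regression with an \texttt{Empirical} approximation updated by \gls{HMC}; the following is an indicative sketch, not the exact benchmark script (it assumes \texttt{X_train} and \texttt{y_train} hold the binarized Covertype data and follows the API conventions of the earlier snippets).
\begin{lstlisting}[language=Python]
# Hedged sketch of the benchmark: Bayesian logistic regression with HMC.
import edward as ed
import tensorflow as tf
from edward.models import Bernoulli, Empirical, Normal

N, D, T = 581012, 54, 100  # data points, features, number of HMC samples
X = tf.placeholder(tf.float32, [N, D])
w = Normal(mu=tf.zeros(D), sigma=tf.ones(D))
y = Bernoulli(logits=ed.dot(X, w))

qw = Empirical(params=tf.Variable(tf.zeros([T, D])))
inference = ed.HMC({w: qw}, data={X: X_train, y: y_train})
inference.run(step_size=0.5 / N, n_steps=10)  # 10 leapfrog updates per iteration
\end{lstlisting}
Whether this graph is built through Edward or written out by hand in TensorFlow, it executes at the same speed.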
Following \Cref{sub:inference}, this is because the computational graphs for inference are in fact the same for Edward and the handwritten code. \subsection{Probability Zoo} In addition to Edward, we also release the \emph{Probability Zoo}, a community repository for pre-trained probability models and their posteriors.\footnote{% The Probability Zoo is available at \url{http://edwardlib.org/zoo}. It includes model parameters and inferred posterior factors, such as local and global variables during training and any inference networks. } It is inspired by the model zoo in Caffe \citep{jia2014caffe}, which provides many pre-trained discriminative neural networks, and which has been key to making large-scale deep learning more transparent and accessible. It is also inspired by Forest \citep{stuhlmueller2012forest}, which provides examples of probabilistic programs. \section{Related Work} \label{sub:related} \vspace{-0.5ex} \Glspl{PPL} typically trade off the expressiveness of the language with the computational efficiency of inference. On one side, there are languages which emphasize expressiveness \citep{pfeffer2001ibal,milch2005blog,pfeffer2009figaro,goodman2012church}, representing a rich class beyond graphical models. Each employs a generic inference engine, but scales poorly with respect to model and data size. On the other side, there are languages which emphasize efficiency \citep{spiegelhalter1995bugs,murphy2001bayes,plummer2003jags,salvatier2015probabilistic,carpenter2016stan}. The \gls{PPL} is restricted to a specific class of models, and inference algorithms are optimized to be efficient for this class. For example, Infer.NET enables fast message passing for graphical models \citep{InferNET14}, and Augur enables data parallelism with \glspl{GPU} for Gibbs sampling in Bayesian networks \citep{tristan2014augur}. Edward bridges this gap. It is Turing complete---it supports any computable probability distribution---and it supports efficient algorithms, such as those that leverage model structure and those that scale to massive data. There has been some prior research on efficient algorithms in Turing-complete languages. Venture and Anglican design inference as a collection of local inference problems, defined over program fragments \citep{mansinghka2014venture,wood2014new}. This produces fast program-specific inference code, which we build on. Neither system supports inference methods such as programmable posterior approximations, inference models, or data subsampling. Concurrent with our work, WebPPL features amortized inference \citep{ritchie2016deep}. Unlike Edward, WebPPL does not reuse the model's representation; rather, it annotates the original program and leverages helper functions, which is a less flexible strategy. Finally, inference is designed as program transformations in \citet{kiselyov2009embedded,scibior2015practical,zinkov2016composing}. This enables the flexibility of composing inference inside other probabilistic programs. Edward builds on this idea to compose not only inference within modeling but also modeling within inference (e.g., variational models). \section{Model Examples} \label{appendix:model} There are many examples available at \url{http://edwardlib.org}, including models, inference methods, and complete scripts. Below we describe several model examples; \Cref{appendix:svi} describes an inference example (stochastic variational inference); \Cref{appendix:complete} describes complete scripts. 
All examples in this paper are comprehensive, only leaving out import statements and fixed values. See the companion webpage for this paper (\url{http://edwardlib.org/iclr2017}) for examples in a machine-readable format with runnable code. \subsection{Bayesian Neural Network for Classification} \label{appendix:bnn} A Bayesian neural network is a neural network with a prior distribution on its weights. Define the likelihood of an observation $(\mathbf{x}_n, y_n)$ with binary label $y_n\in\{0,1\}$ as \begin{align*} p(y_n \mid \mbW_0, \mbb_0, \mbW_1, \mbb_1 \;;\; \mathbf{x}_n) &= \operatorname{Bernoulli}(y_n \g \mathrm{NN}(\mathbf{x}_n\;;\; \mbW_0, \mbb_0, \mbW_1, \mbb_1)), \end{align*} where $\mathrm{NN}$ is a 2-layer neural network whose weights and biases form the latent variables $\mbW_0, \mbb_0, \mbW_1, \mbb_1$. Define the prior on the weights and biases to be the standard normal. See \Cref{fig:bnn}. There are $N$ data points, $D$ features, and $H$ hidden units. \subsection{Latent Dirichlet Allocation} \label{appendix:lda} See \Cref{fig:lda}. Note that the program is written for illustration. We recommend vectorization in practice: instead of storing scalar random variables in lists of lists, one should prefer to represent few random variables, each of which has many dimensions. \subsection{Gaussian Matrix Factorization} \label{appendix:gaussian_mf} See \Cref{fig:gaussian_mf}. \subsection{Dirichlet Process Mixture Model} \label{appendix:dirichlet_process} See \Cref{fig:dp}. \section{Inference Example: Stochastic Variational Inference} \label{appendix:svi} In the subgraph setting, we do data subsampling while working with a subgraph of the full model. This setting is necessary when the data and model do not fit in memory. It is scalable in that both the algorithm's computational complexity (per iteration) and memory complexity are independent of the data set size. For the code, we use the running example, a mixture model described in \Cref{fig:hierarchical_model_example}. \begin{lstlisting}[language=Python] N = 10000000 # data set size D = 2 # data dimension K = 5 # number of clusters \end{lstlisting} The model is \begin{equation*} p(\mbx, \mathbf{z}, \beta) = p(\beta) \prod_{n=1}^N p(z_n \mid \beta) p(x_n \mid z_n, \beta). \end{equation*} To avoid memory issues, we work on only a subgraph of the model, \begin{equation*} p(\mbx, \mathbf{z}, \beta) = p(\beta) \prod_{m=1}^M p(z_m \mid \beta) p(x_m \mid z_m, \beta) \end{equation*} \begin{lstlisting}[language=Python] M = 128 # mini-batch size beta = Normal(mu=tf.zeros([K, D]), sigma=tf.ones([K, D])) z = Categorical(logits=tf.zeros([M, K])) x = Normal(mu=tf.gather(beta, z), sigma=tf.ones([M, D])) \end{lstlisting} Assume the variational model is \begin{equation*} q(\mathbf{z}, \beta) = q(\beta; \lambda) \prod_{n=1}^N q(z_n \mid \beta; \gamma_n), \end{equation*} parameterized by $\{\lambda, \{\gamma_n\}\}$. Again, we work on only a subgraph of the model, \begin{equation*} q(\mathbf{z}, \beta) = q(\beta; \lambda) \prod_{m=1}^M q(z_m \mid \beta; \gamma_m), \end{equation*} parameterized by $\{\lambda, \{\gamma_m\}\}$. Importantly, only $M$ parameters are stored in memory for $\{\gamma_m\}$ rather than $N$. 
\begin{lstlisting}[language=Python] qbeta = Normal(mu=tf.Variable(tf.zeros([K, D])), sigma=tf.nn.softplus(tf.Variable(tf.zeros([K, D])))) qz_variables = tf.Variable(tf.zeros([M, K])) qz = Categorical(logits=qz_variables) \end{lstlisting} We use \texttt{KLqp}, a variational method that minimizes the divergence measure $\operatorname{KL}(q\gg p)$ \citep{jordan1999introduction}. We instantiate two algorithms: a global inference over $\beta$ given the subset of $\mathbf{z}$ and a local inference over the subset of $\mathbf{z}$ given $\beta$. We also pass in a TensorFlow placeholder \texttt{x_ph} for the data, so we can change the data at each step. \begin{lstlisting}[language=Python] x_ph = tf.placeholder(tf.float32, [M, D]) inference_global = ed.KLqp({beta: qbeta}, data={x: x_ph, z: qz}) inference_local = ed.KLqp({z: qz}, data={x: x_ph, beta: qbeta}) \end{lstlisting} We initialize the algorithms with the \texttt{scale} argument, so that computation on \texttt{z} and \texttt{x} will be scaled appropriately. This enables unbiased estimates for stochastic gradients. \begin{lstlisting}[language=Python] inference_global.initialize(scale={x: float(N) / M, z: float(N) / M}) inference_local.initialize(scale={x: float(N) / M, z: float(N) / M}) \end{lstlisting} We now run the algorithm, assuming there is a \texttt{next_batch} function which provides the next batch of data. \begin{lstlisting}[language=Python] qz_init = tf.initialize_variables([qz_variables]) for _ in range(1000): x_batch = next_batch(size=M) for _ in range(10): # make local inferences inference_local.update(feed_dict={x_ph: x_batch}) # update global parameters inference_global.update(feed_dict={x_ph: x_batch}) # reinitialize the local factors qz_init.run() \end{lstlisting} After each iteration, we also reinitialize the parameters for $q(\mathbf{z}\mid\beta)$; this is because we do inference on a new set of local variational factors for each batch. This demo readily applies to other inference algorithms such as \texttt{SGLD} (stochastic gradient Langevin dynamics): simply replace \texttt{qbeta} and \texttt{qz} with \texttt{Empirical} random variables; then call \texttt{ed.SGLD} instead of \texttt{ed.KLqp}. Note that if the data and model fit in memory but you'd still like to perform data subsampling for fast inference, we recommend not defining subgraphs. You can reify the full model, and simply index the local variables with a placeholder. The placeholder is fed at runtime to determine which of the local variables to update at a time. (For more details, see the website's API.) 
Without regularization (via priors), the objective we optimize is identical to negative sampling. \vspace{-0.5ex} \section{Compositional Representations for Probabilistic Models} \label{sec:modeling_language} \vspace{-0.5ex} We first develop compositional representations for probabilistic models. We desire two criteria: (a) integration with computational graphs, an efficient framework where nodes represent operations on data and edges represent data communicated between them \citep{culler1986dataflow}; and (b) invariance of the representation under the graph, that is, the representation can be reused during inference. Edward defines random variables as the key compositional representation. They are class objects with methods, for example, to compute the log density and to sample. Further, each random variable $\mbx$ is associated to a tensor (multi-dimensional array) $\mbx^*$, which represents a single sample $\mbx^*\sim p(\mbx)$. This association embeds the random variable onto a computational graph on tensors. The design's simplicity makes it easy to develop probabilistic programs in a computational graph framework. Importantly, all computation is represented on the graph. This enables one to compose random variables with complex deterministic structure such as deep neural networks, a diverse set of math operations, and third party libraries that build on the same framework. The design also enables compositions of random variables to capture complex stochastic structure. As an illustration, we use a Beta-Bernoulli model, $p(\mbx, \theta) = \operatorname{Beta}(\theta\g 1, 1) \prod_{n=1}^{50} \operatorname{Bernoulli}(x_n\g \theta)$, where $\theta$ is a latent probability shared across the 50 data points $\mbx\in\{0,1\}^{50}$. The random variable \texttt{x} is 50-dimensional, parameterized by the random tensor $\theta^*$. Fetching the object \texttt{x} runs the graph: it simulates from the generative process and outputs a binary vector of $50$ elements. All computation is registered symbolically on random variables and not over their execution. Symbolic representations do not require reifying the full model, which leads to unreasonable memory consumption for large models \citep{tristan2014augur}. Moreover, it enables us to simplify both deterministic and stochastic operations in the graph, before executing any code \citep{scibior2015practical,zinkov2016composing}. With computational graphs, it is also natural to build mutable states within the probabilistic program. As a typical use of computational graphs, such states can define model parameters; in TensorFlow, this is given by a \texttt{tf.Variable}. Another use case is for building discriminative models $p(\mby\g\mbx)$, where $\mbx$ are features that are input as training or test data. The program can be written independent of the data, using a mutable state (\texttt{tf.placeholder}) for $\mbx$ in its graph. During training and testing, we feed the placeholder the appropriate values. In \Cref{appendix:model}, we provide examples of a Bayesian neural network for classification (\ref{appendix:bnn}), latent Dirichlet allocation (\ref{appendix:lda}), and Gaussian matrix factorization (\ref{appendix:gaussian_mf}). We present others below. \subsection{Example: Variational Auto-encoder} \label{sub:vae} \Cref{fig:vae} implements a \gls{VAE} \citep{kingma2014autoencoding,rezende2014stochastic} in Edward. It comprises a probabilistic model over data and a variational model designed to approximate the former's posterior. 
Here we use random variables to construct both the probabilistic model and the variational model; they are fit during inference (more details in \Cref{sec:inference}). There are $N$ data points $x_n\in\{0,1\}^{28\cdot 28}$ each with $d$ latent variables, $z_n\in\mathbb{R}^d$. The program uses Keras \citep{chollet2015keras} to define neural networks. The probabilistic model is parameterized by a 2-layer neural network, with 256 hidden units (and ReLU activation), and generates $28\times 28$ pixel images. The variational model is parameterized by a 2-layer inference network, with 256 hidden units and outputs parameters of a normal posterior approximation. The probabilistic program is concise. Core elements of the \gls{VAE}---such as its distributional assumptions and neural net architectures---are all extensible. With model compositionality, we can embed it into more complicated models \citep{gregor2015draw,rezende2016one} and for other learning tasks \citep{kingma2014semi}. With inference compositionality (which we discuss in \Cref{sec:inference}), we can embed it into more complicated algorithms, such as with expressive variational approximations \citep{rezende2015variational,tran2016variational,kingma2016improving} and alternative objectives \citep{ranganath2016operator,li2016variational,dieng2016chi}. \subsection{\hspace{-0.225em}Example: Bayesian Recurrent Neural Network with Variable Length} Random variables can also be composed with control flow operations. As an example, \Cref{fig:bayesian_rnn} implements a Bayesian \glsreset{RNN}\gls{RNN} with variable length. The data is a sequence of inputs $\{\mbx_1,\ldots,\mbx_T\}$ and outputs $\{y_1,\ldots,y_T\}$ of length $T$ with $\mbx_t\in\mathbb{R}^{D}$ and $y_t\in\mathbb{R}$ per time step. For $t=1,\ldots,T$, a \gls{RNN} applies the update \begin{equation*} \mbh_t = \operatorname{tanh}(\mbW_h \mbh_{t-1} + \mbW_x \mbx_t + \mbb_h), \end{equation*} where the previous hidden state is $\mbh_{t-1}\in\mathbb{R}^H$. We feed each hidden state into the output's likelihood, $y_t \sim \operatorname{Normal}(\mbW_y \mbh_t + \mbb_y, 1)$, and we place a standard normal prior over all parameters $\{\mbW_h\in\mathbb{R}^{H\times H}, \mbW_x\in\mathbb{R}^{D\times H}, \mbW_y\in\mathbb{R}^{H\times 1}, \mbb_h\in\mathbb{R}^H,\mbb_y\in\mathbb{R}\}$. Our implementation is dynamic: it differs from a \gls{RNN} with fixed length, which pads and unrolls the computation. \subsection{Stochastic Control Flow and Model Parallelism} Random variables can also be placed in the control flow itself, enabling probabilistic programs with stochastic control flow. Stochastic control flow defines dynamic conditional dependencies, known in the literature as contingent or existential dependencies \citep{mansinghka2014venture,wu2016swift}. See \Cref{fig:dynamic}, where $\mbx$ may or may not depend on $\mba$ for a given execution. In \Cref{appendix:dirichlet_process}, we use stochastic control flow to implement a Dirichlet process mixture model. Tensors with stochastic shape are also possible: for example, \texttt{tf.zeros(Poisson(lam=5.0))} defines a vector of zeros with length given by a Poisson draw with rate $5.0$. Stochastic control flow produces difficulties for algorithms that use the graph structure because the relationship of conditional dependencies changes across execution traces. The computational graph, however, provides an elegant way of teasing out static conditional dependence structure ($\mbp$) from dynamic dependence structure ($\mba)$. 
We can perform model parallelism (parallel computation across components of the model) over the static structure with \glspl{GPU} and batch training. We can use more generic computations to handle the dynamic structure.
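To make the stochastic control flow concrete, the following is a minimal sketch in the style of the listings above. It uses only the constructs already mentioned in the text (\texttt{Bernoulli}, \texttt{Poisson}, and \texttt{tf.while_loop}); the \texttt{p} argument for \texttt{Bernoulli} and the helper name are our assumptions for illustration, not a verbatim excerpt.
\begin{lstlisting}[language=Python]
import tensorflow as tf
from edward.models import Bernoulli, Poisson

# From the text above: a tensor with stochastic shape, i.e. a vector of
# zeros whose length is given by a Poisson draw with rate 5.0.
x = tf.zeros(Poisson(lam=5.0))

# Stochastic control flow: a Bernoulli draw inside the loop condition, so
# the number of iterations (and hence the trace's dependence structure)
# varies across executions.
def stochastic_count(p):
  i = tf.constant(0)
  cond = lambda i: tf.cast(Bernoulli(p=p), tf.bool)  # redrawn each iteration
  body = lambda i: i + 1
  return tf.while_loop(cond, body, loop_vars=[i])

k = stochastic_count(0.5)
\end{lstlisting}
Fetching \texttt{k} re-executes the loop, so both its value and the set of operations run can differ from one execution trace to the next.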
Tighter bounds lead to improved classifiers
1606.09202
Table 3: Test error for the models reaching the best validation error for various values of T on the MNist dataset. The results for all values of T strictly greater than 1 are comparable and significantly better than for T=1.
[ "T", "[ITALIC] Z", "Test error ±3 [ITALIC] σ (%)" ]
[ [ "1000", "1e5", "7.00±0.08" ], [ "100", "1e6", "7.01±0.05" ], [ "10", "1e7", "6.97±0.08" ], [ "1", "1e8", "7.46±0.11" ] ]
The MNist dataset is a digit recognition dataset with 70000 samples. The first 60000 were used for the cross-validation and the last 10000 for testing. Inputs have dimension 784 but 67 of them are always equal to 0. Despite overfitting occurring quickly, values of T greater than 1 yield significant improvements over the log-loss. Training and validation curves are presented in Fig.
\documentclass{article} % more modern \bibliographystyle{abbrvnat} \graphicspath{{../images/}} \usepackage[]{algorithm2e} \numberwithin{equation}{section} \newtheorem{theorem}{Theorem} \newtheorem{lemma}[theorem]{Lemma} \newcommand{\theHalgorithm}{\arabic{algorithm}} \newcommand{\R}{\mathbb{R}} \DeclareMathOperator*{\argmin}{arg\,min} \DeclareMathOperator*{\argmax}{arg\,max} \SetKwInput{KwData}{The data} \SetKwInput{KwResult}{The result} \renewcommand{\thefootnote}{\alph{footnote}} \addtolength{\oddsidemargin}{-.25cm} \addtolength{\evensidemargin}{-.25cm} \addtolength{\textwidth}{.5cm} \addtolength{\topmargin}{-.5cm} \addtolength{\textheight}{1cm} \author{Nicolas Le Roux\\Criteo Research\\\texttt{nicolas@le-roux.name}} \date{\today} \begin{document} \title{Tighter bounds lead to improved classifiers} \maketitle \begin{abstract} The standard approach to supervised classification involves the minimization of a log-loss as an upper bound to the classification error. While this is a tight bound early on in the optimization, it overemphasizes the influence of incorrectly classified examples far from the decision boundary. Updating the upper bound during the optimization leads to improved classification rates while transforming the learning into a sequence of minimization problems. In addition, in the context where the classifier is part of a larger system, this modification makes it possible to link the performance of the classifier to that of the whole system, allowing the seamless introduction of external constraints. \end{abstract} \section{Introduction} Classification aims at mapping inputs $X \in \mathcal{X}$ to one or several classes $y \in \mathcal{Y}$. For instance, in object categorization, $\mathcal{X}$ will be the set of images depicting an object, usually represented by the RGB values of each of their pixels, and $\mathcal{Y}$ will be a set of object classes, such as ``car'' or ``dog''. We shall assume we are given a training set comprising $N$ independent and identically distributed labeled pairs $(X_i, y_i)$. The standard approach to solving the problem is to define a parameterized class of functions $p(y | X, \theta)$ indexed by $\theta$ and to find the parameter $\theta^*$ which minimizes the log-loss, i.e. \begin{align} \label{eq:log_loss} \theta^*&= \argmin_{\theta} -\frac{1}{N}\sum_i \log p(y_i | X_i, \theta)\\ &= \argmin_{\theta} L_{\log}(\theta) \; , \nonumber \end{align} with \begin{align} L_{\log}(\theta) &= -\frac{1}{N}\sum_i \log p(y_i | X_i, \theta) \; . \end{align} One justification for minimizing $L_{\log}(\theta)$ is that $\theta^*$ is the maximum likelihood estimator, i.e. the parameter which maximizes \begin{align*} \theta^* &= \argmax_{\theta}p(\mathcal{D} | \theta)\\ &= \argmax_{\theta}\prod_i p(y_i | X_i, \theta) \; . \end{align*} There is another reason to use Eq.~\ref{eq:log_loss}. Indeed, the goal we are interested in is minimizing the classification error. If we assume that our classifiers are stochastic and output a class according to $p(y_i | X_i, \theta)$, then the expected classification error is the probability of choosing the incorrect class\footnote{In practice, we choose the class deterministically and output $\argmax_y p(y | X_i, \theta)$.}. This translates to \begin{align} L(\theta) &= \frac{1}{N}\sum_i (1 - p(y_i | X_i, \theta))\nonumber\\ &= 1 - \frac{1}{N}\sum_i p(y_i | X_i, \theta) \; . \label{eq:g_supervised_prob} \end{align} This is a highly nonconvex function of $\theta$, which makes its minimization difficult. 
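As a purely illustrative sketch (not from the paper), both objectives can be written down for a linear softmax model standing in for $p(y \mid X, \theta)$; the helper below is ours and only makes the two quantities concrete.
\begin{lstlisting}[language=Python]
import numpy as np

def losses(theta, X, y):
    # p(y | X, theta) for a linear softmax model; theta has shape (D, K),
    # X has shape (N, D), y holds integer labels in {0, ..., K-1}.
    scores = X @ theta
    scores -= scores.max(axis=1, keepdims=True)
    P = np.exp(scores)
    P /= P.sum(axis=1, keepdims=True)
    p_correct = P[np.arange(len(y)), y]
    L_log = -np.mean(np.log(p_correct))   # log-loss L_log(theta)
    L = 1.0 - np.mean(p_correct)          # expected classification error L(theta)
    return L_log, L
\end{lstlisting}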
However, we have \begin{align*} L(\theta) &= 1 - \frac{1}{N}\sum_i p(y_i | X_i, \theta) \\ &\leq 1 - \frac{1}{N}\sum_i \frac{1}{K}\left(1 + \log p(y_i | X_i, \theta) + \log K\right)\\ &= \frac{(K - 1 - \log K)}{K} + \frac{L_{\log}(\theta)}{K} \; , \end{align*} where $K = |\mathcal{Y}|$ is the number of classes (assumed finite), using the fact that, for every positive $t$, we have $t \geq 1 + \log t$. Thus, minimizing $L_{\log}(\theta)$ is equivalent to minimizing an upper bound of $L(\theta)$. % Put here a figure of both functions. Further, this bound is tight when $p(y_i | X_i , \theta) = \frac{1}{K}$ for all $y_i$. As a model with randomly initialized parameters will assign probabilities close to $1/K$ to each class, it makes sense to minimize $L_{\log}(\theta)$ rather than $L(\theta)$ early on in the optimization. However, this bound becomes looser as $\theta$ moves away from its initial value. In particular, poorly classified examples, for which $p(y_i | X_i, \theta)$ is close to 0, have a strong influence on the gradient of $L_{\log} (\theta)$ despite having very little influence on the gradient of $L(\theta)$. The model will thus waste capacity trying to bring these examples closer to the decision boundary rather than correctly classifying those already close to the boundary. This will be especially noticeable when the model has limited capacity, i.e. in the underfitting setting. Section~\ref{sec:tighter_bounds} proposes a tighter bound on the classification error as well as an iterative scheme to easily optimize it. Section~\ref{sec:experiments} evaluates this iterative scheme using generalized linear models on a variety of datasets to estimate its impact. Section~\ref{sec:rl} then proposes a link between supervised learning and reinforcement learning, revisiting common techniques in a new light. Finally, Section~\ref{sec:conclusion} concludes and proposes future directions. \section{Tighter bounds on the classification error} \label{sec:tighter_bounds} We now present a general class of upper bounds of the classification error which will prove useful when the model is far from its initialization. \begin{lemma} \label{lemma:general_lower_bound} Let \begin{align} p_{\nu}(y | X, \theta) &= p(y | X, \nu) \left(1 + \log \frac{p(y | X, \theta)}{p(y | X, \nu)} \right) \label{eq:p_nu} \end{align} with $\nu$ any value of the parameters. Then we have \begin{align} p_{\nu}(y | X, \theta) &\leq p(y | X, \theta) \; . \end{align} Further, if $\nu = \theta$, we have \begin{align} p_{\theta}(y | X, \theta) &= p(y | X, \theta) \; ,\\ \frac{\partial p_{\nu}(y | X, \theta)}{\partial \theta}\bigg|_{\nu = \theta} &= \frac{\partial p(y | X, \theta)}{\partial \theta} \; . \end{align} \end{lemma} \begin{proof} \begin{align*} p(y | X, \theta) &= p(y | X, \nu)\frac{p(y | X, \theta)}{p(y | X, \nu)}\\ &\geq p(y | X, \nu) \left(1 + \log \frac{p(y | X, \theta)}{p(y | X, \nu)} \right)\\ &= p_{\nu}(y | X, \theta)\; . \end{align*} The second line stems from the inequality $ t \geq 1 + \log t$. $p_{\nu}(y | X, \theta) = p(y | X, \theta)$ is immediate when setting $\theta = \nu$ in Eq.~\ref{eq:p_nu}. Differentiating $p_{\nu}(y | X, \theta)$ with respect to $\theta$ yields \begin{align*} \frac{\partial p_{\nu}(y | X, \theta)}{\partial \theta} &= p(y | X, \nu)\frac{\partial \log p(y | X, \theta)}{\partial \theta}\\ &= \frac{p(y | X, \nu)}{p(y | X, \theta)}\frac{\partial p(y | X, \theta)}{\partial \theta} \; . 
\end{align*} Taking $\theta = \nu$ on both sides yields $\frac{\partial p_{\nu}(y | X, \theta)}{\partial \theta}\bigg|_{\nu=\theta} = \frac{\partial p(y | X, \theta)}{\partial \theta}$. \end{proof} Lemma~\ref{lemma:general_lower_bound} suggests that, if the current set of parameters is $\theta_t$, an appropriate upper bound on the expected classification error is \begin{align*} L(\theta) &= 1 - \frac{1}{N}\sum_i p(y_i | X_i, \theta)\\ &\leq 1 - \frac{1}{N}\sum_i p(y_i | X_i, \theta_t)\left(1 + \log\frac{p(y_i | X_i, \theta)}{p(y_i | X_i, \theta_t)}\right)\\ &= C - \frac{1}{N}\sum_i p(y_i | X_i, \theta_t)\log p(y_i | X_i, \theta)\; , \end{align*} where $C$ is a constant independent of $\theta$. We shall denote \begin{align} L_{\theta_t}(\theta) &= -\frac{1}{N}\sum_i p(y_i | X_i, \theta_t)\log p(y_i | X_i, \theta) \; . \label{eq:g_theta_t} \end{align} One possibility is to recompute the bound after every gradient step. This is exactly equivalent to directly minimizing $L$. Such a procedure is brittle. In particular, Eq.~\ref{eq:g_theta_t} indicates that, if an example is poorly classified early on, its gradient will be close to 0 and it will be difficult to recover from this situation. Thus, we propose using Algorithm~\ref{alg:iter_supervised} for supervised learning: \begin{algorithm} \caption{Iterative supervised learning} \label{alg:iter_supervised} \KwData{A dataset $\mathcal{D}$ comprising $(X_i, y_i)$ pairs, initial parameters $\theta_0$} \KwResult{Final parameters $\theta_T$} \For{t = 0 \KwTo T-1} {$\theta_{t+1} = \argmin_{\theta} L_{\theta_t}(\theta) = \argmin_{\theta} -\sum_i p(y_i | X_i, \theta_t) \log p(y_i | X_i, \theta)$} \end{algorithm} By regularly recomputing the bound, we ensure that it remains close to the quantity we are interested in and that we do not waste time optimizing a loose bound. The idea of computing tighter bounds during optimization is not new. In particular, several authors used a CCCP-based~\citep{yuille2003concave} procedure to achieve tighter bounds for SVMs~\citep{xu2006robust,collobert2006trading,bottou2011nonconvex}. Though~\citet{collobert2006trading} show a small improvement of the test error, the primary goal was to reduce the number of support vectors to keep the testing time manageable. Also, the algorithm proposed by~\citet{bottou2011nonconvex} required the setting of a hyperparameter, $s$, which has a strong influence on the final solution (see Fig.~5 in their paper). Finally, we are not aware of similar ideas in the context of the logistic loss. Additionally, our idea extends naturally to the case where $p$ is a complicated function of $\theta$ and not easily written as a sum of a convex and a concave function. This might lead to nonconvex inner optimizations but we believe that this can still yield lower classification error. A longer study in the case of deep networks is planned. \subsection*{Regularization} As this model further optimizes the training classification accuracy, regularization is often needed. The standard optimization procedure minimizes the following regularized objective: \begin{align*} \theta^* &= \argmin_{\theta} -\sum_i \log p(y_i | X_i, \theta) + \lambda \Omega(\theta)\\ &= \argmin_{\theta} -\sum_i \frac{1}{K}\log p(y_i | X_i, \theta) + \frac{\lambda}{K} \Omega(\theta) \; . 
\end{align*} Thus, we can view this as an upper bound of the following ``true'' objective: \begin{align*} \theta^* &= \argmin_{\theta} -\sum_i p(y_i | X_i, \theta) + \frac{\lambda}{K} \Omega(\theta) \; , \end{align*} which can then be optimized using Algorithm~\ref{alg:iter_supervised}. \subsection*{Online learning} Because of its iterative nature, Algorithm~\ref{alg:iter_supervised} is suited to a batch setting. However, in many cases, we have access to a stream of data and we cannot recompute the importance weights on all the points. A natural way around this problem is to select a parameter vector $\theta$ and to use $\nu = \theta$ for the subsequent examples. One can see this as ``crystallizing'' the current solution as the value of $\nu$ chosen will affect all subsequent gradients. \section{Experiments} \label{sec:experiments} We evaluated the impact of using tighter bounds on the expected misclassification rate on several datasets, which will each be described in their own section. The experimental setup for all datasets was as follows. We first set aside part of the dataset to compose the test set. We then performed k-fold cross-validation, using a generalized linear model, on the remaining datapoints for different values of $T$, the number of times the importance weights were recomputed, and the $\ell_2$-regularizer $\lambda$. For each value of $T$, we then selected the set of hyperparameters ($\lambda$ and the number of iterations) which achieved the lowest validation classification error. We computed the test error for each of the $k$ models (one per fold) with these hyperparameters. This allowed us to get confidence intervals on the test error, where the random variable is the training set but not the test set. For a fair comparison, each internal optimization was run for $Z$ updates so that $ZT$ was constant. Each update was computed on a randomly chosen minibatch of 50 datapoints using the SAG algorithm~\citep{leroux2012stochastic}. Since we used a generalized linear model, each internal optimization was convex and thus had no optimization hyperparameter. Fig.~\ref{fig:all_trains} presents the training classification errors on all the datasets. \subsection{Covertype binary dataset} The Covertype binary dataset~\citep{collobert2002parallel} has 581012 datapoints in dimension 54 and 2 classes. We used the first 90\% for the cross-validation and the last 10\% for testing. Due to the small dimension of the input, linear models strongly underfit, a regime in which tighter bounds are most beneficial. We see in Fig.~\ref{fig:covtype_valid} that using $T > 1$ leads to much lower training and validation classification errors. Training and validation curves are presented in Fig.~\ref{fig:covtype_valid} and the test classification error is listed in Table~\ref{table:covtype_test}. \subsection{Alpha dataset} The Alpha dataset is a binary classification dataset used in the Pascal Large-Scale challenge and contains 500000 samples in dimension 500. We used the first 400000 examples for the cross-validation and the last 100000 for testing. A logistic regression trained on this dataset overfits quickly and, as a result, the results for all values of $T$ are equivalent. Training and validation curves are presented in Fig.~\ref{fig:alpha_valid} and the test classification error is listed in Table~\ref{table:alpha_test}. \subsection{MNist dataset} The MNist dataset is a digit recognition dataset with 70000 samples. 
The first 60000 were used for the cross-validation and the last 10000 for testing. Inputs have dimension 784 but 67 of them are always equal to 0. Despite overfitting occurring quickly, values of $T$ greater than 1 yield significant improvements over the log-loss. Training and validation curves are presented in Fig.~\ref{fig:mnist_valid} and the test classification error is listed in Table~\ref{table:mnist_test}. \subsection{IJCNN dataset} The IJCNN dataset contains 191681 samples. The first 80\% of the dataset were used for training and validation (70\% for training, 10\% for validation, using random splits), and the last 20\% were used for testing. Inputs have dimension 23, which means we are likely to be in the underfitting regime. Indeed, larger values of $T$ lead to significant improvements over the log-loss. Training and validation curves are presented in Fig.~\ref{fig:ijcnn_valid} and the test classification error is listed in Table~\ref{table:ijcnn_test}. \section{Supervised learning as policy optimization} \label{sec:rl} We now propose an interpretation of supervised learning which closely matches that of direct policy optimization in reinforcement learning. This allows us to naturally address common issues in the literature, such as optimizing ROC curves or allowing a classifier to withhold taking a decision. A machine learning algorithm is often only one component of a larger system whose role is to make decisions, whether it is choosing which ad to display or deciding if a patient needs a specific treatment. Some of these systems also involve humans. Such systems are complex to optimize and it is often appealing to split them into smaller components which are optimized independently. However, such splits might lead to poor decisions, even when each component is carefully optimized~\citep{bottou2015stakes}. This issue can be alleviated by making each component optimize the full system with respect to its own parameters. Doing so requires taking into account the reaction of the other components in the system to the changes made, which cannot in general be modeled. However, one may cast it as a reinforcement learning problem where the environment is represented by everything outside of our component, including the other components of the system~\citep{bottou2013counterfactual}. Pushing the analogy further, we see that in one-step policy learning, we try to find a policy $p(y | X, \theta)$ over actions $y$ given the state $X$~\footnote{In standard policy learning, we actually consider full rollouts which include not only actions but also state changes due to these actions.} to minimize the expected loss defined as \begin{align} \bar{L}(\theta) &= -\sum_i \sum_y R(y, X_i) p(y | X_i, \theta) \; . \label{eq:policy_learning} \end{align} $\bar{L}(\theta)$ is equivalent to $L(\theta)$ from Eq.~\ref{eq:g_supervised_prob} where all actions have a reward of 0 except for the action choosing the correct class $y_i$ yielding $R(y_i, X_i) = 1$. One major difference between policy learning and supervised learning is that, in policy learning, we only observe the reward for the actions we have taken, while in supervised learning, the reward for all the actions is known. Casting the classification problem as a specific policy learning problem yields a loss function commensurate with a reward. In particular, it allows us to make explicit the rewards associated with each decision, which was difficult with Eq.~\ref{eq:log_loss}. 
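As an illustration of this formulation, the sketch below (the helper names are ours, not from the paper) computes the reward-weighted loss of Eq.~\ref{eq:policy_learning} and builds the 0/1 reward that recovers the classification objective.
\begin{lstlisting}[language=Python]
import numpy as np

def expected_policy_loss(P, R):
    # \bar{L}(theta) = -sum_i sum_y R(y, X_i) p(y | X_i, theta)
    # P: (N, K) class probabilities p(y | X_i, theta); R: (N, K) rewards.
    return -np.sum(R * P)

def classification_reward(y, K):
    # R(y, X_i) = 1 for the correct class y_i and 0 otherwise, so
    # \bar{L}(theta) equals L(theta) up to an affine transformation.
    R = np.zeros((len(y), K))
    R[np.arange(len(y)), y] = 1.0
    return R
\end{lstlisting}
Additional actions, such as the ``Undecided'' action discussed below, correspond to adding one more column to \texttt{R} (and to the policy) carrying the associated reward.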
We will now review several possibilities opened by this formulation. \subsection*{Optimizing the ROC curve} In some scenarios, we might be interested in performance metrics other than the average classification error. In search advertising, for instance, we are often interested in maximizing the precision at a given recall.~\citet{mozer2001prodding} address the problem by emphasizing the training points whose output is within a certain interval.~\citet{gasso2011batch,parambath2014optimizing}, on the other hand, assign a different cost to type I and type II errors, learning which values lead to the desired false positive rate. Finally,~\citet{bach2006considering} propose a procedure to find the optimal solution for all costs efficiently in the context of SVMs and show that the resulting models are not the optimal models in the class. To test the impact of optimizing the probabilities rather than a surrogate loss, we reproduced the binary problem of~\citet{bach2006considering}. We computed the average training and testing performance over 10 splits. An example of the training set and the results are presented in Fig.~\ref{fig:cost_asymmetry}. Even though working directly with probabilities solved the non-concavity issue, we still had to explore all possible cost asymmetries to draw this curve. In particular, if we had been asked to maximize the true positive rate for a given false positive rate, we would have needed to draw the whole curve then find the appropriate point. However, expressing the loss directly as a function of the probabilities of choosing each class allows us to cast this requirement as a constraint and solve the following constrained optimization problem: \begin{align*} \theta^* &= \argmin_{\theta} -\frac{1}{N_1}\sum_{i / y_i = 1} p(1 | x_i, \theta) \textrm{ such that } \frac{1}{N_0}\sum_{i/ y_i = 0} p(1 | x_i, \theta) \leq c_{FP} \; , \end{align*} with $N_0$ (resp. $N_1$) the number of examples belonging to class $0$ (resp. class $1$). Since $p(1 | x_i, \theta) = 1 - p(0 | x_i, \theta)$, we can solve the following Lagrangian problem \begin{align*} \min_{\theta} \max_{\lambda \geq 0} L(\theta, \lambda) &= \min_{\theta} \max_{\lambda \geq 0} -\frac{1}{N_1}\sum_{i / y_i = 1} p(1 | x_i, \theta) + \lambda \left(1 - \frac{1}{N_0}\sum_{i / y_i = 0} p(0 | x_i, \theta) - c_{FP}\right) \; . \end{align*} This is an approach proposed by~\citet{mozer2001prodding} who then minimize this function directly. We can however replace $L(\theta, \lambda)$ with the following upper bound: \begin{align*} L(\theta, \lambda) &\leq -\frac{1}{N_1}\sum_{i / y_i = 1} p(1 | x_i, \nu)\left(1 + \log \frac{p(1 | x_i, \theta)}{p(1 | x_i, \nu)}\right)\\ &\quad + \lambda \left(1 - \frac{1}{N_0}\sum_{i / y_i = 0} p(0 | x_i, \nu)\left(1 + \log \frac{p(0 | x_i, \theta)}{p(0 | x_i, \nu)}\right) - c_{FP}\right) \end{align*} and jointly optimize over $\theta$ and $\lambda$. Even though the constraint is on the upper bound and thus will not be exactly satisfied during the optimization, the bound becomes increasingly tight as the optimization converges, so the constraint ends up being satisfied at the end of the optimization. We show in Fig.~\ref{fig:test_fpr} the obtained false positive rate as a function of the required false positive rate and see that the constraint is close to being perfectly satisfied. One must note, however, that the ROC curve obtained using the constrained optimization problem matches that of $T=1$, i.e. is not concave. 
We do not have an explanation as to why the behaviour is not the same when solving the constrained optimization problem and when optimizing an asymmetric cost for all values of the asymmetry. \subsection*{Allowing uncertainty in the decision} Let us consider a cancer detection algorithm which would automatically classify patients in two categories: healthy or ill. In practice, this algorithm will not be completely accurate and, given the high price of a misclassification, we would like to include the possibility for the algorithm to hand over the decision to the practitioner. In other words, it needs to include the possibility of being ``Undecided''. The standard way of handling this situation is to manually set a threshold on the output of the classifier and, should the maximum score across all classes be below that threshold, deem the example too hard to classify. However, it is generally not obvious how to set the value of that threshold nor how it relates to the quantity we care about. The difficulty is heightened when the prior probabilities of each class are very different. Eq.~\ref{eq:policy_learning} allows us to naturally include an extra ``action'', the ``Undecided'' action, which has its own reward. This reward should be equal to the reward of choosing the correct class (i.e., 1) minus the cost $c_h$ of resorting to external intervention~\footnote{This is assuming that the external intervention always leads to the correct decision. Any other setting can easily be used.}, which is less than 1 since we would otherwise rather have an error than be undecided. Let us denote by $r_h = 1 - c_h$ the reward obtained when the model chooses the ``Undecided'' class. Then, the reward obtained when the input is $X_i$ is: \begin{align*} R(y_i | X_i) &= 1\\ R(``Undecided'' | X_i) &= r_h \; , \end{align*} and the average under the policy is $p(y_i | X_i, \theta) + r_h p(``Undecided'' | X_i, \theta)$. Learning this model on a training set is equivalent to minimizing the following quantity: \begin{align} \theta^* &= \argmin_{\theta} -\frac{1}{N}\sum_i \left(p(y_i | X_i, \theta) + r_h p(\textrm{``Undecided''} | X_i, \theta)\right) \; . \end{align} For each training example, we have added another example with importance weight $r_h$ and class ``Undecided''. If we were to solve this problem through a minimization of the log-loss, it is well-known that the optimal solution would be, for each example $X_i$, to predict $y_i$ with probability $1 / (1 + r_h)$ and ``Undecided'' with probability $r_h / (1 + r_h)$. However, when optimizing the weighted sum of probabilities, the optimal solution is still to predict $y_i$ with probability 1. In other words, adding the ``Undecided'' class does not change the model if it has enough capacity to learn the training set accurately. \iffalse \subsection*{Choosing amongst a set of classes} Some classification tasks involve a large number of classes and it is unreasonable to expect the algorithm to choose the correct class with high accuracy. Rather than maximize the top-1 classification rate, another approach is to maximize the top-$k$ rate, that is the probability that the correct class belongs to the set of $k$ classes provided by the algorithm. However, classifiers built to maximize this rate often solve Eq.~\ref{eq:log_loss}, despite the change in performance metric. Again, casting the classification problem as a policy learning problem provides a natural alternative. In that setting, an action is not a class anymore but rather a set of $k$ classes. 
The reward is again 1 when the correct class is in the set and 0 otherwise. We do not address here the issue of finding a suitable distribution over sets of $k$ elements, which is another important topic. \fi \section{Discussion and conclusion} \label{sec:conclusion} Using a general class of upper bounds of the expected classification error, we showed how a sequence of minimizations could lead to reduced classification error rates. However, there are still a lot of questions to be answered. As using $T > 1$ increases overfitting, one might wonder whether the standard regularizers are still appropriate. Also, current state-of-the-art models, especially in image classification, already use strong regularizers such as dropout. The question remains whether using $T > 1$ with these models would lead to an improvement. Additionally, it makes less and less sense to think of machine learning models in isolation. They are increasingly often part of large systems and one must think of the proper way of optimizing them in this setting. The modification proposed here led to an explicit formulation for the true impact of a classifier. This facilitates the optimization of such a classifier in the context of a larger production system where additional costs and constraints may be readily incorporated. We believe this is a critical avenue of research to be explored further. \subsubsection*{Acknowledgments} We thank Francis Bach, L\'eon Bottou, Guillaume Obozinski, and Vianney Perchet for helpful discussions. \end{document}
Tighter bounds lead to improved classifiers
1606.09202
Table 4: Test error for the models reaching the best validation error for various values of T on the IJCNN dataset. Larger values of T lead to significantly lower test errors.
[ "T", "[ITALIC] Z", "Test error ±3 [ITALIC] σ (%)" ]
[ [ "1000", "1e5", "4.62±0.12" ], [ "100", "1e6", "5.26±0.33" ], [ "10", "1e7", "5.87±0.13" ], [ "1", "1e8", "6.19±0.12" ] ]
The IJCNN dataset contains 191681 samples. The first 80% of the dataset were used for training and validation (70% for training, 10% for validation, using random splits), and the last 20% were used for testing. Inputs have dimension 23, which means we are likely to be in the underfitting regime. Indeed, larger values of T lead to significant improvements over the log-loss. Training and validation curves are presented in Fig.
\end{align*} Thus, we can view this as an upper bound of the following ``true'' objective: \begin{align*} \theta^* &= \argmin_{\theta} -\sum_i p(y_i | X_i, \theta) + \frac{\lambda}{K} \Omega(\theta) \; , \end{align*} which can then be optimized using Algorithm~\ref{alg:iter_supervised}. \subsection*{Online learning} Because of its iterative nature, Algorithm~\ref{alg:iter_supervised} is adapted to a batch setting. However, in many cases, we have access to a stream of data and we cannot recompute the importance weights on all the points. A natural way around this problem is to select a parameter vector $\theta$ and to use $\nu = \theta$ for the subsequent examples. One can see this as ``crystallizing'' the current solution as the value of $\nu$ chosen will affect all subsequent gradients. \section{Experiments} \label{sec:experiments} We experimented the impact of using tighter bounds to the expected misclassification rate on several datasets, which will each be described in their own section. The experimental setup for all datasets was as follows. We first set aside part of the dataset to compose the test set. We then performed k-fold cross-validation, using a generalized linear model, on the remaining datapoints for different values of $T$, the number of times the importance weights were recomputed, and the $\ell_2$-regularizer $\lambda$. For each value of $T$, we then selected the set of hyperparameters ($\lambda$ and the number of iterations) which achieved the lowest validation classification error. We computed the test error for each of the $k$ models (one per fold) with these hyperparameters. This allowed us to get a confidence intervals on the test error, where the random variable is the training set but not the test set. For a fair comparison, each internal optimization was run for $Z$ updates so that $ZT$ was constant. Each update was computed on a randomly chosen minibatch of 50 datapoints using the SAG algorithm~\citep{leroux2012stochastic}. Since we used a generalized linear model, each internal optimization was convex and thus had no optimization hyperparameter. Fig.~\ref{fig:all_trains} presents the training classification errors on all the datasets. \subsection{Covertype binary dataset} The Covertype binary dataset~\citep{collobert2002parallel} has 581012 datapoints in dimension 54 and 2 classes. We used the first 90\% for the cross-validation and the last 10\% for testing. Due to the small dimension of the input, linear models strongly underfit, a regime in which tighter bounds are most beneficial. We see in Fig.~\ref{fig:covtype_valid} that using $T > 1$ leads to much lower training and validation classification errors. Training and validation curves are presented in Fig.~\ref{fig:covtype_valid} and the test classification error is listed in Table~\ref{table:covtype_test}. \subsection{Alpha dataset} The Alpha dataset is a binary classification dataset used in the Pascal Large-Scale challenge and contains 500000 samples in dimension 500. We used the first 400000 examples for the cross-validation and the last 100000 for testing. A logistic regression trained on this dataset overfits quickly and, as a result, the results for all values of $T$ are equivalent. Training and validation curves are presented in Fig.~\ref{fig:alpha_valid} and the test classification error is listed in Table~\ref{table:alpha_test}. \subsection{MNist dataset} The MNist dataset is a digit recognition dataset with 70000 samples. 
The first 60000 were used for the cross-validation and the last 10000 for testing. Inputs have dimension 784 but 67 of them are always equal to 0. Despite overfitting occurring quickly, values of $T$ greater than 1 yield significant improvements over the log-loss. Training and validation curves are presented in Fig.~\ref{fig:mnist_valid} and the test classification error is listed in Table~\ref{table:mnist_test}. \subsection{IJCNN dataset} The IJCNN dataset is a dataset with 191681 samples. The first 80\% of the dataset were used for training and validation (70\% for training, 10\% for validation, using random splits), and the last 20\% were used for testing samples. Inputs have dimension 23, which means we are likely to be in the underfitting regime. Indeed, larger values of $T$ lead to significant improvements over the log-loss. Training and validation curves are presented in Fig.~\ref{fig:ijcnn_valid} and the test classification error is listed in Table~\ref{table:ijcnn_test}. \section{Supervised learning as policy optimization} \label{sec:rl} We now propose an interpretation of supervised learning which closely matches that of direct policy optimization in reinforcement learning. This allows us to naturally address common issues in the literature, such as optimizing ROC curves or allowing a classifier to withhold taking a decision. A machine learning algorithm is often only one component of a larger system whose role is to make decisions, whether it is choosing which ad to display or deciding if a patient needs a specific treatment. Some of these systems also involve humans. Such systems are complex to optimize and it is often appealing to split them into smaller components which are optimized independently. However, such splits might lead to poor decisions, even when each component is carefully optimized~\citep{bottou2015stakes}. This issue can be alleviated by making each component optimize the full system with respect to its own parameters. Doing so requires taking into account the reaction of the other components in the system to the changes made, which cannot in general be modeled. However, one may cast it as a reinforcement learning problem where the environment is represented by everything outside of our component, including the other components of the system~\citep{bottou2013counterfactual}. Pushing the analogy further, we see that in one-step policy learning, we try to find a policy $p( y | X, \theta)$ over actions $y$ given the state $X$~\footnote{In standard policy learning, we actually consider full rollouts which include not only actions but also state changes due to these actions.} to minimize the expected loss defined as \begin{align} \bar{L}(\theta) &= -\sum_i \sum_y R(y, X_i) p(y | X_i, \theta) \; . \label{eq:policy_learning} \end{align} $\bar{L}(\theta)$ is equivalent to $L(\theta)$ from Eq.~\ref{eq:g_supervised_prob} where all actions have a reward of 0 except for the action choosing the correct class $y_i$ yielding $R(y_i, X_i) = 1$. One major difference between policy learning and supervised learning is that, in policy learning, we only observe the reward for the actions we have taken, while in supervised learning, the reward for all the actions is known. Casting the classification problem as a specific policy learning problem yields a loss function commensurate with a reward. In particular, it allows us to explicit the rewards associated with each decision, which was difficult with Eq.~\ref{eq:log_loss}. 
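As a small illustration of Eq.~\ref{eq:policy_learning}, the sketch below (Python, with illustrative probabilities and rewards) computes the expected loss as a reward-weighted sum over the policy's action probabilities; placing a reward of 1 on the correct class recovers the supervised objective of Eq.~\ref{eq:g_supervised_prob} up to scaling and an additive constant:
\begin{verbatim}
import numpy as np

# Illustrative values: p(y | X_i, theta) for two examples and three actions.
probs = np.array([
    [0.7, 0.2, 0.1],
    [0.3, 0.5, 0.2],
])
rewards = np.zeros_like(probs)
rewards[0, 0] = 1.0           # correct class of example 0
rewards[1, 2] = 1.0           # correct class of example 1

expected_loss = -np.sum(rewards * probs)
print(expected_loss)          # equals -sum_i p(y_i | X_i, theta)
\end{verbatim}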
We will now review several possibilities opened by this formulation. \subsection*{Optimizing the ROC curve} In some scenarios, we might be interested in other performance metrics than the average classification error. In search advertising, for instance, we are often interested in maximizing the precision at a given recall.~\citet{mozer2001prodding} address the problem by emphasizing the training points whose output is within a certain interval.~\citet{gasso2011batch,parambath2014optimizing}, on the other hand, assign a different cost to type I and type II errors, learning which values lead to the desired false positive rate. Finally,~\citet{bach2006considering} propose a procedure to find the optimal solution for all costs efficiently in the context of SVMs and showed that the resulting models are not the optimal models in the class. To test the impact of optimizing the probabilities rather than a surrogate loss, we reproduced the binary problem of~\citet{bach2006considering}. We computed the average training and testing performance over 10 splits. An example of the training set and the results are presented in Fig.~\ref{fig:cost_asymmetry}. Even though working directly with probabilities solved the non-concavity issue, we still had to explore all possible cost asymmetries to draw this curve. In particular, if we had been asked to maximize the true positive rate for a given false positive rate, we would have needed to draw the whole curve then find the appropriate point. However, expressing the loss directly as a function of the probabilities of choosing each class allows us to cast this requirement as a constraint and solve the following constrained optimization problem: \begin{align*} \theta^* &= \argmin_{\theta} -\frac{1}{N_1}\sum_{i / y_i = 1} p(1 | x_i, \theta) \textrm{ such that } \frac{1}{N_0}\sum_{i/ y_i = 0} p(1 | x_i, \theta) \leq c_{FP} \; , \end{align*} with $N_0$ (resp. $N_1$) the number of examples belonging to class $0$ (resp. class $1$). Since $p(1 | x_i, \theta) = 1 - p(0 | x_i, \theta)$ , we can solve the following Lagrangian problem \begin{align*} \min_{\theta} \max_{\lambda \geq 0} L(\theta, \lambda) &= \min_{\theta} \max_{\lambda \geq 0} \frac{1}{N_1}\sum_{i / y_i = 1} p(1 | x_i, \theta) + \lambda \left(1 - \frac{1}{N_0}\sum_{i / y_i = 0} p(0 | x_i, \theta) - c_{FP}\right) \; . \end{align*} This is an approach proposed by~\citet{mozer2001prodding} who then minimize this function directly. We can however replace $L(\theta, \lambda)$ with the following upper bound: \begin{align*} L(\theta, \lambda) &\leq \frac{1}{N_1}\sum_{i / y_i = 1} p(1 | x_i, \nu)\left(1 + \log \frac{p(1 | x_i, \theta)}{p(1 | x_i, \nu)}\right)\\ &\quad + \lambda \left(1 - \frac{1}{N_0}\sum_{i / y_i = 0} p(0 | x_i, \nu)\left(1 + \log \frac{p(0 | x_i, \theta)}{p(0 | x_i, \nu)}\right) - c_{FP}\right) \end{align*} and jointly optimize over $\theta$ and $\lambda$. Even though the constraint is on the upper bound and thus will not be exactly satisfied during the optimization, the increasing tightness of the bound with the convergence will lead to a satisfied constraint at the end of the optimization. We show in Fig.~\ref{fig:test_fpr} the obtained false positive rate as a function of the required false positive rate and see that the constraint is close to being perfectly satisfied. One must note, however, that the ROC curve obtained using the constrained optimization problems matches that of $T=1$, i.e. is not concave. 
We do not have an explanation as to why the behaviour is not the same when solving the constrained optimization problem and when optimizing an asymmetric cost for all values of the asymmetry. \subsection*{Allowing uncertainty in the decision} Let us consider a cancer detection algorithm which would automatically classify patients in two categories: healthy or ill. In practice, this algorithm will not be completely accurate and, given the high price of a misclassification, we would like to include the possibility for the algorithm to hand over the decision to the practitioner. In other words, it needs to include the possibility of being ``Undecided''. The standard way of handling this situation is to manually set a threshold on the output of the classifier and, should the maximum score across all classes be below that threshold, deem the example too hard to classify. However, it is generally not obvious how to set the value of that threshold nor how it relates to the quantity we care about. The difficulty is heightened when the prior probabilities of each class are very different. Eq.~\ref{eq:policy_learning} allows us to naturally include an extra ``action'', the ``Undecided'' action, which has its own reward. This reward should be equal to the reward of choosing the correct class (i.e., 1) minus the cost $c_h$ of resorting to external intervention~\footnote{This is assuming that the external intervention always leads to the correct decision. Any other setting can easily be used.}, which is less than 1 since we would otherwise rather have an error than be undecided. Let us denote by $r_h = 1 - c_h$ the reward obtained when the model chooses the ``Undecided'' class. Then, the reward obtained when the input is $X_i$ is: \begin{align*} R(y_i | X_i) &= 1\\ R(``Undecided'' | X_i) &= r_h \; , \end{align*} and the average under the policy is $p(y_i | X_i, \theta) + r_h p(``Undecided'' | X_i, \theta)$. Learning this model on a training set is equivalent to minimizing the following quantity: \begin{align} \theta^* &= \argmin_{\theta} -\frac{1}{N}\sum_i \left(p(y_i | X_i, \theta) + r_h p(\textrm{``Undecided''} | X_i, \theta)\right) \; . \end{align} For each training example, we have added another example with importance weight $r_h$ and class ``Undecided''. If we were to solve this problem through a minimization of the log-loss, it is well-known that the optimal solution would be, for each example $X_i$, to predict $y_i$ with probability $1 / (1 + r_h)$ and ``Undecided'' with probability $r_h / (1 + r_h)$. However, when optimizing the weighted sum of probabilities, the optimal solution is still to predict $y_i$ with probability 1. In other words, adding the ``Undecided'' class does not change the model if it has enough capacity to learn the training set accurately. \iffalse \subsection*{Choosing amongst a set of classes} Some classification tasks involve a large number of classes and it is unreasonable to expect the algorithm to choose the correct class with high accuracy. Rather than maximize the top-1 classification rate, another approach is to maximize the top-$k$ rate, that is the probability that the correct class belongs to the set of $k$ classes provided by the algorithm. However, classifiers built to maximize this rate often solve Eq.~\ref{eq:log_loss}, despite the change in performance metric. Again, casting the classification problem as a policy learning problem provides a natural alternative. In that setting, an action is not a class anymore but rather a set of $k$ classes. 
The reward is again 1 when the correct class is in the set and 0 otherwise. We do not address here the issue of finding a suitable distribution over sets of $k$ elements which is another important topic. \fi \section{Discussion and conclusion} \label{sec:conclusion} Using a general class of upper bounds of the expected classification error, we showed how a sequence of minimizations could lead to reduced classification error rates. However, there are still a lot of questions to be answered. As using $T > 1$ increases overfitting, one might wonder whether the standard regularizers are still adapted. Also, current state-of-the-art models, especially in image classification, already use strong regularizers such as dropout. The question remains whether using $T > 1$ with these models would lead to an improvement. Additionally, it makes less and less sense to think of machine learning models in isolation. They are increasingly often part of large systems and one must think of the proper way of optimizing them in this setting. The modification proposed here led to an explicit formulation for the true impact of a classifier. This facilitates the optimization of such a classifier in the context of a larger production system where additional costs and constraints may be readily incorporated. We believe this is a critical venue of research to be explored further. \subsubsection*{Acknowledgments} We thank Francis Bach, L\'eon Bottou, Guillaume Obozinski, and Vianney Perchet for helpful discussions. \end{document}
Neural Graph Machines: Learning Neural Networks Using Graphs
1703.04818
Table 3: Results for news article categorization using character-level CNNs. Our method gives better predictive accuracy despite using a much smaller CNN than the “small CNN” baseline from (Zhang et al., 2015)‡.
[ "Network", "Accuracy %" ]
[ [ "Baseline‡", "84.35" ], [ "Baseline with thesaurus augmentation‡", "85.20" ], [ "Our “tiny” CNN", "85.07" ], [ "Our “tiny” CNN with NGM", "[BOLD] 86.90" ] ]
Our approach outperforms the baseline, providing a 1.8% absolute and 2.1% relative improvement in accuracy, despite using a much smaller network.
\section{Introduction} \label{sec:intro} Semi-supervised learning is a powerful machine learning paradigm that can improve the prediction performance compared to techniques that use only labeled data, by leveraging a large amount of unlabeled data. The need of semi-supervised learning arises in many problems in computer vision, natural language processing or social networks, in which getting labeled datapoints is expensive or unlabeled data is abundant and readily available. There exist a plethora of semi-supervised learning methods. The simplest one uses bootstrapping techniques to generate pseudo-labels for unlabeled data generated from a system trained on labeled data. However, this suffers from label error feedbacks \citep{lee2013pseudo}. In a similar vein, autoencoder based methods often need to rely on a two-stage approach: train an autoencoder using unlabeled data to generate an embedding mapping, and use the learnt embeddings for prediction. In practice, this procedure is often costly and inaccurate. Another example is transductive SVMs \citep{joachims1999transductive}, which is too computationally expensive to be used for large datasets. Methods that are based on generative models and amortized variational inference \citep{kingma2014semi} can work well for images and videos, but it is not immediately clear on how to extend such techniques to handle sparse and multi-modal inputs or graphs over the inputs. In contrast to the methods above, graph-based techniques such as label propagation \citep{zhu2002learning, bengio2006label} often provide a versatile, scalable, and yet effective solution to a wide range of problems. These methods construct a smooth graph over the unlabeled and labeled data. Graphs are also often a natural way to describe the relationships between nodes, such as similarities between embeddings, phrases or images, or connections between entities on the web or relations in a social network. Edges in the graph connect semantically similar nodes or datapoints, and if present, edge weights reflect how strong such similarities are. By providing a set of labeled nodes, such techniques iteratively refine the node labels by aggregating information from neighbours and propagate these labels to the nodes' neighbours. In practice, these methods often converge quickly and can be scaled to large datasets with a large label space \citep{ravi2016large}. We build upon the principle behind label propagation for our method. Another key motivation of our work is the recent advances in neural networks and their performance on a wide variety of supervised learning tasks such as image and speech recognition or sequence-to-sequence learning \citep{krizhevsky2012imagenet, hinton2012deep, sutskever2014sequence}. Such results are however conditioned on training very large networks on large datasets, which may need millions of labeled training input-output pairs. This begs the question: can we harness previous state-of-the-art semi-supervised learning techniques, to jointly train neural networks using limited labeled data and unlabeled data to improve its performance? \vspace{-0.1in} \paragraph{Contributions:} We propose a discriminative training objective for neural networks with graph augmentation, that can be trained with stochastic gradient descent and efficiently scaled to large graphs. 
The new objective has a regularization term for generic neural network architectures that enforces similarity between nodes in the graphs, which is inspired by the objective function of label propagation. In particular, we show that: \begin{itemize} \item Graph-augmented neural network training can work for a wide range of neural networks, such as feed-forward, convolutional and recurrent networks. Additionally, this technique can be used in both inductive and transductive settings. It also helps learning in low-sample regime (small number of labeled nodes), which cannot be handled by vanilla neural network training. \item The framework can handle multiple forms of graphs, either naturally given or constructed based on embeddings and knowledge bases. \item Using graphs and neighbourhood information alone as direct inputs to neural networks in this joint training framework permits fast and simple inference, yet provides competitive performance with current state-of-the-art approaches which employ a two-step method of first training a node embedding representation from the graph and then using it as feature input to train a classifer separately (see \cref{sec:exp_graph}). \item As a by-product, our proposed framework provides a simple technique to finding smaller and faster neural networks that offer competitive performance with larger and slower non graph-augmented alternatives (see \cref{sec:exp_cnn}). \end{itemize} We experimentally show that the proposed training framework outperforms state-of-the-art or perform favourably on a variety of prediction tasks and datasets, involving text features and/or graph inputs and on many different neural network architectures (see \cref{sec:experiments}). The paper is organized as follows: we first review some background and literature, and relate them to our approach in \cref{sec:background}; we then detail the training objective and its properties in \cref{sec:ngm}; and finally we validate our approach on a range of experiments in \cref{sec:experiments}.\section{Conclusions} \label{sec:summary} We have revisited graph-augmentation training of neural networks and proposed Neural Graph Machines as a general framework for doing so. Its objective function encourages the neural networks to make accurate node-level predictions, as in vanilla neural network training, as well as constrains the networks to learn similar hidden representations for nodes connected by an edge in the graph. Importantly, the objective can be trained by stochastic gradient descent and scaled to large graphs. We validated the efficacy of the graph-augmented objective on various tasks including bloggers' interest, text category and semantic intent classification problems, using a wide range of neural network architectures (FFNNs, CNNs and LSTM RNNs). The experimental results demonstrated that graph-augmented training almost always helps to find better neural networks that outperforms other techniques in predictive performance or even much smaller networks that are faster and easier to train. Additionally, the node-level input features can be combined with graph features as inputs to the neural networks. We showed that a neural network that simply takes the adjacency matrix of a graph and produces node labels, can perform better than a recently proposed two-stage approach using sophisticated graph embeddings and a linear classifier. Our framework also excels when the neural network is small, or when there is limited supervision available. 
While our objective can be applied to multiple graphs which come from different domains, we have not fully explored this aspect and leave this as future work. We expect the domain-specific networks can interact with the graphs to determine the importance of each domain/graph source in prediction. We also did not explore using graph regularisation for different hidden layers of the neural networks; we expect this is key for the multi-graph transfer setting \cite{yosinski2014transferable}. Another possible future extension is to use our objective on directed graphs, that is to control the direction of influence between nodes during training.\section{Neural graph machines} \label{sec:ngm} In this section, we devise a discriminative training objective for neural networks, that is inspired by the label propagation objective and uses both labeled and unlabeled data, and can be trained by stochastic gradient descent. First, we take a close look at the two objective functions discussed in section \ref{sec:background}. The label propagation objective equation \ref{eqn:cost_lp} ensures the predicted label distributions of neighbouring nodes to be similar, while those of labeled nodes to be close to the ground truth. For example: if a {\it cat} image and a {\it dog} image are strongly connected in a graph, and if the {\it cat} node is labeled as {\it animal}, the predicted probability of the {\it dog} node being {\it animal} is also high. In contrast, the neural network training objective equation \ref{eqn:cost_nn} only takes into account the labeled instances, and ensure correct predictions on the training set. As a consequence, a neural network trained on the {\it cat} image alone will not make an accurate prediction on the {\it dog} image. Such shortcoming of neural network training can be rectified by biasing the network using prior knowledge about the relationship between instances in the dataset. In particular, for the domains we are interested in, training instances (either labeled or unlabeled) that are connected in a graph, for example, {\it dog} and {\it cat} in the above example, should have similar predictions. This can be done by encouraging neighboring data points to have a similar hidden representation learnt by a neural network, resulting in a modified objective function for training neural network architectures using both labeled and unlabeled datapoints. We call architectures trained using this objective {\it Neural Graph Machines}, and schematically illustrate the concept in figure \ref{fig:ngm}. The proposed objective function is a weighted sum of the neural network cost and the label propagation cost as follows, \begin{align} \mathcal{C}_{\mathrm{NGM}}(\theta) &= \sum_{n=1}^{V_l} c (g_\theta(x_n), y_n) \nonumber \\ & \quad + \alpha_1 \sum_{(u,v)\in \mathcal{E}_{LL}} w_{uv} d(h_\theta(x_u), h_\theta(x_v)) \nonumber \\ & \quad + \alpha_2 \sum_{(u,v)\in \mathcal{E}_{LU}} w_{uv} d(h_\theta(x_u), h_\theta(x_v)) \nonumber \\ & \quad + \alpha_3 \sum_{(u,v)\in \mathcal{E}_{UU}} w_{uv} d(h_\theta(x_u), h_\theta(x_v), \label{eqn:cost_ngm} \end{align} where $\mathcal{E}_{LL}$, $\mathcal{E}_{LU}$, and $\mathcal{E}_{UU}$ are sets of labeled-labeled, labeled-unlabeled and unlabeled-unlabeled edges correspondingly, $h(\cdot)$ represents the hidden representations of the inputs produced by the neural network, and $d(\cdot)$ is a distance metric, and $\{\alpha_1, \alpha_2, \alpha_3\}$ are hyperparameters. 
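A minimal sketch of this objective in code (Python/NumPy, using an $l$-2 distance and treating the output layer $g_\theta$ and hidden layer $h_\theta$ as generic callables; the function is illustrative rather than the implementation used in our experiments) is:
\begin{verbatim}
import numpy as np

def ngm_loss(g, h, labeled, edges_ll, edges_lu, edges_uu, alphas, cost):
    # Illustrative sketch of eqn:cost_ngm, not the experimental implementation.
    # labeled: list of (x, y) pairs; edges_*: lists of (x_u, x_v, w_uv);
    # cost: per-example supervised loss, e.g. cross entropy.
    a1, a2, a3 = alphas
    loss = sum(cost(g(x), y) for x, y in labeled)
    for alpha, edges in ((a1, edges_ll), (a2, edges_lu), (a3, edges_uu)):
        loss += alpha * sum(w * np.sum((h(xu) - h(xv)) ** 2)
                            for xu, xv, w in edges)
    return loss
\end{verbatim}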
Note that we have separated the terms based on the edge types, as these can affect the training differently. In practice, we choose an $l$-1 or $l$-2 distance metric for $d(\cdot)$, and $h(x)$ to be the last layer of the neural network. However, these choices can be changed, to a customized metric, or to using an intermediate hidden layer instead. \subsection{Connections to previous methods} The graph-dependent $\alpha$ hyperparameters control the balance of the contributions of different edge types. When $\{\alpha_i=0\}_{i=1}^{3}$, the proposed objective ignores the similarity constraint and becomes a supervised-only neural network objective as in equation \ref{eqn:cost_nn}. When only $\alpha_1 \neq 0$, the training cost has an additional term for labeled nodes, that acts as a regularizer. When $g_\theta(x) = h_\theta(x) = \hat{y}$, where $\hat{y}$ is the label distribution, the individual cost functions ($c$ and $d$) are squared $l$-2 norm, and the objective is trained using $\hat{y}$ directly instead of $\theta$, we arrive at the label propagation objective in equation \ref{eqn:cost_lp}. Therefore, the proposed objective could be thought of as a {\it non-linear} version of the label propagation objective, and a {\it graph-regularized} version of the neural network training objective. \subsection{Network inputs and graph construction\label{sec:graph_construction}} Similar to graph-based label propagation, the choice of the input graphs is critical, to correctly bias the neural network's prediction. Depending on the type of the graphs and nodes in the graph, they can be readily available to use such as social networks or protein linking networks, or they can be constructed (a)~using generic graphs such as Knowledge Bases, that consists of relationship links between entities, (b)~using embeddings learnt by an unsupervised learning technique, or, (c)~using sparse feature representations for each vertex. Additionally, the proposed training objective can be easily modified for directed graphs. We have discussed using node-level features as inputs to the neural network. In the absence of such inputs, our training scheme can still be deployed using input features derived from the graph itself. We show in figure \ref{fig:adj} and in experiments that the neighbourhood information such as rows in the adjacency matrix are simple to construct, yet powerful inputs to the network. These features can also be combined with existing features. When the number of graph nodes is high, this construction can have a high complexity and result in a large number of input features. This can be avoided by several ways: (i)~clustering the nodes and using the cluster assignments and similarities, (ii)~learning an embedding function of nodes \cite{perozzi2014deepwalk}, or (iii)~sampling the neighbourhood/context \cite{yang2016revisiting}. In practice, we observe that the input space can be bounded by a constant, even for massive graphs, with efficient scalable methods like unsupervised propagation (i.e., propagating node identity labels across the graph and selecting ones with highest support as input features to neural graph machines). 
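As a simple example of the graph-only input construction discussed above, rows of the adjacency matrix can be built directly from an edge list (a minimal Python sketch; a practical deployment would use one of the sparse or sampled variants listed above):
\begin{verbatim}
import numpy as np

def adjacency_row_features(num_nodes, edges):
    # Minimal illustrative construction; edges: iterable of (u, v, w_uv)
    # with 0-based node ids and symmetric weights.
    A = np.zeros((num_nodes, num_nodes))
    for u, v, w in edges:
        A[u, v] = w
        A[v, u] = w
    return A          # row i is the input feature vector for node i

features = adjacency_row_features(5, [(0, 1, 1.0), (1, 2, 0.5), (3, 4, 1.0)])
\end{verbatim}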
\subsection{Optimization} The proposed objective function in equation \ref{eqn:cost_ngm} has several summations over the labeled points and edges, and can be equivalently written as follows, \begin{align} \mathcal{C}_{\mathrm{NGM}}(\theta) &= \sum_{(u,v)\in \mathcal{E}_{LL}} \alpha_1 w_{uv} d(h_\theta(x_u), h_\theta(x_v)) + c_{uv} \nonumber \\ & \;\;\; + \sum_{(u,v)\in \mathcal{E}_{LU}} \alpha_2 w_{uv} d(h_\theta(x_u), h_\theta(x_v)) + c_{u} \nonumber \\ & \;\;\;+ \sum_{(u,v)\in \mathcal{E}_{UU}} \alpha_3 w_{uv} d(h_\theta(x_u), h_\theta(x_v) \label{eqn:cost_ngm_new}, \end{align} where \begin{align} c_{uv} &= \frac{1}{|u|} c (g_\theta(x_u), y_u) + \frac{1}{|v|}c (g_\theta(x_v), y_v) \nonumber\\ c_{u} &= \frac{1}{|u|} c (g_\theta(x_u), y_u), \nonumber \end{align} $|u|$ and $|v|$ are the number of edges incident to vertices $u$ and $v$, respectively. The objective in its new form enables stochastic training to be deployed by sampling edges. In particular, in each training iteration, we use a minibatch of edges and obtain the stochastic gradients of the objective. To further reduce noise and speedup learning, we sample edges within a neighbourhood region, that is to make sure some sampled edges have shared end nodes. \subsection{Complexity} The complexity of each epoch in training using equation \ref{eqn:cost_ngm_new} is $\mathcal{O}(M)$ where $M=|\mathcal{E}|$ is the number of edges in the graph. In the case where there is a large number of unlabeled-unlabeled edges, they potentially do not help learning and could be ignored, leading to a lower complexity. One strategy to include them is self-training, that is to grow seeds or labeled nodes as we train the networks. We experimentally demonstrate this technique in \cref{sec:doc_class}. Predictions at inference time can be made at the same cost as that of vanilla neural networks. \begin{abstract} Label propagation is a powerful and flexible semi-supervised learning technique on graphs. Neural networks, on the other hand, have proven track records in many supervised learning tasks. In this work, we propose a training framework with a graph-regularised objective, namely {\it Neural Graph Machines}, that can combine the power of neural networks and label propagation. This work generalises previous literature on graph-augmented training of neural networks, enabling it to be applied to multiple neural architectures (Feed-forward NNs, CNNs and LSTM RNNs) and a wide range of graphs. The new objective allows the neural networks to harness both labeled and unlabeled data by: (a)~allowing the network to train using labeled data as in the supervised setting, (b)~biasing the network to learn similar hidden representations for neighboring nodes on a graph, in the same vein as label propagation. Such architectures with the proposed objective can be trained efficiently using stochastic gradient descent and scaled to large graphs, with a runtime that is linear in the number of edges. The proposed joint training approach convincingly outperforms many existing methods on a wide range of tasks (multi-label classification on social graphs, news categorization, document classification and semantic intent classification), with multiple forms of graph inputs (including graphs with and without node-level features) and using different types of neural networks. 
\end{abstract}\section{Experiments} \label{sec:experiments} In this section, we provide several experiments showing the efficacy of the proposed training objective on a wide range of tasks, datasets and network architectures. All the experiments are done using a TensorFlow implementation \citep{tensorflow2015-whitepaper}. \subsection{Multi-label Classification of Nodes on Graphs} \label{sec:exp_graph} We first demonstrate our approach using a multi-label classification problem on nodes in a relationship graph. In particular, the {\it BlogCatalog} dataset \citep{agarwal2009social}, a network of social relationships between bloggers is considered. This graph has 10,312 nodes, 333,983 edges and 39 labels per node, which represent the bloggers, their social connections and the bloggers' interests, respectively. Following previous approaches in the literature \citep{grover2016node2vec, agarwal2009social}, we train and make predictions using multiple one-vs-rest classifiers. Since there are no provided features for each node, we use the rows of the adjacency matrix as input features, as discussed in section \ref{sec:graph_construction}. Feed-forward neural networks (FFNNs) with one hidden layer of 50 units are employed to map the constructed inputs to the node labels. As we use the test set to construct the graph and augment the training objective, the training in this experiment is transductive. Critically, to combat the unbalanced training set, we employ weighted sampling during training, i.e.~making sure each minibatch has both positive and negative examples. In this experiment, we fix $\alpha_i$ to be equal, and experiment with $\alpha=0.1$ and use the $l$-2 metric to compute the distance $d$ between the hidden representations of the networks. In addition, we create a range of train/test splits by varying the number of training points being presented to the networks. We compare our method (NGM-FFNN) against a two-stage approach that first uses node2vec \citep{grover2016node2vec} to generate node embeddings and then uses a linear one-vs-rest classifier for classification. The methods are evaluated using two metrics Macro F1 and Micro F1. The average results for different train/test splits using our method and the baseline are included in table \ref{tab:mlp}. In addition, we compare NGM-FFNN with a non-augmented FFNN in which $\alpha=0$, i.e.~no edge information is used during training. We observe that the graph-augmented training scheme performs better (6\% relative improvement on Macro F1 when the training set size is 20\% and 50\% of the dataset) or comparatively (when the training size is 80\%) compared to the vanilla neural networks trained with no edge information. Both methods significantly outperform the approach that uses node embeddings and linear classifiers. We observe the same improvement over node2vec on the Micro F1 metric and NGM-FFNN is comparable to vanilla FFNN ($\alpha=0$) but outperforms other methods on the recall metric. \footnotetext{These results are different compared to \cite{grover2016node2vec}, since we treat the classifiers (one per label) independently. Both methods shown here use the exact same setting and training/test data splits.} These results demonstrate that using the graph itself as direct inputs to the neural network and letting the network figure out a non-linear mapping directly from the raw graph is more effective than the two-stage approach considered. 
More importantly, the results also show that using the graph information improves the performance in the limited data regime (for example: when training set is only 20\% or 50\% of the dataset). \subsection{Text Classification using Character-level CNNs} \label{sec:exp_cnn} We evaluate the proposed objective function on a multi-class text classification task using a character-level convolutional neural network (CNN). We use the AG news dataset from \cite{zhang2015text}, where the task is to classify a news article into one of 4 categories. Each category has 30,000 examples for training and 1,900 examples for testing. In addition to the train and test sets, there are 111,469 examples that are treated as unlabeled examples. As there is no provided graph structure linking the articles, we create such a graph based on the embeddings of the articles. We restrict the graph construction to only the train set and the unlabeled examples and keep the test set only for evaluation. We use the Google News word2vec corpus to calculate the average embedding for each news article and use the cosine similarity of document embeddings as a similarity metric. Each node is restricted to have a maximum of 5 neighbors. We construct the CNN in the same way as \citep{zhang2015text} and pick their competitive {\it ``small CNN''} as our baseline for a more reasonable comparison to our set-up. Our approach employs the same network, but with significantly smaller number of convolutional layers and layer sizes, as shown in table \ref{tab:cnn_config}. The networks are trained with the same hyper-parameters as reported in \cite{zhang2015text}. We observed that the model converged within 20 epochs (the model loss did not change much) and hence used this as a stopping criterion for this task. Experiments also showed that running the network for longer also did not change the qualitative performance. We use the cross entropy loss on the final outputs of the network, that is $d = \mathrm{cross\_entropy}(g(x_u), g(x_v))$, to compute the distance between nodes on an edge. In addition, we also experiment with a data augmentation technique using an English thesaurus, as done in \cite{zhang2015text}. We compare the ``tiny CNN'' trained using the proposed objective function with the baseline using the accuracy on the test set in table \ref{tab:cnn}. Our approach outperforms the baseline by provides a 1.8\% absolute and 2.1\% relative improvement in accuracy, despite using a much smaller network. In addition, our model with graph augmentation trains much faster and produces results on par or better than the performance of a significantly larger network, {\it ``large CNN"} \cite{zhang2015text}, which has an accuracy of 87.18 without using a thesaurus, and 86.61 with the thesaurus. \subsection{Semantic Intent Classification using LSTM RNNs} \label{sec:exp_rnn} We compare the performance of our approach for training RNN sequence models (LSTM) for a semantic intent classification task as described in the recent work on SmartReply \citep{smartreply2016} for automatically generating short email responses. One of the underlying tasks in SmartReply is to discover and map short response messages to semantic intent clusters.\footnote{For details regarding SmartReply and how the semantic intent clusters are generated, refer \cite{smartreply2016}.} We choose 20 intent classes and created a dataset comprised of 5,483 samples (3,832 for training, 560 for validation and 1,091 for testing). 
Each sample instance corresponds to a short response message text paired with a semantic intent category that was manually verified by human annotators. For example, {\it``That sounds awesome!''} and {\it``Sounds fabulous''} belong to the {\it sounds good} intent cluster. We construct a sparse graph in a similar manner as the news categorization task using word2vec embeddings over the message text and computing similarity to generate a response message graph with fixed node degree (k=10). We use $l$-2 for the distance metric $d(\cdot)$ and choose $\alpha$ based on the development set. We run the experiments for a fixed number of time steps and pick the best results on the development set. A multilayer LSTM architecture (2 layers, 100 dimensions) is used for the RNN sequence model. The LSTM model and its NGM variant are also compared against other baseline systems---{\it Random} baseline ranks the intent categories randomly and {\it Frequency} baseline ranks them in order of their frequency in the training corpus. To evaluate the intent prediction quality of different approaches, for each test instance, we compute the rank of the actual intent category $\mathrm{rank}_i$ with respect to the ranking produced by the method and use this to calculate the Mean Reciprocal Rank:\\ \begin{equation*} \nonumber \mathrm{MRR}=\frac{1}{N} \sum_{i=1}^N \frac{1}{\mathrm{rank}_i} \end{equation*} We show in table \ref{tab:rnn} that LSTM RNNs with our proposed graph-augmented training objective function outperform standard baselines by achieving a better MRR. \subsection{Low-supervision Document Classification} \label{sec:doc_class} Finally, we compare our method on a task with very limited supervision---the PubMed document classification problem \cite{sen2008collective}. The task is to classify each document into one of 3 classes, with each document being described by a TF-IDF weighted word vector. The graph is available as a citation network: two documents are connected to each other if one cites the other. The graph has 19,717 nodes and 44,338 edges, with each class having 20 seed nodes and 1000 test nodes. In our experiments we exclude the test nodes from the graph entirely, training only on the labeled and unlabeled nodes. We train a feed-forward neural network (FFNN) with two hidden layers with 250 and 100 neurons, using the $l$-2 distance metric on the last hidden layer. The NGM-FFNN model is trained with $\alpha_i = 0.2$, while the baseline FFNN is trained with $\alpha_i = 0$ (i.e., a supervised-only model). We use self-training to train the model, starting with just the 60 seed nodes (20 per class) as training data. The amount of training data is iteratively increased by assigning labels to the immediate neighbors of the labeled nodes and retraining the model. For the self-trained NGM-FFNN model, this strategy results in incrementally growing the neighborhood and thereby, $LL$ and $LU$ edges in equation \ref{eqn:cost_ngm_new} objective. We compare the final NGM-FFNN model against the FFNN baseline and other techniques reported in \cite{yang2016revisiting} including the Planetoid models \cite{yang2016revisiting}, semi-supervised embedding \cite{weston2012deep}, manifold regression \cite{belkin2006manifold}, transductive SVM \cite{joachims1999transductive}, label propagation \cite{zhu2003semi}, graph embeddings \cite{perozzi2014deepwalk} and a linear softmax model. Full results are included in \cref{tab:pubmed}. 
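For reference, the self-training loop described above can be sketched as follows (Python; \texttt{train\_model} and \texttt{predict} are hypothetical stand-ins for NGM training and inference, and labelling each frontier node with the model's current prediction is one plausible instantiation rather than the exact procedure used here):
\begin{verbatim}
def self_train(seed_labels, neighbours, num_rounds, train_model, predict):
    # Hypothetical helpers: train_model / predict stand in for NGM training / inference.
    # seed_labels: dict node -> label; neighbours: dict node -> adjacent nodes.
    labeled = dict(seed_labels)
    model = train_model(labeled)
    for _ in range(num_rounds):
        frontier = {v for u in labeled for v in neighbours[u]} - set(labeled)
        for v in frontier:
            labeled[v] = predict(model, v)   # label the immediate neighbours
        model = train_model(labeled)         # retrain on the enlarged labeled set
    return model
\end{verbatim}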
\vspace{-0.1in} \vspace{-0.1in} The results show that the NGM model (without any tuning) outperforms many baselines including FFNN, semi-supervised embedding, manifold regularization and Planetoid-G/Planetoid-T, and compares favorably to Planetoid-I. Most importantly, this result demonstrates the graph augmentation scheme can lead to better regularised neural networks, especially in low sample regime (20 samples per class in this case). We believe that with tuning, NGM accuracy can be improved even further. \section{Background and related works} \label{sec:background} In this section, we will lay out the groundwork for our proposed training objective in section \ref{sec:ngm}. \subsection{Neural network learning} Neural networks are a class of non-linear mapping from inputs to outputs and comprised of multiple layers that can potentially learn useful representations for predicting the outputs. We will view various models such as feed-forward neural networks, recurrent neural networks and convolutional networks under the same umbrella. Given a set of $N$ training input-output pairs $\{x_n, y_n\}_{n=1}^{N}$, such neural networks are often trained by performing maximum likelihood learning, that is, tuning their parameters so that the networks' outputs are close to the ground truth under some criterion, \begin{align} \mathcal{C}_{\mathrm{NN}}(\theta) = \sum_{n} c(g_\theta(x_n), y_n), \label{eqn:cost_nn} \end{align} where $g_\theta(\cdot)$ denotes the overall mapping, parameterized by $\theta$, and $c(\cdot)$ denotes a loss function such as $l$-2 for regression or cross entropy for classification. The cost function $c$ and the mapping $g$ are typically differentiable w.r.t~$\theta$, which facilitates optimisation via gradient descent. Importantly, this can be scaled to a large number of training instances by employing stochastic training using minibatches of data. However, it is not clear how unlabeled data, if available, can be treated using this objective, or if extra information about the training set, such as relational structures can be used. \subsection{Graph-based semi-supervised learning} In this section, we provide a concise introduction to graph-based semi-supervised learning using {\it label propagation} and its training objective. Suppose we are given a graph $G=(V, E, W)$ where $V$ is the set of nodes, $E$ the set of edges and $W$ the edge weight matrix. Let $V_l, V_u$ be the labeled and unlabeled nodes in the graph. The goal is to predict a soft assignment of labels for each node in the graph, $\hat{Y}$, given the training label distribution for the seed nodes, $Y$. Mathematically, label propagation performs minimization of the following convex objective function, for $L$ labels, \begin{align} \mathcal{C}_{\mathrm{LP}}(\hat{Y}) &= \mu_1 \sum_{v \in V_l} \norm{\hat{Y}_v - Y_v}_2^2 \nonumber \\ &\quad + \mu_2 \sum_{v \in V, u \in \mathcal{N}(v)} w_{u,v} \norm{\hat{Y}_v - \hat{Y}_u}_2^2 \nonumber \\ &\quad + \mu_3 \sum_{v \in V} \norm{\hat{Y}_v - U}_2^2, \label{eqn:cost_lp} \end{align} subject to $\sum_{l=1}^{L}\hat{Y}_{vl} = 1$, where $\mathcal{N}(v)$ is the neighbour node set of the node $v$, and $U$ is the prior distribution over all labels, $w_{u,v}$ is the edge weight between nodes $u$ and $v$, and $\mu_1$, $\mu_2$, and $\mu_3$ are hyperparameters that balance the contribution of individual terms in the objective. 
The terms in the objective function above encourage that: (a)~the label distribution of seed nodes should be close to the ground truth, (b)~the label distribution of neighbouring nodes should be similar, and, (c)~if relevant, the label distribution should stay close to our prior belief. This objective function can be solved efficiently using iterative methods such as the Jacobi procedure. That is, in each step, each node aggregates the label distributions from its neighbours and adjusts its own distribution, which is then repeated until convergence. In practice, the iterative updates can be done in parallel or in a distributed fashion which then allows large graphs with a large number of nodes and labels to be trained efficiently. \citet{bengio2006label} and \citet{ravi2016large} provide good surveys on the topic for interested readers. There are many variants of label propagation that could be viewed as optimising modified versions of \cref{eqn:cost_lp}. For example, manifold regularization \cite{belkin2006manifold} replaces the label distribution $\hat{Y}$ by a Reproducing Kernel Hilbert Space mapping from input features. Similarly, \citet{weston2012deep} also employs such mapping but uses a feed-forward neural network instead. Both methods can be classified as inductive learning algorithms; whereas the original label propagation algorithm is transductive \cite{yang2016revisiting}. These aforementioned methods are closest to our proposed approach; however, there are key differences. Our work generalizes previously proposed frameworks for graph-augmented training of neural networks (e.g., \citet{weston2012deep}) and extends it to new settings, for example, when there is only graph input and no features are available. Unlike the previous works, we show that the graph augmented training method can work with multiple neural network architectures (Feed-forward NNs, CNNs, RNNs) and on multiple prediction tasks and datasets using {\it natural} as well as {\it constructed} graphs. The experiment results (see \cref{sec:experiments}) clearly validate the effectiveness of this method in all these different settings, in both inductive and transductive learning paradigms. Besides the methodology, our study also presents an important contribution towards assessing the effectiveness of graph combined neural networks as a generic training mechanism for different architectures and problems, which was not well studied in previous work. More recently, graph embedding techniques have been used to create node embedding that encode local structures of the graph and the provided node labels \cite{perozzi2014deepwalk, yang2016revisiting}. These techniques target learning better node representations to be used for other tasks such as node classification. In this work, we aim to directly learn better predictive models from the graph. We compare our method to these two-stage (embedding + classifier) techniques in several experiments in \cref{sec:experiments}. Our work is also different and orthogonal to recent works on using neural networks {\it on} graphs, for example: \citet{defferrard2016convolutional} employs spectral graph convolution to create a neural-network like classifier. However, these approaches requires many approximations to arrive at a practical implementation. Here, we advocate a training objective that {\it uses} graphs to augment neural network learning, and works with many forms of graphs and with any type of neural network. 
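To make the label propagation machinery of equation \ref{eqn:cost_lp} concrete, a simplified Jacobi-style update is sketched below (Python/NumPy; seed labels are clamped to the ground truth and the uniform prior enters with weight $\mu_3$, which is a simplification of the full objective rather than a faithful solver):
\begin{verbatim}
import numpy as np

def label_propagation(W, Y_seed, is_seed, num_labels, mu3=1e-3, num_iters=50):
    # Simplified (clamped-seed) variant of eqn:cost_lp.
    # W: (n, n) edge weight matrix; Y_seed: (n, num_labels) seed distributions;
    # is_seed: boolean mask of labeled nodes.
    n = W.shape[0]
    U = np.full(num_labels, 1.0 / num_labels)         # uniform prior
    Y_hat = np.tile(U, (n, 1))
    Y_hat[is_seed] = Y_seed[is_seed]
    for _ in range(num_iters):
        agg = W @ Y_hat + mu3 * U                     # aggregate neighbour distributions
        Y_hat = agg / agg.sum(axis=1, keepdims=True)  # renormalise each row
        Y_hat[is_seed] = Y_seed[is_seed]              # keep seeds at the ground truth
    return Y_hat
\end{verbatim}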
\documentclass{article} % more modern \newcommand\norm[1]{\left\lVert#1\right\rVert} \makeatletter \def\@xfootnote[#1]{% \protected@xdef\@thefnmark{#1}% \@footnotemark\@footnotetext} \makeatother \newcommand{\theHalgorithm}{\arabic{algorithm}} \usepackage[accepted]{icml2016} \icmltitlerunning{Neural Graph Machines: Learning Neural Networks Using Graphs} \begin{document} \twocolumn[ \icmltitle{Neural Graph Machines: Learning Neural Networks Using Graphs} \icmlauthor{Thang D. Bui$^*$\footnotemark}{tdb40@cam.ac.uk} \icmlauthor{Sujith Ravi$^\dagger$}{sravi@google.com} \icmlauthor{Vivek Ramavajjala$^\dagger$}{vramavaj@google.com} \vskip 0.1in \icmladdress{$^*$ University of Cambridge, United Kingdom} \vskip -0.15in \icmladdress{$^\dagger$ Google Research, Mountain View, CA, USA} \vskip 0.3in ] \footnotetext{Work done during an internship at Google.} \input{abstract} \input{intro} \input{background} \input{ngm} \input{experiments} \input{summary} \section*{Acknowledgements} We would like to thank the Google Expander team for insightful feedback. \bibliographystyle{icml2017} \end{document}
Neural Graph Machines: Learning Neural Networks Using Graphs
1703.04818
Table 1: Macro F1 results for the BlogCatalog dataset, averaged over 10 random splits. Higher is better. Graph-regularized neural networks outperform node2vec embeddings and a linear classifier in all training-size settings.
[ "|Train| / |Dataset|", "NGM-FFNN", "node2vec" ]
[ [ "20%", "[BOLD] 0.191", "0.168" ], [ "50%", "[BOLD] 0.242", "0.174" ], [ "80%", "[BOLD] 0.262", "0.177" ] ]
The methods are evaluated using two metrics, Macro F1 and Micro F1. In addition, we compare NGM-FFNN with a non-augmented FFNN in which α=0, i.e., no edge information is used during training. We observe that the graph-augmented training scheme performs better (6% relative improvement on Macro F1 when the training set size is 20% and 50% of the dataset) or comparably (when the training size is 80%) compared to the vanilla neural networks trained with no edge information. Both methods significantly outperform the approach that uses node embeddings and linear classifiers. We observe the same improvement over node2vec on the Micro F1 metric, and NGM-FFNN is comparable to the vanilla FFNN (α=0) but outperforms other methods on the recall metric.
\section{Introduction} \label{sec:intro} Semi-supervised learning is a powerful machine learning paradigm that can improve the prediction performance compared to techniques that use only labeled data, by leveraging a large amount of unlabeled data. The need for semi-supervised learning arises in many problems in computer vision, natural language processing or social networks, in which getting labeled datapoints is expensive or unlabeled data is abundant and readily available. There exist a plethora of semi-supervised learning methods. The simplest approach uses bootstrapping techniques to generate pseudo-labels for unlabeled data from a system trained on labeled data. However, this suffers from label error feedback \citep{lee2013pseudo}. In a similar vein, autoencoder-based methods often need to rely on a two-stage approach: train an autoencoder using unlabeled data to generate an embedding mapping, and use the learnt embeddings for prediction. In practice, this procedure is often costly and inaccurate. Another example is transductive SVMs \citep{joachims1999transductive}, which are too computationally expensive to be used for large datasets. Methods that are based on generative models and amortized variational inference \citep{kingma2014semi} can work well for images and videos, but it is not immediately clear how to extend such techniques to handle sparse and multi-modal inputs or graphs over the inputs. In contrast to the methods above, graph-based techniques such as label propagation \citep{zhu2002learning, bengio2006label} often provide a versatile, scalable, and yet effective solution to a wide range of problems. These methods construct a smooth graph over the unlabeled and labeled data. Graphs are also often a natural way to describe the relationships between nodes, such as similarities between embeddings, phrases or images, or connections between entities on the web or relations in a social network. Edges in the graph connect semantically similar nodes or datapoints, and if present, edge weights reflect how strong such similarities are. Given a set of labeled nodes, such techniques iteratively refine the node labels by aggregating information from neighbours and propagating these labels to the nodes' neighbours. In practice, these methods often converge quickly and can be scaled to large datasets with a large label space \citep{ravi2016large}. We build upon the principle behind label propagation for our method. Another key motivation of our work is the recent advances in neural networks and their performance on a wide variety of supervised learning tasks such as image and speech recognition or sequence-to-sequence learning \citep{krizhevsky2012imagenet, hinton2012deep, sutskever2014sequence}. Such results are, however, conditioned on training very large networks on large datasets, which may need millions of labeled training input-output pairs. This raises the question: can we harness previous state-of-the-art semi-supervised learning techniques to jointly train neural networks using limited labeled data and unlabeled data, and thereby improve their performance? \vspace{-0.1in} \paragraph{Contributions:} We propose a discriminative training objective for neural networks with graph augmentation that can be trained with stochastic gradient descent and efficiently scaled to large graphs.
The new objective has a regularization term for generic neural network architectures that enforces similarity between nodes in the graph, inspired by the objective function of label propagation. In particular, we show that: \begin{itemize} \item Graph-augmented neural network training can work for a wide range of neural networks, such as feed-forward, convolutional and recurrent networks. Additionally, this technique can be used in both inductive and transductive settings. It also helps learning in the low-sample regime (a small number of labeled nodes), which vanilla neural network training cannot handle. \item The framework can handle multiple forms of graphs, either naturally given or constructed based on embeddings and knowledge bases. \item Using graphs and neighbourhood information alone as direct inputs to neural networks in this joint training framework permits fast and simple inference, yet provides competitive performance with current state-of-the-art approaches which employ a two-step method of first training a node embedding representation from the graph and then using it as feature input to train a classifier separately (see \cref{sec:exp_graph}). \item As a by-product, our proposed framework provides a simple technique for finding smaller and faster neural networks that offer competitive performance with larger and slower non-graph-augmented alternatives (see \cref{sec:exp_cnn}). \end{itemize} We experimentally show that the proposed training framework outperforms or performs favourably against the state of the art on a variety of prediction tasks and datasets, involving text features and/or graph inputs and on many different neural network architectures (see \cref{sec:experiments}). The paper is organized as follows: we first review some background and literature, and relate them to our approach in \cref{sec:background}; we then detail the training objective and its properties in \cref{sec:ngm}; and finally we validate our approach on a range of experiments in \cref{sec:experiments}.\section{Conclusions} \label{sec:summary} We have revisited graph-augmented training of neural networks and proposed Neural Graph Machines as a general framework for doing so. Its objective function encourages the neural networks to make accurate node-level predictions, as in vanilla neural network training, as well as constrains the networks to learn similar hidden representations for nodes connected by an edge in the graph. Importantly, the objective can be trained by stochastic gradient descent and scaled to large graphs. We validated the efficacy of the graph-augmented objective on various tasks including bloggers' interest, text category and semantic intent classification problems, using a wide range of neural network architectures (FFNNs, CNNs and LSTM RNNs). The experimental results demonstrated that graph-augmented training almost always helps to find better neural networks that outperform other techniques in predictive performance, or even much smaller networks that are faster and easier to train. Additionally, the node-level input features can be combined with graph features as inputs to the neural networks. We showed that a neural network that simply takes the adjacency matrix of a graph and produces node labels can perform better than a recently proposed two-stage approach using sophisticated graph embeddings and a linear classifier. Our framework also excels when the neural network is small, or when there is limited supervision available.
While our objective can be applied to multiple graphs which come from different domains, we have not fully explored this aspect and leave this as future work. We expect that the domain-specific networks can interact with the graphs to determine the importance of each domain/graph source in prediction. We also did not explore using graph regularisation for different hidden layers of the neural networks; we expect this is key for the multi-graph transfer setting \cite{yosinski2014transferable}. Another possible future extension is to use our objective on directed graphs, that is, to control the direction of influence between nodes during training.\section{Neural graph machines} \label{sec:ngm} In this section, we devise a discriminative training objective for neural networks that is inspired by the label propagation objective, uses both labeled and unlabeled data, and can be trained by stochastic gradient descent. First, we take a close look at the two objective functions discussed in section \ref{sec:background}. The label propagation objective in equation \ref{eqn:cost_lp} ensures that the predicted label distributions of neighbouring nodes are similar, while those of labeled nodes remain close to the ground truth. For example: if a {\it cat} image and a {\it dog} image are strongly connected in a graph, and if the {\it cat} node is labeled as {\it animal}, the predicted probability of the {\it dog} node being {\it animal} is also high. In contrast, the neural network training objective in equation \ref{eqn:cost_nn} only takes into account the labeled instances, and ensures correct predictions on the training set. As a consequence, a neural network trained on the {\it cat} image alone will not make an accurate prediction on the {\it dog} image. Such a shortcoming of neural network training can be rectified by biasing the network using prior knowledge about the relationship between instances in the dataset. In particular, for the domains we are interested in, training instances (either labeled or unlabeled) that are connected in a graph, for example, {\it dog} and {\it cat} in the above example, should have similar predictions. This can be done by encouraging neighboring data points to have a similar hidden representation learnt by a neural network, resulting in a modified objective function for training neural network architectures using both labeled and unlabeled datapoints. We call architectures trained using this objective {\it Neural Graph Machines}, and schematically illustrate the concept in figure \ref{fig:ngm}. The proposed objective function is a weighted sum of the neural network cost and the label propagation cost as follows, \begin{align} \mathcal{C}_{\mathrm{NGM}}(\theta) &= \sum_{n \in V_l} c (g_\theta(x_n), y_n) \nonumber \\ & \quad + \alpha_1 \sum_{(u,v)\in \mathcal{E}_{LL}} w_{uv} d(h_\theta(x_u), h_\theta(x_v)) \nonumber \\ & \quad + \alpha_2 \sum_{(u,v)\in \mathcal{E}_{LU}} w_{uv} d(h_\theta(x_u), h_\theta(x_v)) \nonumber \\ & \quad + \alpha_3 \sum_{(u,v)\in \mathcal{E}_{UU}} w_{uv} d(h_\theta(x_u), h_\theta(x_v)), \label{eqn:cost_ngm} \end{align} where $\mathcal{E}_{LL}$, $\mathcal{E}_{LU}$, and $\mathcal{E}_{UU}$ are the sets of labeled-labeled, labeled-unlabeled and unlabeled-unlabeled edges respectively, $h(\cdot)$ represents the hidden representations of the inputs produced by the neural network, $d(\cdot)$ is a distance metric, and $\{\alpha_1, \alpha_2, \alpha_3\}$ are hyperparameters.
Note that we have separated the terms based on the edge types, as these can affect the training differently. In practice, we choose an $l$-1 or $l$-2 distance metric for $d(\cdot)$, and $h(x)$ to be the last layer of the neural network. However, these choices can be changed, for example to a customized metric or to an intermediate hidden layer instead. \subsection{Connections to previous methods} The graph-dependent $\alpha$ hyperparameters control the balance of the contributions of different edge types. When $\{\alpha_i=0\}_{i=1}^{3}$, the proposed objective ignores the similarity constraint and becomes a supervised-only neural network objective as in equation \ref{eqn:cost_nn}. When only $\alpha_1 \neq 0$, the training cost has an additional term for labeled nodes, which acts as a regularizer. When $g_\theta(x) = h_\theta(x) = \hat{y}$, where $\hat{y}$ is the label distribution, the individual cost functions ($c$ and $d$) are squared $l$-2 norms, and the objective is trained using $\hat{y}$ directly instead of $\theta$, we arrive at the label propagation objective in equation \ref{eqn:cost_lp}. Therefore, the proposed objective could be thought of as a {\it non-linear} version of the label propagation objective, and a {\it graph-regularized} version of the neural network training objective. \subsection{Network inputs and graph construction\label{sec:graph_construction}} Similar to graph-based label propagation, the choice of the input graph is critical for correctly biasing the neural network's predictions. Depending on the type of graph and the nodes in the graph, graphs can be readily available to use, such as social networks or protein-linking networks, or they can be constructed (a)~using generic graphs such as knowledge bases, which consist of relationship links between entities, (b)~using embeddings learnt by an unsupervised learning technique, or, (c)~using sparse feature representations for each vertex. Additionally, the proposed training objective can be easily modified for directed graphs. We have discussed using node-level features as inputs to the neural network. In the absence of such inputs, our training scheme can still be deployed using input features derived from the graph itself. We show in figure \ref{fig:adj} and in experiments that neighbourhood information, such as rows of the adjacency matrix, is simple to construct yet provides powerful inputs to the network. These features can also be combined with existing features. When the number of graph nodes is high, this construction can have a high complexity and result in a large number of input features. This can be avoided in several ways: (i)~clustering the nodes and using the cluster assignments and similarities, (ii)~learning an embedding function of nodes \cite{perozzi2014deepwalk}, or (iii)~sampling the neighbourhood/context \cite{yang2016revisiting}. In practice, we observe that the input space can be bounded by a constant, even for massive graphs, with efficient scalable methods like unsupervised propagation (i.e., propagating node identity labels across the graph and selecting the ones with highest support as input features to neural graph machines).
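To illustrate how the objective in equation \ref{eqn:cost_ngm} and the adjacency-row inputs fit together, below is a minimal NumPy sketch of the graph-regularised loss evaluated on one minibatch of sampled edges. The one-hidden-layer network, the single shared $\alpha$, the squared $l$-2 distance and all names are assumptions made for the example, and the per-node degree normalisation of the supervised term used later in equation \ref{eqn:cost_ngm_new} is omitted for brevity.
\begin{verbatim}
import numpy as np

def hidden(x, W1, b1):
    """Hidden representation h(x): a single ReLU layer."""
    return np.maximum(0.0, x @ W1 + b1)

def predict(h, W2, b2):
    """Output distribution g(x): softmax over the label set."""
    z = h @ W2 + b2
    z -= z.max()
    e = np.exp(z)
    return e / e.sum()

def ngm_minibatch_loss(A, labels, edge_batch, params, alpha=0.1):
    """Graph-regularised loss for one minibatch of sampled edges.

    A          : (N, N) adjacency matrix; row A[v] is used as the input feature of node v.
    labels     : dict mapping labeled node ids to one-hot label vectors.
    edge_batch : iterable of (u, v, w_uv) tuples.
    """
    W1, b1, W2, b2 = params
    loss = 0.0
    for u, v, w_uv in edge_batch:
        h_u, h_v = hidden(A[u], W1, b1), hidden(A[v], W1, b1)
        # graph term: neighbouring nodes should have similar hidden representations
        loss += alpha * w_uv * np.sum((h_u - h_v) ** 2)
        # supervised term: cross entropy on whichever endpoints are labeled
        for node, h in ((u, h_u), (v, h_v)):
            if node in labels:
                loss -= np.sum(labels[node] * np.log(predict(h, W2, b2) + 1e-12))
    return loss
\end{verbatim}
The same loss applies whether the inputs are node-level features or, as here, rows of the adjacency matrix; only the graph term changes when an edge has unlabeled endpoints.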
\subsection{Optimization} The proposed objective function in equation \ref{eqn:cost_ngm} has several summations over the labeled points and edges, and can be equivalently written as follows, \begin{align} \mathcal{C}_{\mathrm{NGM}}(\theta) &= \sum_{(u,v)\in \mathcal{E}_{LL}} \alpha_1 w_{uv} d(h_\theta(x_u), h_\theta(x_v)) + c_{uv} \nonumber \\ & \;\;\; + \sum_{(u,v)\in \mathcal{E}_{LU}} \alpha_2 w_{uv} d(h_\theta(x_u), h_\theta(x_v)) + c_{u} \nonumber \\ & \;\;\;+ \sum_{(u,v)\in \mathcal{E}_{UU}} \alpha_3 w_{uv} d(h_\theta(x_u), h_\theta(x_v)) \label{eqn:cost_ngm_new}, \end{align} where \begin{align} c_{uv} &= \frac{1}{|u|} c (g_\theta(x_u), y_u) + \frac{1}{|v|}c (g_\theta(x_v), y_v) \nonumber\\ c_{u} &= \frac{1}{|u|} c (g_\theta(x_u), y_u), \nonumber \end{align} $|u|$ and $|v|$ are the number of edges incident to vertices $u$ and $v$, respectively. The objective in its new form enables stochastic training to be deployed by sampling edges. In particular, in each training iteration, we use a minibatch of edges and obtain the stochastic gradients of the objective. To further reduce noise and speed up learning, we sample edges within a neighbourhood region, that is, to make sure some sampled edges have shared end nodes. \subsection{Complexity} The complexity of each epoch in training using equation \ref{eqn:cost_ngm_new} is $\mathcal{O}(M)$ where $M=|\mathcal{E}|$ is the number of edges in the graph. In the case where there is a large number of unlabeled-unlabeled edges, they potentially do not help learning and could be ignored, leading to a lower complexity. One strategy to include them is self-training, that is, to grow the set of seeds or labeled nodes as we train the networks. We experimentally demonstrate this technique in \cref{sec:doc_class}. Predictions at inference time can be made at the same cost as that of vanilla neural networks. \begin{abstract} Label propagation is a powerful and flexible semi-supervised learning technique on graphs. Neural networks, on the other hand, have proven track records in many supervised learning tasks. In this work, we propose a training framework with a graph-regularised objective, namely {\it Neural Graph Machines}, that can combine the power of neural networks and label propagation. This work generalises previous literature on graph-augmented training of neural networks, enabling it to be applied to multiple neural architectures (Feed-forward NNs, CNNs and LSTM RNNs) and a wide range of graphs. The new objective allows the neural networks to harness both labeled and unlabeled data by: (a)~allowing the network to train using labeled data as in the supervised setting, (b)~biasing the network to learn similar hidden representations for neighboring nodes on a graph, in the same vein as label propagation. Such architectures with the proposed objective can be trained efficiently using stochastic gradient descent and scaled to large graphs, with a runtime that is linear in the number of edges. The proposed joint training approach convincingly outperforms many existing methods on a wide range of tasks (multi-label classification on social graphs, news categorization, document classification and semantic intent classification), with multiple forms of graph inputs (including graphs with and without node-level features) and using different types of neural networks.
\end{abstract}\section{Experiments} \label{sec:experiments} In this section, we provide several experiments showing the efficacy of the proposed training objective on a wide range of tasks, datasets and network architectures. All the experiments are done using a TensorFlow implementation \citep{tensorflow2015-whitepaper}. \subsection{Multi-label Classification of Nodes on Graphs} \label{sec:exp_graph} We first demonstrate our approach using a multi-label classification problem on nodes in a relationship graph. In particular, the {\it BlogCatalog} dataset \citep{agarwal2009social}, a network of social relationships between bloggers, is considered. This graph has 10,312 nodes, 333,983 edges and 39 possible labels per node, which represent the bloggers, their social connections and the bloggers' interests, respectively. Following previous approaches in the literature \citep{grover2016node2vec, agarwal2009social}, we train and make predictions using multiple one-vs-rest classifiers. Since there are no provided features for each node, we use the rows of the adjacency matrix as input features, as discussed in section \ref{sec:graph_construction}. Feed-forward neural networks (FFNNs) with one hidden layer of 50 units are employed to map the constructed inputs to the node labels. As we use the test set to construct the graph and augment the training objective, the training in this experiment is transductive. Critically, to combat the unbalanced training set, we employ weighted sampling during training, i.e.~making sure each minibatch has both positive and negative examples. In this experiment, we fix $\alpha_i$ to be equal, experiment with $\alpha=0.1$, and use the $l$-2 metric to compute the distance $d$ between the hidden representations of the networks. In addition, we create a range of train/test splits by varying the number of training points presented to the networks. We compare our method (NGM-FFNN) against a two-stage approach that first uses node2vec \citep{grover2016node2vec} to generate node embeddings and then uses a linear one-vs-rest classifier for classification. The methods are evaluated using two metrics, Macro F1 and Micro F1. The average results for different train/test splits using our method and the baseline are included in table \ref{tab:mlp}. In addition, we compare NGM-FFNN with a non-augmented FFNN in which $\alpha=0$, i.e.~no edge information is used during training. We observe that the graph-augmented training scheme performs better (6\% relative improvement on Macro F1 when the training set size is 20\% and 50\% of the dataset) or comparably (when the training size is 80\%) compared to the vanilla neural networks trained with no edge information. Both methods significantly outperform the approach that uses node embeddings and linear classifiers. We observe the same improvement over node2vec on the Micro F1 metric, and NGM-FFNN is comparable to the vanilla FFNN ($\alpha=0$) but outperforms other methods on the recall metric. \footnotetext{These results are different from those in \cite{grover2016node2vec}, since we treat the classifiers (one per label) independently. Both methods shown here use the exact same setting and training/test data splits.} These results demonstrate that using the graph itself as a direct input to the neural network and letting the network figure out a non-linear mapping directly from the raw graph is more effective than the two-stage approach considered.
More importantly, the results also show that using the graph information improves the performance in the limited data regime (for example, when the training set is only 20\% or 50\% of the dataset). \subsection{Text Classification using Character-level CNNs} \label{sec:exp_cnn} We evaluate the proposed objective function on a multi-class text classification task using a character-level convolutional neural network (CNN). We use the AG news dataset from \cite{zhang2015text}, where the task is to classify a news article into one of 4 categories. Each category has 30,000 examples for training and 1,900 examples for testing. In addition to the train and test sets, there are 111,469 examples that are treated as unlabeled examples. As there is no provided graph structure linking the articles, we create such a graph based on the embeddings of the articles. We restrict the graph construction to only the train set and the unlabeled examples and keep the test set only for evaluation. We use the word2vec embeddings trained on the Google News corpus to calculate the average embedding for each news article and use the cosine similarity of document embeddings as a similarity metric. Each node is restricted to have a maximum of 5 neighbors. We construct the CNN in the same way as \citep{zhang2015text} and pick their competitive {\it ``small CNN''} as our baseline for a more reasonable comparison to our set-up. Our approach employs the same network, but with a significantly smaller number of convolutional layers and smaller layer sizes, as shown in table \ref{tab:cnn_config}. The networks are trained with the same hyper-parameters as reported in \cite{zhang2015text}. We observed that the model converged within 20 epochs (the model loss did not change much) and hence used this as a stopping criterion for this task. Experiments also showed that running the network for longer did not change the qualitative performance. We use the cross entropy loss on the final outputs of the network, that is $d = \mathrm{cross\_entropy}(g(x_u), g(x_v))$, to compute the distance between nodes on an edge. In addition, we also experiment with a data augmentation technique using an English thesaurus, as done in \cite{zhang2015text}. We compare the ``tiny CNN'' trained using the proposed objective function with the baseline using the accuracy on the test set in table \ref{tab:cnn}. Our approach outperforms the baseline, providing a 1.8\% absolute (2.1\% relative) improvement in accuracy, despite using a much smaller network. In addition, our model with graph augmentation trains much faster and produces results on par with or better than the performance of a significantly larger network, {\it ``large CNN"} \cite{zhang2015text}, which has an accuracy of 87.18 without using a thesaurus, and 86.61 with the thesaurus. \subsection{Semantic Intent Classification using LSTM RNNs} \label{sec:exp_rnn} We compare the performance of our approach for training RNN sequence models (LSTM) on a semantic intent classification task as described in the recent work on SmartReply \citep{smartreply2016} for automatically generating short email responses. One of the underlying tasks in SmartReply is to discover and map short response messages to semantic intent clusters.\footnote{For details regarding SmartReply and how the semantic intent clusters are generated, refer to \cite{smartreply2016}.} We chose 20 intent classes and created a dataset comprising 5,483 samples (3,832 for training, 560 for validation and 1,091 for testing).
Each sample instance corresponds to a short response message text paired with a semantic intent category that was manually verified by human annotators. For example, {\it``That sounds awesome!''} and {\it``Sounds fabulous''} belong to the {\it sounds good} intent cluster. We construct a sparse graph in a similar manner to the news categorization task, using word2vec embeddings over the message text and computing similarity to generate a response message graph with fixed node degree (k=10). We use $l$-2 for the distance metric $d(\cdot)$ and choose $\alpha$ based on the development set. We run the experiments for a fixed number of time steps and pick the best results on the development set. A multilayer LSTM architecture (2 layers, 100 dimensions) is used for the RNN sequence model. The LSTM model and its NGM variant are also compared against other baseline systems---the {\it Random} baseline ranks the intent categories randomly and the {\it Frequency} baseline ranks them in order of their frequency in the training corpus. To evaluate the intent prediction quality of different approaches, for each test instance, we compute the rank of the actual intent category $\mathrm{rank}_i$ with respect to the ranking produced by the method and use this to calculate the Mean Reciprocal Rank:\\ \begin{equation*} \nonumber \mathrm{MRR}=\frac{1}{N} \sum_{i=1}^N \frac{1}{\mathrm{rank}_i} \end{equation*} We show in table \ref{tab:rnn} that LSTM RNNs with our proposed graph-augmented training objective function outperform standard baselines by achieving a better MRR. \subsection{Low-supervision Document Classification} \label{sec:doc_class} Finally, we compare our method on a task with very limited supervision---the PubMed document classification problem \cite{sen2008collective}. The task is to classify each document into one of 3 classes, with each document being described by a TF-IDF weighted word vector. The graph is available as a citation network: two documents are connected to each other if one cites the other. The graph has 19,717 nodes and 44,338 edges, with 20 seed nodes per class and 1,000 test nodes. In our experiments we exclude the test nodes from the graph entirely, training only on the labeled and unlabeled nodes. We train a feed-forward neural network (FFNN) with two hidden layers with 250 and 100 neurons, using the $l$-2 distance metric on the last hidden layer. The NGM-FFNN model is trained with $\alpha_i = 0.2$, while the baseline FFNN is trained with $\alpha_i = 0$ (i.e., a supervised-only model). We use self-training to train the model, starting with just the 60 seed nodes (20 per class) as training data. The amount of training data is iteratively increased by assigning labels to the immediate neighbors of the labeled nodes and retraining the model. For the self-trained NGM-FFNN model, this strategy results in incrementally growing the neighborhood and thereby the $LL$ and $LU$ edge sets in the objective of equation \ref{eqn:cost_ngm_new}. We compare the final NGM-FFNN model against the FFNN baseline and other techniques reported in \cite{yang2016revisiting}, including the Planetoid models \cite{yang2016revisiting}, semi-supervised embedding \cite{weston2012deep}, manifold regularization \cite{belkin2006manifold}, transductive SVM \cite{joachims1999transductive}, label propagation \cite{zhu2003semi}, graph embeddings \cite{perozzi2014deepwalk} and a linear softmax model. Full results are included in \cref{tab:pubmed}.
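For concreteness, the following is a minimal sketch of the kind of similarity-graph construction used in the text classification and intent classification experiments above: per-document average word embeddings, cosine similarity, and a cap on the number of neighbours per node. The embedding lookup and all names are placeholders for illustration rather than the exact pipeline used in these experiments.
\begin{verbatim}
import numpy as np

def doc_embedding(tokens, word_vectors, dim=300):
    """Average the word vectors of a document; out-of-vocabulary words are skipped."""
    vecs = [word_vectors[w] for w in tokens if w in word_vectors]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def build_similarity_graph(docs, word_vectors, k=5):
    """Connect each document to its k most similar documents by cosine similarity.

    docs         : list of token lists, one per document.
    word_vectors : dict mapping words to embedding vectors.
    Returns      : list of (u, v, w_uv) edges, where w_uv is the cosine similarity.
    """
    E = np.stack([doc_embedding(d, word_vectors) for d in docs])
    E /= np.linalg.norm(E, axis=1, keepdims=True) + 1e-12
    sims = E @ E.T                       # pairwise cosine similarities
    np.fill_diagonal(sims, -np.inf)      # no self-loops
    edges = []
    for u in range(len(docs)):
        for v in np.argsort(-sims[u])[:k]:
            edges.append((u, int(v), float(sims[u, v])))
    return edges
\end{verbatim}
Setting k=5 matches the cap used for the news articles and k=10 the cap used for the response messages; the resulting edge list is what the edge-sampled training of section \ref{sec:ngm} consumes.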
\vspace{-0.1in} The results show that the NGM model (without any tuning) outperforms many baselines including FFNN, semi-supervised embedding, manifold regularization and Planetoid-G/Planetoid-T, and compares favorably to Planetoid-I. Most importantly, this result demonstrates that the graph augmentation scheme can lead to better regularised neural networks, especially in the low-sample regime (20 samples per class in this case). We believe that with tuning, NGM accuracy can be improved even further. \section{Background and related works} \label{sec:background} In this section, we lay out the groundwork for our proposed training objective in section \ref{sec:ngm}. \subsection{Neural network learning} Neural networks are a class of non-linear mappings from inputs to outputs, comprising multiple layers that can potentially learn useful representations for predicting the outputs. We view various models such as feed-forward neural networks, recurrent neural networks and convolutional networks under the same umbrella. Given a set of $N$ training input-output pairs $\{x_n, y_n\}_{n=1}^{N}$, such neural networks are often trained by performing maximum likelihood learning, that is, tuning their parameters so that the networks' outputs are close to the ground truth under some criterion, \begin{align} \mathcal{C}_{\mathrm{NN}}(\theta) = \sum_{n} c(g_\theta(x_n), y_n), \label{eqn:cost_nn} \end{align} where $g_\theta(\cdot)$ denotes the overall mapping, parameterized by $\theta$, and $c(\cdot)$ denotes a loss function such as $l$-2 for regression or cross entropy for classification. The cost function $c$ and the mapping $g$ are typically differentiable w.r.t~$\theta$, which facilitates optimisation via gradient descent. Importantly, this can be scaled to a large number of training instances by employing stochastic training using minibatches of data. However, it is not clear how unlabeled data, if available, can be treated using this objective, or whether extra information about the training set, such as relational structures, can be used. \subsection{Graph-based semi-supervised learning} In this section, we provide a concise introduction to graph-based semi-supervised learning using {\it label propagation} and its training objective. Suppose we are given a graph $G=(V, E, W)$ where $V$ is the set of nodes, $E$ the set of edges and $W$ the edge weight matrix. Let $V_l, V_u$ be the labeled and unlabeled nodes in the graph. The goal is to predict a soft assignment of labels for each node in the graph, $\hat{Y}$, given the training label distribution for the seed nodes, $Y$. Mathematically, label propagation performs minimization of the following convex objective function, for $L$ labels, \begin{align} \mathcal{C}_{\mathrm{LP}}(\hat{Y}) &= \mu_1 \sum_{v \in V_l} \norm{\hat{Y}_v - Y_v}_2^2 \nonumber \\ &\quad + \mu_2 \sum_{v \in V, u \in \mathcal{N}(v)} w_{u,v} \norm{\hat{Y}_v - \hat{Y}_u}_2^2 \nonumber \\ &\quad + \mu_3 \sum_{v \in V} \norm{\hat{Y}_v - U}_2^2, \label{eqn:cost_lp} \end{align} subject to $\sum_{l=1}^{L}\hat{Y}_{vl} = 1$, where $\mathcal{N}(v)$ is the set of neighbour nodes of node $v$, $U$ is the prior distribution over all labels, $w_{u,v}$ is the edge weight between nodes $u$ and $v$, and $\mu_1$, $\mu_2$, and $\mu_3$ are hyperparameters that balance the contribution of individual terms in the objective.
The terms in the objective function above encourage the following: (a)~the label distribution of seed nodes should be close to the ground truth, (b)~the label distributions of neighbouring nodes should be similar, and (c)~if relevant, the label distribution should stay close to our prior belief. This objective function can be solved efficiently using iterative methods such as the Jacobi procedure. That is, in each step, each node aggregates the label distributions from its neighbours and adjusts its own distribution; this is repeated until convergence. In practice, the iterative updates can be done in parallel or in a distributed fashion, which allows large graphs with a large number of nodes and labels to be trained efficiently. \citet{bengio2006label} and \citet{ravi2016large} provide good surveys on the topic for interested readers. There are many variants of label propagation that could be viewed as optimising modified versions of \cref{eqn:cost_lp}. For example, manifold regularization \cite{belkin2006manifold} replaces the label distribution $\hat{Y}$ by a Reproducing Kernel Hilbert Space mapping from input features. Similarly, \citet{weston2012deep} also employs such a mapping but uses a feed-forward neural network instead. Both methods can be classified as inductive learning algorithms, whereas the original label propagation algorithm is transductive \cite{yang2016revisiting}. These aforementioned methods are closest to our proposed approach; however, there are key differences. Our work generalizes previously proposed frameworks for graph-augmented training of neural networks (e.g., \citet{weston2012deep}) and extends them to new settings, for example, when there is only graph input and no features are available. Unlike the previous works, we show that the graph-augmented training method can work with multiple neural network architectures (Feed-forward NNs, CNNs, RNNs) and on multiple prediction tasks and datasets using {\it natural} as well as {\it constructed} graphs. The experimental results (see \cref{sec:experiments}) clearly validate the effectiveness of this method in all these different settings, in both inductive and transductive learning paradigms. Besides the methodology, our study also presents an important contribution towards assessing the effectiveness of graph-combined neural networks as a generic training mechanism for different architectures and problems, which was not well studied in previous work. More recently, graph embedding techniques have been used to create node embeddings that encode local structures of the graph and the provided node labels \cite{perozzi2014deepwalk, yang2016revisiting}. These techniques target learning better node representations to be used for other tasks such as node classification. In this work, we aim to directly learn better predictive models from the graph. We compare our method to these two-stage (embedding + classifier) techniques in several experiments in \cref{sec:experiments}. Our work is also different from, and orthogonal to, recent works on using neural networks {\it on} graphs; for example, \citet{defferrard2016convolutional} employs spectral graph convolution to create a neural-network-like classifier. However, these approaches require many approximations to arrive at a practical implementation. Here, we advocate a training objective that {\it uses} graphs to augment neural network learning, and works with many forms of graphs and with any type of neural network.
\documentclass{article} % more modern \newcommand\norm[1]{\left\lVert#1\right\rVert} \makeatletter \def\@xfootnote[#1]{% \protected@xdef\@thefnmark{#1}% \@footnotemark\@footnotetext} \makeatother \newcommand{\theHalgorithm}{\arabic{algorithm}} \usepackage[accepted]{icml2016} \icmltitlerunning{Neural Graph Machines: Learning Neural Networks Using Graphs} \begin{document} \twocolumn[ \icmltitle{Neural Graph Machines: Learning Neural Networks Using Graphs} \icmlauthor{Thang D. Bui$^*$\footnotemark}{tdb40@cam.ac.uk} \icmlauthor{Sujith Ravi$^\dagger$}{sravi@google.com} \icmlauthor{Vivek Ramavajjala$^\dagger$}{vramavaj@google.com} \vskip 0.1in \icmladdress{$^*$ University of Cambridge, United Kingdom} \vskip -0.15in \icmladdress{$^\dagger$ Google Research, Mountain View, CA, USA} \vskip 0.3in ] \footnotetext{Work done during an internship at Google.} \input{abstract} \input{intro} \input{background} \input{ngm} \input{experiments} \input{summary} \section*{Acknowledgements} We would like to thank the Google Expander team for insightful feedback. \bibliographystyle{icml2017} \end{document}
Neural Symbolic Machines: Learning Semantic Parsers on Freebase with Weak Supervision
1611.00020
Table 6: Percentage and performance of model generated programs with different complexity (number of expressions).
[ "#Expressions", "0", "1", "2", "3" ]
[ [ "[ITALIC] Percentage", "0.4%", "62.9%", "29.8%", "6.9%" ], [ "[ITALIC] F1", "0.0", "73.5", "59.9", "70.3" ] ]
Among the programs generated by the model, a significant portion (36.7%) uses more than one expression. We observe that programs with three expressions use a more limited set of properties, mainly focusing on answering a few types of questions such as “who plays meg in family guy”, “what college did jeff corwin go to” and “which countries does russia border”. In contrast, programs with two expressions use a more diverse set of properties, which could explain the lower performance compared to programs with three expressions.
\documentclass[11pt,a4paper]{article} \usepackage[hyperref]{acl2017} \usepackage[noend]{algpseudocode} \usepackage[font={footnotesize}]{caption} \newcommand\jb[1]{[\textcolor{red}{JB: {#1}}]} \newcommand\cl[1]{[\textcolor{red}{CL: {#1}}]} \aclfinalcopy % Uncomment this line for the final submission \def\aclpaperid{606} % Enter the acl Paper ID here \setlength\titlebox{5cm} \newcommand\BibTeX{B{\sc ib}\TeX} \title{Neural Symbolic Machines: \\ Learning Semantic Parsers on Freebase with Weak Supervision} \author{ %Anonymized for review} \bf{Chen Liang}\thanks{\hspace{1mm} Work done while the author was interning at Google}, \bf{Jonathan Berant}\thanks{\hspace{1mm} Work done while the author was visiting Google}, \bf{Quoc Le}, \bf{Kenneth D. Forbus}, \bf{Ni Lao} \\ Northwestern University, Evanston, IL \\ Tel-Aviv University, Tel Aviv-Yafo, Israel \\ Google Inc., Mountain View, CA \\ \{chenliang2013,forbus\}@u.northwestern.edu, joberant@cs.tau.ac.il, \{qvl,nlao\}@google.com } \date{} \begin{document} \maketitle \begin{abstract} Harnessing the statistical power of neural networks to perform language understanding and symbolic reasoning is difficult when it requires executing efficient discrete operations against a large knowledge-base. In this work, we introduce a \textit{Neural Symbolic Machine} (NSM), which contains (a) a neural ``programmer", i.e., a sequence-to-sequence model that maps language utterances to programs and utilizes a \textit{key-variable memory} to handle compositionality, and (b) a symbolic ``computer", i.e., a Lisp interpreter that performs program execution, and helps find good programs by pruning the search space. We apply REINFORCE to directly optimize the task reward of this structured prediction problem. To train with weak supervision and improve the stability of REINFORCE, we augment it with an \textit{iterative maximum-likelihood} training process. NSM outperforms the state-of-the-art on the \textsc{WebQuestionsSP} dataset when trained from question-answer pairs only, without requiring any feature engineering or domain-specific knowledge. \end{abstract} \section{Introduction} Deep neural networks have achieved impressive performance in supervised classification and structured prediction tasks such as speech recognition \cite{hinton2012deep}, machine translation \cite{bahdanau2014align,wu2016gnmt} and more. However, training neural networks through weak supervision for semantic parsing \cite{zelle96geoquery,zettlemoyer05ccg,liang11dcs} or program induction, where language is mapped to a symbolic representation that is executed by an executor, remains challenging. This is because the model must interact with a symbolic executor through non-differentiable operations to search over a large program space. In semantic parsing, recent work handled this \cite{dong2016language,jia2016data} by training from manually annotated programs and avoiding program execution at training time. However, annotating programs is known to be expensive and scales poorly. In program induction, attempts to address this problem \cite{graves2014neural,reed2015neural,kaiser2015neural,graves2016hybrid,andreas2016compose} either utilized low-level memory~\cite{zaremba2015reinforcement}, or required memory to be differentiable~\cite{Neelakantan2015NeuralPI,yin2015neural} so that the model can be trained with backpropagation.
This makes it difficult to use the efficient discrete operations and memory of a traditional computer, and limits the application to synthetic or small knowledge bases. In this paper, we propose to utilize the memory and discrete operations of a traditional computer in a novel Manager-Programmer-Computer (MPC) framework for neural program induction, which integrates three components: \begin{enumerate}[noitemsep,topsep=0pt] \item A \textbf{``manager"} that provides weak supervision (e.g., \textsc{`NYC'} in Figure~\ref{fig:challenges}) through a reward indicating how well a task is accomplished. Unlike full supervision, weak supervision is easy to obtain at scale (Section \ref{sec:webquestions}). \item A \textbf{``programmer"} that takes natural language as input and generates a program that is a sequence of tokens (Figure~\ref{fig:parsing}). The programmer learns from the reward and must overcome the hard search problem of finding correct programs (Section \ref{subsec:model}). \item A \textbf{``computer"} that executes programs in a high-level programming language. Its non-differentiable memory enables \textit{abstract}, \textit{scalable} and \textit{precise} operations, but makes training more challenging (Section \ref{subsec:training}). To help the ``programmer" prune the search space, it provides a friendly \textit{neural computer interface}, which detects and eliminates invalid choices (Section \ref{subsec:language}). \end{enumerate} Within this framework, we introduce the Neural Symbolic Machine (NSM) and apply it to semantic parsing. NSM contains a neural sequence-to-sequence (seq2seq) ``programmer" \cite{sutskever2014sequence} and a symbolic non-differentiable Lisp interpreter (``computer") that executes programs against a large knowledge-base (KB). Our technical contribution in this work is threefold. First, to support language \textit{compositionality}, we augment the standard seq2seq model with a \textit{key-variable memory} to save and reuse intermediate execution results (Figure~\ref{fig:challenges}). This is a novel application of pointer networks~\cite{vinyals2015pointer} to compositional semantics. Second, to alleviate the search problem of finding correct programs when training from question-answer pairs, we use the computer to execute partial programs and prune the programmer's search space by checking the syntax and semantics of generated programs. This generalizes the weakly supervised semantic parsing framework~\cite{liang11dcs,berant2013semantic} by leveraging semantic denotations during structural search. Third, to train from weak supervision and directly maximize the expected reward, we turn to the REINFORCE~\cite{Williams92simplestatistical} algorithm. Since learning from scratch is difficult for REINFORCE, we combine it with an \textit{iterative maximum likelihood} (ML) training process, where beam search is used to find pseudo-gold programs, which are then used to augment the objective of REINFORCE. On the \textsc{WebQuestionsSP} dataset \cite{yih2016webquestionssp}, NSM achieves new state-of-the-art results with weak supervision, significantly closing the gap between weak and full supervision for this task. Unlike prior works, it is trained end-to-end, and does not require feature engineering or domain-specific knowledge. \section{Neural Symbolic Machines} We now introduce NSM by
first describing the ``computer", a non-differentiable Lisp interpreter that executes programs against a large KB and provides code assistance (Section \ref{subsec:language}). We then propose a seq2seq model (``programmer") that supports compositionality using a key-variable memory to save and reuse intermediate results (Section~\ref{subsec:model}). Finally, we describe a training procedure that is based on REINFORCE, but is augmented with pseudo-gold programs found by an iterative ML training procedure (Section~\ref{subsec:training}). Before diving into details, we define the \textit{semantic parsing} task: given a knowledge base $\mathbb{K}$, and a question $x=(w_1, w_2, ..., w_m)$, produce a program or logical form $z$ that when executed against $\mathbb{K}$ generates the right answer $y$. Let $\mathcal{E}$ denote a set of entities (e.g., \textsc{AbeLincoln}),\footnote{We also consider numbers (e.g., ``1.33'') and date-times (e.g., ``1999-1-1'') as entities.} and let $\mathcal{P}$ denote a set of properties (e.g., \textsc{PlaceOfBirth}). A knowledge base $\mathbb{K}$ is a set of assertions or triples $(e_1,p,e_2) \in \mathcal{E} \times \mathcal{P} \times \mathcal{E}$, such as (\textsc{AbeLincoln}, \textsc{PlaceOfBirth}, \textsc{Hodgenville}). \subsection{Computer: Lisp Interpreter with Code Assistance} \label{subsec:language} Semantic parsing typically requires using a set of operations to query the knowledge base and process the results. Operations learned with neural networks such as addition and sorting do not perfectly generalize to inputs that are larger than the ones observed in the training data~\cite{graves2014neural,reed2015neural}. In contrast, operations implemented in high level programming languages are \textit{abstract}, \textit{scalable}, and \textit{precise}, thus generalizes perfectly to inputs of arbitrary size. Based on this observation, we implement operations necessary for semantic parsing with an ordinary programming language instead of trying to learn them with a neural network. We adopt a Lisp interpreter %with code assistance as the ``computer". A program $C$ is a list of expressions $(c_1 ... c_N)$, where each expression is either a special token ``\textit{Return}" indicating the end of the program, or a list of tokens enclosed by parentheses ``$( F A_1 ... A_K )$". $F$ is a function, which takes as input $K$ arguments of specific types. Table~\ref{tab-functions} defines the semantics of each function and the types of its arguments (either a property $p$ or a variable $r$). When a function is executed, it returns an entity list that is the expression's denotation in $\mathbb{K}$, and save it to a new variable. By introducing variables that save the intermediate results of execution, the program naturally models \emph{language compositionality} and describes from left to right a bottom-up derivation of the full meaning of the natural language input, which is convenient in a seq2seq model (Figure~\ref{fig:challenges}). This is reminiscent of the floating parser \cite{wang2015building,Pasupat2015CompositionalSP}, where a derivation tree that is not grounded in the input is incrementally constructed. The set of programs defined by our functions is equivalent to the subset of $\lambda$-calculus presented in \cite{yih2015semantic}. We did not use full Lisp programming language here, because constructs like control flow and loops are unnecessary for most current semantic parsing tasks, and it is simple to add more functions to the model when necessary. 
To create a friendly \textit{neural computer interface}, the interpreter provides code assistance to the programmer by producing a list of valid tokens at each step. First, a valid token should not cause a syntax error: e.g., if the previous token is ``$($", the next token must be a function name, and if the previous token is ``\textit{Hop}", the next token must be a variable. More importantly, a valid token should not cause a semantic (run-time) error: this is detected using the denotation saved in the variables. For example, if the previously generated tokens were ``$($ \textit{Hop} $r$", the next available token is restricted to properties $\{p \mid \exists e,e': e \in r, (e, p, e') \in \mathbb{K}\}$ that are reachable from entities in $r$ in the KB. These checks are enabled by the variables and can be derived from the definition of the functions in Table \ref{tab-functions}. The interpreter prunes the ``programmer"'s search space by orders of magnitude, and enables learning from weak supervision on a large KB. \subsection{Programmer: Seq2seq Model with Key-Variable Memory} \label{subsec:model} Given the ``computer'', the ``programmer'' needs to map natural language into a program, which is a sequence of tokens that reference operations and values in the ``computer". We base our programmer on a standard seq2seq model with attention, but extend it with a key-variable memory that allows the model to learn to represent and refer to program variables (Figure \ref{fig:parsing}). Sequence-to-sequence models consist of two RNNs, an encoder and a decoder. We used a 1-layer GRU \cite{cho2014:gru} for both the encoder and decoder. Given a sequence of words ${w_1, w_2 ... w_m}$, each word $w_t$ is mapped to an embedding $q_t$ (embedding details are in Section~\ref{sec:experiment}). Then, the encoder reads these embeddings and updates its hidden state step by step using $h_{t+1} = GRU(h_{t}, q_{t}, \theta_{Encoder})$, where $\theta_{Encoder}$ are the GRU parameters. The decoder updates its hidden states $u_t$ by $u_{t+1} = GRU(u_{t}, c_{t-1}, \theta_{Decoder})$, where $c_{t-1}$ is the embedding of last step's output token $a_{t-1}$, and $\theta_{Decoder}$ are the GRU parameters. The last hidden state of the encoder $h_{T}$ is used as the decoder's initial state. We also adopt a dot-product attention similar to \newcite{dong2016language}. The tokens of the program ${a_1, a_2 ... a_n}$ are generated one by one using a softmax over the vocabulary of valid tokens at each step, as provided by the ``computer" (Section \ref{subsec:language}). To achieve compositionality, the decoder must learn to represent and refer to intermediate variables whose value was saved in the ``computer'' after execution. Therefore, we augment the model with a \textbf{key-variable memory}, where each entry has two components: a continuous embedding key $v_i$, and a corresponding variable token $R_i$ referencing the value in the ``computer" (see Figure~\ref{fig:parsing}). During encoding, we use an entity linker to link text spans (e.g., \emph{``US"}) to KB entities. For each linked entity we add a memory entry where the key is the average of GRU hidden states over the entity span, and the variable token ($R_1$) is the name of a variable in the computer holding the linked entity (\emph{m.USA}) as its value. During decoding, when a full expression is generated (i.e., the decoder generates ``$)$"), it gets executed, and the result is stored as the value of a new variable in the ``computer". 
This variable is keyed by the GRU hidden state at that step. When a new variable $R_1$ with key embedding $v_1$ is added into the key-variable memory, the token $R_1$ is added into the decoder vocabulary with $v_1$ as its embedding. The final answer returned by the ``programmer'' is the value of the last computed variable. Similar to pointer networks~\cite{vinyals2015pointer}, the key embeddings for variables are dynamically generated for each example. During training, the model learns to represent variables by backpropagating gradients from a time step where a variable is selected by the decoder, through the key-variable memory, to an earlier time step when the key embedding was computed. Thus, the encoder/decoder learns to generate representations for variables such that they can be used at the right time to construct the correct program. While the key embeddings are differentiable, the values referenced by the variables (lists of entities), stored in the ``computer'', are symbolic and non-differentiable. This distinguishes the key-variable memory from other memory-augmented neural networks that use continuous differentiable embeddings as the values of memory entries \cite{weston2014memory,graves2016ntm}. \subsection{Training NSM with Weak Supervision} \label{subsec:training} NSM executes non-differentiable operations against a KB, and thus end-to-end backpropagation is not possible. Therefore, we base our training procedure on REINFORCE \cite{Williams92simplestatistical,norouzi2016}. When the reward signal is sparse and the search space is large, it is common to utilize some full supervision to pre-train REINFORCE \cite{silver2016mastering}. To train from weak supervision, we suggest an iterative ML procedure for finding pseudo-gold programs that will bootstrap REINFORCE. \paragraph{REINFORCE} \label{reinforce} We can formulate training as a reinforcement learning problem: given a question $x$, the state, action and reward at each time step $t \in \{0, 1, ..., T\}$ are $(s_t, a_t, r_t)$. Since the environment is deterministic, the state is defined by the question $x$ and the action sequence: $s_t=(x, a_{0:t-1})$, where $a_{0:t-1}=(a_0, ..., a_{t-1})$ is the history of actions at time $t$. A valid action at time $t$ is $a_t \in A(s_t)$, where $A(s_t)$ is the set of valid tokens given by the ``computer". Since each action corresponds to a token, the full history $a_{0:T}$ corresponds to a program. The reward $r_t=I[t=T] \cdot F_1(x, a_{0:T})$ is non-zero only at the last step of decoding, and is the $F_1$ score computed by comparing the gold answer and the answer generated by executing the program $a_{0:T}$. Thus, the cumulative reward of a program $a_{0:T}$ is \begin{equation*} R(x,a_{0:T})=\sum_t r_t = F_1(x, a_{0:T}). \end{equation*} The agent's decision making procedure at each time step is defined by a policy, $\pi_\theta(s, a) = P_\theta(a_t=a|x, a_{0:t-1})$, where $\theta$ are the model parameters. Since the environment is deterministic, the probability of generating a program $a_{0:T}$ is \begin{equation*} P_\theta(a_{0:T}|x) = \prod_{t} P_\theta(a_t \mid x, a_{0:t-1}). \end{equation*} We can define our objective to be the expected cumulative reward and use policy gradient methods such as REINFORCE for training.
The objective and gradient are: \begin{equation*}% \label{eq:rl} \begin{split} J^{RL}(\theta) =& \sum_{x} \mathbb{E}_{P_\theta(a_{0:T} \mid x)}[R(x,a_{0:T})], \\ \nabla_{\theta} J^{RL}(\theta) =& \sum_{x} \sum_{a_{0:T}} P_\theta(a_{0:T} \mid x) \cdot [R(x,a_{0:T})-\\ & B(x)] \cdot \nabla_{\theta} \log P_\theta(a_{0:T} \mid x), \end{split} \end{equation*} where $B(x)= \sum_{a_{0:T}} P_\theta(a_{0:T} \mid x) R(x,a_{0:T})$ is a baseline that reduces the variance of the gradient estimation without introducing bias. Having a separate network to predict the baseline is an interesting future direction. While REINFORCE assumes a stochastic policy, we use beam search for gradient estimation. Thus, in contrast with the common practice of approximating the gradient by sampling from the model, we use the top-$k$ action sequences (programs) in the beam with normalized probabilities. This allows training to focus on sequences with high probability, which are on the decision boundaries, and reduces the variance of the gradient. Empirically (and in line with prior work), REINFORCE converged slowly and often got stuck in local optima (see Section \ref{sec:experiment}). The difficulty of training resulted from the sparse reward signal in the large search space, which caused model probabilities for programs with non-zero reward to be very small at the beginning. If the beam size $k$ is small, good programs fall off the beam, leading to zero gradients for all programs in the beam. If the beam size $k$ is large, training is very slow, and the normalized probabilities of good programs when the model is untrained are still very small, leading to (1) near-zero baselines, and thus near-zero gradients on ``bad" programs, and (2) near-zero gradients on good programs due to the low probability $P_\theta(a_{0:T} \mid x)$. To combat this, we present an alternative training strategy based on maximum likelihood. \paragraph{Iterative ML} If we had gold programs, we could directly optimize their likelihood. Since we do not have gold programs, we can perform an iterative procedure (similar to hard Expectation-Maximization (EM)), where we search for good programs given fixed parameters, and then optimize the probability of the best program found so far. We do decoding on an example with a large beam size and declare $a^{best}_{0:T}(x)$ to be the pseudo-gold program, which achieved the highest reward with the shortest length among the programs decoded on $x$ in all previous iterations. Then, we can optimize the ML objective: \begin{equation} \label{eq:ml} J^{ML}(\theta)= \sum_{x} \log{P_\theta(a^{best}_{0:T}(x) \mid x)} \end{equation} A question $x$ is not included if we did not find any program with positive reward. Training with iterative ML is fast because there is at most one program per example and the gradient is not weighted by model probability. While decoding with a large beam size is slow, we can train for multiple epochs after each decoding. This iterative process has a bootstrapping effect: a better model leads to a better program $a^{best}_{0:T}(x)$ through decoding, and a better program $a^{best}_{0:T}(x)$ leads to a better model through training. Even with a large beam size, some programs are hard to find because of the large search space. A common solution to this problem is to use curriculum learning \cite{zaremba2015reinforcement,reed2015neural}. The size of the search space is controlled by both the set of functions used in the program and the program length.
We apply curriculum learning by gradually increasing both these quantities (see details in Section \ref{sec:experiment}) when performing iterative ML. Nevertheless, iterative ML uses only pseudo-gold programs and does not directly optimize the objective we truly care about. This has two adverse effects: (1) The best program $a^{best}_{0:T}(x)$ could be a spurious program that accidentally produces the correct answer (e.g., using the property \textsc{PlaceOfBirth} instead of \textsc{PlaceOfDeath} when the two places are the same), and thus does not generalize to other questions. (2) Because training does not observe full negative programs, the model often fails to distinguish between tokens that are related to one another. For example, differentiating \textsc{ParentsOf} vs. \textsc{SiblingsOf} vs. \textsc{ChildrenOf} can be challenging. We now present learning where we combine iterative ML with REINFORCE. \begin{algorithm}[ht!] \caption{IML-REINFORCE}\label{alg:aug-reinf} \begin{algorithmic} \footnotesize{ \State \textbf{Input:} question-answer pairs $\mathbb{D}=\{(x_i, y_i)\}$, mix ratio $\alpha$, reward function $R(\cdot)$, training iterations $N_{ML}$, $N_{RL}$, and beam sizes $B_{ML}$, $B_{RL}$. \State \textbf{Procedure:} \State Initialize $C^*_x=\emptyset$ the best program so far for $x$ \State Initialize model $\theta$ randomly \Comment{Iterative ML} \For{ $n=1$ to $N_{ML}$} \For{ $(x,y)$ in $D$} \State $\mathbb{C} \leftarrow$ Decode $B_{ML}$ programs given $x$ \For{ $j$ in $1 ... |\mathbb{C}|$} \If{ $R_{x,y}(C_j) > R_{x,y}(C^*_x)$ } $C^*_x \leftarrow C_j$ \EndIf \EndFor \EndFor \State $\theta \leftarrow$ ML training with $\mathbb{D}_{ML} = \{(x, C^*_x)\}$ \EndFor \State Initialize model $\theta$ randomly \Comment{REINFORCE} \For{ $n=1$ to $N_{RL}$} \State $\mathbb{D}_{RL} \leftarrow \emptyset$ is the RL training set \For{ $(x, y)$ in $D$} \State $\mathbb{C} \leftarrow$ Decode $B_{RL}$ programs from $x$ \For{ $j$ in $1 ... |\mathbb{C}|$} \If{ $R_{x,y}(C_j) > R_{x,y}(C^*_x)$ } $C^*_x \leftarrow C_j$ \EndIf \EndFor \State $\mathbb{C} \leftarrow \mathbb{C} \cup \{C^*_x\}$ \For{ $j$ in $1 ... |\mathbb{C}|$} \State $\hat{p}_j \leftarrow (1 - \alpha) \cdot \frac{p_j}{\sum_{j'} p_{j'}}$ where $p_j=P_\theta(C_j \mid x)$ \If{ $C_j = C^*_x$ } $\hat{p}_j \leftarrow \hat{p}_j + \alpha$ \EndIf \State $\mathbb{D}_{RL} \leftarrow \mathbb{D}_{RL} \cup \{(x, C_j, \hat{p}_j)\}$ \EndFor \EndFor \State $\theta \leftarrow$ REINFORCE training with $\mathbb{D}_{RL}$ \EndFor } \end{algorithmic} \end{algorithm} \paragraph{Augmented REINFORCE} To bootstrap REINFORCE, we can use iterative ML to find pseudo-gold programs, and then add these programs to the beam with a reasonably large probability. This is similar to methods from imitation learning \cite{ross2011reduction,jiang2012learned} that define a proposal distribution by linearly interpolating the model distribution and an oracle. Algorithm~\ref{alg:aug-reinf} describes our overall training procedure. We first run iterative ML for $N_{ML}$ iterations and record the best program found for every example $x_i$. Then, we run REINFORCE, where we normalize the probabilities of the programs in beam to sum to $(1-\alpha)$ and add $\alpha$ to the probability of the best found program $C^*(x_i)$. Consequently, the model always puts a reasonable amount of probability on a program with high reward during training. 
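As a concrete illustration of this interpolation, the proposal distribution used by Algorithm~\ref{alg:aug-reinf} can be sketched as follows (an illustrative sketch with made-up programs, not the actual implementation):
\begin{verbatim}
def augmented_proposal(beam_programs, beam_probs, best_program, alpha=0.1):
    # Scale the beam probabilities to sum to (1 - alpha) and give the pseudo-gold
    # program found by iterative ML an extra alpha of probability mass.
    z = sum(beam_probs)
    proposal = {p: (1.0 - alpha) * q / z for p, q in zip(beam_programs, beam_probs)}
    proposal[best_program] = proposal.get(best_program, 0.0) + alpha
    return proposal

beam = ["(Hop R0 PlaceOfBirth)", "(Hop R0 PlaceOfDeath)"]
print(augmented_proposal(beam, [0.03, 0.01], best_program=beam[0], alpha=0.1))
# the masses sum to 1, with the pseudo-gold program receiving the extra alpha
\end{verbatim}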
Note that we randomly initialized the parameters for REINFORCE, since initializing from the final ML parameters tended to get stuck in a local optimum and produced worse results. Beyond imitation learning, our approach is related to the common practice in reinforcement learning \cite{schaul2016prioritized} of replaying rare successful experiences to reduce the training variance and improve training efficiency. This is also similar to recent developments \cite{wu2016gnmt} in machine translation, where ML and RL objectives are linearly combined, because anchoring the model to some high-reward outputs stabilizes training. \makeatletter \def\BState{\State\hskip-\ALG@thistlm} \makeatother \section{Experiments and Analysis} \label{sec:experiment} We now empirically show that NSM can learn a semantic parser from weak supervision over a large KB. We evaluate on \textsc{WebQuestionsSP}, a challenging semantic parsing dataset with strong baselines. Experiments show that NSM achieves new state-of-the-art performance on \textsc{WebQuestionsSP} with weak supervision, and significantly closes the gap between weak and full supervision for this task. \subsection{The \textsc{WebQuestionsSP} dataset} \label{sec:webquestions} The \textsc{WebQuestionsSP} dataset \cite{yih2016webquestionssp} contains full semantic parses for a subset of the questions from \textsc{WebQuestions} \cite{berant2013semantic}, because 18.5\% of the original dataset was found to be ``not answerable". It consists of 3,098 question-answer pairs for training and 1,639 for testing, which were collected using the Google Suggest API, and the answers were originally obtained using Amazon Mechanical Turk workers. The questions were updated in \cite{yih2016webquestionssp} by annotators who were familiar with the design of Freebase and who added semantic parses. We further separated out 620 questions from the training set as a validation set. For query pre-processing we used an in-house named entity linking system to find the entities in a question. The quality of the entity linker is similar to that of \cite{yih2015semantic}, with $94\%$ of the gold root entities being included. Similar to \newcite{dong2016language}, we replaced named entity tokens with a special token ``\textit{ENT}". For example, the question ``\emph{who plays meg in family guy}" is changed to ``\emph{who plays ENT in ENT ENT}". This helps reduce overfitting, because instead of memorizing the correct program for a specific entity, the model has to focus on other context words in the sentence, which improves generalization. Following \cite{yih2015semantic} we used the last publicly available snapshot of Freebase \cite{bollacker2008freebase}. Since NSM training requires random access to Freebase during decoding, we preprocessed Freebase by removing predicates that are not related to world knowledge (starting with ``\emph{/common/}", ``\emph{/type/}", ``\emph{/freebase/}"),\footnote{We kept ``\emph{/common/topic/notable\_types}".} and removing all text-valued predicates, which are rarely the answer. Out of all 27K relations, 434 relations are removed during preprocessing. This results in a graph that fits in memory with 23K relations, 82M nodes, and 417M edges. \subsection{Model Details} For pre-trained word embeddings, we used the 300-dimensional GloVe word embeddings trained on 840B tokens \cite{Pennington2014GloveGV}. On the encoder side, we added a projection matrix to transform the embeddings into 50 dimensions.
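Before describing the decoder side, the entity masking and predicate filtering described above are simple to state precisely; the following sketch (illustrative only, with the entity linker and the full pipeline omitted) reproduces both steps:
\begin{verbatim}
def mask_entities(tokens, entity_spans):
    # Replace each token covered by a linked-entity span with the special token "ENT".
    out = list(tokens)
    for start, end in entity_spans:
        out[start:end] = ["ENT"] * (end - start)
    return out

def keep_predicate(predicate):
    # Drop predicates from non-world-knowledge domains, keeping the one exception.
    if predicate == "/common/topic/notable_types":
        return True
    return not predicate.startswith(("/common/", "/type/", "/freebase/"))

print(mask_entities("who plays meg in family guy".split(), [(2, 3), (4, 6)]))
# ['who', 'plays', 'ENT', 'in', 'ENT', 'ENT']
print(keep_predicate("/people/person/parents"), keep_predicate("/type/object/name"))
# True False
\end{verbatim}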
On the decoder side, we used the same GloVe embeddings to construct an embedding for each property using its Freebase id, and also added a projection matrix to transform this embedding to 50 dimensions. A Freebase id contains three parts: domain, type, and property. For example, the Freebase id for \textsc{ParentsOf} is \emph{``/people/person/parents"}. \emph{``people"} is the domain, \emph{``person"} is the type and \emph{``parents"} is the property. The embedding is constructed by concatenating the average of the word embeddings in the domain and type names with the average of the word embeddings in the property name. For example, if the embedding dimension is 300, the embedding dimension for \emph{``/people/person/parents"} will be 600. The first 300 dimensions will be the average of the embeddings for \emph{``people"} and \emph{``person"}, and the second 300 dimensions will be the embedding for \emph{``parents"}. The dimensions of the encoder hidden state, decoder hidden state and key embeddings are all 50. The embeddings for the functions and special tokens (e.g., ``\emph{UNK}", ``\emph{GO}") are randomly initialized by a truncated normal distribution with mean=0.0 and stddev=0.1. All the weight matrices are initialized with a uniform distribution in $[-\frac{\sqrt{3}}{d}, \frac{\sqrt{3}}{d}]$ where $d$ is the input dimension. The dropout rate is set to 0.5, and we see a clear tendency for larger dropout rates to produce better performance, indicating that overfitting is a major problem for learning. \subsection{Training Details} In iterative ML training, the decoder uses a beam of size $k=100$ to update the pseudo-gold programs and the model is trained for 20 epochs after each decoding step. We use the Adam optimizer \cite{KingmaB14} with initial learning rate 0.001. In our experiments, this process usually converges after a few (5-8) iterations. For REINFORCE training, the best hyperparameters are chosen using the validation set. We use a beam of size $k=5$ for decoding, and $\alpha$ is set to 0.1. Because the dataset is small and some relations are only used once in the whole training set, we train the model on the entire training set for 200 iterations with the best hyperparameters. Then we train the model with learning rate decay until convergence. The learning rate is decayed as $g_{t} = g_{0} \times \beta ^{\frac{\max(0, t-t_{s})}{m}}$, where $g_{0}=0.001$, $\beta=0.5$, $m=1000$, and $t_{s}$ is the number of training steps at the end of iteration 200. Since decoding needs to query the knowledge base (KB) constantly, the speed bottleneck for training is decoding. We address this problem in our implementation by partitioning the dataset, and using multiple decoders in parallel to handle each partition. We use 100 decoders, which query 50 KG servers, and one trainer. The neural network model is implemented in TensorFlow. Since the model is small, we did not see a significant speedup from using GPUs, so all the decoders and the trainer use CPUs only. Inspired by the staged generation process in \newcite{yih2015semantic}, curriculum learning includes two steps. We first run iterative ML for 10 iterations with programs constrained to use only the ``\emph{Hop}" function and a maximum of 2 expressions. Then, we run iterative ML again, but use both ``\emph{Hop}" and ``\emph{Filter}". The maximum number of expressions is 3, and the relations used by ``\emph{Hop}" are restricted to those that appeared in $a^{best}_{0:T}(q)$ in the first step.
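Before turning to the results, the decoder-side property embeddings described earlier in this section can be sketched as follows (an illustration; \texttt{glove} is assumed to map a word to its 300-dimensional vector, and the learned projection to 50 dimensions is omitted):
\begin{verbatim}
import numpy as np

def property_embedding(freebase_id, glove, dim=300):
    domain, ftype, prop = freebase_id.strip("/").split("/")  # e.g. people, person, parents
    def avg(words):
        vectors = [glove[w] for w in words if w in glove]
        return np.mean(vectors, axis=0) if vectors else np.zeros(dim)
    domain_type = avg(domain.split("_") + ftype.split("_"))  # first 300 dimensions
    prop_name = avg(prop.split("_"))                         # last 300 dimensions
    return np.concatenate([domain_type, prop_name])          # 600 dimensions in total

glove = {w: np.random.randn(300) for w in ["people", "person", "parents"]}
print(property_embedding("/people/person/parents", glove).shape)   # (600,)
\end{verbatim}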
\subsection{Results and discussion} We evaluate performance using the official evaluation script for \textsc{WebQuestionsSP}. Because the answer to a question may contain multiple entities or values, precision, recall and F1 are computed based on the output of each individual question, and average F1 is reported as the main evaluation metric. Accuracy measures the proportion of questions that are answered exactly. A comparison to STAGG, the previous state-of-the-art model \cite{yih2016webquestionssp,yih2015semantic}, is shown in Table~\ref{tab-result}. Our model beats STAGG trained with weak supervision by a significant margin on all metrics, while relying on no feature engineering or hand-crafted rules. When STAGG is trained with strong supervision it obtains an F1 of 71.7, and thus NSM closes half the gap between training with weak and full supervision. Four key ingredients lead to the final performance of NSM. The first one is the neural computer interface that provides code assistance by checking for syntax and semantic errors. We find that semantic checks are very effective for open-domain KBs with a large number of properties. For our task, the average number of choices is reduced from 23K per step (all properties) to fewer than 100 (the average number of properties connected to an entity). The second ingredient is augmented REINFORCE training. Table~\ref{tab-rl-ml} compares augmented REINFORCE, REINFORCE, and iterative ML on the validation set. REINFORCE gets stuck in a local optimum and performs poorly. Iterative ML training does not directly optimize the F1 measure, and achieves sub-optimal results. In contrast, augmented REINFORCE is able to bootstrap using pseudo-gold programs found by iterative ML and achieves the best performance on both the training and validation sets. The third ingredient is curriculum learning during iterative ML. We compare the performance of the best programs found with and without curriculum learning in Table~\ref{tab-cur}. We find that the best programs found with curriculum learning are substantially better than those found without it on every metric. The last important ingredient is reducing overfitting. Given the small size of the dataset, overfitting is a major problem for training neural network models. We show the contributions of different techniques for controlling overfitting in Table \ref{tab-ablation}. Note that after all the techniques have been applied, the model still overfits, with training F1@1=83.0\% and validation F1@1=67.2\%. Among the programs generated by the model, a significant portion (36.7\%) uses more than one expression. From Table~\ref{tab-complexity}, we can see that the performance does not decrease much as the compositional depth increases, indicating that the model is effective at capturing compositionality. We observe that programs with three expressions use a more limited set of properties, mainly focusing on answering a few types of questions such as ``who plays meg in family guy'', ``what college did jeff corwin go to'' and ``which countries does russia border''. In contrast, programs with two expressions use a more diverse set of properties, which could explain the lower performance compared to programs with three expressions.
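Returning to the first ingredient, the effect of the semantic checks can be illustrated with a small sketch (the knowledge base and ids below are made up for illustration): rather than scoring all properties at every step, the decoder only chooses among the properties connected to the entities held by the current variable.
\begin{verbatim}
def valid_property_tokens(kb, entity_list):
    # kb maps an entity id to a dict from property id to neighbouring entities.
    allowed = set()
    for entity in entity_list:
        allowed.update(kb.get(entity, {}).keys())
    return allowed

kb = {
    "m.family_guy": {"/tv/program/regular_cast": ["m.cast_1"]},
    "m.cast_1": {"/tv/regular_cast/actor": ["m.mila_kunis"],
                 "/tv/regular_cast/character": ["m.meg_griffin"]},
}
print(valid_property_tokens(kb, ["m.cast_1"]))
# two candidate properties at this step instead of all ~23K
\end{verbatim}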
\paragraph{Error analysis} Error analysis on the validation set shows two main sources of errors: \begin{enumerate}[noitemsep,topsep=0pt] \item \textbf{Search failure}: Programs with high reward are not found during search for pseudo-gold programs, either because the beam size is not large enough, or because the set of functions implemented by the interpreter is insufficient. The 89.5\% F1 score in Table~\ref{tab-cur} indicates that at least $10\%$ of the questions are of this kind. \item \textbf{Ranking failure}: Programs with high reward exist in the beam, but are not ranked at the top during decoding. Because the training error is low, this is largely due to overfitting or spurious programs. The 67.2\% F1 score in Table~\ref{tab-rl-ml} indicates that about $20\%$ of the questions are of this kind. \end{enumerate} \section{Related work} Among deep learning models for program induction, Reinforcement Learning Neural Turing Machines (RL-NTMs) \cite{zaremba2015reinforcement} are the most similar to NSM, as a non-differentiable machine is controlled by a sequence model. Therefore, both models rely on REINFORCE for training. The main difference between the two is the abstraction level of the programming language. RL-NTM uses lower-level operations such as memory address manipulation and byte reading/writing, while NSM uses a high-level programming language over a large knowledge base that includes operations such as following properties from entities, or sorting based on a property, which is more suitable for representing semantics. Earlier works such as OOPS \cite{Schmidhuber04} have desirable characteristics, for example, the ability to define new functions. These remain future improvements for NSM. We formulate NSM training as an instance of reinforcement learning \cite{sutton1998reinforcement} in order to directly optimize the task reward of the structured prediction problem \cite{norouzi2016,li2016deep,Yu2015Skim}. Compared to imitation learning methods~\cite{daume09searn, ross2011reduction} that interpolate a model distribution with an oracle, NSM needs to solve a challenging search problem of training from weak supervision in a large search space. Our solution employs two techniques: (a) a symbolic ``computer" that helps find good programs by pruning the search space; and (b) an iterative ML training process, where beam search is used to find pseudo-gold programs. Wiseman and Rush \cite{wiseman2016beam} proposed a max-margin approach to train a sequence-to-sequence scorer. However, their training procedure is more involved, and we did not implement it in this work. MIXER \cite{ranzato2015sequence} also proposed to combine ML training and REINFORCE, but only considered tasks with full supervision. Berant and Liang \cite{berant2015imitation} applied imitation learning to semantic parsing, but their method still requires hand-crafted grammars and features. NSM is similar to Neural Programmer \cite{Neelakantan2015NeuralPI} and Dynamic Neural Module Network \cite{andreas2016compose} in that they all solve the problem of semantic parsing from structured data, and generate programs using similar semantics. The main difference between these approaches is how an intermediate result (the memory) is represented. Neural Programmer and Dynamic-NMN chose to represent results as vectors of weights (row selectors and attention vectors), which enables backpropagation and search through all possible programs in parallel.
However, their strategy is not applicable to a large KB such as Freebase, which contains about 100M entities and more than 20K properties. Instead, NSM chooses a more scalable approach, where the ``computer" saves intermediate results, and the neural network only refers to them with variable names (e.g., ``$R_1$" for all cities in the US). NSM is similar to the Path Ranking Algorithm (PRA) \cite{lao2011random} in that semantics is encoded as a sequence of actions, and denotations are used to prune the search space during learning. NSM is more powerful than PRA by 1) allowing more complex semantics to be composed through the use of a key-variable memory; 2) controlling the search procedure with a trained neural network, while PRA only samples actions uniformly; 3) allowing input questions to express complex relations, and then dynamically generating action sequences. PRA can combine multiple semantic representations to produce the final prediction, which remains future work for NSM. \section{Conclusion} We propose the Manager-Programmer-Computer framework for neural program induction. It integrates neural networks with a symbolic \textit{non-differentiable} computer to support \textit{abstract}, \textit{scalable} and \textit{precise} operations through a friendly \textit{neural computer interface}. Within this framework, we introduce the Neural Symbolic Machine, which integrates a neural sequence-to-sequence ``programmer" with key-variable memory, and a symbolic Lisp interpreter with code assistance. Because the interpreter is non-differentiable, and in order to directly optimize the task reward, we apply REINFORCE and use pseudo-gold programs found by an iterative ML training process to bootstrap training. NSM achieves new state-of-the-art results on a challenging semantic parsing dataset with weak supervision, and significantly closes the gap between weak and full supervision. It is trained end-to-end, and does not require any feature engineering or domain-specific knowledge. \subsubsection*{Acknowledgements} We thank Arvind Neelakantan, Mohammad Norouzi, Tom Kwiatkowski, Eugene Brevdo, Lukasz Kaiser, Thomas Strohmann, Yonghui Wu, Zhifeng Chen, Alexandre Lacoste, and John Blitzer for discussions and help. The second author is partially supported by the Israel Science Foundation, grant 942/16. \bibliographystyle{acl_natbib} \onecolumn \appendix \section{Supplementary Material} \label{sec:supplemental} \subsection{Extra Figures} \end{document}
Learning Identity Mappings with Residual Gates
1611.01260
Table 3: Test error (%) on the CIFAR-10 dataset for ResNets, Wide ResNets and their augmented counterparts. k decay denotes applying weight decay to the k parameters of an augmented network as well.
[ "Acc", "Original", "Gated", "Gated ( [ITALIC] k decay)" ]
[ [ "Resnet 5", "7.16", "[BOLD] 6.67", "7.04" ], [ "Wide ResNet (4,10) + Dropout", "3.89", "[BOLD] 3.65", "3.74" ] ]
Augmenting each model adds 15 and 12 parameters, respectively. We observe that k decay hurts performance in both cases, indicating that the k parameters should either remain unregularized or receive a more subtle regularization than the weight parameters. Because of its direct connection to layer degeneration, regularizing k amounts to enforcing identity mappings, which might harm the model.
\documentclass{article} % For LaTeX2e \RequirePackage{amsmath,amsthm,amsfonts,amssymb} % more modern \title{Learning Identity Mappings with Residual Gates} \author{Pedro H. P. ~Savarese \\ COPPE/PESC\\ Federal University of Rio de Janeiro\\ Rio de Janeiro, Brazil \\ \texttt{savarese@land.ufrj.br} \\ \And Leonardo O. ~Mazza \\ Poli \\ Federal University of Rio de Janeiro \\ Rio de Janeiro, Brazil \\ \texttt{leonardomazza@poli.ufrj.br} \\ \And Daniel R. ~Figueiredo \\ COPPE/PESC \\ Federal University of Rio de Janeiro \\ Rio de Janeiro, Brazil \\ \texttt{daniel@land.ufrj.br} \\ } \newcommand{\fix}{\marginpar{FIX}} \newcommand{\new}{\marginpar{NEW}} \begin{document} \maketitle \begin{abstract} We propose a new layer design by adding a linear gating mechanism to shortcut connections. By using a scalar parameter to control each gate, we provide a way to learn identity mappings by optimizing only one parameter. We build upon the motivation behind Residual Networks, where a layer is reformulated in order to make learning identity mappings less problematic to the optimizer. The augmentation introduces only one extra parameter per layer, and provides easier optimization by making degeneration into identity mappings simpler. We propose a new model, the Gated Residual Network, which is the result of augmenting Residual Networks. Experimental results show that augmenting layers provides better optimization, increased performance, and more layer independence. We evaluate our method on MNIST using fully-connected networks, showing empirical indications that our augmentation facilitates the optimization of deep models, and that it provides high tolerance to full layer removal: the model retains over $90 \%$ of its performance even after half of its layers have been randomly removed. We also evaluate our model on CIFAR-10 and CIFAR-100 using Wide Gated ResNets, achieving $3.65 \%$ and $18.27 \%$ error, respectively. \end{abstract} \section{Introduction} \label{introduction} As the number of layers in a neural network increases, effectively training its parameters becomes a fundamental problem (\cite{deephard}). Many obstacles challenge the training of neural networks, including vanishing/exploding gradients (\cite{hardtrain}), saturating activation functions (\cite{saturation}) and poor weight initialization (\cite{glorot}). Techniques such as unsupervised pre-training (\cite{aes}), non-saturating activation functions (\cite{relu}) and normalization (\cite{bn}) target these issues and enable the training of deeper networks. However, stacking more than a dozen layers still leads to models that are hard to train. Recently, models such as Residual Networks (\cite{resnet1}) and Highway Neural Networks (\cite{highway}) have permitted the design of networks with hundreds of layers. A key idea of these models is to allow information to flow more freely through the layers, by using shortcut connections between the layer's input and output. This layer design greatly facilitates training, due to shorter paths between the lower layers and the network's error function. In particular, these models can more easily learn identity mappings in the layers, thus allowing the network to be deeper and learn more abstract representations (\cite{representations}). Such networks have been highly successful in many computer vision tasks.
On the theoretical side, it is suggested that depth contributes exponentially more to the representational capacity of networks than width (\cite{exp1} \cite{exp2} \cite{exp3} \cite{exp4}), an observation that agrees with the increasing depth of winning architectures on challenges such as ImageNet (\cite{resnet1} \cite{googlenet}): increasing the depth of a network significantly increases its representational capacity and, consequently, its performance. Moreover, \cite{resnet1} showed that, by construction, one can increase a network's depth while preserving its performance. These two observations suggest that it suffices to stack more layers onto a network in order to increase its performance. However, this behavior is not observed in practice even with recently proposed models, in part due to the challenge of training ever deeper networks. In this work we aim to improve the training of deep networks by proposing a layer design that builds on Residual Networks and Highway Neural Networks. The key idea is to facilitate the learning of identity mappings by introducing a {\em gating mechanism} to the shortcut connection, as illustrated in Figure~\ref{aug}. Note that the shortcut connection is controlled by a gate that is parameterized with a scalar, $k$. This is a key difference from Highway Networks, where a tensor is used to regulate the shortcut connection, along with the incoming data. The idea of using a scalar is simple: it is easier to learn $k=0$ than to learn $W_g=0$ for a weight tensor $W_g$ controlling the gate. Indeed, this single scalar allows for stronger supervision on lower layers, by making gradients flow more smoothly in the optimization. We apply our proposed design to Residual Networks, as illustrated in Figure~\ref{resaug}. Note that in this case the layer becomes simply $u = g(k) f_r(x,W) + x$, where $f_r$ denotes the layer's residual function. Thus, the shortcut connection allows the input to flow freely through the layer without any interference from $g(k)$. We call this model the Gated Residual Network, or GResNet. Note that learning identity mappings is again much easier in comparison to the original ResNets. Layers that have degenerated into identity mappings have no impact on the signal propagating through the network, and thus can be removed without affecting performance. The removal of such layers can be seen as a transposed application of sparse encoding (\cite{sparse}): transposing the sparsity from neurons to layers provides a way to prune them entirely from the network. Indeed, we show that, compared to ResNets, performance decays slowly in GResNets when layers are removed. We evaluate the performance of the proposed design in two experiments. First, we evaluate fully-connected GResNets on MNIST and compare them with fully-connected ResNets, showing superior performance and robustness to layer removal. Second, we apply our model to Wide ResNets (\cite{wide}) and test its performance on CIFAR, obtaining results that are superior to all previously published results (to the best of our knowledge). These findings indicate that learning identity mappings is a fundamental aspect of learning in deep networks, and designing models where this is easier seems highly effective.
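As a preview of the design detailed in the next section, a gated residual block can be written in a few lines. The following PyTorch-style sketch is an illustration rather than the exact implementation used in our experiments; the residual branch and the initialization of $k$ follow the description given later in the paper:
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedResidualBlock(nn.Module):
    # u = g(k) * f_r(x) + x, with g = ReLU and a single scalar k per block.
    def __init__(self, channels):
        super().__init__()
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.k = nn.Parameter(torch.ones(1))   # initialized to 1, as in the experiments

    def forward(self, x):
        r = self.conv1(F.relu(self.bn1(x)))    # residual branch: BN-ReLU-Conv-BN-ReLU-Conv
        r = self.conv2(F.relu(self.bn2(r)))
        return F.relu(self.k) * r + x          # any k <= 0 recovers an identity mapping

block = GatedResidualBlock(16)
print(block(torch.randn(2, 16, 32, 32)).shape)   # torch.Size([2, 16, 32, 32])
\end{verbatim}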
\section{Augmentation with Residual Gates} \subsection{Theoretical Intuition} Recall that a network's depth can always be increased without affecting its performance -- it suffices to add layers that perform identity mappings. Consider a classic fully-connected ReLU network with layers defined as $u = ReLU( \langle x,W \rangle )$. When adding a new layer, if we initialize $W$ to the identity matrix $I$, we have: \begin{equation*} u = ReLU(\langle x, I \rangle) = ReLU(x) = x \end{equation*} The last step holds since $x$ is an output of a previous ReLU layer, and $ReLU(ReLU(x)) = ReLU(x)$. Thus, adding more layers should only improve performance. However, how can a network with more layers learn to yield performance superior to that of a network with fewer layers? A key observation is that if learning identity mappings is easy, then the network with more layers is more likely to yield superior performance, as it can more easily recover the performance of a smaller network through identity mappings. The layer design of Residual Networks allows deeper models to be trained due to its shortcut connections. Note that in ResNets the identity mapping is learned when $W = 0$ instead of $W = I$. Considering a residual layer $u = ReLU( \langle x,W \rangle ) + x$, we have: \begin{equation*} u = ReLU(\langle x, 0 \rangle) + x = ReLU(0) + x = x \end{equation*} Intuitively, residual layers can degenerate into identity mappings more effectively since learning an all-zero matrix is easier than learning the identity matrix. To support this argument, consider weight parameters randomly initialized with zero mean. In this case, the point $W = 0$ is located exactly at the center of the probability distribution used to initialize the weights. However, assuming that residual layers can trivially learn the parameter set $W = 0$ implies ignoring the randomness when initializing the weights. We demonstrate this by calculating the expected component-wise distance between $W_{o}$ and the origin. Here, $W_{o}$ denotes the weight tensor after initialization and prior to any optimization. Note that the distance between $W_{o}$ and the origin captures the effort for a network to learn identity mappings: \begin{equation*} E \Big [ (W_{o} - 0)^2 \Big ] = E \Big [ W_{o}^2 \Big ] = Var \Big [ W_{o} \Big ] \end{equation*} Note that the expected squared distance per component is given by the distribution's variance, and there is no reason to assume it is negligible. Additionally, the fact that Residual Networks still suffer from optimization issues caused by depth (\cite{stdepth}) further supports this claim. Some initialization schemes propose a variance on the order of $O(\frac{1}{n})$ (\cite{glorotinit}, \cite{prelu}); however, this represents the distance for each individual parameter in $W$. For tensors with $O(n^2)$ parameters, the total expected squared distance between $W_{o}$ and the origin will therefore be on the order of $O(n)$. \subsection{Residual Gates} As previously mentioned, the key contribution of this work is the proposal of a layer design where learning a single scalar parameter suffices in order for the layer to degenerate into an identity mapping. As in Highway Networks, we propose the addition of gated shortcut connections. Our gates, however, are parametrized by a single scalar value, making them easier to analyze and learn. In our model, the effort required to learn identity mappings does not depend on the layer's dimensions, such as its width, in sharp contrast to prior models.
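For a concrete instance of this argument, consider an $n \times n$ weight matrix whose entries are initialized with variance $\frac{c}{n}$ for some constant $c$, as in the schemes above. Summing the component-wise expression over all $n^2$ entries gives an expected squared distance to the identity-inducing configuration $W = 0$ of \begin{equation*} E \Big [ \| W_{o} \|^2 \Big ] = \sum_{i,j} Var \Big [ (W_{o})_{ij} \Big ] = n^2 \cdot \frac{c}{n} = cn, \end{equation*} which grows linearly with the layer width. In the gated layer introduced below, by contrast, reaching an identity mapping only requires moving the single scalar $k$ from its initial value to zero -- a distance that is independent of the layer dimensions.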
Our design is as follows: a layer $u = f(x,W)$ becomes $u = g(k) f(x,W) + (1 - g(k)) x$, where $k$ is a scalar parameter. This design is illustrated in Figure~\ref{aug}. Note that such a layer can quickly degenerate into an identity mapping by setting $g(k)$ to $0$. Using the ReLU activation function as $g$, it suffices that $k \leq 0$ for $g(k) = 0$. By adding an extra parameter, the dimensionality of the cost surface also grows by one. This new dimension, however, can be easily understood due to the specific nature of the layer reformulation. The original surface is maintained on the $k = 1$ slice, since the gated model becomes equivalent to the original one. On the $k = 0$ slice we have an identity mapping, and the associated cost for all points in this slice is the same as the cost associated with the point $\{k = 1, W = I\}$: this follows since both parameter configurations correspond to identity mappings and are therefore equivalent. Lastly, due to the linear nature of $g(k)$ and consequently of the gates, all other slices $k \neq 0, k \neq 1$ will be linear combinations of the slices $k = 0$ and $k = 1$. We proceed to use residual layers as the basis for our design, for two reasons. First, they are the current standard for computer vision tasks. Second, ResNets lack a means to regulate the residuals; therefore, a linear gating mechanism might not only allow deeper models but could also improve performance. Thus, the residual layer is given by: \begin{equation*} u = f(x,W) = f_r(x, W) + x \end{equation*} where $f_r(x,W)$ is the layer's residual function -- in our case, \textbf{BN-ReLU-Conv-BN-ReLU-Conv}. Our approach changes this layer by adding a linear gate, yielding: \begin{align*} \begin{split} u &= g(k) f(x,W) + (1 - g(k))x \\ &= g(k) ( f_r(x, W) + x ) + (1 - g(k))x \\ &= g(k) f_r(x,W) + x \end{split} \end{align*} Our approach applied to residual layers is shown in Figure \ref{resaug}. The resulting layer maintains the shortcut connection unaltered, which according to \cite{resnet2} is a desired property when designing residual blocks. As $(1 - g(k))$ vanishes from the formulation, $g(k)$ stops acting as a dual gating mechanism and can be interpreted as a flow regulator. Note that this model introduces a single scalar parameter per layer block. This new dimension can be interpreted as discussed above, except that the slice $k = 0$ is equivalent to $\{k = 1, W = 0\}$, since an identity mapping is learned when $W = 0$ in ResNets. \section{Experiments} All models were implemented in Keras (\cite{keras}) or in Torch (\cite{t7}), and were executed on a Geforce GTX 1070. Larger models or more complex datasets, such as ImageNet (\cite{imagenet}), were not explored due to hardware limitations. \subsection{MNIST} The MNIST dataset (\cite{mnist}) is composed of $60,000$ greyscale images with $28 \times 28$ pixels. Images represent handwritten digits, resulting in a total of 10 classes. We trained three types of fully-connected models: classical plain networks, ResNets and GResNets. The networks consist of a linear layer with 50 neurons, followed by $d$ layers with 50 neurons each, and lastly a softmax layer for classification. Only the $d$ middle layers differ between the three architectures -- the first linear layer and the softmax layer are the same in all experiments. For plain networks, each layer performs a dot product, followed by Batch Normalization and a ReLU activation function.
Initial tests with pre-activations (\cite{resnet2}) resulted in poor performance on the validation set; therefore, we opted for the traditional \textbf{Dot-BN-ReLU} layer when designing Residual Networks. Each residual block consists of two layers, as is conventional. All networks were trained using Adam (\cite{adam}) with Nesterov momentum (\cite{adamnest}) for a total of 100 epochs using mini-batches of size 128. No learning rate decay was used: we kept the learning rate and momentum fixed at $0.002$ and $0.9$ throughout training. For preprocessing, we divided each pixel value by 255, normalizing the values to $[0,1]$. The training curves for classical plain networks, ResNets and GResNets with varying depth are shown in Figure \ref{mnist_loss}. The distance between the curves increases with depth, showing that the augmentation helps the training of deeper models. Table \ref{mnist_table} shows the test error for each depth and architecture. ResNets converge in experiments with $d = 50$ and $d = 100$ ($52$ and $102$ layers, respectively), while classical models do not. Gated Residual Networks perform better in all settings, and the performance boost is more noticeable at increased depths. The relative error decreased approximately $2.5 \%$ for $d = \{2,10,20\}$, $8.7 \%$ for $d=50$ and $16\%$ for $d = 100$. As observed in Table \ref{mnist_k}, the mean values of $k$ decrease as the model gets deeper, showing that shortcut connections have less impact on shallow networks. This agrees with empirical results that ResNets perform better than classical plain networks as the depth increases. We also analyzed how layer removal affects ResNets and GResNets. We compared how the deepest networks ($d = 100$) behave as residual blocks composed of 2 layers are completely removed from the models. The final value of each $k$ parameter, according to its corresponding residual block, is shown in Figure \ref{pruning}. We can observe that layers close to the middle of the network have a smaller $k$ than those at the beginning or the end. Therefore, the middle layers have less importance due to being closer to identity mappings. Results are shown in Figure \ref{pruning}. For Gated Residual Networks, we prune pairs of layers following two strategies. One consists of pruning layers in a greedy fashion, where blocks with the smallest $k$ are removed first. In the other, we remove blocks randomly. We present results using both strategies for GResNets, and only random pruning for ResNets since they lack the $k$ parameter. The greedy strategy is slightly better for Gated Residual Networks, showing that the $k$ parameter is indeed a good indicator of a layer's importance for the model, but that layers tend to assume the same level of significance. In a fair comparison, where both models are pruned randomly, GResNets retain a satisfactory performance even after half of their layers have been removed, while ResNets suffer a performance decrease after just a few layers are removed. Therefore, augmented models are not only more robust to layer removal, but can also have a fair share of their layers pruned and still perform well. Faster predictions can be generated by using a pruned version of an original model. \subsection{CIFAR} The CIFAR datasets (\cite{cifar}) each consist of $60,000$ color images with $32 \times 32$ pixels. CIFAR-10 has a total of 10 classes, including pictures of cats, birds and airplanes. The CIFAR-100 dataset is composed of the same number of images, but with a total of 100 classes.
Residual Networks have surpassed state-of-the-art results on CIFAR. We test GResNets and Wide GResNets (\cite{wide}) and compare them with their original, non-augmented counterparts. For pre-activation ResNets, as described in \cite{resnet2}, we follow the original implementation details. We set an initial learning rate of 0.1, and decrease it by a factor of 10 after 50\% and 75\% of the epochs. SGD with Nesterov momentum of 0.9 is used for optimization, and the only pre-processing consists of mean subtraction. Weight decay of 0.0001 is used for regularization, and Batch Normalization's momentum is set to 0.9. We follow the implementation from \cite{wide} for Wide ResNets. The learning rate is initialized as 0.1, and decreases by a factor of 5 after 30\%, 60\% and 80\% of the epochs. Images are mean/std normalized, and a weight decay of 0.0005 is used for regularization. When dropout is specified, we apply 0.3 dropout (\cite{dropout}) between convolutions. All other details are the same as for ResNets. For both architectures we use moderate data augmentation: images are padded with 4 pixels, and we take random crops of size $32 \times 32$ during training. Additionally, each image is horizontally flipped with $50\%$ probability. We use batch size 128 for all experiments. For all gated networks, we initialize $k$ with a constant value of $1$. One crucial question is whether weight decay should be applied to the $k$ parameters. We call this ``$k$ decay'', and also compare GResNets and Wide GResNets when it is applied with the same magnitude as the weight decay: 0.0001 for GResNets and 0.0005 for Wide GResNets. Table \ref{cifar_comp} shows the test error for two architectures: a ResNet with $n = 5$, and a Wide ResNet with $n = 4$ and widening factor $10$. Augmenting each model adds 15 and 12 parameters, respectively. We observe that $k$ decay hurts performance in both cases, indicating that the $k$ parameters should either remain unregularized or receive a more subtle regularization than the weight parameters. Because of its direct connection to layer degeneration, regularizing $k$ amounts to enforcing identity mappings, which might harm the model. As in the previous experiment, in Figure \ref{cpruning} we present the final $k$ values for each block. We can observe that the $k$ values follow an intriguing pattern: the lowest values are for the blocks of index $1$, $5$ and $9$, which are exactly the ones that increase the feature map dimension. This indicates that, in such residual blocks, the convolution performed in the shortcut connection to increase dimension is more important than the residual block itself. Additionally, the peak value for the last residual block suggests that its shortcut connection is of little importance, and could likely be removed entirely without greatly impacting the model. Results of different models on the CIFAR datasets are shown in Table \ref{cifar_all}. The training and test errors are presented in Figure \ref{cifar}. To the authors' knowledge, these are the best results on CIFAR-10 and CIFAR-100 with moderate data augmentation -- only random flips and translations. \section{Conclusion} We have proposed a novel layer design based on Highway Neural Networks, which can be applied to provide general layers with a quick way to learn identity mappings. Unlike Highway or Residual Networks, layers generated by our technique require optimizing only one parameter to degenerate into identity mappings.
By designing our method such that randomly initialized parameter sets are always close to identity mappings, we mitigate the optimization issues caused by depth. We have shown that applying our technique to ResNets yields a model that can regulate the residuals, which we name Gated Residual Networks. This model performed better in all our experiments, with negligible extra training time and parameters. Lastly, we have shown how it can be used for layer pruning, effectively removing large numbers of parameters from a network without necessarily harming its performance. \bibliographystyle{iclr2017_conference} \end{document}
Learning Identity Mappings with Residual Gates
1611.01260
Table 2: Mean k for increasingly deep Gated Residual Networks.
[ "Depth = [ITALIC] d+2", "Mean [ITALIC] k" ]
[ [ "[ITALIC] d=2", "5.58" ], [ "[ITALIC] d=10", "2.54" ], [ "[ITALIC] d=20", "1.73" ], [ "[ITALIC] d=50", "1.04" ], [ "[ITALIC] d=100", "0.67" ] ]
This agrees with empirical results that ResNets perform better than classical plain networks as the depth increases.
\documentclass{article} % For LaTeX2e \RequirePackage{amsmath,amsthm,amsfonts,amssymb} % more modern \title{Learning Identity Mappings with Residual Gates} \author{Pedro H. P. ~Savarese\\ COPPE/PESC\\ Federal University of Rio de Janeiro\\ Rio de Janeiro, Brazil \\ \texttt{savarese@land.ufrj.br} \\ } \author{Pedro H. P. ~Savarese \\ COPPE/PESC\\ Federal University of Rio de Janeiro\\ Rio de Janeiro, Brazil \\ \texttt{savarese@land.ufrj.br} \\ \And Leonardo O. ~Mazza \\ Poli \\ Federal University of Rio de Janeiro \\ Rio de Janeiro, Brazil \\ \texttt{leonardomazza@poli.ufrj.br} \\ \And Daniel R. ~Figueiredo \\ COPPE/PESC \\ Federal University of Rio de Janeiro \\ Rio de Janeiro, Brazil \\ \texttt{daniel@land.ufrj.br} \\ } \newcommand{\fix}{\marginpar{FIX}} \newcommand{\new}{\marginpar{NEW}} \begin{document} \maketitle \begin{abstract} We propose a new layer design by adding a linear gating mechanism to shortcut connections. By using a scalar parameter to control each gate, we provide a way to learn identity mappings by optimizing only one parameter. We build upon the motivation behind Residual Networks, where a layer is reformulated in order to make learning identity mappings less problematic to the optimizer. The augmentation introduces only one extra parameter per layer, and provides easier optimization by making degeneration into identity mappings simpler. We propose a new model, the Gated Residual Network, which is the result when augmenting Residual Networks. Experimental results show that augmenting layers provides better optimization, increased performance, and more layer independence. We evaluate our method on MNIST using fully-connected networks, showing empirical indications that our augmentation facilitates the optimization of deep models, and that it provides high tolerance to full layer removal: the model retains over $90 \%$ of its performance even after half of its layers have been randomly removed. We also evaluate our model on CIFAR-10 and CIFAR-100 using Wide Gated ResNets, achieving $3.65 \%$ and $18.27 \%$ error, respectively. \end{abstract} \section{Introduction} \label{introduction} As the number of layers of neural networks increase, effectively training its parameters becomes a fundamental problem (\cite{deephard}). Many obstacles challenge the training of neural networks, including vanishing/exploding gradients (\cite{hardtrain}), saturating activation functions (\cite{saturation}) and poor weight initialization (\cite{glorot}). Techniques such as unsupervised pre-training (\cite{aes}), non-saturating activation functions (\cite{relu}) and normalization (\cite{bn}) target these issues and enable the training of deeper networks. However, stacking more than a dozen layers still lead to a hard to train model. Recently, models such as Residual Networks (\cite{resnet1}) and Highway Neural Networks (\cite{highway}) permitted the design of networks with hundreds of layers. A key idea of these models is to allow for information to flow more freely through the layers, by using shortcut connections between the layer's input and output. This layer design greatly facilitates training, due to shorter paths between the lower layers and the network's error function. In particular, these models can more easily learn identity mappings in the layers, thus allowing the network to be deeper and learn more abstract representations (\cite{representations}). Such networks have been highly successful in many computer vision tasks. 
On the theoretical side, it is suggested that depth contributes exponentially more to the representational capacity of networks than width (\cite{exp1} \cite{exp2} \cite{exp3} \cite{exp4}). This agrees with the increasing depth of winning architectures on challenges such as ImageNet (\cite{resnet1} \cite{googlenet}). Increasing the depth of networks significantly increases its representational capacity and consequently its performance, an observation supported by theory (\cite{exp1} \cite{exp2} \cite{exp3} \cite{exp4}) and practice (\cite{resnet1} \cite{googlenet}). Moreover, \cite{resnet1} showed that, by construction, one can increase a network's depth while preserving its performance. These two observations suggest that it suffices to stack more layers to a network in order to increase its performance. However, this behavior is not observed in practice even with recently proposed models, in part due to the challenge of training ever deeper networks. In this work we aim to improve the training of deep networks by proposing a layer design that builds on Residual Networks and Highway Neural Networks. The key idea is to facilitate the learning of identity mappings by introducing a {\em gating mechanism} to the shortcut connection, as illustrated in Figure~\ref{aug}. Note that the shortcut connection is controlled by a gate that is parameterized with a scalar, $k$. This is a key difference from Highway Networks, where a tensor is used to regulate the shortcut connection, along with the incoming data. The idea of using a scalar is simple: it is easier to learn $k=0$ than to learn $W_g=0$ for a weight tensor $W_g$ controlling the gate. Indeed, this single scalar allows for stronger supervision on lower layers, by making gradients flow more smoothly in the optimization. We apply our proposed network design to Residual Networks, as illustrated in Figure~\ref{resaug}. Note that in this case the layer becomes simply $u = g(k) f_r(x,W) + x$, where $f_r$ denotes the layer's residual function. Thus, the shortcut connection allows the input to flow freely without any interference of $g(k)$ through the layer. We will call this model Gated Residual Network, or GResNets. Again, note that learning identity mappings is again much easier in comparison to the original ResNets. Note that layers that degenerated into identity mappings have no impact in the signal propagating through the network, and thus can be removed without affecting performance. The removal of such layers can be seen as a transposed application of sparse encoding (\cite{sparse}): transposing the sparsity from neurons to layers provides a form to prune them entirely from the network. Indeed, we show that performance decays slowly in GResNets when layers are removed, when compared to ResNets. We evaluate the performance of the proposed design in two experiments. First, we evaluate fully-connected GResNets on MNIST and compare it with fully-connected ResNets, showing superior performance and robustness to layer removal. Second, we apply our model to Wide ResNets (\cite{wide}) and test its performance on CIFAR, obtaining results that are superior to all previously published results (to the best of our knowledge). These findings indicate that learning identity mappings is a fundamental aspect of learning in deep networks, and designing models where this is easier seems highly effective. 
\section{Augmentation with Residual Gates} \subsection{Theoretical Intuition} Recall that a network's depth can always be increased without affecting its performance -- it suffices to add layers that perform identity mappings. Consider a classic fully-connected ReLU network with layers defined as $u = ReLU( \langle x,W \rangle )$. When adding a new layer, if we initialize $W$ to the identity matrix $I$, we have: \begin{equation*} u = ReLU(\langle x, I \rangle) = ReLU(x) = x \end{equation*} The last step holds since $x$ is an output of a previous ReLU layer, and $ReLU(ReLU(x)) = ReLU(x)$. Thus, adding more layers should only improve performance. However, how can a network with more layers learn to yield performance superior than a network with less layers? A key observation is that if learning identity mapping is easy, then the network with more layers is more likely to yield superior performance, as it can more easily recover the performance of a smaller network through identity mappings. The layer design of Residual Networks allows for deeper models to be trained due to its shortcut connections. Note that in ResNets the identity mapping is learned when $W = 0$ instead of $W = I$. Considering a residual layer $u = ReLU( \langle x,W \rangle ) + x$, we have: \begin{equation*} u = ReLU(\langle x, 0 \rangle) + x = ReLU(0) + x = x \end{equation*} Intuitively, residual layers can degenerate into identity mappings more effectively since learning an all-zero matrix is easier than learning the identity matrix. To support this argument, consider weight parameters randomly initialized with zero mean. Hence, the point $W = 0$ is located exactly in the center of the probability mass distribution used to initialize the weights. However, assuming that residual layers can trivially learn the parameter set $W = 0$ implies ignoring the randomness when initializing the weights. We demonstrate this by calculating the expected component-wise distance between $W_{o}$ and the origin. Here, $W_{o}$ denotes the weight tensor after initialization and prior to any optimization. Note that the distance between $W_{o}$ and the origin captures the effort for a network to learn identity mappings: \begin{equation*} E \Big [ (W_{o} - 0)^2 \Big ] = E \Big [ W_{o}^2 \Big ] = Var \Big [ W_{o} \Big ] \end{equation*} Note that the distance is given by the distribution's variance, and there is no reason to assume it to be negligible. Additionally, the fact that Residual Networks still suffer from optimization issues caused by depth (\cite{stdepth}) further supports this claim. Some initialization schemes propose a variance in the order of $O(\frac{1}{n})$ (\cite{glorotinit}, \cite{prelu}), however this represents the distance for each individual parameter in $W$. For tensors with $O(n^2)$ parameters, the total distance -- either absolute or Euclidean -- between $W_{o}$ and the origin will be in the order of $O(n)$. \subsection{Residual Gates} As previously mentioned, the key contribution in this work is the proposal of a layer design where learning a single scalar parameter suffices in order for the layer to degenerate into an identity mapping. As in Highway Networks, we propose the addition of gated shortcut connections. Our gates, however, are parametrized by a single scalar value, being easier to analyze and learn. In our model, the effort required to learn identity mappings does not depend on any parameter, such as the layer width, in sharp contrast to prior models. 
Our design is as follows: a layer $u = f(x,W)$ becomes $u = g(k) f(x,W) + (1 - g(k)) x$, where $k$ is a scalar parameter. This design is illustrated in Figure~\ref{aug}. Note that such layer can quickly degenerate by setting $g(k)$ to $0$. Using the ReLU activation function as $g$, it suffices that $k \leq 0$ for $g(k) = 0$. By adding an extra parameter, the dimensionality of the cost surface also grows by one. This new dimension, however, can be easily understood due to the specific nature of the layer reformulation. The original surface is maintained on the $k = 1$ slice, since the gated model becomes equivalent to the original one. On the $k = 0$ slice we have an identity mapping, and the associated cost for all points in such slice is the same cost associated with the point $\{k = 1, W = I\}$: this follows since both parameter configurations correspond to identity mappings, therefore being equivalent. Lastly, due to the linear nature of $g(k)$ and consequently of the gates, all other slices $k \neq 0, k \neq 1$ will be a linear combination between the slices $k = 0$ and $k = 1$. We proceed to use residual layers as the basis for our design, for two reasons. First, they are the current standard for computer vision tasks. Second, ResNets lack means to regulate the residuals, therefore a linear gating mechanism might not only allow deeper models, but could also improve performance. Thus, the residual layer is given by: \begin{equation*} u = f(x,W) = f_r(x, W) + x \end{equation*} where $f_r(x,W)$ is the layer's residual function -- in our case, \textbf{BN-ReLU-Conv-BN-ReLU-Conv}. Our approach changes this layer by adding a linear gate, yielding: \begin{align*} \begin{split} u &= g(k) f(x,W) + (1 - g(k))x \\ &= g(k) ( f_r(x, W) + x ) + (1 - g(k))x \\ &= g(k) f_r(x,W) + x \end{split} \end{align*} Our approach applied to residual layers is shown in Figure \ref{resaug}. The resulting layer maintains the shortcut connection unaltered, which according to \cite{resnet2} is a desired property when designing residual blocks. As $(1 - g(k))$ vanishes from the formulation, $g(k)$ stops acting as a dual gating mechanism and can be interpreted as a flow regulator. Note that this model introduces a single scalar parameter per layer block. This new dimension can be interpreted as discussed above, except that the slice $k = 0$ is equivalent to $\{k = 1, W = 0\}$, since an identity mapping is learned when $W = 0$ in ResNets. \section{Experiments} All models were implemented on Keras (\cite{keras}) or on Torch (\cite{t7}), and were executed on a Geforce GTX 1070. Larger models or more complex datasets, such as the ImageNet (\cite{imagenet}), were not explored due to hardware limitations. \subsection{MNIST} The MNIST dataset (\cite{mnist}) is composed of $60,000$ greyscale images with $28 \times 28$ pixels. Images represent handwritten digits, resulting in a total of 10 classes. We trained three types of fully-connected models: classical plain networks, ResNets and GResNets. The networks consist of a linear layer with 50 neurons, followed by $d$ layers with 50 neurons each, and lastly a softmax layer for classification. Only the $d$ middle layers differ between the three architectures -- the first linear layer and the softmax layer are the same in all experiments. For plain networks, each layer performs dot product, followed by Batch Normalization and a ReLU activation function. 
Initial tests with pre-activations (\cite{resnet2}) resulted in poor performance on the validation set, therefore we opted for the traditional \textbf{Dot-BN-ReLU} layer when designing Residual Networks. Each residual block is consists of two layers, as conventional. All networks were trained using Adam (\cite{adam}) with Nesterov momentum (\cite{adamnest}) for a total of 100 epochs using mini-batches of size 128. No learning rate decay was used: we kept the learning rate and momentum fixed to $0.002$ and $0.9$ during the whole training. For preprocessing, we divided each pixel value by 255, normalizing their values to $[0,1]$. The training curves for classical plain networks, ResNets and GResNets with varying depth are shown in Figure \ref{mnist_loss}. The distance between the curves increase with the depth, showing that the augmentation helps the training of deeper models. Table \ref{mnist_table} shows the test error for each depth and architecture. ResNets converge in experiments with $d = 50$ and $d = 100$ ($52$ and $102$ layers, respectively), while classical models do not. Gated Residual Networks perform better in all settings, and the performance boost is more noticeable with increased depths. The relative error decreased approximately $2.5 \%$ for $d = \{2,10,20\}$, $8.7 \%$ for $d=50$ and $16\%$ for $d = 100$. As observed in Table \ref{mnist_k}, the mean values of $k$ decrease as the model gets deeper, showing that shortcut connections have less impact on shallow networks. This agrees with empirical results that ResNets perform better than classical plain networks as the depth increases. We also analyzed how layer removal affects ResNets and GResNets. We compared how the deepest networks ($d = 100$) behave as residual blocks composed of 2 layers are completely removed from the models. The final values for each $k$ parameter, according to its corresponding residual block, is shown in Figure \ref{pruning}. We can observe that layers close to the middle of the network have a smaller $k$ than these in the beginning or the end. Therefore, the middle layers have less importance by due to being closer to identity mappings. Results are shown in Figure \ref{pruning}. For Gated Residual Networks, we prune pairs of layers following two strategies. One consists of pruning layers in a greedy fashion, where blocks with the smallest $k$ are removed first. In the other we remove blocks randomly. We present results using both strategies for GResNets, and only random pruning for ResNets since they lack the $k$ parameter. The greedy strategy is slightly better for Gated Residual Networks, showing that the $k$ parameter is indeed a good indicator of a layer's importance for the model, but that layers tend to assume the same level of significance. In a fair comparison, where both models are pruned randomly, GResNets retain a satisfactory performance even after half of its layers have been removed, while ResNets suffer performance decrease after just a few layers. Therefore augmented models are not only more robust to layer removal, but can have a fair share of their layers pruned and still perform well. Faster predictions can be generated by using a pruned version of an original model. \subsection{CIFAR} The CIFAR datasets (\cite{cifar}) consists of $60,000$ color images with $32 \times 32$ pixels each. CIFAR-10 has a total of 10 classes, including pictures of cats, birds and airplanes. The CIFAR-100 dataset is composed of the same number of images, however with a total of 100 classes. 
Residual Networks have surpassed state-of-the-art results on CIFAR. We test GResNets, Wide GResNets (\cite{wide}) and compare them with their original, non-augmented models. For pre-activation ResNets, as described in \cite{resnet2}, we follow the original implementation details. We set an initial learning rate of 0.1, and decrease it by a factor of 10 after 50\% and 75\% of the epochs. SGD with Nesterov momentum of 0.9 is used for optimization, and the only pre-processing consists of mean subtraction. Weight decay of 0.0001 is used for regularization, and Batch Normalization's momentum is set to 0.9. We follow the implementation from \cite{wide} for Wide ResNets. The learning rate is initialized to 0.1, and decreases by a factor of 5 after 30\%, 60\% and 80\% of the epochs. Images are mean/std normalized, and a weight decay of 0.0005 is used for regularization. When dropout is specified, we apply 0.3 dropout (\cite{dropout}) between convolutions. All other details are the same as for ResNets. For both architectures we use moderate data augmentation: images are padded with 4 pixels, and we take random crops of size $32 \times 32$ during training. Additionally, each image is horizontally flipped with $50\%$ probability. We use batch size 128 for all experiments. For all gated networks, we initialize $k$ with a constant value of $1$. One crucial question is whether weight decay should be applied to the $k$ parameters. We call this ``$k$ decay'', and also compare GResNets and Wide GResNets when it is applied with the same magnitude as the weight decay: 0.0001 for GResNet and 0.0005 for Wide GResNet. Table \ref{cifar_comp} shows the test error for two architectures: a ResNet with $n = 5$, and a Wide ResNet with $n = 4$, $n = 10$. Augmenting each model adds 15 and 12 parameters, respectively. We observe that $k$ decay hurts performance in both cases, indicating that the $k$ parameters should either remain unregularized or receive a weaker regularization than the weight parameters. Due to its direct connection to layer degeneration, regularizing $k$ results in enforcing identity mappings, which might harm the model. As in the previous experiment, in Figure \ref{cpruning} we present the final $k$ values for each block. We can observe that the $k$ values follow an intriguing pattern: the lowest values are for the blocks of index $1$, $5$ and $9$, which are exactly the ones that increase the feature map dimension. This indicates that, in such residual blocks, the convolution performed in the shortcut connection to increase dimension is more important than the residual block itself. Additionally, the peak value for the last residual block suggests that its shortcut connection is of little importance, and could as well be fully removed without greatly impacting the model. Results of different models on the CIFAR datasets are shown in Table \ref{cifar_all}. The training and test errors are presented in Figure \ref{cifar}. To the authors' knowledge, those are the best results on CIFAR-10 and CIFAR-100 with moderate data augmentation -- only random flips and translations. \section{Conclusion} We have proposed a novel layer design based on Highway Neural Networks, which can be applied to provide general layers with a quick way to learn identity mappings. Unlike Highway or Residual Networks, layers generated by our technique require optimizing only one parameter to degenerate into an identity mapping. 
By designing our method such that randomly initialized parameter sets are always close to identity mappings, our design suffers less from the optimization issues caused by depth. We have shown that applying our technique to ResNets yields a model that can regulate the residuals, which we name Gated Residual Networks. This model performed better in all our experiments, with a negligible increase in training time and parameter count. Lastly, we have shown how it can be used for layer pruning, effectively removing large numbers of parameters from a network without necessarily harming its performance. \bibliographystyle{iclr2017_conference} \end{document}
Regularizing CNNs with Locally Constrained Decorrelations
1611.01967
Table 1: Count of the Flops for the models used in this paper: the 3-hidden-layer MLP and the 110-layer ResNet we use later in the experiments section when not regularized, using DeCov (Cogswell et al. (2016)) and using OrthoReg. Batch size is set to 128, the same we use to train the ResNet. Regularizing weights is orders of magnitude faster than regularizing activations.
[ "[EMPTY]", "[BOLD] Base", "[BOLD] DeCov", "[BOLD] OrthoReg" ]
[ [ "[BOLD] MLP", "8.1 108", "5.2 1010", "9.7 109" ], [ "[BOLD] ResNet-110", "6.5 1010", "3.4 1014", "3.4 108" ] ]
Then, we show that models generalize better when different feature detectors are enforced to be dissimilar. Although it might seem contradictory, CNNs can benefit from having repeated filter weights with different biases, as shown by Li et al. However, those repeated filters must be shared copies: adding too many unshared filter weights to CNNs increases overfitting and the need for stronger regularization (Zagoruyko & Komodakis). Thus, our proposed method and multi-bias neural networks are complementary, since they jointly increase the representation power of the network with fewer parameters.
\documentclass{article} % For LaTeX2e %ctan.org\pkg\algorithms \title{Regularizing CNNs with Locally Constrained Decorrelations} \author{Pau Rodr\'{i}guez$^\dagger$, Jordi Gonz\`{a}lez$^{\dagger, \ddagger}$, Guillem Cucurull$^\dagger$, Josep M. Gonfaus$^\ddagger$, Xavier Roca$^{\dagger,\ddagger}$ \\ $\dagger$Computer Vision Center - Univ. Aut\`onoma de Barcelona (UAB), 08193 Bellaterra, Catalonia Spain\\ $^\ddagger$Visual Tagging Services, Campus UAB, 08193 Bellaterra, Catalonia Spain } \newcommand{\fix}{\marginpar{FIX}} \newcommand{\new}{\marginpar{NEW}} \iclrfinalcopy % Uncomment for camera-ready version \begin{document} \maketitle \begin{abstract} Regularization is key for deep learning since it allows training more complex models while keeping lower levels of overfitting. However, the most prevalent regularizations do not leverage all the capacity of the models since they rely on reducing the effective number of parameters. Feature decorrelation is an alternative for using the full capacity of the models but the overfitting reduction margins are too narrow given the overhead it introduces. In this paper, we show that regularizing negatively correlated features is an obstacle for effective decorrelation and present OrthoReg, a novel regularization technique that locally enforces feature orthogonality. As a result, imposing locality constraints in feature decorrelation removes interferences between negatively correlated feature weights, allowing the regularizer to reach higher decorrelation bounds, and reducing the overfitting more effectively. In particular, we show that the models regularized with OrthoReg have higher accuracy bounds even when batch normalization and dropout are present. Moreover, since our regularization is directly performed on the weights, it is especially suitable for fully convolutional neural networks, where the weight space is constant compared to the feature map space. As a result, we are able to reduce the overfitting of state-of-the-art CNNs on CIFAR-10, CIFAR-100, and SVHN. \end{abstract} \section{Introduction} Neural networks perform really well in numerous tasks even when initialized randomly and trained with Stochastic Gradient Descent (SGD) (see \cite{krizhevsky_imagenet_2012}). Deeper models, like Googlenet (\cite{szegedy2015going}) and Deep Residual Networks (\cite{szegedy2015going, He2015}) are released each year, providing impressive results and even surpassing human performances in well-known datasets such as the Imagenet (\cite{russakovsky2015imagenet}). This would not have been possible without the help of regularization and initialization techniques which solve the overfitting and convergence problems that are usually caused by data scarcity and the growth of the architectures. From the literature, two different regularization strategies can be defined. The first ones consist in reducing the complexity of the model by (i) reducing the effective number of parameters with weight decay (\cite{nowlan1992simplifying}), and (ii) randomly dropping activations with Dropout (\cite{srivastava2014dropout}) or dropping weights with DropConnect (\cite{wan_regularization_2013}) so as to prevent feature co-adaptation. Due to their nature, although this set of strategies have proved to be very effective, they do not leverage all the capacity of the models they regularize. The second group of regularizations is those which improve the effectiveness and generality of the trained model without reducing its capacity. 
In this second group, the most relevant approaches decorrelate the weights or feature maps, e.g. \cite{bengio2009slow} introduced a new criterion so as to learn slow, decorrelated features while pre-training models. Along the same lines, \cite{bao2013incoherent} presented ``incoherent training'', a regularizer for reducing the correlation of the network activations or feature maps in the context of speech recognition. Although regularizations in the second group are promising and have already been used to reduce overfitting in different tasks, even in the presence of Dropout (as shown by \cite{cogswell2015reducing}), they are seldom used in the large-scale image recognition domain because of the small improvement margins they provide together with the computational overhead they introduce. Although they are not directly presented as regularizers, there are other strategies to reduce overfitting, such as Batch Normalization (\cite{ioffe2015batch}), which decreases overfitting by reducing the internal covariate shift. In the same line, initialization strategies such as ``Xavier'' (\cite{glorot_understanding_2010}) or ``He'' (\cite{he2015delving}) also keep the same variance at both the input and output of the layers in order to preserve propagated signals in deep neural networks. Orthogonal initialization techniques are another family, which set the weights in a decorrelated initial state so as to condition the network training to converge to better representations. For instance, \cite{mishkin_all_2015} propose to initialize the network with decorrelated features using orthonormal initialization (\cite{saxe_exact_2013}) while normalizing the variance of the outputs as well. In this work we hypothesize that regularizing negatively correlated features is an obstacle to achieving better results, and we introduce OrthoReg, a novel regularization technique that addresses the performance margin issue by only regularizing positively correlated feature weights. Moreover, OrthoReg is computationally efficient since it only regularizes the feature weights, which makes it very suitable for the latest CNN models. We verify our hypothesis through a series of experiments: first using MNIST as a proof of concept, and secondly by regularizing wide residual networks on CIFAR-10, CIFAR-100, and SVHN (\cite{netzer2011reading}), achieving the lowest error rates on these datasets to the best of our knowledge. 
In other words, two neurons with the same or slightly different weights will produce very similar outputs. In order to reduce the redundancy present in the network parameters, one should maximize the amount of information encoded by each neuron. From an information theory point of view, this means one should not be able to predict the output of a neuron given the output of the rest of the neurons of the layer. However, this measure requires batch statistics, huge joint probability tables, and it would have a high computational cost. In this paper, we will focus on the weights correlation rather than activation independence since it still is an open problem in many neural network models and it can be addressed without introducing too much overhead, see Table \ref{tab:resnet_flops}. Then, we show that models generalize better when different feature detectors are enforced to be dissimilar. Although it might seem contradictory, CNNs can benefit from having repeated filter weights with different biases, as shown by \cite{li2016multi}. However, those repeated filters must be shared copies and adding too many unshared filter weights to CNNs increases overfitting and the need for stronger regularization (\cite{zagoruyko2016wide}). Thus, our proposed method and multi-bias neural networks are complementary since they jointly increase the representation power of the network with fewer parameters. In order to find a good target to optimize so as to reduce the correlation between weights, it is first required to find a metric to measure it. In this paper, we propose to use the cosine similarity between feature detectors to express how strong is their relationship. Note that the cosine similarity is equivalent to the Pearson correlation for mean-centered normalized vectors, but we will use the term correlation for the sake of clarity. \subsection{Orthogonal weight regularization} This section introduces the orthogonal weight regularization, a regularization technique that aims to reduce feature detector correlation enforcing local orthogonality between all pairs of weight vectors. In order to keep the magnitudes of the detectors unaffected, we have chosen the cosine similarity between the vector pairs in order to solely focus on the vectors angle $\beta \in [-\pi,\pi]$. Then, given any pair of feature vectors of the same size $\theta_1,\theta_2$ the cosine of their relative angle is: \begin{equation} \cos(\theta_1,\theta_2) = \frac{\langle \theta_1, \theta_2\rangle}{||\theta_1|| ||\theta_2||} \label{eq:cosine_sim} \end{equation} Where $\langle \theta_1, \theta_2\rangle$ denotes the inner product between $\theta_1$ and $\theta_2$. We then square the cosine similarity in order to define a regularization cost function for steepest descent that has its local minima when vectors are orthogonal: \begin{equation} C(\theta) = \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1,j\ne i}^{n} \cos^2(\theta_i,\theta_j) = \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1,j\ne i}^{n}\Big(\frac{\langle \theta_i, \theta_j \rangle}{||\theta_i|| ||\theta_j||}\Big)^2 \label{eq:target_loss} \end{equation} Where $\theta_i$ are the weights connecting the output of the layer $l-1$ to the neuron $i$ of the layer $l$, which has $n$ hidden units. Interestingly, minimizing this cost function relates to the minimization of the Frobenius norm of the cross-covariance matrix without the diagonal. 
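As an illustration of this squared-cosine cost, the following NumPy sketch computes the global decorrelation loss for one layer, together with its gradient with respect to the row-normalized detectors; it assumes each row of theta is one (flattened) feature detector, matches the vectorized update derived in the next paragraph, and is a sketch rather than the authors' Torch implementation.
\begin{verbatim}
import numpy as np

def global_orthoreg(theta):
    """Global loss 0.5 * sum_{i != j} cos^2(theta_i, theta_j) and its gradient
    with respect to the row-normalized detectors (one detector per row)."""
    t = theta / np.linalg.norm(theta, axis=1, keepdims=True)
    cos = t @ t.T                 # pairwise cosine similarities
    np.fill_diagonal(cos, 0.0)    # ignore each detector with itself
    loss = 0.5 * np.sum(cos ** 2)
    grad = cos @ t                # row i: sum_k <theta_i, theta_k> * theta_k
    return loss, grad
\end{verbatim}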
This cost will be added to the global cost of the model $J(\theta;X,y)$, where $X$ are the inputs and $y$ are the labels or targets, obtaining $\tilde{J}(\theta;X,y) = J(\theta;X,y) + \gamma C(\theta)$. Note that $\gamma$ is an hyperparameter that weights the relative contribution of the regularization term. We can now define the gradient with respect to the parameters: \begin{equation} \frac{\delta}{\delta\theta_{(i,j)}} C(\theta) = \sum_{k=1,k\ne i}^{n} \frac{\theta_{(k,j)}\langle \theta_{i},\theta_{k} \rangle}{\langle \theta_i, \theta_i \rangle \langle \theta_{k}, \theta_{k} \rangle} - \frac{\theta_{(i,j)} \langle \theta_i, \theta_{k} \rangle^2}{\langle \theta_i, \theta_i \rangle^2 \langle \theta_{k}, \theta_{k} \rangle } \label{eq:target_dloss} \end{equation} The second term is introduced by the magnitude normalization. As magnitudes are not relevant for the vector angle problem, this equation can be simplified just by assuming normalized feature detectors: \begin{equation} \frac{\delta}{\delta \theta_{(i,j)}} C(\theta) = \sum_{k=1,k\ne i}^{n} \theta_{(k,j)}\langle \theta_{i},\theta_{k} \rangle \label{eq:target_dloss_simple} \end{equation} We then add eq. \ref{eq:target_dloss_simple} to the backpropagation gradient: \begin{equation} \Delta \theta_{(i,j)} = -\alpha\Big(\nabla J_{\theta_{(i,j)}} + \gamma \sum_{k=1,k\ne i}^{n} \theta_{(k,j)}\langle \theta_{i},\theta_{k}\rangle \Big) \label{eq:target_backprop_loss} \end{equation} Where $\alpha$ is the global learning rate coefficient, $J$ any target loss function for the backpropagation algorithm. Although this update can be done sequentially for each feature-detector pair, it can be vectorized to speedup computations. Let $\Theta$ be a matrix where each row is a feature detector $\theta_{(I,j)}$ corresponding to the normalized weights connecting the whole input $I$ of the layer to the neuron $j$. Then, $\Theta \Theta^t$ contains the inner product of each pair of vectors $i$ and $j$ in each position $i,j$. Subsequently, we subtract the diagonal so as to ignore the angle from each feature with respect to itself and multiply by $\Theta$ to compute the final value corresponding to the sum in eq. \ref{eq:target_backprop_loss}: \begin{equation} \Delta \Theta = -\alpha\Big(\nabla J_{\Theta} + \gamma (\Theta \Theta^t - diag(\Theta \Theta^t)) \Theta \Big) \label{eq:target_backprop_loss_reg} \end{equation} Where the second term is $\nabla C_\Theta$. Algorithm \ref{alg:reg_step} summarizes the steps in order to apply OrthoReg. \begin{algorithm}[!t] \caption{Orthogonal Regularization Step.} \label{alg:reg_step} \begin{algorithmic}[1] \Require{Layer parameter matrices $\Theta^{l}$, regularization coefficient $\gamma$, global learning rate $\alpha$}. \For{each layer $l$ to regularize} \State{$\eta_1 = norm\_rows(\Theta^l)$} \Comment{Keep norm of the rows of $\Theta^l$.} \State{$\Theta_1^l = div\_rows(\Theta^l, \eta_1)$} \Comment{Keep a $\Theta_1^{l}$ with normalized rows.} \State{$innerProdMat = \Theta^l_1 transpose(\Theta^l_1)$} \State{$\nabla\Theta^l_1 = \gamma(innerProdMat - diag(innerProdMat)) \Theta^l_1$} \Comment{Second term in eq. \ref{eq:target_backprop_loss_reg}} \State{$\Delta\Theta^l = -\alpha(\nabla J_{\Theta^l} + \gamma\nabla\Theta^l_1) $}\Comment{Complete eq. \ref{eq:target_backprop_loss_reg}.} \EndFor \end{algorithmic} \end{algorithm} \subsection{Negative Correlations} Note that the presented algorithm, based on the cosine similarity, penalizes any kind of correlation between all pairs of feature detectors, i.e. 
the positive and the negative correlations, see Figure \ref{fig:orig_loss}. However, negative correlations are related to inhibitory connections, competitive learning, and self-organization. In fact, there is evidence that negative correlations can help a neural population to increase the signal-to-noise ratio (\cite{chelaru2016negative}) in the V1. In order to find out the advantages of keeping negative correlations, we propose to use an exponential to squash the gradients for angles greater than $\frac{\pi}{2} (orthogonal)$: \begin{equation} C(\theta) = \sum_{i=1}^{n}\sum_{j=1,j\ne i}^{n} \log(1 + e^{\lambda (cos(\theta_i,\theta_j)-1)}) = \log(1 + e^{\lambda (\langle \theta_i,\theta_j\rangle -1)}), \ ||\theta_i||=||\theta_j||=1 \label{eq:target_loss_negcorr} \end{equation} Where $\lambda$ is a coefficient that controls the minimum angle-of-influence of the regularizer, i.e. the minimum angle between two feature weights so that there exists a gradient pushing them apart, see Figure \ref{fig:new_loss}. We empirically found that the regularizer worked well for $\lambda=10$, see Figure \ref{fig:toy_dataset_reg}. Note that when $\lambda \simeq 10$ the loss and the gradients approximate to zero when vectors are at more than $\frac{\pi}{2}$ (orthogonal). As a result of incorporating the squashing function on the cosine similarity, negatively correlated feature weights will not be regularized. This is different from all previous approaches and the loss presented in eq. \ref{eq:target_loss}, where all pairs of weight vectors influence each other. Thus, from now on, the loss in eq. \ref{eq:target_loss} is named as global loss and the loss in eq. \ref{eq:target_loss_negcorr} is named as local loss. The derivative of eq. \ref{eq:target_loss_negcorr} is: \begin{equation} \frac{\delta}{\delta\theta_{(i,j)}}C(\theta) = \sum_{k=1, k\neq i}^n \lambda\frac{e^{\lambda \langle \theta_{i}, \theta_{k} \rangle}\theta_{({k},{j})}}{e^{\lambda \langle \theta_{i}, \theta_{k} \rangle} + e^\lambda} \label{eq:target_dloss_negcorr} \end{equation} Then, given the element-wise exponential operator $\exp$, we define the following expression in order to simplify the formulas: \begin{equation} \hat{\Theta} = \exp({\lambda(\Theta\Theta^t)}) \end{equation} and thus, the $\Delta$ in vectorial form can be formulated as: \begin{equation} \nabla C_\Theta = \lambda\frac{(\hat{\Theta} - diag(\hat{\Theta}))\Theta}{\hat{\Theta} - diag(\hat{\Theta}) + e^\lambda} \end{equation} In order to provide a visual example, we have created a $2D$ toy dataset and used the previous equations for positive and negative $\gamma$ values, see Figure \ref{fig:toy_example}. As expected, it can be seen that the angle between all pairs of adjacent feature weights becomes more uniform after regularization. Note that Figure \ref{fig:toy_dataset_reg} shows that regularization with the global loss (eq. \ref{eq:target_loss}) results in less uniform angles than using the local loss as shown in \ref{fig:toy_dataset_reg_2} (which corresponds to the local loss presented in eq. \ref{eq:target_loss_negcorr}) because vectors in opposite quadrants still influence each other. This is why in Figure \ref{fig:nnangle}, it can be seen that the mean nearest neighbor angle using the global loss (b) is more unstable than the local loss (c). As a proof of concept, we also performed gradient ascent, which minimizes the angle between the vectors. 
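A NumPy sketch of the gradient of this local loss is given below; it assumes unit-norm rows (one feature detector per row) and uses $\lambda = 10$ by default, and is an illustration rather than the reference implementation. Because the squashing term vanishes for detector pairs that are roughly orthogonal or negatively correlated, such pairs receive almost no regularization gradient.
\begin{verbatim}
import numpy as np

def local_orthoreg_grad(theta, lam=10.0):
    """Gradient of the local (negative-correlation-preserving) loss
    log(1 + exp(lam * (cos - 1))), for unit-norm detector rows."""
    inner = theta @ theta.T              # cosine similarities (unit rows)
    np.fill_diagonal(inner, 0.0)
    e = np.exp(lam * inner)
    coeff = lam * e / (e + np.exp(lam))  # squashed pairwise weights
    np.fill_diagonal(coeff, 0.0)         # exclude each detector with itself
    return coeff @ theta
\end{verbatim}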
Thus, in Figures \ref{fig:toy_dataset_unreg} and \ref{fig:toy_dataset_unreg_2}, it can be seen that the locality introduced by the local loss reaches a stable configuration where feature weights with an angle greater than $\frac{\pi}{2}$ are too far apart to attract each other. The effects of global and local regularizations on Alexnet, VGG-16 and a 50-layer ResNet are shown in Figure \ref{fig:real_feature_angle}. As can be seen, OrthoReg reaches higher decorrelation bounds. Lower decorrelation peaks are still observed when the input dimensionality of the layers is smaller than the output dimensionality, since all vectors cannot be orthogonal at the same time. In this case, local regularization largely outperforms global regularization since it removes interferences caused by negatively correlated feature weights. This suggests why increasing the size of fully connected layers has not improved network performance. \section{Experiments} In this section we provide a set of experiments that verify that (i) training with the proposed regularization increases the performance of naive unregularized models, (ii) negatively correlated feature weights are useful, and (iii) the proposed regularization improves the performance of state-of-the-art models. \subsection{Verification experiments} As a sanity check, we first train a three-hidden-layer Multi-Layer Perceptron (MLP) with \texttt{ReLU} non-linearities on the MNIST dataset (\cite{lecun1998gradient}). Our code is based on the \texttt{train-a-digit-classifier} example included in \texttt{torch/demos}\footnote{\label{note1}\url{https://github.com/torch/demos}}, which uses an upsampled version of the dataset ($32\times 32$). The only pre-processing applied to the data is a global standardization. The model is trained with SGD and a batch size of $200$ for $200$ epochs. Neither momentum nor weight decay was applied. By default, the magnitude of the weights in these experiments is recovered after each regularization step in order to prove that the regularization only affects their angle. \textbf{Sensitivity to hyperparameters.} We train a three-hidden-layer MLP with 1024 hidden units, and different $\gamma$ and $\lambda$ values, so as to verify how they affect the performance of the model. Figure \ref{fig:mnist_error_gamma} shows that the model effectively achieves the best error rate for the highest gamma value ($\gamma=1$), thus proving the advantages of the regularization. In Figure \ref{fig:mnist_overfitting}, we verify that higher regularization rates produce more general models. Figure \ref{fig:mnist_lambda} depicts the sensitivity of the model to $\lambda$. As expected, the best value is found when $\lambda$ corresponds to orthogonality ($\lambda \simeq 10$). \textbf{Negative Correlations.} Figure \ref{fig:mnist_negcorr} highlights the difference between regularizing with the global or the local regularizer. Although both regularizations reach better error rates than the unregularized counterpart, the local regularization is better than the global one. This confirms the hypothesis that negative correlations are useful and thus, performance decreases when we reduce them. \textbf{Compatibility with initialization and dropout.} To demonstrate that the proposed regularization can help even when other regularizations are present, we trained a CNN with (i) dropout (\texttt{c32-c64-l512-d0.5-l10})\footnote{$cxx$ = convolution with $xx$ filters. $lxx$ = fully-connected with $xx$ units. $dxx$ = dropout with prob $xx$.} or (ii) LSUV initialization (\cite{mishkin_all_2015}). 
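The verification setup above notes that weight magnitudes are restored after each regularization step, so that only the angles between detectors change; a minimal sketch of that restoration (assuming one detector per row, and not the authors' code) could look as follows.
\begin{verbatim}
import numpy as np

def restore_magnitudes(theta_new, theta_old):
    """Rescale each updated feature detector back to its pre-update norm, so
    that the regularization step only changes the angles between detectors."""
    old_norms = np.linalg.norm(theta_old, axis=1, keepdims=True)
    new_norms = np.linalg.norm(theta_new, axis=1, keepdims=True)
    return theta_new * (old_norms / np.maximum(new_norms, 1e-12))
\end{verbatim}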
In Table \ref{tab:mnist_cnn}, we show that the best results are obtained when orthogonal regularization is present. The results are consistent with the hypothesis that OrthoReg, as well as Dropout and LSUV, focuses on reducing the model redundancy. Thus, when one of them is present, the margin of improvement for the others is reduced. \subsection{Regularization on CIFAR-10 and CIFAR-100} We show that the proposed OrthoReg can help to improve the performance of state-of-the-art models such as deep residual networks (\cite{He2015}). In order to show that the regularization is suitable for deep CNNs, we successfully regularize a 110-layer ResNet\footnote{\url{https://github.com/gcr/torch-residual-networks}} on CIFAR-10, decreasing its error from 6.55\% to 6.29\% without data augmentation. In order to compare with the most recent state of the art, we train a wide residual network (\cite{zagoruyko2016wide_v2}) on CIFAR-10 and CIFAR-100. The experiment is based on a \texttt{torch} implementation of the 28-layer wide residual model with width factor 10, for which the median error rate is $3.89\%$ on CIFAR-10 and $18.85\%$ on CIFAR-100\footnote{\url{https://github.com/szagoruyko/wide-residual-networks}}. As can be seen in Figure \ref{fig:wide_compare}, regularizing with OrthoReg yields the best test error rates compared to the baselines. The regularization coefficient $\gamma$ was chosen using grid search, although similar values were found for all the experiments, especially if the regularization gradients are normalized before adding them to the weights. The regularization was applied equally to all the convolution layers of the (wide) ResNet. We found that, although the regularized models were already using weight decay, dropout, and batch normalization, the best error rates were always achieved with OrthoReg. Table \ref{tab:cifar_results} compares the performance of the regularized models with other state-of-the-art results. As can be seen, the regularized model surpasses the state of the art, with a $5.1\%$ relative error improvement on CIFAR-10, and a $1.5\%$ relative error improvement on CIFAR-100. \subsection{Regularization on SVHN} For SVHN we follow the procedure depicted in \cite{zagoruyko2016wide}, training a wide residual network of \texttt{depth=28}, \texttt{width=4}, and dropout. Results are shown in Table \ref{tab:results_svhn}. As can be seen, we reduce the error rate from $1.64\%$ to $1.54\%$, which is the lowest value reported on this dataset to the best of our knowledge. \section{Discussion} Regularization by feature decorrelation can reduce Neural Network overfitting even in the presence of other kinds of regularization. However, especially when the number of feature detectors is higher than the input dimensionality, its decorrelation capacity is limited due to the effects of negatively correlated features. We showed that imposing locality constraints in feature decorrelation removes interferences between negatively correlated feature weights, allowing regularizers to reach higher decorrelation bounds, and reducing the overfitting more effectively. In particular, we show that the models regularized with the constrained regularization present lower overfitting even when batch normalization and dropout are present. Moreover, since our regularization is directly performed on the weights, it is especially suitable for fully convolutional neural networks, where the weight space is constant compared to the feature map space. 
As a result, we are able to reduce the overfitting of 110-layer ResNets and wide ResNets on CIFAR-10, CIFAR-100, and SVHN improving their performance. Note that despite OrthoReg consistently improves state of the art ReLU networks, the choice of the activation function could affect regularizers like the one presented in this work. In this sense, the effect of asymmetrical activations on feature correlations and regularizers should be further investigated in the future. \section*{Acknowledgements} Authors acknowledge the support of the Spanish project TIN2015-65464-R (MINECO/FEDER), the 2016FI\_B 01163 grant of Generalitat de Catalunya, and the COST Action IC1307 iV\&L Net (European Network on Integrating Vision and Language) supported by COST (European Cooperation in Science and Technology). We also gratefully acknowledge the support of NVIDIA Corporation with the donation of a Tesla K40 GPU and a GTX TITAN GPU, used for this research. \bibliographystyle{iclr2017_conference} \end{document}
Regularizing CNNs with Locally Constrained Decorrelations
1611.01967
Table 2: Error rates for a small CNN trained on the MNIST dataset. OrthoReg leads to much better results when no other improvements such as Dropout and LSUV are present, and it still provides small accuracy gains when these two techniques are used.
[ "[BOLD] OrthoReg", "[BOLD] Base", "[BOLD] Base+Dropout", "[BOLD] Base+LSUV" ]
[ [ "None", "0.92", "0.70±0.01", "0.86" ], [ "Conv Layers", "0.75", "0.69±0.03", "0.83" ], [ "All Layers", "[BOLD] 0.75", "[BOLD] 0.66± [BOLD] 0.03", "[BOLD] 0.79" ] ]
Compatibility with initialization and dropout. To demonstrate that the proposed regularization can help even when other regularizations are present, we trained a CNN with (i) dropout (c32-c64-l512-d0.5-l10) or (ii) LSUV initialization. The results are consistent with the hypothesis that OrthoReg, as well as Dropout and LSUV, focuses on reducing the model redundancy. Thus, when one of them is present, the margin of improvement for the others is reduced.
\documentclass{article} % For LaTeX2e %ctan.org\pkg\algorithms \title{Regularizing CNNs with Locally Constrained Decorrelations} \author{Pau Rodr\'{i}guez$^\dagger$, Jordi Gonz\`{a}lez$^{\dagger, \ddagger}$, Guillem Cucurull$^\dagger$, Josep M. Gonfaus$^\ddagger$, Xavier Roca$^{\dagger,\ddagger}$ \\ $\dagger$Computer Vision Center - Univ. Aut\`onoma de Barcelona (UAB), 08193 Bellaterra, Catalonia Spain\\ $^\ddagger$Visual Tagging Services, Campus UAB, 08193 Bellaterra, Catalonia Spain } \newcommand{\fix}{\marginpar{FIX}} \newcommand{\new}{\marginpar{NEW}} \iclrfinalcopy % Uncomment for camera-ready version \begin{document} \maketitle \begin{abstract} Regularization is key for deep learning since it allows training more complex models while keeping lower levels of overfitting. However, the most prevalent regularizations do not leverage all the capacity of the models since they rely on reducing the effective number of parameters. Feature decorrelation is an alternative for using the full capacity of the models but the overfitting reduction margins are too narrow given the overhead it introduces. In this paper, we show that regularizing negatively correlated features is an obstacle for effective decorrelation and present OrthoReg, a novel regularization technique that locally enforces feature orthogonality. As a result, imposing locality constraints in feature decorrelation removes interferences between negatively correlated feature weights, allowing the regularizer to reach higher decorrelation bounds, and reducing the overfitting more effectively. In particular, we show that the models regularized with OrthoReg have higher accuracy bounds even when batch normalization and dropout are present. Moreover, since our regularization is directly performed on the weights, it is especially suitable for fully convolutional neural networks, where the weight space is constant compared to the feature map space. As a result, we are able to reduce the overfitting of state-of-the-art CNNs on CIFAR-10, CIFAR-100, and SVHN. \end{abstract} \section{Introduction} Neural networks perform really well in numerous tasks even when initialized randomly and trained with Stochastic Gradient Descent (SGD) (see \cite{krizhevsky_imagenet_2012}). Deeper models, like Googlenet (\cite{szegedy2015going}) and Deep Residual Networks (\cite{szegedy2015going, He2015}) are released each year, providing impressive results and even surpassing human performances in well-known datasets such as the Imagenet (\cite{russakovsky2015imagenet}). This would not have been possible without the help of regularization and initialization techniques which solve the overfitting and convergence problems that are usually caused by data scarcity and the growth of the architectures. From the literature, two different regularization strategies can be defined. The first ones consist in reducing the complexity of the model by (i) reducing the effective number of parameters with weight decay (\cite{nowlan1992simplifying}), and (ii) randomly dropping activations with Dropout (\cite{srivastava2014dropout}) or dropping weights with DropConnect (\cite{wan_regularization_2013}) so as to prevent feature co-adaptation. Due to their nature, although this set of strategies have proved to be very effective, they do not leverage all the capacity of the models they regularize. The second group of regularizations is those which improve the effectiveness and generality of the trained model without reducing its capacity. 
In this second group, the most relevant approaches decorrelate the weights or feature maps, e.g. \cite{bengio2009slow} introduced a new criterion so as to learn slow decorrelated features while pre-training models. In the same line \cite{bao2013incoherent} presented "incoherent training", a regularizer for reducing the decorrelation of the network activations or feature maps in the context of speech recognition. Although regularizations in the second group are promising and have already been used to reduce the overfitting in different tasks, even with the presence of Dropout (as shown by \cite{cogswell2015reducing}), they are seldom used in the large scale image recognition domain because of the small improvement margins they provide together with the computational overhead they introduce. Although they are not directly presented as regularizers, there are other strategies to reduce the overfitting such as Batch Normalization (\cite{ioffe2015batch}), which decreases the overfitting by reducing the internal covariance shift. In the same line, initialization strategies such as "Xavier" (\cite{glorot_understanding_2010}) or "He" (\cite{he2015delving}), also keep the same variance at both input and output of the layers in order to preserve propagated signals in deep neural networks. Orthogonal initialization techniques are another family which set the weights in a decorrelated initial state so as to condition the network training to converge into better representations. For instance, \cite{mishkin_all_2015} propose to initialize the network with decorrelated features using orthonormal initialization (\cite{saxe_exact_2013}) while normalizing the variance of the outputs as well. In this work we hypothesize that regularizing negatively correlated features is an obstacle for achieving better results and we introduce OrhoReg, a novel regularization technique that addresses the performance margin issue by only regularizing positively correlated feature weights. Moreover, OrthoReg is computationally efficient since it only regularizes the feature weights, which makes it very suitable for the latest CNN models. We verify our hypothesis through a series of experiments: first using MNIST as a proof of concept, secondly we regularize wide residual networks on CIFAR-10, CIFAR-100, and SVHN (\cite{netzer2011reading}) achieving the lowest error rates in the dataset to the best of our knowledge. \section{Dealing with weight redundancies} Deep Neural Networks (DNN) are very expressive models which can usually have millions of parameters. However, with limited data, they tend to overfit. There is an abundant number of techniques in order to deal with this problem, from L1 and L2 regularizations (\cite{nowlan1992simplifying}), early-stopping, Dropout or DropConnect. Models presenting high levels of overfitting usually have a lot of redundancy in their feature weights, capturing similar patterns with slight differences which usually correspond to noise in the training data. A particular case where this is evident is in AlexNet (\cite{krizhevsky_imagenet_2012}), which presents very similar convolution filters and even "dead" ones, as it was remarked by \cite{zeiler2014visualizing}. In fact, given a set of parameters $\theta_{I,j}$ connecting a set of inputs $I = \{i_1,i_2,\ldots, i_n\}$ to a neuron $h_j$, two neurons $\{h_j,h_{k}\},\ j \ne k$ will be positively correlated, and thus fire always together if $\theta_{I,j} = \theta_{I,k}$ and negatively correlated if $\theta_{I,j} = -\theta_{I,k}$. 
In other words, two neurons with the same or slightly different weights will produce very similar outputs. In order to reduce the redundancy present in the network parameters, one should maximize the amount of information encoded by each neuron. From an information theory point of view, this means one should not be able to predict the output of a neuron given the output of the rest of the neurons of the layer. However, this measure requires batch statistics, huge joint probability tables, and it would have a high computational cost. In this paper, we will focus on the weights correlation rather than activation independence since it still is an open problem in many neural network models and it can be addressed without introducing too much overhead, see Table \ref{tab:resnet_flops}. Then, we show that models generalize better when different feature detectors are enforced to be dissimilar. Although it might seem contradictory, CNNs can benefit from having repeated filter weights with different biases, as shown by \cite{li2016multi}. However, those repeated filters must be shared copies and adding too many unshared filter weights to CNNs increases overfitting and the need for stronger regularization (\cite{zagoruyko2016wide}). Thus, our proposed method and multi-bias neural networks are complementary since they jointly increase the representation power of the network with fewer parameters. In order to find a good target to optimize so as to reduce the correlation between weights, it is first required to find a metric to measure it. In this paper, we propose to use the cosine similarity between feature detectors to express how strong is their relationship. Note that the cosine similarity is equivalent to the Pearson correlation for mean-centered normalized vectors, but we will use the term correlation for the sake of clarity. \subsection{Orthogonal weight regularization} This section introduces the orthogonal weight regularization, a regularization technique that aims to reduce feature detector correlation enforcing local orthogonality between all pairs of weight vectors. In order to keep the magnitudes of the detectors unaffected, we have chosen the cosine similarity between the vector pairs in order to solely focus on the vectors angle $\beta \in [-\pi,\pi]$. Then, given any pair of feature vectors of the same size $\theta_1,\theta_2$ the cosine of their relative angle is: \begin{equation} \cos(\theta_1,\theta_2) = \frac{\langle \theta_1, \theta_2\rangle}{||\theta_1|| ||\theta_2||} \label{eq:cosine_sim} \end{equation} Where $\langle \theta_1, \theta_2\rangle$ denotes the inner product between $\theta_1$ and $\theta_2$. We then square the cosine similarity in order to define a regularization cost function for steepest descent that has its local minima when vectors are orthogonal: \begin{equation} C(\theta) = \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1,j\ne i}^{n} \cos^2(\theta_i,\theta_j) = \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1,j\ne i}^{n}\Big(\frac{\langle \theta_i, \theta_j \rangle}{||\theta_i|| ||\theta_j||}\Big)^2 \label{eq:target_loss} \end{equation} Where $\theta_i$ are the weights connecting the output of the layer $l-1$ to the neuron $i$ of the layer $l$, which has $n$ hidden units. Interestingly, minimizing this cost function relates to the minimization of the Frobenius norm of the cross-covariance matrix without the diagonal. 
This cost will be added to the global cost of the model $J(\theta;X,y)$, where $X$ are the inputs and $y$ are the labels or targets, obtaining $\tilde{J}(\theta;X,y) = J(\theta;X,y) + \gamma C(\theta)$. Note that $\gamma$ is an hyperparameter that weights the relative contribution of the regularization term. We can now define the gradient with respect to the parameters: \begin{equation} \frac{\delta}{\delta\theta_{(i,j)}} C(\theta) = \sum_{k=1,k\ne i}^{n} \frac{\theta_{(k,j)}\langle \theta_{i},\theta_{k} \rangle}{\langle \theta_i, \theta_i \rangle \langle \theta_{k}, \theta_{k} \rangle} - \frac{\theta_{(i,j)} \langle \theta_i, \theta_{k} \rangle^2}{\langle \theta_i, \theta_i \rangle^2 \langle \theta_{k}, \theta_{k} \rangle } \label{eq:target_dloss} \end{equation} The second term is introduced by the magnitude normalization. As magnitudes are not relevant for the vector angle problem, this equation can be simplified just by assuming normalized feature detectors: \begin{equation} \frac{\delta}{\delta \theta_{(i,j)}} C(\theta) = \sum_{k=1,k\ne i}^{n} \theta_{(k,j)}\langle \theta_{i},\theta_{k} \rangle \label{eq:target_dloss_simple} \end{equation} We then add eq. \ref{eq:target_dloss_simple} to the backpropagation gradient: \begin{equation} \Delta \theta_{(i,j)} = -\alpha\Big(\nabla J_{\theta_{(i,j)}} + \gamma \sum_{k=1,k\ne i}^{n} \theta_{(k,j)}\langle \theta_{i},\theta_{k}\rangle \Big) \label{eq:target_backprop_loss} \end{equation} Where $\alpha$ is the global learning rate coefficient, $J$ any target loss function for the backpropagation algorithm. Although this update can be done sequentially for each feature-detector pair, it can be vectorized to speedup computations. Let $\Theta$ be a matrix where each row is a feature detector $\theta_{(I,j)}$ corresponding to the normalized weights connecting the whole input $I$ of the layer to the neuron $j$. Then, $\Theta \Theta^t$ contains the inner product of each pair of vectors $i$ and $j$ in each position $i,j$. Subsequently, we subtract the diagonal so as to ignore the angle from each feature with respect to itself and multiply by $\Theta$ to compute the final value corresponding to the sum in eq. \ref{eq:target_backprop_loss}: \begin{equation} \Delta \Theta = -\alpha\Big(\nabla J_{\Theta} + \gamma (\Theta \Theta^t - diag(\Theta \Theta^t)) \Theta \Big) \label{eq:target_backprop_loss_reg} \end{equation} Where the second term is $\nabla C_\Theta$. Algorithm \ref{alg:reg_step} summarizes the steps in order to apply OrthoReg. \begin{algorithm}[!t] \caption{Orthogonal Regularization Step.} \label{alg:reg_step} \begin{algorithmic}[1] \Require{Layer parameter matrices $\Theta^{l}$, regularization coefficient $\gamma$, global learning rate $\alpha$}. \For{each layer $l$ to regularize} \State{$\eta_1 = norm\_rows(\Theta^l)$} \Comment{Keep norm of the rows of $\Theta^l$.} \State{$\Theta_1^l = div\_rows(\Theta^l, \eta_1)$} \Comment{Keep a $\Theta_1^{l}$ with normalized rows.} \State{$innerProdMat = \Theta^l_1 transpose(\Theta^l_1)$} \State{$\nabla\Theta^l_1 = \gamma(innerProdMat - diag(innerProdMat)) \Theta^l_1$} \Comment{Second term in eq. \ref{eq:target_backprop_loss_reg}} \State{$\Delta\Theta^l = -\alpha(\nabla J_{\Theta^l} + \gamma\nabla\Theta^l_1) $}\Comment{Complete eq. \ref{eq:target_backprop_loss_reg}.} \EndFor \end{algorithmic} \end{algorithm} \subsection{Negative Correlations} Note that the presented algorithm, based on the cosine similarity, penalizes any kind of correlation between all pairs of feature detectors, i.e. 
the positive and the negative correlations, see Figure \ref{fig:orig_loss}. However, negative correlations are related to inhibitory connections, competitive learning, and self-organization. In fact, there is evidence that negative correlations can help a neural population to increase the signal-to-noise ratio (\cite{chelaru2016negative}) in the V1. In order to find out the advantages of keeping negative correlations, we propose to use an exponential to squash the gradients for angles greater than $\frac{\pi}{2} (orthogonal)$: \begin{equation} C(\theta) = \sum_{i=1}^{n}\sum_{j=1,j\ne i}^{n} \log(1 + e^{\lambda (cos(\theta_i,\theta_j)-1)}) = \log(1 + e^{\lambda (\langle \theta_i,\theta_j\rangle -1)}), \ ||\theta_i||=||\theta_j||=1 \label{eq:target_loss_negcorr} \end{equation} Where $\lambda$ is a coefficient that controls the minimum angle-of-influence of the regularizer, i.e. the minimum angle between two feature weights so that there exists a gradient pushing them apart, see Figure \ref{fig:new_loss}. We empirically found that the regularizer worked well for $\lambda=10$, see Figure \ref{fig:toy_dataset_reg}. Note that when $\lambda \simeq 10$ the loss and the gradients approximate to zero when vectors are at more than $\frac{\pi}{2}$ (orthogonal). As a result of incorporating the squashing function on the cosine similarity, negatively correlated feature weights will not be regularized. This is different from all previous approaches and the loss presented in eq. \ref{eq:target_loss}, where all pairs of weight vectors influence each other. Thus, from now on, the loss in eq. \ref{eq:target_loss} is named as global loss and the loss in eq. \ref{eq:target_loss_negcorr} is named as local loss. The derivative of eq. \ref{eq:target_loss_negcorr} is: \begin{equation} \frac{\delta}{\delta\theta_{(i,j)}}C(\theta) = \sum_{k=1, k\neq i}^n \lambda\frac{e^{\lambda \langle \theta_{i}, \theta_{k} \rangle}\theta_{({k},{j})}}{e^{\lambda \langle \theta_{i}, \theta_{k} \rangle} + e^\lambda} \label{eq:target_dloss_negcorr} \end{equation} Then, given the element-wise exponential operator $\exp$, we define the following expression in order to simplify the formulas: \begin{equation} \hat{\Theta} = \exp({\lambda(\Theta\Theta^t)}) \end{equation} and thus, the $\Delta$ in vectorial form can be formulated as: \begin{equation} \nabla C_\Theta = \lambda\frac{(\hat{\Theta} - diag(\hat{\Theta}))\Theta}{\hat{\Theta} - diag(\hat{\Theta}) + e^\lambda} \end{equation} In order to provide a visual example, we have created a $2D$ toy dataset and used the previous equations for positive and negative $\gamma$ values, see Figure \ref{fig:toy_example}. As expected, it can be seen that the angle between all pairs of adjacent feature weights becomes more uniform after regularization. Note that Figure \ref{fig:toy_dataset_reg} shows that regularization with the global loss (eq. \ref{eq:target_loss}) results in less uniform angles than using the local loss as shown in \ref{fig:toy_dataset_reg_2} (which corresponds to the local loss presented in eq. \ref{eq:target_loss_negcorr}) because vectors in opposite quadrants still influence each other. This is why in Figure \ref{fig:nnangle}, it can be seen that the mean nearest neighbor angle using the global loss (b) is more unstable than the local loss (c). As a proof of concept, we also performed gradient ascent, which minimizes the angle between the vectors. 
Thus, in Figures \ref{fig:toy_dataset_unreg} and \ref{fig:toy_dataset_unreg_2}, it can be seen that the locality introduced by the local loss reaches a stable configuration where feature weights with angle $ \frac{\pi}{2}$ are too far to attract each other. The effects of global and local regularizations on Alexnet, VGG-16 and a 50-layer ResNet are shown on Figure \ref{fig:real_feature_angle}. As it can be seen, OrthoReg reaches higher decorrelation bounds. Lower decorrelation peaks are still observed when the input dimensionality of the layers is smaller than the output since all vectors cannot be orthogonal at the same time. In this case, local regularization largely outperforms global regularization since it removes interferences caused by negatively correlated feature weights. This suggests why increasing fully connected layers' size has not improved networks performance. \section{Experiments} In this section we provide a set of experiments that verify that (i) training with the proposed regularization increases the performance of naive unregularized models, (ii) negatively correlated feature weights are useful, and (iii) the proposed regularization improves the performance of state-of-the-art models. \subsection{Verification experiments} As a sanity check, we first train a three-hidden-layer Multi-Layer Perceptron (MLP) with \texttt{ReLU} non-liniarities on the MNIST dataset (\cite{lecun1998gradient}). Our code is based in the \texttt{train-a-digit-classifier} example included in \texttt{torch/demos}\footnote{\label{note1}\url{https://github.com/torch/demos}}, which uses an upsampled version of the dataset ($32\times 32$). The only pre-processing applied to the data is a global standardization. The model is trained with SGD and a batch size of $200$ during $200$ epochs. No momentum neither weight decay was applied. By default, the magnitude of the weights of this experiments is recovered after each regularization step in order to prove the regularization only affects their angle. \textbf{Sensitivity to hyperparameters.} We train a three-hidden-layer MLP with 1024 hidden units, and different $\gamma$ and $\lambda$ values so as to verify how they affect the performance of the model. Figure \ref{fig:mnist_error_gamma} shows that the model effectively achieves the best error rate for the highest gamma value ($\gamma=1$), thus proving the advantages of the regularization. On Figure \ref{fig:mnist_overfitting}, we verify that higher regularization rates produce more general models. Figure \ref{fig:mnist_lambda} depicts the sensitivity of the model to $\lambda$. As expected, the best value is found when lambda corresponds to Orthogonality ($\lambda \simeq 10$). \textbf{Negative Correlations.} Figure \ref{fig:mnist_negcorr} highlights the difference between regularizing with the global or the local regularizer. Although both regularizations reach better error rates than the unregularized counterpart, the local regularization is better than the global. This confirms the hypothesis that negative correlations are useful and thus, performance decreases when we reduce them. \textbf{Compatibility with initialization and dropout.} To demonstrate the proposed regularization can help even when other regularizations are present, we trained a CNN with (i) dropout (\texttt{c32-c64-l512-d0.5-l10})\footnote{$cxx$ = convolution with $xx$ filters. $lxx$ = fully-connected with $xx$ units. $dxx$ = dropout with prob $xx$.} or (ii) LSUV initialization (\cite{mishkin_all_2015}). 
In Table \ref{tab:mnist_cnn}, we show that best results are obtained when orthogonal regularization is present. The results are consistent with the hypothesis that OrthoReg, as well as Dropout and LSUV, focuses on reducing the model redundancy. Thus, when one of them is present, the margin of improvement for the others is reduced. \subsection{Regularization on CIFAR-10 and CIFAR-100} We show that the proposed OrthoReg can help to improve the performance of state-of-the-art models such as deep residual networks (\cite{He2015}). In order to show the regularization is suitable for deep CNNs, we successfuly regularize a 110-layer ResNet\footnote{\url{https://github.com/gcr/torch-residual-networks}} on CIFAR-10, decreasing its error from 6.55\% to 6.29\% without data augmentation. In order to compare with the most recent state-of-the-art, we train a wide residual network (\cite{zagoruyko2016wide_v2}) on CIFAR-10 and CIFAR-100. The experiment is based on a \texttt{torch} implementation of the 28-layer and 10th width factor wide deep residual model, for which the median error rate on CIFAR-10 is $3.89\%$ and $18.85\%$ on CIFAR-100\footnote{\url{https://github.com/szagoruyko/wide-residual-networks}}. As it can be seen in Figure \ref{fig:wide_compare}, regularizing with OrthoReg yields the best test error rates compared to the baselines. The regularization coefficient $\gamma$ was chosen using grid search although similar values were found for all the experiments, specially if regularization gradients are normalized before adding them to the weights. The regularization was equally applied to all the convolution layers of the (wide) ResNet. We found that, although the regularized models were already using weight decay, dropout, and batch normalization, best error rates were always achieved with OrthoReg. Table \ref{tab:cifar_results} compares the performance of the regularized models with other state-of-the-art results. As it can be seen the regularized model surpasses the state of the art, with a $5.1\%$ relative error improvement on CIFAR-10, and a $1.5\%$ relative error improvement on CIFAR-100. \subsection{Regularization on SVHN} For SVHN we follow the procedure depicted in \cite{zagoruyko2016wide}, training a wide residual network of \texttt{depth=28}, \texttt{width=4}, and dropout. Results are shown in Table \ref{tab:results_svhn}. As it can be seen, we reduce the error rate from $1.64\%$ to $1.54\%$, which is the lowest value reported on this dataset to the best of our knowledge. \section{Discussion} Regularization by feature decorrelation can reduce Neural Networks overfitting even in the presence of other kinds of regularizations. However, especially when the number of feature detectors is higher than the input dimensionality, its decorrelation capacity is limited due to the effects of negatively correlated features. We showed that imposing locality constraints in feature decorrelation removes interferences between negatively correlated feature weights, allowing regularizers to reach higher decorrelation bounds, and reducing the overfitting more effectively. In particular, we show that the models regularized with the constrained regularization present lower overfitting even when batch normalization and dropout are present. Moreover, since our regularization is directly performed on the weights, it is especially suitable for fully convolutional neural networks, where the weight space is constant compared to the feature map space. 
As a result, we are able to reduce the overfitting of 110-layer ResNets and wide ResNets on CIFAR-10, CIFAR-100, and SVHN improving their performance. Note that despite OrthoReg consistently improves state of the art ReLU networks, the choice of the activation function could affect regularizers like the one presented in this work. In this sense, the effect of asymmetrical activations on feature correlations and regularizers should be further investigated in the future. \section*{Acknowledgements} Authors acknowledge the support of the Spanish project TIN2015-65464-R (MINECO/FEDER), the 2016FI\_B 01163 grant of Generalitat de Catalunya, and the COST Action IC1307 iV\&L Net (European Network on Integrating Vision and Language) supported by COST (European Cooperation in Science and Technology). We also gratefully acknowledge the support of NVIDIA Corporation with the donation of a Tesla K40 GPU and a GTX TITAN GPU, used for this research. \bibliographystyle{iclr2017_conference} \end{document}
Regularizing CNNs with Locally Constrained Decorrelations
1611.01967
Table 3: Comparison with other CNNs on CIFAR-10 and CIFAR-100 (Test error %). Orthogonally regularized residual networks achieve the best results to the best of our knowledge. Only single-crop results are reported for fairness of comparison. *Median over 5 runs as reported by Zagoruyko & Komodakis (November 2016).
[ "[BOLD] Network", "[BOLD] CIFAR-10", "[BOLD] CIFAR-100", "[BOLD] Augmented" ]
[ [ "Maxout (Goodfellow et al. ( 2013 ))", "9.38", "38.57", "YES" ], [ "NiN (Lin et al. ( 2014 ))", "8.81", "35.68", "YES" ], [ "DSN (Lee et al. ( 2015 ))", "7.97", "34.57", "YES" ], [ "Highway Network (Srivastava et al. ( 2015 ))", "7.60", "32.24", "YES" ], [ "All-CNN (Springenberg et al. ( 2015 ))", "7.25", "33.71", "NO" ], [ "110-Layer ResNet (He et al. ( 2015a ))", "6.61", "28.4", "NO" ], [ "ELU-Network (Clevert et al. ( 2016 ))", "6.55", "[BOLD] 24.28", "NO" ], [ "[BOLD] OrthoReg on 110-Layer ResNet*", "[BOLD] 6.29±0.19", "28.33±0.5", "NO" ], [ "LSUV (Mishkin & Matas ( 2016 ))", "5.84", "-", "YES" ], [ "Fract. Max-Pooling (Graham ( 2014 ))", "4.50", "27.62", "YES" ], [ "Wide ResNet v1 (Zagoruyko & Komodakis ( May 2016 ))*", "4.37", "20.40", "YES" ], [ "[BOLD] OrthoReg on Wide ResNet v1 (May 2016)*", "4.32±0.05", "19.50±0.03", "YES" ], [ "Wide ResNet v2 (Zagoruyko & Komodakis ( November 2016 ))*", "3.89", "18.85", "YES" ], [ "[BOLD] OrthoReg on Wide ResNet v2 (November 2016)*", "[BOLD] 3.69± [BOLD] 0.01", "[BOLD] 18.56± [BOLD] 0.12", "YES" ] ]
As can be seen, the regularized model surpasses the state of the art, with a 5.1% relative error improvement on CIFAR-10 and a 1.5% relative error improvement on CIFAR-100.
\documentclass{article} % For LaTeX2e %ctan.org\pkg\algorithms \title{Regularizing CNNs with Locally Constrained Decorrelations} \author{Pau Rodr\'{i}guez$^\dagger$, Jordi Gonz\`{a}lez$^{\dagger, \ddagger}$, Guillem Cucurull$^\dagger$, Josep M. Gonfaus$^\ddagger$, Xavier Roca$^{\dagger,\ddagger}$ \\ $\dagger$Computer Vision Center - Univ. Aut\`onoma de Barcelona (UAB), 08193 Bellaterra, Catalonia Spain\\ $^\ddagger$Visual Tagging Services, Campus UAB, 08193 Bellaterra, Catalonia Spain } \newcommand{\fix}{\marginpar{FIX}} \newcommand{\new}{\marginpar{NEW}} \iclrfinalcopy % Uncomment for camera-ready version \begin{document} \maketitle \begin{abstract} Regularization is key for deep learning since it allows training more complex models while keeping lower levels of overfitting. However, the most prevalent regularizations do not leverage all the capacity of the models since they rely on reducing the effective number of parameters. Feature decorrelation is an alternative for using the full capacity of the models but the overfitting reduction margins are too narrow given the overhead it introduces. In this paper, we show that regularizing negatively correlated features is an obstacle for effective decorrelation and present OrthoReg, a novel regularization technique that locally enforces feature orthogonality. As a result, imposing locality constraints in feature decorrelation removes interferences between negatively correlated feature weights, allowing the regularizer to reach higher decorrelation bounds, and reducing the overfitting more effectively. In particular, we show that the models regularized with OrthoReg have higher accuracy bounds even when batch normalization and dropout are present. Moreover, since our regularization is directly performed on the weights, it is especially suitable for fully convolutional neural networks, where the weight space is constant compared to the feature map space. As a result, we are able to reduce the overfitting of state-of-the-art CNNs on CIFAR-10, CIFAR-100, and SVHN. \end{abstract} \section{Introduction} Neural networks perform really well in numerous tasks even when initialized randomly and trained with Stochastic Gradient Descent (SGD) (see \cite{krizhevsky_imagenet_2012}). Deeper models, like Googlenet (\cite{szegedy2015going}) and Deep Residual Networks (\cite{szegedy2015going, He2015}) are released each year, providing impressive results and even surpassing human performances in well-known datasets such as the Imagenet (\cite{russakovsky2015imagenet}). This would not have been possible without the help of regularization and initialization techniques which solve the overfitting and convergence problems that are usually caused by data scarcity and the growth of the architectures. From the literature, two different regularization strategies can be defined. The first ones consist in reducing the complexity of the model by (i) reducing the effective number of parameters with weight decay (\cite{nowlan1992simplifying}), and (ii) randomly dropping activations with Dropout (\cite{srivastava2014dropout}) or dropping weights with DropConnect (\cite{wan_regularization_2013}) so as to prevent feature co-adaptation. Due to their nature, although this set of strategies have proved to be very effective, they do not leverage all the capacity of the models they regularize. The second group of regularizations is those which improve the effectiveness and generality of the trained model without reducing its capacity. 
In this second group, the most relevant approaches decorrelate the weights or feature maps, e.g. \cite{bengio2009slow} introduced a new criterion so as to learn slow decorrelated features while pre-training models. Along the same lines, \cite{bao2013incoherent} presented "incoherent training", a regularizer for reducing the correlation of the network activations or feature maps in the context of speech recognition. Although regularizations in the second group are promising and have already been used to reduce the overfitting in different tasks, even with the presence of Dropout (as shown by \cite{cogswell2015reducing}), they are seldom used in the large scale image recognition domain because of the small improvement margins they provide together with the computational overhead they introduce. Although they are not directly presented as regularizers, there are other strategies to reduce the overfitting, such as Batch Normalization (\cite{ioffe2015batch}), which decreases the overfitting by reducing the internal covariate shift. Similarly, initialization strategies such as "Xavier" (\cite{glorot_understanding_2010}) or "He" (\cite{he2015delving}) also keep the same variance at both input and output of the layers in order to preserve propagated signals in deep neural networks. Orthogonal initialization techniques are another family, which sets the weights in a decorrelated initial state so as to condition the network training to converge to better representations. For instance, \cite{mishkin_all_2015} propose to initialize the network with decorrelated features using orthonormal initialization (\cite{saxe_exact_2013}) while normalizing the variance of the outputs as well. In this work we hypothesize that regularizing negatively correlated features is an obstacle for achieving better results, and we introduce OrthoReg, a novel regularization technique that addresses the performance margin issue by only regularizing positively correlated feature weights. Moreover, OrthoReg is computationally efficient since it only regularizes the feature weights, which makes it very suitable for the latest CNN models. We verify our hypothesis through a series of experiments: first using MNIST as a proof of concept, and second by regularizing wide residual networks on CIFAR-10, CIFAR-100, and SVHN (\cite{netzer2011reading}), achieving the lowest error rates to the best of our knowledge. \section{Dealing with weight redundancies} Deep Neural Networks (DNNs) are very expressive models which can have millions of parameters. However, with limited data, they tend to overfit. There are many techniques for dealing with this problem, from L1 and L2 regularization (\cite{nowlan1992simplifying}) and early stopping to Dropout and DropConnect. Models presenting high levels of overfitting usually have a lot of redundancy in their feature weights, capturing similar patterns with slight differences which usually correspond to noise in the training data. A particular case where this is evident is AlexNet (\cite{krizhevsky_imagenet_2012}), which presents very similar convolution filters and even "dead" ones, as remarked by \cite{zeiler2014visualizing}. In fact, given a set of parameters $\theta_{I,j}$ connecting a set of inputs $I = \{i_1,i_2,\ldots, i_n\}$ to a neuron $h_j$, two neurons $\{h_j,h_{k}\},\ j \ne k$ will be positively correlated, and thus always fire together, if $\theta_{I,j} = \theta_{I,k}$, and negatively correlated if $\theta_{I,j} = -\theta_{I,k}$. 
In other words, two neurons with the same or slightly different weights will produce very similar outputs. In order to reduce the redundancy present in the network parameters, one should maximize the amount of information encoded by each neuron. From an information theory point of view, this means one should not be able to predict the output of a neuron given the output of the rest of the neurons of the layer. However, this measure requires batch statistics, huge joint probability tables, and it would have a high computational cost. In this paper, we will focus on the weights correlation rather than activation independence since it still is an open problem in many neural network models and it can be addressed without introducing too much overhead, see Table \ref{tab:resnet_flops}. Then, we show that models generalize better when different feature detectors are enforced to be dissimilar. Although it might seem contradictory, CNNs can benefit from having repeated filter weights with different biases, as shown by \cite{li2016multi}. However, those repeated filters must be shared copies and adding too many unshared filter weights to CNNs increases overfitting and the need for stronger regularization (\cite{zagoruyko2016wide}). Thus, our proposed method and multi-bias neural networks are complementary since they jointly increase the representation power of the network with fewer parameters. In order to find a good target to optimize so as to reduce the correlation between weights, it is first required to find a metric to measure it. In this paper, we propose to use the cosine similarity between feature detectors to express how strong is their relationship. Note that the cosine similarity is equivalent to the Pearson correlation for mean-centered normalized vectors, but we will use the term correlation for the sake of clarity. \subsection{Orthogonal weight regularization} This section introduces the orthogonal weight regularization, a regularization technique that aims to reduce feature detector correlation enforcing local orthogonality between all pairs of weight vectors. In order to keep the magnitudes of the detectors unaffected, we have chosen the cosine similarity between the vector pairs in order to solely focus on the vectors angle $\beta \in [-\pi,\pi]$. Then, given any pair of feature vectors of the same size $\theta_1,\theta_2$ the cosine of their relative angle is: \begin{equation} \cos(\theta_1,\theta_2) = \frac{\langle \theta_1, \theta_2\rangle}{||\theta_1|| ||\theta_2||} \label{eq:cosine_sim} \end{equation} Where $\langle \theta_1, \theta_2\rangle$ denotes the inner product between $\theta_1$ and $\theta_2$. We then square the cosine similarity in order to define a regularization cost function for steepest descent that has its local minima when vectors are orthogonal: \begin{equation} C(\theta) = \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1,j\ne i}^{n} \cos^2(\theta_i,\theta_j) = \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1,j\ne i}^{n}\Big(\frac{\langle \theta_i, \theta_j \rangle}{||\theta_i|| ||\theta_j||}\Big)^2 \label{eq:target_loss} \end{equation} Where $\theta_i$ are the weights connecting the output of the layer $l-1$ to the neuron $i$ of the layer $l$, which has $n$ hidden units. Interestingly, minimizing this cost function relates to the minimization of the Frobenius norm of the cross-covariance matrix without the diagonal. 
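For concreteness, a minimal NumPy sketch of this global decorrelation cost is given below; the function name and the use of NumPy are illustrative choices for exposition and are not part of the authors' Torch code. It normalizes the rows of a weight matrix, forms the Gram matrix of cosines, and sums the squared off-diagonal entries.
\begin{verbatim}
import numpy as np

def global_orthoreg_cost(theta):
    """Squared-cosine decorrelation cost C(theta) described above.

    theta: array of shape (n, d); each row is one feature detector.
    Returns 0.5 * sum_{i != j} cos^2(theta_i, theta_j).
    """
    norms = np.linalg.norm(theta, axis=1, keepdims=True) + 1e-12
    t = theta / norms                 # normalized feature detectors
    gram = t @ t.T                    # gram[i, j] = cos(theta_i, theta_j)
    np.fill_diagonal(gram, 0.0)       # drop each detector's self-similarity
    # Half the squared Frobenius norm of the off-diagonal Gram matrix.
    return 0.5 * np.sum(gram ** 2)
\end{verbatim}
For mutually orthogonal rows the cost is exactly zero, and it grows whenever pairs of detectors become aligned or anti-aligned, which is precisely the behaviour that the locally constrained variant introduced below modifies.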
This cost will be added to the global cost of the model $J(\theta;X,y)$, where $X$ are the inputs and $y$ are the labels or targets, obtaining $\tilde{J}(\theta;X,y) = J(\theta;X,y) + \gamma C(\theta)$. Note that $\gamma$ is an hyperparameter that weights the relative contribution of the regularization term. We can now define the gradient with respect to the parameters: \begin{equation} \frac{\delta}{\delta\theta_{(i,j)}} C(\theta) = \sum_{k=1,k\ne i}^{n} \frac{\theta_{(k,j)}\langle \theta_{i},\theta_{k} \rangle}{\langle \theta_i, \theta_i \rangle \langle \theta_{k}, \theta_{k} \rangle} - \frac{\theta_{(i,j)} \langle \theta_i, \theta_{k} \rangle^2}{\langle \theta_i, \theta_i \rangle^2 \langle \theta_{k}, \theta_{k} \rangle } \label{eq:target_dloss} \end{equation} The second term is introduced by the magnitude normalization. As magnitudes are not relevant for the vector angle problem, this equation can be simplified just by assuming normalized feature detectors: \begin{equation} \frac{\delta}{\delta \theta_{(i,j)}} C(\theta) = \sum_{k=1,k\ne i}^{n} \theta_{(k,j)}\langle \theta_{i},\theta_{k} \rangle \label{eq:target_dloss_simple} \end{equation} We then add eq. \ref{eq:target_dloss_simple} to the backpropagation gradient: \begin{equation} \Delta \theta_{(i,j)} = -\alpha\Big(\nabla J_{\theta_{(i,j)}} + \gamma \sum_{k=1,k\ne i}^{n} \theta_{(k,j)}\langle \theta_{i},\theta_{k}\rangle \Big) \label{eq:target_backprop_loss} \end{equation} Where $\alpha$ is the global learning rate coefficient, $J$ any target loss function for the backpropagation algorithm. Although this update can be done sequentially for each feature-detector pair, it can be vectorized to speedup computations. Let $\Theta$ be a matrix where each row is a feature detector $\theta_{(I,j)}$ corresponding to the normalized weights connecting the whole input $I$ of the layer to the neuron $j$. Then, $\Theta \Theta^t$ contains the inner product of each pair of vectors $i$ and $j$ in each position $i,j$. Subsequently, we subtract the diagonal so as to ignore the angle from each feature with respect to itself and multiply by $\Theta$ to compute the final value corresponding to the sum in eq. \ref{eq:target_backprop_loss}: \begin{equation} \Delta \Theta = -\alpha\Big(\nabla J_{\Theta} + \gamma (\Theta \Theta^t - diag(\Theta \Theta^t)) \Theta \Big) \label{eq:target_backprop_loss_reg} \end{equation} Where the second term is $\nabla C_\Theta$. Algorithm \ref{alg:reg_step} summarizes the steps in order to apply OrthoReg. \begin{algorithm}[!t] \caption{Orthogonal Regularization Step.} \label{alg:reg_step} \begin{algorithmic}[1] \Require{Layer parameter matrices $\Theta^{l}$, regularization coefficient $\gamma$, global learning rate $\alpha$}. \For{each layer $l$ to regularize} \State{$\eta_1 = norm\_rows(\Theta^l)$} \Comment{Keep norm of the rows of $\Theta^l$.} \State{$\Theta_1^l = div\_rows(\Theta^l, \eta_1)$} \Comment{Keep a $\Theta_1^{l}$ with normalized rows.} \State{$innerProdMat = \Theta^l_1 transpose(\Theta^l_1)$} \State{$\nabla\Theta^l_1 = \gamma(innerProdMat - diag(innerProdMat)) \Theta^l_1$} \Comment{Second term in eq. \ref{eq:target_backprop_loss_reg}} \State{$\Delta\Theta^l = -\alpha(\nabla J_{\Theta^l} + \gamma\nabla\Theta^l_1) $}\Comment{Complete eq. \ref{eq:target_backprop_loss_reg}.} \EndFor \end{algorithmic} \end{algorithm} \subsection{Negative Correlations} Note that the presented algorithm, based on the cosine similarity, penalizes any kind of correlation between all pairs of feature detectors, i.e. 
the positive and the negative correlations, see Figure \ref{fig:orig_loss}. However, negative correlations are related to inhibitory connections, competitive learning, and self-organization. In fact, there is evidence that negative correlations can help a neural population to increase the signal-to-noise ratio (\cite{chelaru2016negative}) in the V1. In order to find out the advantages of keeping negative correlations, we propose to use an exponential to squash the gradients for angles greater than $\frac{\pi}{2} (orthogonal)$: \begin{equation} C(\theta) = \sum_{i=1}^{n}\sum_{j=1,j\ne i}^{n} \log(1 + e^{\lambda (cos(\theta_i,\theta_j)-1)}) = \log(1 + e^{\lambda (\langle \theta_i,\theta_j\rangle -1)}), \ ||\theta_i||=||\theta_j||=1 \label{eq:target_loss_negcorr} \end{equation} Where $\lambda$ is a coefficient that controls the minimum angle-of-influence of the regularizer, i.e. the minimum angle between two feature weights so that there exists a gradient pushing them apart, see Figure \ref{fig:new_loss}. We empirically found that the regularizer worked well for $\lambda=10$, see Figure \ref{fig:toy_dataset_reg}. Note that when $\lambda \simeq 10$ the loss and the gradients approximate to zero when vectors are at more than $\frac{\pi}{2}$ (orthogonal). As a result of incorporating the squashing function on the cosine similarity, negatively correlated feature weights will not be regularized. This is different from all previous approaches and the loss presented in eq. \ref{eq:target_loss}, where all pairs of weight vectors influence each other. Thus, from now on, the loss in eq. \ref{eq:target_loss} is named as global loss and the loss in eq. \ref{eq:target_loss_negcorr} is named as local loss. The derivative of eq. \ref{eq:target_loss_negcorr} is: \begin{equation} \frac{\delta}{\delta\theta_{(i,j)}}C(\theta) = \sum_{k=1, k\neq i}^n \lambda\frac{e^{\lambda \langle \theta_{i}, \theta_{k} \rangle}\theta_{({k},{j})}}{e^{\lambda \langle \theta_{i}, \theta_{k} \rangle} + e^\lambda} \label{eq:target_dloss_negcorr} \end{equation} Then, given the element-wise exponential operator $\exp$, we define the following expression in order to simplify the formulas: \begin{equation} \hat{\Theta} = \exp({\lambda(\Theta\Theta^t)}) \end{equation} and thus, the $\Delta$ in vectorial form can be formulated as: \begin{equation} \nabla C_\Theta = \lambda\frac{(\hat{\Theta} - diag(\hat{\Theta}))\Theta}{\hat{\Theta} - diag(\hat{\Theta}) + e^\lambda} \end{equation} In order to provide a visual example, we have created a $2D$ toy dataset and used the previous equations for positive and negative $\gamma$ values, see Figure \ref{fig:toy_example}. As expected, it can be seen that the angle between all pairs of adjacent feature weights becomes more uniform after regularization. Note that Figure \ref{fig:toy_dataset_reg} shows that regularization with the global loss (eq. \ref{eq:target_loss}) results in less uniform angles than using the local loss as shown in \ref{fig:toy_dataset_reg_2} (which corresponds to the local loss presented in eq. \ref{eq:target_loss_negcorr}) because vectors in opposite quadrants still influence each other. This is why in Figure \ref{fig:nnangle}, it can be seen that the mean nearest neighbor angle using the global loss (b) is more unstable than the local loss (c). As a proof of concept, we also performed gradient ascent, which minimizes the angle between the vectors. 
Thus, in Figures \ref{fig:toy_dataset_unreg} and \ref{fig:toy_dataset_unreg_2}, it can be seen that the locality introduced by the local loss reaches a stable configuration where feature weights at an angle greater than $\frac{\pi}{2}$ are too far apart to attract each other. The effects of global and local regularizations on Alexnet, VGG-16 and a 50-layer ResNet are shown in Figure \ref{fig:real_feature_angle}. As can be seen, OrthoReg reaches higher decorrelation bounds. Lower decorrelation peaks are still observed when the input dimensionality of the layers is smaller than the output dimensionality, since all vectors cannot be orthogonal at the same time. In this case, local regularization largely outperforms global regularization since it removes interferences caused by negatively correlated feature weights. This may explain why increasing the size of fully connected layers has not improved network performance. \section{Experiments} In this section we provide a set of experiments that verify that (i) training with the proposed regularization increases the performance of naive unregularized models, (ii) negatively correlated feature weights are useful, and (iii) the proposed regularization improves the performance of state-of-the-art models. \subsection{Verification experiments} As a sanity check, we first train a three-hidden-layer Multi-Layer Perceptron (MLP) with \texttt{ReLU} nonlinearities on the MNIST dataset (\cite{lecun1998gradient}). Our code is based on the \texttt{train-a-digit-classifier} example included in \texttt{torch/demos}\footnote{\label{note1}\url{https://github.com/torch/demos}}, which uses an upsampled version of the dataset ($32\times 32$). The only pre-processing applied to the data is a global standardization. The model is trained with SGD and a batch size of $200$ for $200$ epochs. Neither momentum nor weight decay was applied. By default, the magnitude of the weights in these experiments is recovered after each regularization step in order to show that the regularization only affects their angle. \textbf{Sensitivity to hyperparameters.} We train a three-hidden-layer MLP with 1024 hidden units and different $\gamma$ and $\lambda$ values to verify how they affect the performance of the model. Figure \ref{fig:mnist_error_gamma} shows that the model achieves the best error rate for the highest gamma value ($\gamma=1$), thus proving the advantages of the regularization. In Figure \ref{fig:mnist_overfitting}, we verify that higher regularization rates produce more general models. Figure \ref{fig:mnist_lambda} depicts the sensitivity of the model to $\lambda$. As expected, the best value is found when $\lambda$ corresponds to orthogonality ($\lambda \simeq 10$). \textbf{Negative Correlations.} Figure \ref{fig:mnist_negcorr} highlights the difference between regularizing with the global or the local regularizer. Although both regularizations reach better error rates than the unregularized counterpart, the local regularization is better than the global one. This confirms the hypothesis that negative correlations are useful and, thus, that performance decreases when we reduce them. \textbf{Compatibility with initialization and dropout.} To demonstrate that the proposed regularization can help even when other regularizations are present, we trained a CNN with (i) dropout (\texttt{c32-c64-l512-d0.5-l10})\footnote{$cxx$ = convolution with $xx$ filters. $lxx$ = fully-connected with $xx$ units. $dxx$ = dropout with prob $xx$.} or (ii) LSUV initialization (\cite{mishkin_all_2015}). A minimal sketch of the regularization step used throughout these experiments is given below.
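As referenced above, the following is a minimal NumPy sketch of one such regularization step, combining the vectorized orthogonal regularization update with the exponential squashing of the local loss. The function name, the NumPy formulation, and the default lambda=10 are illustrative assumptions rather than the authors' released Torch code, and in practice the step is applied together with the task gradient, as in Algorithm 1.
\begin{verbatim}
import numpy as np

def orthoreg_step(theta, gamma, lam=10.0, local=True):
    """Apply one OrthoReg update to a weight matrix theta of shape (n, d).

    Rows are normalized, pairwise cosines are computed, the diagonal is
    discarded, and the decorrelation gradient is subtracted.  With
    local=True the exponential squashing is used, so negatively correlated
    detectors (angle > pi/2) receive essentially no gradient.
    """
    norms = np.linalg.norm(theta, axis=1, keepdims=True) + 1e-12
    t = theta / norms                        # normalized feature detectors
    gram = t @ t.T                           # pairwise cosines
    if local:
        e = np.exp(lam * gram)
        coef = lam * e / (e + np.exp(lam))   # squashed coefficient, ~0 for cos < 0
    else:
        coef = gram.copy()                   # global (unsquashed) variant
    np.fill_diagonal(coef, 0.0)              # ignore self-similarity terms
    new_theta = theta - gamma * (coef @ t)   # regularization-only update
    # Recover the original magnitudes so only the angles are affected,
    # as in the verification experiments described above.
    new_norms = np.linalg.norm(new_theta, axis=1, keepdims=True) + 1e-12
    return new_theta * (norms / new_norms)
\end{verbatim}
Here gamma absorbs both the learning rate and the regularization coefficient of the update equations above.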
In Table \ref{tab:mnist_cnn}, we show that best results are obtained when orthogonal regularization is present. The results are consistent with the hypothesis that OrthoReg, as well as Dropout and LSUV, focuses on reducing the model redundancy. Thus, when one of them is present, the margin of improvement for the others is reduced. \subsection{Regularization on CIFAR-10 and CIFAR-100} We show that the proposed OrthoReg can help to improve the performance of state-of-the-art models such as deep residual networks (\cite{He2015}). In order to show the regularization is suitable for deep CNNs, we successfully regularize a 110-layer ResNet\footnote{\url{https://github.com/gcr/torch-residual-networks}} on CIFAR-10, decreasing its error from 6.55\% to 6.29\% without data augmentation. In order to compare with the most recent state of the art, we train a wide residual network (\cite{zagoruyko2016wide_v2}) on CIFAR-10 and CIFAR-100. The experiment is based on a \texttt{torch} implementation of the 28-layer wide residual model with width factor 10, for which the median error rate is $3.89\%$ on CIFAR-10 and $18.85\%$ on CIFAR-100\footnote{\url{https://github.com/szagoruyko/wide-residual-networks}}. As can be seen in Figure \ref{fig:wide_compare}, regularizing with OrthoReg yields the best test error rates compared to the baselines. The regularization coefficient $\gamma$ was chosen using grid search, although similar values were found for all the experiments, especially if the regularization gradients are normalized before adding them to the weights. The regularization was applied equally to all the convolution layers of the (wide) ResNet. We found that, although the regularized models were already using weight decay, dropout, and batch normalization, the best error rates were always achieved with OrthoReg. Table \ref{tab:cifar_results} compares the performance of the regularized models with other state-of-the-art results. As can be seen, the regularized model surpasses the state of the art, with a $5.1\%$ relative error improvement on CIFAR-10 and a $1.5\%$ relative error improvement on CIFAR-100. \subsection{Regularization on SVHN} For SVHN we follow the procedure described in \cite{zagoruyko2016wide}, training a wide residual network with \texttt{depth=28}, \texttt{width=4}, and dropout. Results are shown in Table \ref{tab:results_svhn}. As can be seen, we reduce the error rate from $1.64\%$ to $1.54\%$, which is the lowest value reported on this dataset to the best of our knowledge. \section{Discussion} Regularization by feature decorrelation can reduce neural network overfitting even in the presence of other kinds of regularization. However, especially when the number of feature detectors is higher than the input dimensionality, its decorrelation capacity is limited due to the effects of negatively correlated features. We showed that imposing locality constraints in feature decorrelation removes interferences between negatively correlated feature weights, allowing regularizers to reach higher decorrelation bounds and reducing the overfitting more effectively. In particular, we show that the models regularized with the constrained regularization present lower overfitting even when batch normalization and dropout are present. Moreover, since our regularization is performed directly on the weights, it is especially suitable for fully convolutional neural networks, where the weight space is constant compared to the feature map space. 
As a result, we are able to reduce the overfitting of 110-layer ResNets and wide ResNets on CIFAR-10, CIFAR-100, and SVHN, improving their performance. Note that although OrthoReg consistently improves state-of-the-art ReLU networks, the choice of activation function could affect regularizers such as the one presented in this work. In this sense, the effect of asymmetrical activations on feature correlations and regularizers should be further investigated in the future. \section*{Acknowledgements} The authors acknowledge the support of the Spanish project TIN2015-65464-R (MINECO/FEDER), the 2016FI\_B 01163 grant of Generalitat de Catalunya, and the COST Action IC1307 iV\&L Net (European Network on Integrating Vision and Language) supported by COST (European Cooperation in Science and Technology). We also gratefully acknowledge the support of NVIDIA Corporation with the donation of a Tesla K40 GPU and a GTX TITAN GPU used for this research. \bibliographystyle{iclr2017_conference} \end{document}
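Since the regularization acts directly on the weights, extending it from fully connected layers to convolutional layers only requires flattening each layer's filters into a matrix of feature detectors. The sketch below expresses this as a penalty added to the training loss in PyTorch-style Python; it is a hedged reconstruction for illustration only (the authors' implementation is in Torch and applies the gradient directly to the weights, as in Algorithm 1), and the helper name orthoreg_penalty is made up here.
\begin{verbatim}
import torch
import torch.nn as nn

def orthoreg_penalty(model, lam=10.0):
    """Sum the local decorrelation cost over all convolution layers.

    Each Conv2d weight of shape (out_channels, in_channels, kH, kW) is
    flattened to (out_channels, -1), so every filter is treated as one
    feature detector, mirroring the weight-space formulation above.
    """
    penalty = 0.0
    for m in model.modules():
        if isinstance(m, nn.Conv2d):
            w = m.weight.view(m.weight.size(0), -1)
            w = w / (w.norm(dim=1, keepdim=True) + 1e-12)
            gram = w @ w.t()
            gram = gram - torch.diag(torch.diag(gram))   # drop self terms
            # log(1 + exp(lam * (cos - 1))) is negligible for negative cosines.
            penalty = penalty + torch.log1p(torch.exp(lam * (gram - 1.0))).sum()
    return penalty

# Usage sketch:
#   loss = criterion(model(x), y) + gamma * orthoreg_penalty(model)
\end{verbatim}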
Generative Adversarial Parallelization
1612.04021
Table 3: GRAN versus GAP(GRAN) evaluation using GAM-II.
[ "DATASET", "Models Measure", "GRAN Min", "GRAN Max", "GAP [ITALIC] G4 Min", "GAP [ITALIC] G4 Max", "GAP [ITALIC] C4 Min", "GAP [ITALIC] C4 Max" ]
[ [ "MNIST", "Avg.", "0.433", "0.465", "0.510", "0.533", "0.459", "0.474" ], [ "MNIST", "Worst", "0.004", "0.020", "0.008", "0.020", "0.010", "0.012" ], [ "CIFAR-10", "Avg.", "0.289", "0.355", "0.332", "0.416", "0.306", "0.319" ], [ "CIFAR-10", "Worst", "0.006", "0.019", "0.048", "0.171", "0.001", "0.023" ], [ "LSUN", "Avg.", "0.477", "0.590", "0.568", "0.649", "0.574", "0.636" ], [ "LSUN", "Worst", "0.013", "0.043", "0.022", "0.055", "0.015", "0.021" ] ]
We used GAM-II to evaluate GAP (see the appendix). We first looked at the performance of four models: DCGAN, GRAN, GAPD4, and GAPG4. We also considered combining multiple GAN-variants in a GAP model (hybrid GAP). We denote this model as GAPC4. GAPC4 consists of two DCGANs and two GRANs trained with GAP. Overall, we have ten generators and ten discriminators for DCGAN and GRAN: four discriminators from the individually-trained models, four from GAP, and two from the GAP combination, GAPC4. We used the collection of all ten discriminators to evaluate the generators. Note that we report the minimum and maximum of average and worst error rates among four GANs. Looking at the average errors, GAPD4 strongly outperforms DCGAN on all datasets. GAPG4 outperforms GRAN on CIFAR-10 and MNIST and strongly outperforms it on LSUN. For the case of the maximum worst-case error, GAP outperforms both DCGAN and GRAN across all datasets. However, we did not find an improvement for GAPC4 based on the GAM-II metric.
\section{Introduction} The growing popularity Generative Adversarial Networks (GAN) and their variants stems from their success in producing realistic samples \citep{Denton2015, Radford2015, Im2016gran, Salimans2016, Dumoulin2016} as well as the intuitive nature of the adversarial training framework \citep{Goodfellow2014}. Compared to other unsupervised learning paradigms, GANs have several merits: \begin{itemize}[leftmargin=*] \item The objective function is not restricted to distances in input (e.g.~pixel) space, for example, reconstruction error. Moreover, there is no restriction to certain type of functional forms such as having a Bernoulli or Gaussian output distribution. \item Compared to undirected probabilistic graphical models \citep{Hinton06, Salakhutdinov2009}, samples are generated in a single pass rather than iteratively. Moreover, the time to generate a sample is much less than recurrent models like PixelRNN \citep{Oord2016}. \item Unlike inverse transformation sampling models, the latent variable size is not restricted \citep{Hyvarinen1999, Dinh2014}. \end{itemize} In contrast, GANs are known to be difficult to train, especially as the data generating distribution becomes more complex. There have been some attempts to address this issue. For example, \citet{Salimans2016} propose several tricks such as feature matching and minibatch discrimination. In this work, we attempt to address training difficulty in a different way: extending two player generative adversarial games into a multi-player game. This amounts to training many GAN-like variants in parallel, periodically swapping their discriminators such that generator-discriminator coupling is reduced. Figure \ref{fig:overview} provides a graphical depiction of our method. Besides the training dilemma, from the point of view of density estimation, GANs possess very different characteristics compared to other probabilistic generative models. Most probabilistic models distribute the probability mass over the entire domain, whereas GAN by nature puts point-wise probability mass near the data. The question of whether this is desirable property or not is still an open question\footnote{Noting that the general view in ML is that there is nothing wrong with sampling from a degenerate distribution \citep{Neal1998}.}. However, the primary concern of this property is that GAN may fail to allocate mass to some important modes of the data generating distribution. We argue that our proposed model could alleviate this problem. That our solution involves training many pairs of generators and discriminators together is a product of the fact that deep learning algorithms and distributed systems have been co-evolving for some time. Hardware accelerators, specifically Graphics Processing Units, (GPUs) have played a fundamental role in advancing deep learning, in particular because deep architectures are so well suited to parallelism \citep{Coates2013-jg}. Data-based parallelism distributes large datasets over disparate nodes. Model-based parallelism allows complex models to be split over nodes. In both cases, learning must account for the coordination and communication among processors. Our work leverages recent advances along these lines \citep{Ma2016-ed}. \section{Discussion} We have proposed Generative Adversarial Parallelization, a framework in which several adversarially-trained models are trained together, exchanging discriminators. 
We argue that this reduces the tight coupling between generator and discriminator and show empirically that this has a beneficial effect on mode coverage, convergence, and quality of the model under the GAM-II metric. Several directions of future investigation are possible. This includes applying GAP to the evolving variety of adversarial models, like improvedGAN \citep{Salimans2016}. We still view stability as an issue and partially address it by tricks such as clipping the gradient of the discriminator. In this work, we only explored synchronous training of GANs under GAP, however, asynchronous training may provide more stability. Recent work has explored the connection between GANs and actor-critic methods in reinforcement learning \citep{Pfau2016-fs}. Under this view, we believe that GAP may have interesting implications for multi-agent RL. Although we have assessed mode coverage qualitatively either directly or indirectly via projections, quantitatively assessing mode coverage for generative models is still an open research problem. \documentclass{article} % For LaTeX2e \title{Generative Adversarial Parallelization} \author{Daniel Jiwoong Im \\ AIFounded Inc.\\ Toronto, ON\\ \texttt{\{daniel.im\}@aifounded.com} \\ \And He Ma, Chris Dongjoo Kim and Graham W.~Taylor\\ University of Guelph\\ Guelph, ON\\ \texttt{\{hma02,ckim07,gwtaylor\}@uoguelph.ca} } \newtheorem*{definition}{Definition} \DeclareMathOperator*{\argmax}{argmax} \newcommand{\imj}[1]{{\bfseries \small \color{blue} IM: #1}} \newcommand{\gwt}[1]{{\bfseries \small \color{red} GW: #1}} \newcommand{\kim}[1]{{\bfseries \small \color{cyan} KIM: #1}} \newcommand{\he}[1]{{\bfseries \small \color{green} He: #1}} \newcommand{\fix}{\marginpar{FIX}} \begin{document} \maketitle \begin{abstract} Generative Adversarial Networks (GAN) have become one of the most studied frameworks for unsupervised learning due to their intuitive formulation. They have also been shown to be capable of generating convincing examples in limited domains, such as low-resolution images. However, they still prove difficult to train in practice and tend to ignore modes of the data generating distribution. Quantitatively capturing effects such as mode coverage and more generally the quality of the generative model still remain elusive. We propose Generative Adversarial Parallelization (GAP), a framework in which many GANs or their variants are trained simultaneously, exchanging their discriminators. This eliminates the tight coupling between a generator and discriminator, leading to improved convergence and improved coverage of modes. We also propose an improved variant of the recently proposed Generative Adversarial Metric and show how it can score individual GANs or their collections under the GAP model. \end{abstract} \input{intro} \input{background} \input{model} \input{experiments} \input{discussion} \bibliographystyle{iclr2017_conference} \clearpage \appendix \input{supp} \end{document} \section{Experiments} \label{sec:experiments} We conduct an empirical investigation of GAP using two recently proposed GAN-variants as starting points: DCGAN \citep{Radford2015} and GRAN \citep{Im2016gran}\footnote{ The Theano-based DCGAN and GRAN implementations were based on https://github.com/Newmu/dcgan and https://github.com/jiwoongim/GRAN, respectively.}. In each case, we compare individual GAN-style models to GAP-style ensembles trained in parallel. 
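Before turning to the evaluation, the following minimal Python sketch illustrates what training several generator/discriminator pairs in parallel while periodically exchanging their discriminators amounts to. The stub update functions and the in-process list shuffle stand in for the MPI-based parameter exchange described below; all names are illustrative and this is not the authors' implementation.
\begin{verbatim}
import random

def gap_train(generators, discriminators, batches, epochs, swap_period,
              d_step, g_step):
    """Synchronous GAP sketch with K generator/discriminator pairs.

    swap_period: fraction of an epoch between discriminator swaps
    (e.g. 0.5 swaps the discriminators twice per epoch).
    """
    k = len(generators)
    swap_every = max(1, int(len(batches) * swap_period))
    for epoch in range(epochs):
        for b, x in enumerate(batches):
            for i in range(k):
                d_step(discriminators[i], generators[i], x)  # usual D update
                g_step(generators[i], discriminators[i])     # usual G update
            if (b + 1) % swap_every == 0:
                random.shuffle(discriminators)  # exchange D's among the pairs
    return generators, discriminators

# Toy usage with no-op updates, just to exercise the swapping logic:
gs = [("G", i) for i in range(4)]
ds = [("D", i) for i in range(4)]
gap_train(gs, ds, batches=[0.0] * 10, epochs=1, swap_period=0.5,
          d_step=lambda d, g, x: None, g_step=lambda g, d: None)
\end{verbatim}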
As it is difficult to quantitatively assess mode coverage, first we aim to visualize samples from GAP vs.~other GAN variants on low-dimensional (toy) datasets as well as low-dimensional projections on real data. Then to evaluate each model quantitatively, we apply the GAM-II metric which is a re-formulation of GAM \citep{Im2016gran} which can be used to compare different GAN architectures. Its motivation and use is described in Section \ref{exp_setup}. We consider, in total, five GAP variants which are summarized in Table~\ref{tab:model_names}. \subsection{Experimental setup} \label{exp_setup} All of our models are implemented in Theano \citep{Bergstra2010} -- a Python library that facilitates deep learning research. Because every update of each model is implemented as a separate process during training, swapping their parameters among different GANs necessitates interprocess communication\footnote{We used openMPI for implementing GAP -- see https://www.open-mpi.org/}. Similar to the Theano-MPI framework, we chose to do inter-GPU memory transfer instead of passing through host memory in order to reduce communication overhead. Random swapping of the two discriminators' parameters is achieved with an in-place \verb+MPI_SendRecv+ operation as DCGAN and GRAN share the same architecture and therefore the same parameterization. Throughout the experiments, all datasets were normalized between $[0,1]$. We used the same hyper-parameters reported in \citep{Radford2015} and \citep{Im2016gran} for DCGAN and GRAN, respectively. The only additional hyper-parameter introduced by GAP is the frequency of swapping discriminators during training. We also made deliberate fine-grained distinctions among each GAN trained under GAP. These were: i) the generator's prior distribution was selected as either uniform or Gaussian; ii) the order of mini-batches was permuted during learning; and iii) noise was injected at the input during learning and the amount of noise was decayed over time. The point of introducing these distinctions was to avoid multiple GANs converging to the same or very similar solutions. Lastly, we used gradient clipping \citep{Pascanu2013} on both discriminators and generators. To measure the performance of GANs, our first attempt was to apply GAM to evaluate our model. Unfortunately, we realized that GAM is not applicable when comparing GAP vs.~non-GAP models. This is because GAM requires the discriminator the GANs under comparison to have similar error rates on a held-out test set. However, as shown in Figure~\ref{fig:gap_cifar10_learning_curve}, GAP boosts the generalization of the discriminators, which causes it to have different test error rates compared to the error rate from non-GAP models. Hence, we propose a new metric that omits the GAM's constraints which we call GAM-II. It simply measures the average (or worst case) error rate among a collection of discriminators. A detailed description of GAM-II is provided in Appendix~\ref{sec:GAM2}. \subsection{Results} We report our experimental results by answering a few core questions. {\em Q: Do GAP-trained models cover more modes of the data generating distribution?} Determining whether applying GAP achieves broader mode coverage is difficult to validate in high-dimensional spaces. Therefore, we initially verified GAP and non-GAP models on two low-dimensional synthetic datasets. 
The R15 dataset\footnote{The R15 dataset can be found at https://cs.joensuu.fi/sipu/datasets/} contains 500 two-dimensional data points with 15 clusters as shown in Figure~\ref{fig:syn_data}. The Mixture of Gaussians dataset\footnote{The Mixture of Gaussians dataset can be found at https://github.com/IshmaelBelghazi/ALI/} contains 2,500 two-dimensional data points with 25 clusters as shown in Figure~\ref{fig:mog_data}. Both discriminator and generator had four fully-connected batch-normalized layers with ReLU activation units. We first optimized the hyper-parameters of a single GAN based on visually inspecting the samples that it generated (i.e.~Figure \ref{fig:syn_gen} shows samples from the best performing single GAN that we trained). We then trained four parallelized GANs using the same hyper-parameters of the best single GAN. The samples generated from both models are shown in Figure~\ref{fig:syn_gen} and \ref{fig:mog_gen}. We observe that GAP(GAN$\times$4) produces samples that look more similar to the original dataset compared to a single GAN. The overlap of samples generated by four GANs are consistent with Figure~\ref{fig:syn_multi}. Note that as we decrease the number of training points, the overlap of GAN samples deviates from the original dataset while GAP seems not to suffer from this phenomenon. For example, when we used all 600 examples of R15, both GAN and GAP samples matched the distribution of data in Figure~\ref{fig:syn_data}. However, as we use less training examples, GAN failed to accurately model the data distribution by dropping modes. The samples plotted in Figure~\ref{fig:syn_multi} are based on training each model with a random subset of 100 examples drawn from the original 600. Based on the synthetic experiments we confirm that GAP can improve mode coverage when a limited number of training samples are available. In order to gain a qualitative sense of models trained using a high dimensional dataset, we considered two experiments: i) we examined the class label predictions made on samples from each model to check how uniformly they were distributed. The histogram of the predicted classes is provided in Figure~\ref{fig:hist_mnist_dist}. ii) we created a t-SNE visualization of generated samples overlaid on top of the true data (see Appendix~\ref{App:t-SNE}). We find that the intersection of data points and samples generated by GAP is slightly better than samples generated by individual GANs. In addition to the synthetic data results, these visualizations suggest some favourable properties of GAP, but we hesitate to draw any strong conclusions from them. {\em Q: Does GAP enhance generalization?} To answer this question, we considered the MNIST, CIFAR-10, and LSUN church datasets which are often used to evaluate GAN variants. MNIST and CIFAR-10 consist of 50,000 training and 10,000 test images of size 27$\times$28 and 32$\times$32$\times$3 pixels, respectively. Each contains 10 different classes of objects. The LSUN church dataset contains various outdoor church images. These high resolution images were downsampled to 64$\times$64 pixels. The training set consists of 126,227 examples. One implicit but imperfect way to measure the generalization of a GAN is to observe generalization of the discriminator alone. This is because the generator is influenced by the discriminator and vice versa. If the discriminator is overfitting the training data, then the generator must be biased towards the training data as well. 
Here, we plot the learning curve of the discriminator during training for both GAP(DCGAN) and GAP(GRAN). Figure~\ref{fig:gap_cifar10_learning_curve} shows the learning curve for a single model versus groups of two and four models parallelized under GAP. We observe that more parallelization leads to less of a spread between the train and validation curves indicating the ability of GAP to improve generalization. Note that in order to plot a single representative learning curve while training multiple models under GAP, we averaged the learning curves of the individual models. To demonstrate that our observations are not merely attributable to smoothing by averaging, we show individual learning curves of the parallelized GANs (see Figure~\ref{fig:lc_swap_rate_single} in Appendix \ref{app:supporting}). From now on, we will work with GAP$_{D4}$ and GAP$_{G4}$. {\em Q: How does the rate at which discriminators are swapped affect training?} As noted earlier, the swapping frequency is the only additional hyper-parameter introduced by GAP. % A sensitivity analysis on swapping frequency is presented in We conduct a simple sensitivity analysis by plotting the validation cost of each GAN during training along with its standard deviation in Figure~\ref{fig:gap_sensitivity_anal}. We observe that GAP(DCGAN) varies the least at a swapping frequency of 0.5 -- swapping twice per epoch. Meanwhile, GAP(GRANs) are not too sensitive to swapping frequencies above 0.1. Figure~\ref{fig:gap_lsun_learning_curve} in Appendix \ref{app:supporting} plots learning curves at different swapping frequencies. Across all rates, we still see that the spread between the training and validation costs decreases with the number of GANs trained in parallel. {\em Q: Does GAP($\cdot$) improve the quality of generative models?} We used GAM-II to evaluate GAP (see Appendix \ref{sec:GAM2}). We first looked at the performance over four models: DCGAN, GRAN, GAP$_{D4}$, and GAP$_{G4}$. We also considered combining multiple GAN-variants in a GAP model (hybrid GAP). We denote this model as GAP$_{C4}$. % and GAP$_{F4}$ GAP$_{C4}$ consists of two DCGANs and two GRANs trained with GAP. % from scratch. Overall, we have ten generators and ten discriminators for DCGAN and GRAN: four discriminators from the individually-trained models, and four discriminators from GAP, and two discriminators from the GAP combination, GAP$_{C4}$. We used the collection of all ten discriminators to evaluate the generators. Table~\ref{tab:GAM2} presents the results. Note that we report the minimum and maximum of average and worst error rates among four GANs. Looking at the average errors, GAP$_{D4}$ strongly outperforms DCGAN on all datasets. GAP$_{G4}$ outperforms GRAN on CIFAR-10 and MNIST and strongly outperforms it on LSUN. For the case of the maximum worst-case error, GAP outperforms both DCGAN and GRAN across all datasets. However, we did not find an improvement on GAP$_{C4}$ based on the GAM-II metric. Additionally, we estimated the log-likelihood assigned by each model based on a recently proposed evaluation scheme that uses Annealed Importance Sampling \citep{Wu2016}. With the code provided by \citep{Wu2016}, we were able to evaluate DCGANs trained by GAP$_{D4}$ and GAP$_{comb}$\footnote{Unfortunately, we did not get the code provided by \citep{Wu2016} to work on GRAN.}. The results are shown in Table~\ref{tab:ais_eval}. Again, these results show that GAP$_{D4}$ improves on DCGAN's performance, but there is no advantage using combined GAP$_{C4}$. 
Samples from each CIFAR-10 and LSUN model for visual inspection are reproduced in Figures~\ref{fig:gap_cifar10_samples}, \ref{fig:gap_dcgan_lsun10_samples}, \ref{fig:gap_gran_lsun10_samples}, and \ref{fig:comb_gap_cifar10_samples}. \section{Supplementary Material for Generative Adversarial Parallelization} \subsection{Generative Adversarial Metric II} \label{sec:GAM2} Although GAM is a valid metric as it measures {\em the likelihood ratio} of two generative models, it is hard to apply in practice. This is due to having a test ratio constraint, which imposes the condition that the ratio between test error rates be approximately unity. However, because GAP improves the generalization of GANs as shown in Figure~\ref{fig:gap_cifar10_learning_curve}, the test ratio often does not equal one (see Section \ref{sec:experiments}). We introduce a new generative adversarial metric, and call it GAM-II. GAM-II evaluates a model based on either the average error rate or worst error rate of a collection of discriminators given a set of samples from each model to be evaluated: \begin{align} \argmax_{\lbrace G_j | S_j \sim p_{\mathcal{G}_j} \rbrace} \hat\epsilon(S_j) &= \argmax_{\lbrace G_j | S_j \sim p_{\mathcal{G}_j} \rbrace} \frac{1}{N_j} \sum^{N_j}_{i=1} \epsilon(S_j|D_{i}),\\ \argmax_{\lbrace G_j | S_j \sim p_{\mathcal{G}_j} \rbrace} \bar \epsilon(S_j) &= \argmax_{\lbrace G_j | S_j \sim p_{\mathcal{G}_j} \rbrace} \min_{i=1\cdots N_j} \epsilon(S_j|D_{i}) \end{align} where $\epsilon$ outputs the classification error rate, and $N_j$ is all discriminators except for the ones that the generator $j$ saw during training. For example, the comparison of DCGAN and GAP applied to four DCGANs is shown in Figure~\ref{fig:gamII}. \begin{definition} We say that GAP helps if at least one of the models trained with GAP performs better than a single model. Moreover, GAP strongly-helps if all models trained with GAP perform better than a single model. \end{definition} In our experiments, we assess GAP based on the definition above. \subsection{Experiments with t-SNE} In order to get a qualitative sense of models trained using a high dimensional dataset, we consider a t-SNE map of generated samples overlaid on top of the true data. Normally, a t-SNE map is used to visualize clusters of embedded high-dimensional data. Here, we are more interested in the overlap between true data and generated samples by visualizing clusters which we interpret as modes of the data generating distribution. Figure~\ref{fig:gap_tsne_cifar10} and \ref{fig:gap_tsne_lsun} present the t-SNE map of data and samples from single- and multiple- trained GANs under GAP. We find that the intersection of data points and samples generated by GAP is slightly better than samples generated by individual GANs. This provides an incomplete view but is nevertheless a helpful visualization. \label{App:t-SNE} \pagebreak \subsection{Supporting Figures} \label{app:supporting} The supporting figures as in Figure~\ref{fig:gap_cifar10_learning_curve} is presented for LSUN dataset in Figure~\ref{fig:gap_lsun_learning_curve}. There are total of four plots with different swapping frequencies. \pagebreak Figure~\ref{fig:lc_swap_rate_single} presents an instance of an individual learning curve in the case when multiple GANs are trained under GAP. The difference from from Figure~\ref{fig:gap_cifar10_learning_curve} and Figure~\ref{fig:gap_lsun_learning_curve} is that GAP curves are represented by the learning curve of a single GAN within GAP rather than an average. 
Fortunately, the behaviour remains the same, where the spread between training and validation cost decreases as parallelization scales up (i.e.~more models in a GAP). \pagebreak We observed the distribution of class predictions on samples from each model in order to check how closely they match the training set distribution (which is uniform for MNIST). We trained a simple logistic regression on MNIST that resulted in a $\simeq$99\% test accuracy rate. The histogram of the predicted classes is provided in Figure~\ref{fig:hist_mnist_dist}. We looked at the exponentiated expected KL divergence between the predicted distribution and the (uniform) prior distribution, also known as the ``Inception Score'' \citep{Salimans2016}. The results are shown in Table~\ref{tab:inception_score}. \vspace{-10cm} \pagebreak \subsubsection{Fine-tuning GANs using GAP} We also tried fine-tuning individually-trained GANs using GAP, which we denote as GAP$_{F4}$. GAP$_{F4}$ consists of two trained DCGANs and two trained GRANs. They are then fine-tuned using GAP for five epochs. Samples from the fine-tuned models are shown in Figure~\ref{fig:fcomb_gap_cifar10_samples}. \pagebreak \section{Background} The concept of a {\em two player zero-sum game} is borrowed from {\em game theory} in order to train a generative adversarial network \citep{Goodfellow2014}. A GAN consists of a generator $G$ and discriminator $D$, both parameterized as feed-forward neural networks. The goal of the generator is to generate samples that fool the discriminator from thinking that those samples are from the data distribution $p(\bm x)$, \emph{ad interim} the discriminative network's goal is to not get tricked by the generator. This view is formalized into a {\em minimax objective} such that the discriminator maximizes the expectation of its predictions while the generator minimizes the expectation of the discriminator's predictions, \begin{equation} \min_{\bm \theta_G} \max_{\bm \theta_D} V(D,G) = \min_{\bm \theta_G} \max_{\bm \theta_D} \Big[ \mathbb{E}_{\bm x \sim p_{\mathcal{D}}}\big[\log D(\bm x)\big] + \mathbb{E}_{\bm z \sim p_{\mathcal{G}}}\big[\log \big(1-D(G(\bm z))\big)\big] \Big]. \label{eqn:gan_obj} \end{equation} where $\bm \theta_G$ and $\bm \theta_D$ are the parameters (weights) of the neural networks, $p_{\mathcal{D}}$ is the data distribution, and $p_{\mathcal{G}}$ is the prior distribution of the generative network. Proposition 2 in \citep{Goodfellow2014} illustrates the ideal concept of the solution. For two player game, each network's gain of the utility (loss of the cost) ought to balance out the gain (loss) of the other network. In this scenario, the generator's distribution becomes the data distribution. Remark that when the objective function is convex, gradient-based training is guaranteed to converge to a saddle point. \subsection{Empirical observations} \label{sec:emp} The reality of training GANs is quite different from the ideal case due to the following reasons: \begin{enumerate}[leftmargin=*] \item The discriminative and generative networks are bounded by a finite number of parameters, which limits their modeling capacity. \item Practically speaking, the second term of the objective function in Equation~\ref{eqn:gan_obj} is a bottleneck early on in training, where the discriminator can perfectly distinguish the noisy samples coming from the generator. The argument of the log saturates and gradient will not flow to the generator. 
\item The GAN objective function is known to be non-convex and it is defined over a high-dimensional space. This often results in failure of gradient-based training to converge. \end{enumerate} The first issue comes from the nature of the modelling problem. Nevertheless, due to the expressiveness of deep neural networks, they have been shown empirically to be capable of generating natural images \citep{Radford2015,Im2016gran} by adopting parameter-efficient convolutional architectures. The second issue is typically addressed by inverting the generator's minimization into the maximization formulation in Equation~\ref{eqn:gan_obj} accordingly, \begin{equation} \min_{\bm \theta_G} \log (1-D(G(\bm z))) \rightarrow \max_{\bm \theta_G} \log (D(G(\bm z))). \end{equation} This provides better gradient flow in the earlier stages of training \citep{Goodfellow2014}. Although there have been cascades of success in image generation tasks using advanced GANs \citep{Radford2015, Im2016gran, Salimans2016}, all of them mention the problem of difficulty in training. For example, \citet{Radford2015} state that {\em the generator ... collapsing all samples to a single point ... is a common failure mode observed in GANs}. This scenario can occur when the generator allocates most of its probability mass to a single sample that the discriminator has difficulty learning. Empirically, convergence of the learning curve does not correspond to improved quality of samples coming from the GAN and vice-versa. %(see Figure~\ref{?}). This is primarily caused by the third issue mentioned above. Gradient-based optimization methods are only guaranteed to converge to a Nash Equilibrium for convex functions, whereas the loss surface of the neural networks used in GANs are highly non-convex and there is no guarantee that a Nash Equilibrium even exists. \iffalse \subsection{Generative Adversarial Metric} The subject of generative modeling with GANs has undergone intensive study, and model evaluation between various types of GANs is topic of increased interest and debate \citep{Theis2015-sm}. Our work is inspired by the Generative Adversarial Metric \citep{Im2016gran}, which directly compares two models $M1:=(G1,D1)$ and $M2:=(G2,D2)$ by having them exchange their discriminators and engage in a “battle” against each other (see the pictorial example in Figure~\ref{fig:gam}). The GAM consists of two ratios, \emph{sample} and \emph{test}: \begin{align} r_{sample} \eqdef \frac{\epsilon \big(D_1(G_2(\bz)) \big)}{\epsilon\big(D_2(G_1(\bz)) \big)} \text{ and } r_{test} \eqdef \frac{\hat\epsilon \big(D_1(X_{test})\big)}{\hat\epsilon\big(D_2(X_{test})\big)} ,\ \end{align} where $\epsilon(\cdot)$ outputs the classification error rate made by the respective discriminator and $\hat\epsilon(\cdot)$ is the average error rate with respect to the test examples $X_{test}$. The sample ratio measures which of the GANs can fool the other GAN more. As well, imposing the condition of the test ratio being approximately 1, calibrates the models such that the sample ratio is fair. 
Formally, the GAM determines a winner between the two models by computing: \begin{equation} \text{winner} = \begin{cases} $M1$ & \text{if } r_{sample} < 1 \; \; \mathrm{and} \; \; r_{test}\simeq 1\\ $M2$ & \text{if } r_{sample} > 1 \; \; \mathrm{and} \; \; r_{test}\simeq 1\\ \text{Not Applicable} & \text{otherwise } .\ \end{cases} \label{eqn:sample_ratio_metric} \end{equation} \fi \section{Parallelizing Generative Adversarial Networks} \begin{wrapfigure}{r}{0.5\textwidth} \subsection{Mode coverage} The kind of {\em overfitting problem} mentioned above further relates to the problem of assigning probability mass to different modes of the data generating distribution -- what we call \emph{mode coverage}. Let us re-consider the example introduced in Section~\ref{sec:emp}. Say, the generator was able to figure out a single mode from which samples are drawn that confuse the discriminator. As long as the discriminator does not learn to fix this problem, the generator is not motivated to consider any other modes. This kind of scenario allows the generator to cheat by staying within a single, or small set of modes rather than exploring alternatives. The story is not exactly the same when there are several different discriminators interacting with each generator. Since different discriminators may be good at distinguishing samples from different modes, each generator must put some effort into fooling all of the discriminators by generating samples from different modes. The situation where samples from a single mode fool all of the discriminators grows much less likely as the number and diversity of discriminators and generators increases (see Figure~\ref{fig:syn_gen} and \ref{fig:mog_gen}). Full details of this visualization are provided in Section \ref{exp_setup}.
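For reference, the GAM-II quantities behind Table 3 reduce to simple averages and worst cases of discriminator error rates over a held-out pool. A minimal NumPy sketch is given below, where the error rates themselves are assumed to have been computed elsewhere and the function names are illustrative.
\begin{verbatim}
import numpy as np

def gam2_scores(error_rates):
    """GAM-II summary for one generator.

    error_rates: classification error rates epsilon(S_j | D_i), one per
    discriminator D_i that the generator never saw during training.
    Higher is better: a high error rate means D_i was fooled by the samples.
    """
    error_rates = np.asarray(error_rates, dtype=float)
    return error_rates.mean(), error_rates.min()   # (average, worst case)

# Toy usage: four generators scored against six external discriminators.
errors = np.random.rand(4, 6) * 0.5
scores = [gam2_scores(row) for row in errors]
best_by_avg = int(np.argmax([avg for avg, _ in scores]))
best_by_worst = int(np.argmax([worst for _, worst in scores]))
\end{verbatim}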
Maximum Entropy Flow Networks
1701.03504
Table 1: Quantitative measure of image diversity using 20 randomly sampled images
[ "Method", "[ITALIC] dL2", "SST", "SSW", "SSB" ]
[ [ "Texture net", "11534", "128680", "109577", "19103" ], [ "MEFN", "17014", "175604", "161639", "13964" ] ]
We compute in Table 1 the average pairwise Euclidean distance between randomly sampled images ($d_{L^2} = \mathrm{mean}_{i \neq j} \|z_i - z_j\|_2^2$), and MEFN gives a higher $d_{L^2}$, quantifying diversity across images. We also consider an ANOVA-style analysis to measure the diversity of the images, where we think of the RGB values for the same pixel across multiple images as a group, and compute the within- and between-group variance. Specifically, denote $z_i^k$ as the value of pixel $k = 1, \dots, d$ in image $i = 1, \dots, n$. We partition the total sum of squares $\mathrm{SST} = \sum_{i,k} (z_i^k - \bar{z})^2$ into the within-group error $\mathrm{SSW} = \sum_{i,k} (z_i^k - \bar{z}^k)^2$ and the between-group error $\mathrm{SSB} = \sum_{k} n (\bar{z}^k - \bar{z})^2$, where $\bar{z}$ and $\bar{z}^k$ are the mean pixel values across all data and for a specific pixel $k$, respectively. Ideally we want the samples to exhibit large variability across images (large SSW, within a group/pixel) and no structure in the mean image (small SSB, across groups/pixels). Indeed, the MEFN has a larger SSW, implying higher variability around the mean image, a smaller SSB, implying the stationarity of the generated samples, and a larger SST, implying larger total variability as well. The MEFN produces images that are conclusively more variable without sacrificing the quality of the texture, underscoring the broad utility of ME.
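As a sanity check of the analysis above, a small numpy sketch of the SST/SSW/SSB partition follows (illustrative only, with a random placeholder array standing in for real sampled images; this is not the evaluation code used for Table 1).
\begin{verbatim}
# Hedged numpy sketch of the ANOVA-style diversity measure described above.
# `images` stands in for n flattened RGB samples from a generative model;
# real image tensors would be flattened to shape (n, d) first.
import numpy as np

def anova_diversity(images):
    """images: array of shape (n, d); returns (SST, SSW, SSB)."""
    grand_mean = images.mean()                        # z-bar
    pixel_mean = images.mean(axis=0)                  # z-bar^k, one per pixel
    sst = ((images - grand_mean) ** 2).sum()
    ssw = ((images - pixel_mean) ** 2).sum()          # variability across images
    n = images.shape[0]
    ssb = (n * (pixel_mean - grand_mean) ** 2).sum()  # structure in the mean image
    return sst, ssw, ssb

rng = np.random.default_rng(0)
fake_images = rng.uniform(size=(20, 224 * 224 * 3))   # placeholder samples
sst, ssw, ssb = anova_diversity(fake_images)
print(sst, ssw, ssb, np.isclose(sst, ssw + ssb))      # partition: SST = SSW + SSB
\end{verbatim}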
\documentclass{article} % For LaTeX2e % http://ctan.org/pkg/algorithms % http://ctan.org/pkg/algorithmicx \title{Maximum Entropy Flow Networks} \author{Gabriel Loaiza-Ganem\thanks{These authors contributed equally.} , Yuanjun Gao$^\ast$ \& John P. Cunningham \\ Department of Statistics\\ Columbia University\\ New York, NY 10027, USA \\ \texttt{\{gl2480,yg2312,jpc2181\}@columbia.edu} } \newcommand{\var}[1]{{\ttfamily#1}}% variable \newcommand{\fix}{\marginpar{FIX}} \newcommand{\new}{\marginpar{NEW}} \DeclareMathOperator*{\argmax}{argmax} \DeclareMathOperator*{\argmin}{argmin} \DeclareMathOperator*{\supp}{supp} \DeclareMathOperator*{\tr}{tr} \newcommand{\ms}{\scriptscriptstyle} \iclrfinalcopy % Uncomment for camera-ready version \input{texdefs} \begin{document} \maketitle \begin{abstract} Maximum entropy modeling is a flexible and popular framework for formulating statistical models given partial knowledge. In this paper, rather than the traditional method of optimizing over the continuous density directly, we learn a smooth and invertible transformation that maps a simple distribution to the desired maximum entropy distribution. Doing so is nontrivial in that the objective being maximized (entropy) is a function of the density itself. By exploiting recent developments in normalizing flow networks, we cast the maximum entropy problem into a finite-dimensional constrained optimization, and solve the problem by combining stochastic optimization with the augmented Lagrangian method. Simulation results demonstrate the effectiveness of our method, and applications to finance and computer vision show the flexibility and accuracy of using maximum entropy flow networks. \end{abstract} \section{Introduction} The maximum entropy (ME) principle \citep{jaynes1957information} states that subject to some given prior knowledge, typically some given list of moment constraints, the distribution that makes minimal additional assumptions -- and is therefore appropriate for a range of applications from hypothesis testing to price forecasting to texture synthesis -- is that which has the largest entropy of any distribution obeying those constraints. First introduced in statistical mechanics by \cite{jaynes1957information}, and considered both celebrated and controversial, ME has been extensively applied in areas including natural language processing \citep{berger1996maximum}, ecology \citep{phillips2006maximum}, finance \citep{buchen1996maximum}, computer vision \citep{zhu1998filters}, and many more. Continuous ME modeling problems typically include certain expectation constraints, and are usually solved by introducing Lagrange multipliers, which under typical assumptions yields an exponential family distribution (also called Gibbs distribution) with natural parameters such that the expectation constraints are obeyed. Unfortunately, fitting ME distributions in even modest dimensions poses significant challenges. First, optimizing the Lagrangian for a Gibbs distribution requires evaluating the normalizing constant, which is in general computationally very costly and error prone. Secondly, in all but the rarest cases, there is no way to draw samples independently and identically from this Gibbs distribution, even if one could derive it. 
Third, unlike in the discrete case where a number of recent and exciting works have addressed the problem of estimating entropy from discrete-valued data \citep{jiao2015minimax, valiant2013estimating}, estimating differential entropy from data samples remains inefficient and typically biased. These shortcomings are critical and costly, given the common use of ME distributions for generating reference data samples for a null distribution of a test statistic. There is thus ample need for a method that can both solve the ME problem and produce a solution that is easy and fast to sample. In this paper we develop maximum entropy flow networks (MEFN), a stochastic-optimization-based framework and algorithm for fitting continuous maximum entropy models. Two key steps are required. First, conceptually, we replace the idea of maximizing entropy over a density directly with maximizing, over the parameter space of an indexed function family, the entropy of the density induced by mapping a simple distribution (a Gaussian) through that optimized function. Modern neural networks, particularly in variational inference \citep{kingma2013auto, rezende2015variational}, have successfully employed this same idea to generate complex distributions, and we look to similar technologies. Secondly, unlike most other objectives in this network literature, the entropy objective itself requires evaluation of the target density directly, which is unavailable in most traditional architectures. We overcome this potential issue by learning a smooth, invertible transformation that maps a simple distribution to an (approximate) ME distribution. Recent developments in normalizing flows \citep{rezende2015variational, dinh2016density} allow us to avoid biased and computationally inefficient estimators of differential entropy (such as the nearest-neighbor class of estimators like that of Kozachenko-Leonenko; see \cite{berrett2016efficient}). Our approach avoids calculation of normalizing constants by learning a map with an easy-to-compute Jacobian, yielding tractable probability density computation. The resulting transformation also allows us to reliably generate iid samples from the learned ME distribution. We demonstrate MEFN in detail in examples where we can access ground truth, and then we demonstrate further the ability of MEFN networks in equity option prices fitting and texture synthesis. Primary contributions of this work include: \emph{(i)} addressing the substantial need for methods to sample ME distributions; \emph{(ii)} introducing ME problems, and the value of including entropy in a range of generative modeling problems, to the deep learning community; \emph{(iii)} the novel use of \emph{constrained} optimization for a deep learning application; and \emph{(iv)} the application of MEFN to option pricing and texture synthesis, where in the latter we show significant increase in the diversity of synthesized textures (over current state of the art) by using MEFN. \section{Background} \subsection{Maximum entropy modeling and Gibbs distribution} We consider a continuous random variable $\mZ \in \mathcal{Z}\subseteq \mathbb{R}^d$ with density $p$, where $p$ has differential entropy $H(p)=- \int p(\vz)\log p(\vz) d\vz$ and support $\supp(p)$. 
The goal of ME modeling is to find, and then be able to easily sample from, the maximum entropy distribution given a set of moment and support constraints, namely the solution to: \begin{eqnarray}\label{origprom} p^{*} &=& \text{maximize}~~~ H(p) \\ && \text{subject to} ~~~E_{\mZ\sim p}[T(\mZ)]=0 \nonumber\\ && ~~~~~~~~~~~~~~~~~~\supp(p)=\mathcal{Z} \nonumber, \end{eqnarray} where $T(\vz) = (T_1(\vz),...,T_m(\vz)):\mathcal{Z}\rightarrow \mathbb{R}^{m}$ is the vector of known (assumed sufficient) statistics, and $\mathcal{Z}$ is the given support of the distribution. Under standard regularity conditions, the optimization problem can be solved by Lagrange multipliers, yielding an exponential family $p^*$ of the form: \begin{equation}\label{curmet} p^*(\vz) \propto e^{\eta^\top T(\vz)} \mathbbm{1}(\vz \in \mathcal{Z}) \end{equation} where $\eta \in \mathbb{R}^m$ is the choice of natural parameters of $p^*$ such that $E_{p^*}[T(\mZ)]=0$. Despite this simple form, these distributions are only in rare cases tractable from the standpoint of calculating $\eta$, calculating the normalizing constant of $p^*$, and sampling from the resulting distribution. There is extensive literature on finding $\eta$ numerically \citep{darroch1972gicllm, salakhutdinov2003conopt, dellapietra1997ifrf, dudik2004maxentdenest, malouf2002maxentcomp,collins2002lrabbd}, but doing so requires computing normalizing constants, which poses a challenge even for problems with modest dimensions. Also, even if $\eta$ is correctly found, it is still not trivial to sample from $p^*$. Problem-specific sampling methods (such as importance sampling, MCMC, etc.) have to be designed and used, which is in general challenging (burn-in, mixing time, etc.) and computationally burdensome. \subsection{Normalizing flows} Following \cite{rezende2015variational}, we define a \textit{normalizing flow} as the transformation of a probability density through a sequence of invertible mappings. Normalizing flows provide an elegant way of generating a complicated distribution while maintaining tractable density evaluation. Starting with a simple distribution $\mZ_0 \in \mathbb{R}^d \sim p_0$ (usually taken to be a standard multivariate Gaussian), and by applying $k$ invertible and smooth functions $f_i: \mathbb{R}^d \rightarrow \mathbb{R}^d (i=1,...,k)$, the resulting variable $\mZ_{k} = f_{k}\circ f_{k-1}\circ\dots \circ f_{1}(\mZ_{0})$ has density: \begin{equation} p_{k}(\vz_{k}) = p_{\ms 0}(f_{1}^{-1}\circ f_{2}^{-1}\circ\dots\circ f_{k}^{-1}(\vz_{k}))\displaystyle \prod_{i=1}^{k}|\det(J_{i}(\vz_{i-1}))|^{-1}, \end{equation} where $J_i$ is the Jacobian of $f_i$. If the determinant of $J_{i}$ can be easily computed, $p_{k}$ can be computed efficiently. \cite{rezende2015variational} proposed two specific families of transformations for variational inference, namely planar flows and radial flows, respectively: \begin{equation} f_i(\vz) = \vz + \vu_i h(\vw_i^{T} \vz+ b_i)~~~~~~~~~~\text{and}~~~~~~~~~~f_i(\vz) = \vz + \beta_i h(\alpha_i, r_i)(\vz - \vz'_i), \end{equation} where $b_i \in \mathbb{R}$, $\vu_i, \vw_i \in \mathbb{R}^d$ and $h$ is an activation function in the planar case, and where $\beta_i \in \mathbb{R}$, $\alpha_i >0$, $\vz'_i \in \mathbb{R}^d$ , $h(\alpha,r)=1/(\alpha+r)$ and $r_i = ||\vz-\vz'_i||$ in the radial. %For a certain domain of the parameters, the transformations are invertible and the determinant of the Jacobian can be easily computed with the matrix determinant lemma. 
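To illustrate how such flows keep the density tractable, the following is a minimal numpy sketch (not our implementation; the parameter values are arbitrary, and invertibility of the planar map, which calls for $\vw^\top \vu \geq -1$ when $h=\tanh$, is not enforced) of a single planar-flow layer together with the log-density of the transformed samples obtained from the change-of-variables formula.
\begin{verbatim}
# Minimal sketch of one planar flow layer and the resulting log-density.
import numpy as np

def planar_flow(z, u, w, b):
    """f(z) = z + u * tanh(w.z + b); returns f(z) and log|det Jacobian|."""
    a = z @ w + b                                  # (n,)
    f = z + np.outer(np.tanh(a), u)                # (n, d)
    psi = (1.0 - np.tanh(a) ** 2)[:, None] * w     # h'(a) * w, shape (n, d)
    logdet = np.log(np.abs(1.0 + psi @ u))         # matrix determinant lemma
    return f, logdet

rng = np.random.default_rng(0)
d, n = 2, 5
z0 = rng.standard_normal((n, d))                   # samples from p_0 = N(0, I)
log_p0 = -0.5 * (z0 ** 2).sum(axis=1) - 0.5 * d * np.log(2 * np.pi)

u, w, b = rng.standard_normal(d), rng.standard_normal(d), 0.1
zk, logdet = planar_flow(z0, u, w, b)
log_pk = log_p0 - logdet                           # log-density of transformed samples
print(log_pk)
\end{verbatim}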
%\cite{rezende2015variational} illustrates that incorporating normalizing flows in variational inference framework improves the performance. Recently \cite{dinh2016density} proposed a normalizing flow with convolutional, multiscale structure that is suitable for image modeling and has shown promise in density estimation for natural images. \section{Maximum entropy flow network (MEFN) algorithm} \subsection{Formulation} Instead of solving Equation \ref{curmet}, we propose solving Equation \ref{origprom} directly by optimizing a transformation that maps a random variable $\mZ_0$, with simple distribution $p_0$, to the ME distribution. Given a parametric family of normalizing flows $\mathcal{F}=\{f_\phi, \phi \in \mathbb{R}^q\}$, we denote $p_\phi(\vz) = p_0( f_\phi^{-1}(\vz) ) | \det (J_\phi (\vz)) |^{-1}$ as the distribution of the variable $f_\phi(\mZ_0)$, where $J_\phi$ is the Jacobian of $f_\phi$. We then rewrite the ME problem as: \begin{eqnarray}\label{optobj} \phi^{*} &=& \text{maximize}~~~ H(p_\phi) \\ && \text{subject to} ~~~E_{\mZ_0\sim p_0}[T(f_\phi(\mZ_{0}))]=0 \nonumber\\ && ~~~~~~~~~~~~~~~~~~\supp(p_\phi)=\mathcal{Z}. \nonumber \end{eqnarray} When $p_0$ is continuous and $\mathcal{F}$ is suitably general, the program in Equation \ref{optobj} recovers the ME distribution $p_\phi$ exactly. With a flexible transformation family, the ME distribution can be well approximated. In experiments we found that taking $p_0$ to be a standard multivariate normal distribution achieves good empirical performance. Taking $p_0$ to be a bounded distribution (e.g. uniform distribution) is problematic for learning transformations near the boundary, and heavy tailed distributions (e.g. Cauchy distribution) caused similar trouble due to large numbers of outliers. %For all the experiments we take $p_0$ to be a multivariate standard normal distribution. \subsection{Algorithm} \label{sec:c_update} We solved Equation \ref{optobj} using the augmented Lagrangian method. Denote $R(\phi) = E(T(f_\phi( \mZ_0)))$, the augmented Lagrangian method uses the following objective: \begin{equation}\label{auglag} L(\phi; \lambda, c) = - H(p_\phi) + \lambda^{\top} R(\phi) + \dfrac{c}{2}||R(\phi)||^2 \end{equation} where $\lambda \in \mathbb{R}^m$ is the Lagrange multiplier and $c>0$ is the penalty coefficient. We minimize Equation $\ref{auglag}$ for a non-decreasing sequence of $c$ and well-chosen $\lambda$. As a technical note, the augmented Lagrangian method is guaranteed to converge under some regularity conditions \citep{bertsekas1996}. As is usual in neural networks, a proof of these conditions is challenging and not yet available, though intuitive arguments (see Appendix \S \ref{sec:augLcond}) suggest that most of them should hold. Due to the non rigorous nature of these arguments, we rely on the empirical results of the algorithm to claim that it is indeed solving the optimization problem. For a fixed $(\lambda, c)$ pair, we optimize $L$ with stochastic gradient descent. Owing to our choice of network and the resulting ability to efficiently calculate the density $p_\phi(\vz^{(i)})$ for any sample point $\vz^{(i)}$ (which are easy-to-sample iid draws from the multivariate normal $p_0$), we compute the unbiased estimator of $H(p_\phi)$ with: \begin{equation} H(p_\phi) \approx -\frac{1}{n} \sum_{i=1}^n \log p_\phi ( f_\phi(\vz^{(i)}) ) \end{equation} $R(\phi)$ can also be estimated without bias by taking a sample average of $\vz^{(i)}$ draws. 
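For concreteness, a hedged Python sketch of the Monte Carlo estimate of the objective in Equation \ref{auglag} follows; the arrays below are placeholders standing in for the flow's log-densities $\log p_\phi(f_\phi(\vz^{(i)}))$ and statistics $T(f_\phi(\vz^{(i)}))$, which in the actual method come from the normalizing flow network. Note that Algorithm \ref{alg:maxent} additionally uses a sample-splitting trick so that the gradient of the quadratic penalty is unbiased; the sketch only evaluates the objective itself.
\begin{verbatim}
# Hedged sketch of the Monte Carlo augmented Lagrangian objective.
import numpy as np

def augmented_lagrangian(log_p_vals, T_vals, lam, c):
    """-H(p_phi) + lam^T R(phi) + (c/2) ||R(phi)||^2, estimated from samples.

    log_p_vals : (n,) values of log p_phi(f_phi(z_i)), so -H ~= mean(log_p_vals)
    T_vals     : (n, m) statistics T(f_phi(z_i)), so R ~= mean over samples
    """
    neg_entropy = log_p_vals.mean()
    R = T_vals.mean(axis=0)
    return neg_entropy + lam @ R + 0.5 * c * (R ** 2).sum()

# Toy usage with made-up values standing in for network outputs.
rng = np.random.default_rng(0)
log_p_vals = rng.standard_normal(300)          # placeholder log p_phi values
T_vals = rng.standard_normal((300, 4))         # placeholder constraint statistics
print(augmented_lagrangian(log_p_vals, T_vals, lam=np.zeros(4), c=1.0))
\end{verbatim}
In practice this quantity would be handed to a stochastic gradient optimizer such as ADADELTA, as in Algorithm \ref{alg:maxent}.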
The resulting optimization procedure is detailed in Algorithm \ref{alg:maxent}, of which step 9 requires some detail: denoting $\phi_k$ as the resulting $\phi$ after $i_{max}$ SGD iterations at the augmented Lagrangian iteration $k$, the usual update rule for $c$ \citep{bertsekas1996} is: \begin{equation}\label{cupdates} c_{k+1}=\begin{cases} \beta c_{k}\text{, if }||R(\phi_{k+1})||>\gamma ||R(\phi_{k})||\\ c_{k}\text{, otherwise} \end{cases} \end{equation} where $\gamma\in (0,1)$ and $\beta>1$. Monte Carlo estimation of $R(\phi)$ sometimes caused $c$ to be updated too fast, causing numerical issues. Accordingly, we changed the hard update rule for $c$ to a probabilistic update rule: a hypothesis test is carried out with null hypothesis $H_{0} : E[||R(\phi_{k+1})||]=E[\gamma||R(\phi_{k})||]$ and alternative hypothesis $H_{1} : E[||R(\phi_{k+1})||]>E[\gamma||R(\phi_{k})||]$. The $p$-value $p$ is computed, and $c_{k+1}$ is updated to $\beta c_{k}$ with probability $1-p$. We used a two-sample $t$-test to calculate the $p$-value. What results is a robust and novel algorithm for estimating maximum entropy distributions, while preserving the critical properties of being both easy to calculate densities of particular points, and being trivially able to produce truly iid samples. \begin{algorithm}[t] \caption{Training the MEFN} \begin{algorithmic}[1] \State initialize $\phi = \phi_0$, set $c_0 > 0$ and $\lambda_0$. \For{Augmented Lagrangian iteration $k = 1,...,k_{\max}$} \For{SGD iteration $i = 1,...,i_{\max}$} \State{Sample $\vz^{(1)},...,\vz^{(n)} \sim p_0$, get transformed variables $\vz^{(i)}_\phi = f_\phi(\vz^{(i)}), i = 1,...,n$} \State{Update $\phi$ by descending its stochastic gradient (using e.g. ADADELTA \citep{zeiler2012adadelta}): {\scriptsize \[\nabla_\phi L(\phi; \lambda_k, c_k) \approx \frac{1}{n} \sum_{i=1}^n \nabla_\phi \log p_\phi ( \vz^{(i)}_\phi ) + \frac{1}{n} \sum_{i=1}^n \nabla_\phi T( \vz^{(i)}_\phi ) \lambda_k + c_k \frac{2}{n} \sum_{i=1}^{\frac{n}{2}} \nabla_\phi T( \vz^{(i)}_\phi ) \cdot \frac{2}{n}\sum_{i=\frac{n}{2}+1}^n T( \vz^{(i)}_\phi ) \]}} \EndFor \State Sample $\vz^{(1)},...,\vz^{(\tilde{n})} \sim p_0$, get transformed variables $\vz^{(i)}_\phi = f_\phi(\vz^{(i)}), i = 1,...,\tilde{n}$ \State Update $\lambda_{k+1} = \lambda_k + c_k \frac{1}{\tilde{n}} \sum_{i=1}^{\tilde{n}} T( \vz^{(i)}_\phi )$ \State Update $c_{k+1}\geq c_{k}$ (see text for detail) \EndFor \end{algorithmic} \label{alg:maxent} \end{algorithm} \section{Experiments} We first construct an ME problem with a known solution (\S \ref{sec:simulation}), and we analyze the MEFN algorithm with respect to the ground truth and to an approximate Gibbs solution. These examples test the validity of our algorithm and illustrate its performance. \S \ref{sec:real} and \S \ref{sec:texture} then applies the MEFN to a financial data application (predicting equity option values) and texture synthesis, respectively, to illustrate the flexibility and practicality of our algorithm. For \S \ref{sec:simulation} and \S \ref{sec:real}, We use 10 layers of planar flow with a final transformation $g$ (specified below) that transforms samples to the specified support, and use with ADADELTA \citep{zeiler2012adadelta}. For \S \ref{sec:texture} we use real NVP structure and use ADAM \citep{kingma2014adam} with learning rate = $0.001$. For all our experiments, we use $i_{max}=3000$, $\beta=4$, $\gamma=0.25$. 
For \S \ref{sec:simulation} and \S \ref{sec:real} we use $n=300$, $\tilde{n}=1000$, $k_{\max}=10$; For \S \ref{sec:texture} we use $n=\tilde{n}=2$, $k_{\max}=8$. % to give good performance \subsection{A maximum entropy problem with known solution} \label{sec:simulation} Following the setup of the typical ME problem, suppose we are given a specified support $\mathcal{S}=\{\vz = (z_1,\dots,z_{d-1}):z_i\geq 0\text{ and }\sum_{k=1}^{d-1}z_k\leq 1\}$ and a set of constraints $E[\log Z_k] = \kappa_k (k = 1,...,d)$, where $Z_d = 1 - \sum_{k=1}^{d-1} Z_k$. We then write the maximum entropy program: \begin{eqnarray}\label{dirichletprob} p^{*} &=& \text{maximize}~~~ H(p) \\ && \text{subject to} ~~~E_{\mZ\sim p}[\log Z_k - \kappa_k]=0 ~~~ \forall k=1,...,d\nonumber\\ && ~~~~~~~~~~~~~~~~~~\supp(p)=\mathcal{S}\nonumber.%\left\{(z_1,\dots,z_{d-1}):z_k\geq 0\text{ and }\sum_{k=1}^{d-1}z_k\leq 1\right\} \nonumber. \end{eqnarray} This is a general ME problem that can be solved via the MEFN. Of course, we have particularly chosen this example because, though it may not obviously appear so, the solution has a standard and tractable form, namely the Dirichlet. This choice allows us to consider a complicated optimization program that happens to have known global optimum, providing a solid test bed for the MEFN (and for the Gibbs approach against which we will compare). Specifically, given a parameter $\alpha\in\mathbb{R}^d$, the Dirichlet has density: \begin{equation} p(z_1,\dots,z_{d-1})=\dfrac{1}{B(\alpha)}\prod_{k=1}^{d}z_k^{\alpha_k-1}\mathbbm{1}\left( (z_1,\dots,z_{d-1}) \in \mathcal{S} \right) \end{equation} where $B(\alpha)$ is the multivariate Beta function, and $z_d=1-\sum_{k=1}^{d-1}z_k$. Note that this Dirichlet is a distribution on $\mathcal{S}$ and not on the $(d-1)$-dimensional simplex $\mathcal{S}^{d-1}=\{(z_1,\dots,z_d):z_k\geq 0\text{ and }\sum_{k=1}^{d}z_k= 1\}$ (an often ignored and seemingly unimportant technicality that needs to be correct here to ensure the proper transformation of measure). Connecting this familiar distribution to the ME problem above, we simply have to choose $\alpha$ such that $\kappa_k = \psi(\alpha_k)-\psi(\alpha_0)$ for $k=1,...,d$, where $\alpha_0=\sum_{k=1}^{d}\alpha_k$ and $\psi$ is the digamma function. We then can pose the above ME problem to the MEFN and compare performance against ground truth. Before doing so, we must stipulate the transformation $g$ that maps the Euclidean space of the multivariate normal $p_0$ to the desired support $\mathcal{S}$. Any sensible choice will work well (another point of flexibility for the MEFN); we use the standard transformation: \begin{equation} g(z_1,...,z_{d-1}) = \left(\dfrac{e^{z_1}}{\sum_{k=1}^{d-1}e^{z_k}+1}, ..., \dfrac{e^{z_{d-1}}}{\sum_{k=1}^{d-1}e^{z_k}+1} \right)^\top \end{equation} Note that the MEFN outputs vectors in $\mathbb{R}^{d-1}$, and not $\mathbb{R}^d$, because the Dirichlet is specified as a distribution on $\mathcal{S}$ (and not on the simplex $\mathcal{S}^{d-1}$). Accordingly, the Jacobian is a square matrix and its determinant can be computed efficiently using the matrix determinant lemma. Here, $p_0$ is set to the $(d-1)$-dimensional standard normal.\\ We proceed as follows: We choose $\alpha$ and compute the constraints $\kappa_1,...,\kappa_{d}$. We run MEFN pretending we do not know $\alpha$ or the Dirichlet form. 
We then take a random sample from the fitted distribution and a random sample from the Dirichlet with parameter $\alpha$, and compare the two samples using the maximum mean discrepancy (MMD) kernel two sample test \cite[]{gretton2012kerneltest}, which assesses the fit quality. We take the sample size to be $300$ for the two sample kernel test. Figure 1 shows an example of the transformation from normal (left panel) to MEFN (middle panel), and comparing that to the ground truth Dirichlet (right panel). The MEFN and ground truth Dirichlet densities shown in purple match closely, and the samples drawn (red) indeed appear to be iid draws from the same (maximum entropy) distribution in both cases. Additionally, the middle panel of Figure 1 shows an important cautionary tale that foreshadows our texture synthesis results (\S \ref{sec:texture}). One might suppose that satisfying the moment matching constraints is adequate to produce a distribution which, if not technically the ME distribution, is still interestingly variable. The middle panel shows the failure of this intuition: in dark green, we show a network trained to simply match the moments specified above, and the resulting distribution quite poorly expresses the variability available to a distribution with these constraints, leading to samples that are needlessly similar. Given the substantial interest in using networks to learn implicit generative models (e.g., \cite{mohamed2016learning}), this concern is particularly relevant and highlights the importance of considering entropy. Figure 2 quantitatively analyzes these results. In the left panel, for a specific choice of $\alpha=(1,2,3)$, we show our unbiased entropy estimate of the MEFN distribution $p_\phi$ as a function of the number of SGD iterations (red), along with the ground truth maximum entropy $H(p^*)$ (green line). Note that the MEFN stabilizes at the correct value (as a stochastic estimator, variance around that value is expected). In the middle panel, we show the distribution of MMD values for the kernel two sample test, as well as the observed statistic for the MEFN (red) and for a randomly chosen Dirichlet distribution (gray; chosen to be close to the true optimum, making a conservative comparison). The MMD test does not reject MEFN as being different from the true ME distribution $p^*$, but it does reject a Dirichlet whose $KL$ to the true $p^*$ is small (see legend). In the right panel, for many different Dirichlets in a small grid around a single true $p^*$, the kernel two sample test statistic is computed, the MMD $p$-value is calculated, as is the $KL$ to the true distribution. We plot a scatter of these points in grey, and we plot the particular MEFN solution as a red star. We see that for other Dirichlets with similar $KL$ to the true distribution as the MEFN distribution, the $p$-values seem uniform, meaning that the $KL$ to the true is indeed very small. Again this is conservative, as the grey points have access to the known Dirichlet form, whereas the MEFN considered the entire space (within its network capacity) of $\mathcal{S}$ supported distributions. Given this fact, the performance of MEFN is impressive. \subsection{Risk-neutral asset pricing} \label{sec:real} We illustrate the flexibility and practicality of our algorithm extracting the risk-neutral asset price probability based on option prices, an active and interesting area for ME models. We find that MEFN and the classic Gibbs approach yield comparable performances. 
Owing to space limitations we have placed these results in Appendix \S \ref{sec:real}. \subsection{Modeling images of textures} \label{sec:texture} Constructing generative models that produce random images with a given texture structure is an important task in computer vision. A line of texture synthesis research proceeds by first extracting a set of features that characterizes the target texture and then generating images that match those features. The seminal work of \cite{zhu1998filters} proposes constructing texture models under the ME framework, where features (or filters) of the given texture image are adaptively added to the model and a Gibbs distribution whose expected features match the target texture is learnt. One major difficulty with the method is that both model learning and image generation involve sampling from a complicated Gibbs distribution. More recent works exploit more complicated features \citep{portilla2000parametric, gatys2015texture, ulyanov2016texture}. \cite{ulyanov2016texture} propose the \emph{texture net}, which defines a texture loss function using the Gram matrices of the outputs of some convolutional layers of a pre-trained deep neural network for object recognition. While the use of these complicated features does provide high-quality synthetic texture images, that work focuses exclusively on generating images that match these features (moments). Importantly, this network focuses only on generating feature-matching images without using the ME framework to promote the diversity of the samples. Doing so can be deeply problematic: in Figure 1 (middle panel), we showed the lack of diversity resulting from only moment matching in that Dirichlet setting, and further we note that the extreme pathology would result in a point mass on the training image -- a global optimum for this objective, but obviously a terrible generative model for synthesizing textures. Ideally, the MEFN will match the moments \emph{and} promote sample diversity. %Indeed, both evaluating the entropy and sampling from ME distribution are challenging in the high-dimensional image space. We applied MEFN to texture synthesis with an RGB representation of the $224 \times 224$ pixel images, $\vz \in \mathcal{Z} = [0,1]^{d}$, where $d=224\times 224 \times 3$. We follow \cite{ulyanov2016texture} (we adapted \url{https://github.com/ProofByConstruction/texture-networks}) to create a texture loss measure $T(\vz): [0,1]^{d} \rightarrow \mathbb{R}$, and aim to sample a diverse set of images with small moment violation. For the transformation family $\mathcal{F}$ we use the real NVP network structure proposed in \cite{dinh2016density} (we adapted \url{https://github.com/taesung89/real-nvp}). We use $3$ residual blocks with $32$ feature maps for each coupling layer and downscale $3$ times. For fair comparison, we use the same real NVP structure for both\footnote{\cite{ulyanov2016texture} use a quite different generative network structure, which is not invertible and is therefore infeasible for entropy evaluation, so we replace their generative network by the real NVP structure.}, implemented in TensorFlow \citep{abadi2016tensorflow}. As shown in the top row of Figure \ref{fig:texture}, both methods generate visually pleasing images that capture the texture structure well.
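To clarify what the texture statistics look like, the following is a small, hypothetical numpy sketch of the Gram-matrix features that drive such a texture loss; it uses random arrays in place of real convolutional feature maps from a pre-trained network and is not the adapted texture-network code referenced above.
\begin{verbatim}
# Hedged sketch of Gram-matrix texture statistics (illustrative only; the
# actual T(z) is built from feature maps of a pre-trained recognition network).
import numpy as np

def gram_matrix(feature_map):
    """feature_map: (height, width, channels) activations for one layer."""
    h, w, c = feature_map.shape
    flat = feature_map.reshape(h * w, c)
    return flat.T @ flat / (h * w)       # (c, c) channel co-activation matrix

def texture_cost(feature_map, target_feature_map):
    # Squared Frobenius distance between Gram matrices for a single layer;
    # a full texture loss sums this over several layers.
    g, g_target = gram_matrix(feature_map), gram_matrix(target_feature_map)
    return ((g - g_target) ** 2).sum()

rng = np.random.default_rng(0)
target = rng.standard_normal((56, 56, 32))  # stand-in for target-texture features
sample = rng.standard_normal((56, 56, 32))  # stand-in for generated-image features
print(texture_cost(sample, target))
\end{verbatim}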
The bottom row of Figure \ref{fig:texture} shows that the texture cost (left panel) is similar for both methods, while the MEFN samples have much higher entropy than those of the texture network formulation (middle panel), which is desirable (as previously discussed). The bottom right panel of Figure \ref{fig:texture} compares the marginal distribution of the RGB values sampled from the networks: we found that MEFN generates a more variable distribution of RGB values than the texture net. Further results are in Appendix \S \ref{sec:textapp}. We compute in Table \ref{tab:texture} the average pairwise Euclidean distance between randomly sampled images ($d_{L^2} = \text{mean}_{i \neq j} \|\vz_i - \vz_j \|_2^2$), and MEFN gives a higher $d_{L^2}$, quantifying diversity across images. We also consider an ANOVA-style analysis to measure the diversity of the images, where we think of the RGB values for the same pixel across multiple images as a group, and compute the within- and between-group variance. Specifically, denote $z_i^k$ as the value of pixel $k = 1,\dots,d$ in image $i = 1,\dots,n$. We partition the total sum of squares $\text{SST} = \sum_{i,k} (z_i^k - \bar{z})^2$ into the within-group error $\text{SSW} = \sum_{i,k} (z_i^k - \bar{z}^k)^2$ and the between-group error $\text{SSB} = \sum_{k} n (\bar{z}^k - \bar{z})^2$, where $\bar{z}$ and $\bar{z}^k$ are the mean pixel values across all data and for a specific pixel $k$, respectively. Ideally we want the samples to exhibit large variability across images (large SSW, within a group/pixel) and no structure in the mean image (small SSB, across groups/pixels). Indeed, the MEFN has a larger SSW, implying higher variability around the mean image, a smaller SSB, implying the stationarity of the generated samples, and a larger SST, implying larger total variability as well. The MEFN produces images that are conclusively more variable without sacrificing the quality of the texture, underscoring the broad utility of ME. \section{Conclusion} In this paper we propose a general framework for fitting ME models. This approach is novel and has three key features. First, by learning a transformation of a simple distribution rather than the distribution itself, we are able to avoid explicitly computing an intractable normalizing constant for the ME distribution. Second, by combining stochastic optimization with the augmented Lagrangian method, we can fit the model efficiently, allowing us to evaluate the ME density of any point simply and accurately. Third, critically, this construction allows us to trivially sample iid from an ME distribution, extending the utility and efficiency of the ME framework more generally. Also, achieving accuracy equivalent to the classic Gibbs approach is in itself a contribution, owing to these other features. We illustrate the MEFN in both a simulated case with known ground truth and real data examples. %case, where our method works well and successfully obtains the ME solution. There are a few recent works encouraging sample diversity in the setting of texture modeling. \cite{ulyanov2017improved} extended \cite{ulyanov2016texture} by adding a penalty term using the Kozachenko-Leonenko estimator \cite{kozachenko1987sample} of entropy. Their generative network is an arbitrary deep neural network rather than a normalizing flow, which is more flexible but cannot easily give the probability density of each sample, and so cannot compute an unbiased estimator of the entropy.
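For contrast with the exact, flow-based entropy available to the MEFN, the following rough Python sketch (requiring SciPy) shows one common form of the Kozachenko-Leonenko nearest-neighbor entropy estimator discussed next; the constants follow the convention $\hat H = \frac{d}{n}\sum_i \log \rho_i + \log V_d + \log(n-1) + \gamma$ with $\rho_i$ the nearest-neighbor distance and $V_d$ the unit-ball volume, and the snippet is illustrative rather than a reference implementation.
\begin{verbatim}
# Rough sketch (illustrative, not a reference implementation) of the
# Kozachenko-Leonenko nearest-neighbor entropy estimator, which methods
# without tractable densities typically fall back on.
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import gammaln

def kl_entropy(samples):
    """samples: (n, d) array; returns a nearest-neighbor entropy estimate (nats)."""
    n, d = samples.shape
    # Distance to the nearest neighbor (k=2 because the query point is included).
    dist, _ = cKDTree(samples).query(samples, k=2)
    rho = dist[:, 1]
    log_unit_ball_vol = (d / 2.0) * np.log(np.pi) - gammaln(d / 2.0 + 1.0)
    euler_gamma = 0.5772156649
    return d * np.log(rho).mean() + log_unit_ball_vol + np.log(n - 1) + euler_gamma

rng = np.random.default_rng(0)
x = rng.standard_normal((2000, 2))     # standard normal in d = 2
print(kl_entropy(x))                   # true value: log(2*pi*e) ~= 2.84
\end{verbatim}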
Kozachenko-Leonenko is a biased estimator for entropy and requires a fairly large number of samples to get good performance in high-dimensional settings, hindering the scalability and accuracy of the method; indeed, our choice of normalizing flow networks was driven by these practical issues with Kozachenko-Leonenko. \cite{LuZhuWu2016} extended \cite{zhu1998filters} by using a more flexible set of filters derived from a pre-trained deep neural networks, and using parallel MCMC chains to learn and sample from the Gibbs distribution. Running parallel MCMC chains results in diverse samples but can be computationally intensive for generating each new sample image. Our MEFN framework enables truly iid sampling with the ease of a feed forward network. \subsubsection*{Acknowledgments} We thank Evan Archer for normalizing flow code, and Xuexin Wei, Christian Andersson Naesseth and Scott Linderman for helpful discussion. This work was supported by a Sloan Fellowship and a McKnight Fellowship (JPC). \newpage \bibliographystyle{iclr2017_conference} \newpage \appendix \section{Augmented Lagrangian conditions} \label{sec:augLcond} We give a more thorough discussion of the regularity conditions which ensure that the Augmented Lagrangian method will work. The goal of this section is simply to state these conditions and give intuitive arguments about why some should hold in our case, not to attempt to prove that they indeed hold. The conditions \citep{bertsekas1996} are: \begin{itemize} \item There exists a strict local minimum $\phi^*$ of the optimization problem of Equation \ref{optobj}: If the function class $\mathcal{F}$ is rich enough that it contains a true solver of the maximum entropy problem, then a global optimum exists. Although not rigorous, we would expect that even in the finite expressivity case that a global optimum remains, and indeed, recent theoretical work \citep{raghu2016expressive,poole2016exponential} has gotten close to proving this. \item $\phi^*$ is a regular point of the optimization problem, that is, the rows of $\nabla_\phi R(\phi^*)$ are linearly independent: Again, this is not formal, but we should not expect this to cause any issues. This clearly depends on the specific form of $T$, but the condition basically says that there should not be redundant constraints at the optimum, so if $T$ is reasonable this shouldn't happen. \item $H(p_\phi)$ and $R(\phi)$ are twice continuously differentiable on a neighborhood around $\phi^*$: This holds by the smoothness of the normalizing flows. \item $y^\top \nabla_\phi^2 L(\phi^*;\lambda^*,0) y>0$ for every $y\neq 0$ such that $\nabla_\phi R(\phi^*)y=0$, where $\lambda^*$ is the true Lagrange multiplier: This condition is harder to justify. It would appear it is just asking that the Lagrangian (not the augmented Lagrangian) be strictly convex in feasible directions, but it is actually stronger than this and some simple functions might not satisfy the property. For example, if the function we are optimizing was $x^4$ and we had no constraints, the Lagrangian's Hessian would be $12x^2$, which is $0$ at $x^*=0$ thus not satisfying the condition. Importantly, these conditions are sufficient but not necessary, so even if this doesn't hold the augmented Lagrangian method might work (it certainly would for $x^4$). 
Because of this and the non-rigorous justifications of the first two conditions, we left these conditions for the appendix and relied instead on the empirical performance to justify that we are indeed recovering the maximum entropy distribution. \end{itemize} If all of these conditions hold, the augmented Lagrangian (for large enough $c$ and $\lambda$ close enough to $\lambda^*$) has a unique optimum in a neighborhood around $\phi^*$ that is close to $\phi^*$ (as $\lambda \rightarrow \lambda^*$ it converges to $\phi^*$) and its hessian at this optimum is positive-definite. Furthermore, $\lambda_k \rightarrow \lambda$. This implies that gradient descent (with the usual caveats of being started close enough to the solution and with the right steps) will correctly recover $\phi^*$ using the augmented Lagrangian method. This of course just guarantees convergence to a local optimum, but if there are no additional assumptions such as convexity, it can be very hard to ensure that it is indeed a global optimum. Some recent research has attempted to explain why optimization algorithms perform so well for neural networks \citep{choromanska2015loss,kawaguchi2016deep}, but we leave such attempts for our case for future research. \section{Risk-neutral asset price} \label{sec:real} We extract the risk-neutral asset price probability distribution based on option prices, an active and interesting area for ME models. We give a brief introduction of the problem and refer interested readers to see \cite{buchen1996maximum} for a more detailed explanation. Denoting $S_t$ as the price of an asset at time $t$, the buyer of a European call option for the stock that expires at time $t_e$ with strike price $K$ will receive a payoff of $c_K = (S_{t_e} - K)_{+} = \max (S_{t_e} - K, 0)$ at time $t_e$. Under the efficient market assumption, the risk-neutral probability distribution for the stock price at time $t_e$ satisfies: \begin{equation} \label{equ:option_con1} c_K = D(t_e) E_q[ (S_{t_e} - K)_{+} ], \end{equation} where $D(t_e)$ is the risk-free discount factor and $q$ is the risk-neutral measure. We also have that, under the risk-neutral measure, the current stock price $S_0$ is the discounted expected value of $S_{t_e}$: \begin{equation} \label{equ:option_con2} S_0 = D(t_e) E_q(S_{t_e}). \end{equation} When given $m$ options that expire at time $t_e$ with strikes $K_1,...,K_m$ and prices $c_{K_1},...,c_{K_m}$, we get $m$ expectation constraints on $q(S_{t_e})$ from Equation \ref{equ:option_con1}, together with Equation \ref{equ:option_con2}, we have $m+1$ expectation constraints in total. With that partial knowledge we can approximate $q(S_{t_e})$, which is helpful for understanding the market expected volatility and identify mispricing in option markets, etc. Inferring the risk-neutral density of asset price from a finite number of option prices is an important question in finance and has been studied extensively \citep{buchen1996maximum, borwein2003probability, bondarenko2003estimation, figlewski2008estimating}. One popular method proposed by \cite{buchen1996maximum} estimates the probability density as the maximum entropy distribution satisfying the expectation constraints and a positivity support constraint by fitting a Gibbs distribution, which results in a piece-wise linear log density: \begin{equation} p(z) \propto \exp \left\{ \eta_0 z + \sum_{i=1}^m \eta_i (z - K_i)_{+} \right\} \mathbbm{1}\left( z \geq 0 \right) \end{equation} and optimize the distribution with numerical methods. 
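As a small illustration of these constraints, a hedged Python sketch follows that evaluates the moment statistics implied by Equations \ref{equ:option_con1} and \ref{equ:option_con2} and the unnormalized piece-wise linear log density above; all strikes, prices and $\eta$ values are toy placeholders rather than fitted quantities or the AAPL data used below.
\begin{verbatim}
# Hedged sketch of the risk-neutral-pricing constraints and the piece-wise
# linear Gibbs log-density.  Strikes, prices and eta are toy values.
import numpy as np

strikes = np.array([90.0, 100.0, 110.0, 120.0])   # K_1..K_m (toy)
call_prices = np.array([21.0, 13.5, 7.8, 4.0])    # c_{K_1}..c_{K_m} (toy)
spot, discount = 108.0, 0.99                      # S_0 and D(t_e) (toy)

def constraint_stats(z):
    """T(z) for one draw z of S_{t_e}: each entry should have expectation 0."""
    option_terms = discount * np.maximum(z - strikes, 0.0) - call_prices
    spot_term = discount * z - spot
    return np.append(option_terms, spot_term)     # m + 1 statistics

def gibbs_log_density_unnorm(z, eta0, eta):
    # log p(z) = eta0 * z + sum_i eta_i * (z - K_i)_+ + const, for z >= 0.
    return eta0 * z + (eta * np.maximum(z - strikes, 0.0)).sum()

z = 115.0                                         # one hypothetical terminal price
print(constraint_stats(z))
print(gibbs_log_density_unnorm(z, eta0=-0.05, eta=np.array([0.01, 0.0, -0.01, 0.0])))
\end{verbatim}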
Here we compare the performance of the MEFN algorithm with the method proposed in \cite{buchen1996maximum}. To enforce the positivity constraint we choose $g(z)=e^{az+b}$, where $a$ and $b$ are additional parameters. We collect the closing price of European call options on Nov. 1 2016 for the stock AAPL (Apple inc.) that expires on $t_e = $ Jun. 16 2017. We use $m=4$ of the options with highest trading volume as training data and the rest as testing data. On the left panel of figure \ref{fig:option}, we show the fitted risk-neutral density of $S_{t_e}$ by MEFN (red line) with that of the fitted Gibbs distribution result (blue line). We find that while the distributions share similar location and variability, the distribution inferred by MEFN is smoother and arguably more plausible. In the middle panel we show a Q-Q plot of the quantiles of the MEFN and Gibbs distributions. We can see that the quantile pairs match the identity closely, which should happen if both methods recovered the exact same distribution. This highlights the effectiveness of MEFN. There does exist a small mismatch in the tails: the distribution inferred by MEFN has slightly heavier tails. This mismatch is difficult to interpret: given that both the Gibbs and MEFN distributions are fit with option price data (and given that one can observe at most one value from the distribution, namely the stock price at expiration), it is fundamentally unclear which distribution is superior, in the sense of better capturing the true ME distribution's tails. On the right panel we show the fitted option price for the two fitted distributions (for each strike price, we can recover the fitted option price by Equation \ref{equ:option_con1}). We noted that the fitted option price and strike price lines for both methods are very similar (they are mostly indiscernible on the right panel of figure \ref{fig:option}). We also compare the fitted performance on the test data by computing the root mean square error for the fitted and test data. We observe that the predictive performances for both methods are comparable. We note that for this specific application, there are practical concerns such as the microstructure noise in the data and inefficiency in the market, etc. Applying a pre-processing procedure and incorporating prior assumptions can be helpful for getting a more full-fledged method (see e.g. \cite{figlewski2008estimating}). Here we mainly focus on illustrating the ability of the MEFN method to approximate the ME distribution for non-typical distributions. Future work for this application includes fitting a risk-neutral distribution for multi-dimensional assets by incorporating dependence structure on assets. \section{Modeling images of textures} \label{sec:textapp} We tried our texture modeling approach with many different textures, and although MEFN samples don't always exhibit more visual diversity than samples obtained from the texture network, they always have more entropy as in figure \ref{fig:texture}. Figure \ref{fig:textapp} shows two positive examples, i.e. textures in which samples from MEFN do exhibit higher visual diversity than those from the texture network, as well as a negative example, in which MEFN achieves less visual diversity than the texture network, regardless of the fact that MEFN samples do have larger entropy. 
We hypothesize that this curious behavior is due to the optimization achieving a local optimum in which the brick boundaries and dark brick locations are not diverse but the entropy within each brick is large. It should also be noted that among the experiments that we ran, this was the only negative example that we got, and that slightly modifying the hyperparameters caused the issue to disappear. %\footnote{Appendix section \S \ref{sec:textapp} does not appear on the ICLR version of the paper since the results were obtained after the camera-ready deadline.} \end{document} % pmatrix*! use \begin{matrix*}[r] to center colums right \newcommand{\optionrule}{\noindent\rule{1.0\textwidth}{0.75pt}} \newenvironment{aside}{% \def\FrameCommand{\hspace{2em}} \MakeFramed {\advance\hsize-\width \small}\optionrule} {\\\optionrule\endMakeFramed} \renewcommand{\eqref}[1]{eq.~\ref{eq:#1}} \newcommand{\figref}[1]{Fig.~\ref{fig:#1}} \newcommand{\set}[1]{\{#1\}} % set notation \newcommand{\Range}{\mathcal{R}} % range (ie, of a function) \newcommand{\setcomp}[1]{{#1}^{\mathsf{c}}} % set complement \newcommand{\inv}[1]{\ensuremath{{#1}^{-1}}} \newcommand{\ind}[1]{\mathbf{1}_{\left\{ #1\right\}}} % indicator function \newcommand{\sgn}{\operatorname{sgn}} \newcommand{\st}{\mbox{ s.t. }} % such that or subject to \newcommand{\dm}[1]{\ensuremath{\,\mathrm{d}{#1}}} % Pretty symbol for differentials - ie, dx \newcommand{\deriv}[2]{\frac{d #1}{d #2}} \newcommand{\D}[2]{\frac{\partial #1}{\partial #2}} \newcommand{\DD}[2]{\frac{\partial ^2 #1}{\partial #2 ^2}} \newcommand{\Di}[2]{\frac{\partial ^i #1}{\partial #2 ^i}} \newcommand{\evalat}[1]{\left.#1\right|} \newcommand{\parderiv}[2]{\frac{\partial #1}{\partial{#2}}} \newcommand{\parDeriv}[3]{\frac{\partial^{#3} #1}{\partial{#2}^{#3}}} \newcommand{\parwrt}[1]{\frac{\partial}{\partial{#1}}} \newcommand{\parpowrt}[2]{\frac{\partial^{#1}}{\partial {#2}^{#1}}} \newcommand{\partwowrt}[2]{\frac{\partial^{2}}{\partial {#2} \partial{#1}}} \newcommand{\ones}{\mathop{\mathbf{1}}} % vector of ones \newcommand{\diag}{\mathop{\mbox{diag}}} \newcommand{\rank}{\mathop{\mathrm{rank}}} % null space ( range is \Range, above) \newcommand{\Null}{\mathcal N} % null space ( range is \Range, above) \newcommand{\Tr}{\textrm{Tr}} % trace \newcommand{\norm}[1]{\left|\left|#1\right|\right|} % normroduct \newcommand{\tp}[1]{\ensuremath{{#1}^\top}} % transpose with argument \newcommand{\trp}{^{\mathrm{T}}} % Transpose \newcommand{\itrp}{^{-\mathrm{T}}} % inverse-transpose \newcommand{\inprod}[2]{\langle #1,#2\rangle} \newcommand{\pialg}{\pi\text{-algebra}} \newcommand{\sigalg}{\sigma\text{-algebra}} \newcommand{\alg}[1]{\mathcal{#1}} % sigma algebra notation \newcommand{\E}{\mathbb{E}} \newcommand{\V}{\mathbb{V}} \newcommand{\Prob}[1]{\mathbb{P}\left[#1\right]} \newcommand\independent{\protect\mathpalette{\protect\independenT}{\perp}} \def\independenT#1#2{\mathrel{\rlap{$#1#2$}\mkern2mu{#1#2}}} \newcommand{\Norm}[2]{\mathcal{N}\left(#1, #2\right)} % normal distribution \newcommand{\identityMat}{\ensuremath{I}} \newcommand{\defvec}[1]{\expandafter\newcommand\csname v#1\endcsname{{\mathbf{#1}}}} \newcounter{ct} \forLoop{1}{26}{ct}{ \edef\letter{\alph{ct}} \expandafter\defvec\letter } \newcommand{\defmat}[1]{\expandafter\newcommand\csname m#1\endcsname{{\mathbf{#1}}}} \forLoop{1}{26}{ct}{ \edef\letter{\Alph{ct}} \expandafter\defmat\letter } \newcommand{\vmu}{\bm{\mu}} \newcommand{\veta}{\bm{\eta}} \newcommand{\vepsilon}{\bm{\epsilon}} \newcommand{\vlambda}{\bm{\lambda}} 
\newcommand{\mLambda}{\bm{\Lambda}} \newcommand{\conv}{\text{conv}} % convex hull \newcommand{\aff}{\text{Aff }} % affine hull \newcommand{\extp}{\text{Ext}} %extreme points \newcommand{\grad}{\nabla} \newcommand{\curl}{\nabla\times} \newcommand{\logdet}[1]{\log |#1|} \newtheoremstyle{evandefinition}{\topsep}{\topsep}% {}% Body font (\itshape) {}% Indent amount (empty = no indent, \parindent = para indent) {\bfseries}% Thm head font {}% Punctuation after thm head {\newline}% Space after thm head (default: 5pt plus 1pt minus 1pt) {\thmname{#1}\thmnumber{ #2}. \textit{\thmnote{ #3} }}% Thm head spec \newtheoremstyle{indenteddefinition}{\topsep}{\topsep} {\addtolength{\leftskip}{2em}} % Body font (\itshape) {-1.75em}% Indent amount (empty = no indent, \parindent = para indent) {\bfseries}% Thm head font % also: \scshape {}% Punctuation after thm head { }% Space after thm head (default: 5pt plus 1pt minus 1pt) {\thmname{#1} \thmnumber{#2}. \textbf{(\thmnote{#3})}}% Thm head spec \newtheorem{theorem}{Theorem} \newtheorem{lemma}{Lemma} \newtheorem{proposition}{Proposition} \newtheorem{corollary}{Corollary} \theoremstyle{evandefinition} \newtheorem{definition}{Definition} \theoremstyle{indenteddefinition} \newtheorem{example}{Example} \theoremstyle{remark} \newtheorem{exercise}{Exercise} \newtheorem{remark}{Remark} \makeatletter \renewcommand*\env@matrix[1][c]{\hskip -\arraycolsep \let\@ifnextchar\new@ifnextchar \array{*\c@MaxMatrixCols #1}} \makeatother