arxiv_id | title | abstract | link | authors | updated | published
---|---|---|---|---|---|---
2006.05076 | Stable Prediction via Leveraging Seed Variable | In this paper, we focus on the problem of stable prediction across unknown test data, where the test distribution is agnostic and might be totally different from the training one. In such a case, previous machine learning methods might exploit subtle spurious correlations in the training data induced by non-causal variables. Those spurious correlations change across datasets, leading to instability of prediction. To address this problem, assuming that the relationships between the causal variables and the response variable are invariant across data, we propose a conditional-independence-test-based algorithm that separates the causal variables, given a seed variable as a prior, and adopts them for stable prediction. Further assuming independence between causal and non-causal variables, we show, both theoretically and with empirical experiments, that our algorithm can precisely separate causal from non-causal variables for stable prediction across test data. Extensive experiments on both synthetic and real-world datasets demonstrate that our algorithm outperforms state-of-the-art methods for stable prediction. | http://arxiv.org/pdf/2006.05076v1 | [
"Kun Kuang",
"Bo Li",
"Peng Cui",
"Yue Liu",
"Jianrong Tao",
"Yueting Zhuang",
"Fei Wu"
] | 2020-06-09T06:56:31Z | 2020-06-09T06:56:31Z |
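
The pipeline this abstract describes (a conditional independence test plus a known-causal seed variable) can be illustrated with a short sketch. The snippet below is a hypothetical illustration, not the authors' algorithm: it uses a partial-correlation Fisher z-test and a greedy expansion from the seed, and the test choice, the 0.05 threshold, and the selection order are all our own assumptions.

```python
import numpy as np
from scipy import stats

def cond_indep(x, y, z, alpha=0.05):
    """Test X independent of Y given Z via partial correlation (Fisher z-test)."""
    Z = np.column_stack([np.ones(len(x)), z])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]   # residualize X on Z
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]   # residualize Y on Z
    r = np.corrcoef(rx, ry)[0, 1]
    z_stat = 0.5 * np.log((1 + r) / (1 - r)) * np.sqrt(len(x) - z.shape[1] - 3)
    return 2 * stats.norm.sf(abs(z_stat)) > alpha        # True => independent

def select_causal(X, y, seed):
    """Greedily grow a causal set from the seed: keep features that remain
    predictive of y (not conditionally independent) given the set so far."""
    causal = [seed]
    for j in range(X.shape[1]):
        if j != seed and not cond_indep(X[:, j], y, X[:, causal]):
            causal.append(j)
    return causal
```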
2006.05082 | Learning to Stop While Learning to Predict | There is a recent surge of interest in designing deep architectures based on the update steps of traditional algorithms, or in learning neural networks to improve and replace traditional algorithms. While traditional algorithms have stopping criteria for outputting results after different numbers of iterations, many algorithm-inspired deep models are restricted to a ``fixed depth'' for all inputs. As with algorithms, the optimal depth of a deep architecture may differ across input instances, either to avoid ``over-thinking'' or to save computation on operations that have already converged. In this paper, we tackle this varying-depth problem using a steerable architecture, in which a feed-forward deep model and a variational stopping policy are learned together to sequentially determine the optimal number of layers for each input instance. Training such an architecture is very challenging. We provide a variational Bayes perspective and design a novel and effective training procedure that decomposes the task into an oracle model learning stage and an imitation stage. Experimentally, we show that the learned deep model, along with the stopping policy, improves performance on a diverse set of tasks, including sparse recovery, few-shot meta learning, and computer vision tasks. | http://arxiv.org/pdf/2006.05082v1 | [
"Xinshi Chen",
"Hanjun Dai",
"Yu Li",
"Xin Gao",
"Le Song"
] | 2020-06-09T07:22:01Z | 2020-06-09T07:22:01Z |
2003.13256 | The Hessian Estimation Evolution Strategy | We present a novel black box optimization algorithm called the Hessian Estimation Evolution Strategy. The algorithm updates the covariance matrix of its sampling distribution by directly estimating the curvature of the objective function. This algorithm design is targeted at twice continuously differentiable problems. To this end, we extend the cumulative step-size adaptation algorithm of the CMA-ES to mirrored sampling. We demonstrate that our approach to covariance matrix adaptation is efficient by evaluating it on the BBOB/COCO testbed. We also show that the algorithm is surprisingly robust when its core assumption of a twice continuously differentiable objective function is violated. The approach yields a new evolution strategy with competitive performance, and at the same time it offers an interesting alternative to the usual covariance matrix update mechanism. | http://arxiv.org/pdf/2003.13256v2 | [
"Tobias Glasmachers",
"Oswin Krause"
] | 2020-06-09T07:30:53Z | 2020-03-30T08:01:16Z |
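
The curvature estimation at the heart of this method can be sketched compactly: with mirrored samples x+d and x-d, a second-order central difference recovers the curvature of f along d. How the CMA-ES-style covariance update consumes these estimates is the paper's contribution and is not reproduced here; this toy check is only a hedged illustration.

```python
import numpy as np

def directional_curvature(f, x, d):
    """Estimate d^T H d / ||d||^2 from three evaluations via the
    central difference f(x+d) + f(x-d) - 2 f(x) ~= d^T H d."""
    return (f(x + d) + f(x - d) - 2.0 * f(x)) / np.dot(d, d)

# Toy check on a quadratic with Hessian diag(1, 10):
f = lambda v: 0.5 * (v[0] ** 2 + 10.0 * v[1] ** 2)
rng = np.random.default_rng(0)
for _ in range(3):
    d = 0.1 * rng.standard_normal(2)
    print(directional_curvature(f, np.zeros(2), d))  # exact for quadratics
```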
2006.05087 | Isotropic SGD: a Practical Approach to Bayesian Posterior Sampling | In this work we define a unified mathematical framework to deepen our understanding of the role of stochastic gradient (SG) noise in the behavior of stochastic gradient Markov chain Monte Carlo (SGMCMC) algorithms. Our formulation unlocks the design of a novel, practical approach to posterior sampling, which makes the SG noise isotropic using a fixed learning rate that we determine analytically, and which requires weaker assumptions than existing algorithms. In contrast, the common trait of existing SGMCMC algorithms is to approximate the isotropy condition either by drowning the gradients in additive noise (annealing the learning rate) or by making restrictive assumptions about the SG noise covariance and the geometry of the loss landscape. Extensive experimental validation indicates that our proposal is competitive with the state of the art in SGMCMC while being much more practical to use. | http://arxiv.org/pdf/2006.05087v1 | [
"Giulio Franzese",
"Rosa Candela",
"Dimitrios Milios",
"Maurizio Filippone",
"Pietro Michiardi"
] | 2020-06-09T07:31:21Z | 2020-06-09T07:31:21Z |
2002.06772 | Differentiable Bandit Exploration | Exploration policies in Bayesian bandits maximize the average reward over problem instances drawn from some distribution $\mathcal{P}$. In this work, we learn such policies for an unknown distribution $\mathcal{P}$ using samples from $\mathcal{P}$. Our approach is a form of meta-learning and exploits properties of $\mathcal{P}$ without making strong assumptions about its form. To do this, we parameterize our policies in a differentiable way and optimize them by policy gradients, an approach that is general and easy to implement. We derive effective gradient estimators and introduce novel variance reduction techniques. We also analyze and experiment with various bandit policy classes, including neural networks and a novel softmax policy. The latter has regret guarantees and is a natural starting point for our optimization. Our experiments show the versatility of our approach. We also observe that neural network policies can learn implicit biases expressed only through the sampled instances. | http://arxiv.org/pdf/2002.06772v2 | [
"Craig Boutilier",
"Chih-Wei Hsu",
"Branislav Kveton",
"Martin Mladenov",
"Csaba Szepesvari",
"Manzil Zaheer"
] | 2020-06-09T07:35:48Z | 2020-02-17T05:07:35Z |
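
The "differentiable softmax policy optimized by policy gradients" ingredient can be sketched in a few lines. This toy optimizes a single bandit instance with a score-function (REINFORCE) gradient; the paper's meta-learning over instances drawn from $\mathcal{P}$ and its variance reduction techniques are not shown, and all constants below are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
K, steps, lr = 3, 5000, 0.05
true_means = np.array([0.2, 0.5, 0.8])    # unknown to the learner
theta = np.zeros(K)                        # softmax policy logits

for _ in range(steps):
    p = np.exp(theta - theta.max()); p /= p.sum()
    a = rng.choice(K, p=p)                 # sample an arm from the policy
    r = rng.normal(true_means[a], 0.1)     # observe a noisy reward
    grad_logp = -p; grad_logp[a] += 1.0    # gradient of log p(a) for softmax
    theta += lr * r * grad_logp            # score-function policy gradient

print(np.round(p, 3))                      # mass concentrates on the best arm
```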
2006.04082 | End-to-end Learning for Inter-Vehicle Distance and Relative Velocity
Estimation in ADAS with a Monocular Camera | Inter-vehicle distance and relative velocity estimation are two basic functions of any ADAS (advanced driver-assistance system). In this paper, we propose a monocular camera-based inter-vehicle distance and relative velocity estimation method based on end-to-end training of a deep neural network. The key novelty of our method is the integration of multiple visual cues provided by any two time-consecutive monocular frames: a deep feature cue, a scene geometry cue, and a temporal optical flow cue. We also propose a vehicle-centric sampling mechanism to alleviate the effect of perspective distortion in the motion field (i.e. optical flow). We implement the method with a lightweight deep neural network. Extensive experiments confirm the superior performance of our method over other state-of-the-art methods in terms of estimation accuracy, computational speed, and memory footprint. | http://arxiv.org/pdf/2006.04082v2 | [
"Zhenbo Song",
"Jianfeng Lu",
"Tong Zhang",
"Hongdong Li"
] | 2020-06-09T07:40:51Z | 2020-06-07T08:18:31Z |
2006.05096 | MLModelCI: An Automatic Cloud Platform for Efficient MLaaS | MLModelCI provides multimedia researchers and developers with a one-stop platform for efficient machine learning (ML) services. The system leverages DevOps techniques to optimize, test, and manage models. It also containerizes and deploys these optimized and validated models as cloud services (MLaaS). In essence, MLModelCI serves as a housekeeper that helps users publish models. The models are first automatically converted to optimized formats for production purposes and then profiled under different settings (e.g., batch size and hardware). The profiling information can be used as a guideline for balancing the trade-off between the performance and cost of MLaaS. Finally, the system dockerizes the models for ease of deployment to cloud environments. A key feature of MLModelCI is a controller that enables elastic evaluation, utilizing only idle workers while maintaining online service quality. Our system bridges the gap between current ML training and serving systems and thus frees developers from the manual and tedious work often associated with service deployment. We release the platform as an open-source project on GitHub under the Apache 2.0 license, with the aim of facilitating and streamlining more large-scale ML applications and research projects. | http://arxiv.org/abs/2006.05096v1 | [
"Huaizheng Zhang",
"Yuanming Li",
"Yizheng Huang",
"Yonggang Wen",
"Jianxiong Yin",
"Kyle Guan"
] | 2020-06-09T07:48:20Z | 2020-06-09T07:48:20Z |
2006.05097 | GAP++: Learning to generate target-conditioned adversarial examples | Adversarial examples are perturbed inputs that pose a serious threat to machine learning models. Finding these perturbations is so hard a task that iterative search methods are usually required. For computational efficiency, recent works use adversarial generative networks to directly model the distribution of either universal or image-dependent perturbations. However, these methods generate perturbations that depend only on the input image. In this work, we propose a more general-purpose framework which infers target-conditioned perturbations dependent on both the input image and the target label. Different from previous single-target attack models, our model can conduct target-conditioned attacks by learning the relation between the attack target and the semantics of the image. Through extensive experiments on the MNIST and CIFAR10 datasets, we show that our method achieves superior performance over single-target attack models and obtains high fooling rates with small perturbation norms. | http://arxiv.org/pdf/2006.05097v1 | [
"Xiaofeng Mao",
"Yuefeng Chen",
"Yuhong Li",
"Yuan He",
"Hui Xue"
] | 2020-06-09T07:49:49Z | 2020-06-09T07:49:49Z |
1910.01590 | DPSOM: Deep Probabilistic Clustering with Self-Organizing Maps | Generating interpretable visualizations from complex data is a common problem in many applications. Two key ingredients for tackling this issue are clustering and representation learning. However, current methods do not yet successfully combine the strengths of these two approaches. Existing representation learning models which rely on latent topological structure, such as self-organizing maps, exhibit markedly lower clustering performance compared to recent deep clustering methods. To close this performance gap, we (a) present a novel way to fit self-organizing maps with probabilistic cluster assignments (PSOM), (b) propose a new deep architecture for probabilistic clustering (DPSOM) using a VAE, and (c) extend our architecture to time-series clustering (T-DPSOM), which also allows forecasting in the latent space using LSTMs. We show that DPSOM achieves superior clustering performance compared to current deep clustering methods on MNIST/Fashion-MNIST, while maintaining the favourable visualization properties of SOMs. On medical time series, we show that T-DPSOM outperforms baseline methods in time-series clustering and forecasting, while providing interpretable visualizations of patient state trajectories and uncertainty estimation. | http://arxiv.org/pdf/1910.01590v3 | [
"Laura Manduchi",
"Matthias Hüser",
"Julia Vogt",
"Gunnar Rätsch",
"Vincent Fortuin"
] | 2020-06-09T08:34:05Z | 2019-10-03T16:47:33Z |
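
The "probabilistic cluster assignment" idea can be illustrated with the Student-t soft assignment popularized by deep clustering methods. Note this is a stand-in sketch: PSOM's exact assignment rule and the SOM neighborhood terms are in the paper, not here.

```python
import numpy as np

def soft_assignments(Z, centroids, alpha=1.0):
    """Soft (probabilistic) assignment of latent embeddings Z to SOM
    centroids, using a Student-t kernel over squared distances."""
    d2 = ((Z[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    q = (1.0 + d2 / alpha) ** (-(alpha + 1.0) / 2.0)
    return q / q.sum(axis=1, keepdims=True)   # each row is a distribution

rng = np.random.default_rng(0)
Z = rng.normal(size=(5, 2))                                      # VAE latents
grid = np.stack(np.meshgrid(np.arange(3.0), np.arange(3.0)), -1).reshape(-1, 2)
print(soft_assignments(Z, grid).round(2))                        # rows sum to 1
```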
2006.05117 | Hysia: Serving DNN-Based Video-to-Retail Applications in Cloud | Combining video streaming and online retailing (V2R) has been a growing trend recently. In this paper, we provide practitioners and researchers in multimedia with a cloud-based platform named Hysia for easy development and deployment of V2R applications. The system consists of: 1) a back-end infrastructure providing optimized V2R related services including data engine, model repository, model serving and content matching; and 2) an application layer which enables rapid V2R application prototyping. Hysia addresses industry and academic needs in large-scale multimedia by: 1) seamlessly integrating state-of-the-art libraries including NVIDIA video SDK, Facebook faiss, and gRPC; 2) efficiently utilizing GPU computation; and 3) allowing developers to bind new models easily to meet the rapidly changing deep learning (DL) techniques. On top of that, we implement an orchestrator for further optimizing DL model serving performance. Hysia has been released as an open source project on GitHub, and attracted considerable attention. We have published Hysia to DockerHub as an official image for seamless integration and deployment in current cloud environments. | http://arxiv.org/abs/2006.05117v1 | [
"Huaizheng Zhang",
"Yuanming Li",
"Qiming Ai",
"Yong Luo",
"Yonggang Wen",
"Yichao Jin",
"Nguyen Binh Duong Ta"
] | 2020-06-09T08:45:53Z | 2020-06-09T08:45:53Z |
2006.08711 | Explicit Gradient Learning | Black-Box Optimization (BBO) methods can find optimal policies for systems that interact with complex environments with no analytical representation. As such, they are of interest in many Artificial Intelligence (AI) domains. Yet classical BBO methods fall short in high-dimensional non-convex problems and are thus often overlooked in real-world AI tasks. Here we present a BBO method, termed Explicit Gradient Learning (EGL), that is designed to optimize high-dimensional ill-behaved functions. We derive EGL by identifying weak spots in methods that fit the objective function with a parametric Neural Network (NN) model and obtain the gradient signal by calculating the parametric gradient. Instead of fitting the function, EGL trains an NN to estimate the objective gradient directly. We prove the convergence of EGL in convex optimization and its robustness in the optimization of integrable functions. We evaluate EGL and achieve state-of-the-art results on two challenging problems: (1) the COCO test suite, against an assortment of standard BBO methods; and (2) a high-dimensional non-convex image generation task. | http://arxiv.org/pdf/2006.08711v1 | [
"Mor Sinay",
"Elad Sarafian",
"Yoram Louzoun",
"Noa Agmon",
"Sarit Kraus"
] | 2020-06-09T08:56:24Z | 2020-06-09T08:56:24Z |
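
The central idea (train a network to output the gradient directly, rather than differentiate a fitted surrogate) fits in a short sketch: ask a model g(x) to explain observed value differences through a first-order Taylor expansion. For brevity the sketch uses a linear g and a quadratic objective; these simplifications, and all constants, are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda v: 0.5 * (v ** 2).sum()               # black-box objective, grad = v
dim, n = 4, 2000
X = rng.uniform(-1, 1, size=(n, dim))
Y = X + 0.05 * rng.standard_normal((n, dim))     # nearby probe points
dF = np.array([f(y) - f(x) for x, y in zip(X, Y)])

W, b = np.zeros((dim, dim)), np.zeros(dim)       # g(x) = W x + b ~= grad f(x)
for _ in range(20000):                           # SGD on the Taylor residual:
    i = rng.integers(n)                          # (f(y)-f(x) - g(x).(y-x))^2
    delta = Y[i] - X[i]
    resid = dF[i] - (W @ X[i] + b) @ delta
    W += 0.1 * resid * np.outer(delta, X[i])
    b += 0.1 * resid * delta

print(np.round(W @ X[0] + b, 2), np.round(X[0], 2))  # learned vs true gradient
```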
2006.02409 | On the Promise of the Stochastic Generalized Gauss-Newton Method for
Training DNNs | Following early work on Hessian-free methods for deep learning, we study a stochastic generalized Gauss-Newton method (SGN) for training DNNs. SGN is a second-order optimization method, with efficient iterations, that we demonstrate often requires substantially fewer iterations than standard SGD to converge. As the name suggests, SGN uses a Gauss-Newton approximation for the Hessian matrix and, to compute an approximate search direction, relies on the conjugate gradient method combined with forward and reverse automatic differentiation. Despite the success of SGD and its first-order variants, and despite Hessian-free methods based on the Gauss-Newton Hessian approximation having already been proposed as practical methods for training DNNs, we believe that SGN has a lot of undiscovered and not yet fully displayed potential in big mini-batch scenarios. For this setting, we demonstrate that SGN not only substantially improves over SGD in terms of the number of iterations, but also in terms of runtime. This is made possible by an efficient, easy-to-use and flexible implementation of SGN that we propose in the Theano deep learning platform, which, unlike TensorFlow and PyTorch, supports forward automatic differentiation. This enables researchers to further study and improve this promising optimization technique, and hopefully to reconsider stochastic second-order methods as competitive optimization techniques for training DNNs; we also hope that the promise of SGN may lead to forward automatic differentiation being added to TensorFlow or PyTorch. Our results also show that in big mini-batch scenarios SGN is more robust than SGD with respect to its hyperparameters (we never had to tune its step size for our benchmarks!), which eases the expensive process of hyperparameter tuning that is crucial for the performance of first-order methods. | http://arxiv.org/pdf/2006.02409v4 | [
"Matilde Gargiani",
"Andrea Zanelli",
"Moritz Diehl",
"Frank Hutter"
] | 2020-06-09T08:58:08Z | 2020-06-03T17:35:54Z |
2006.05132 | A Survey on Generative Adversarial Networks: Variants, Applications, and
Training | Generative models have gained considerable attention in the field of unsupervised learning via a new and practical framework called Generative Adversarial Networks (GANs), owing to their outstanding data generation capability. Many GAN models have been proposed, and several practical applications have emerged in various domains of computer vision and machine learning. Despite GANs' excellent success, there are still obstacles to stable training, stemming from the difficulty of reaching a Nash equilibrium, internal covariate shift, mode collapse, vanishing gradients, and the lack of proper evaluation metrics. Stable training is therefore a crucial issue for the success of GANs across applications. Herein, we survey several training solutions proposed by different researchers to stabilize GAN training. We survey (I) the original GAN model and its modified classical versions, (II) a detailed analysis of various GAN applications in different domains, and (III) a detailed study of the various GAN training obstacles as well as training solutions. Finally, we discuss several open issues and outline research directions on the topic. | http://arxiv.org/pdf/2006.05132v1 | [
"Abdul Jabbar",
"Xi Li",
"Bourahla Omar"
] | 2020-06-09T09:04:41Z | 2020-06-09T09:04:41Z |
2006.05148 | XOR Mixup: Privacy-Preserving Data Augmentation for One-Shot Federated
Learning | User-generated data distributions are often imbalanced across devices and labels, hampering the performance of federated learning (FL). To remedy this non-independent and identically distributed (non-IID) data problem, in this work we develop a privacy-preserving XOR-based mixup data augmentation technique, coined XorMixup, and thereby propose a novel one-shot FL framework, termed XorMixFL. The core idea is to collect other devices' encoded data samples, which can be decoded only using each device's own data samples. The decoding provides synthetic-but-realistic samples that induce an IID dataset used for model training. Both the encoding and decoding procedures follow bit-wise XOR operations that intentionally distort raw samples, thereby preserving data privacy. Simulation results corroborate that XorMixFL achieves up to 17.6% higher accuracy than Vanilla FL under a non-IID MNIST dataset. | http://arxiv.org/pdf/2006.05148v1 | [
"MyungJae Shin",
"Chihoon Hwang",
"Joongheon Kim",
"Jihong Park",
"Mehdi Bennis",
"Seong-Lyun Kim"
] | 2020-06-09T09:43:41Z | 2020-06-09T09:43:41Z |
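
The bitwise-XOR mechanism at the core of XorMixup is easy to demonstrate: XOR is self-inverse, so a sample masked with a "key" sample looks like noise to everyone except a key holder. Which samples act as keys, and how decoded samples are assembled into an IID training set, follows the XorMixFL protocol in the paper and is not shown; this snippet is only the primitive.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.integers(0, 256, size=(28, 28), dtype=np.uint8)  # raw MNIST-like sample
k = rng.integers(0, 256, size=(28, 28), dtype=np.uint8)  # "key" sample

encoded = x ^ k        # uploaded form: raw pixels are distorted
decoded = encoded ^ k  # XOR is self-inverse, so the key holder recovers x
assert np.array_equal(decoded, x)
```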
2006.05152 | Simultaneous Perturbation Stochastic Approximation for Few-Shot Learning | Few-shot learning is an important research field of machine learning in which a classifier must be trained so that it can adapt to new classes not included in the training set. However, only small numbers of examples of each class are available for training. This is one of the key problems with learning algorithms of this type and leads to significant uncertainty. We attack this problem via randomized stochastic approximation. In this paper, we suggest considering a new multi-task loss function and propose an SPSA-like few-shot learning approach based on the prototypical networks method. We provide a theoretical justification and an analysis of experiments for this approach. The experimental results on a benchmark dataset demonstrate that the proposed method is superior to the original prototypical networks. | http://arxiv.org/pdf/2006.05152v1 | [
"Andrei Boiarov",
"Oleg Granichin",
"Olga Granichina"
] | 2020-06-09T09:47:58Z | 2020-06-09T09:47:58Z |
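
SPSA itself is compact enough to sketch: two function evaluations along a random sign vector estimate the whole gradient, whatever the dimension. The few-shot loss is replaced with a toy quadratic here, and the gains and step sizes are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def spsa_grad(f, theta, c=0.05):
    """Simultaneous-perturbation gradient estimate from two evaluations."""
    delta = rng.choice([-1.0, 1.0], size=theta.shape)  # Rademacher directions
    return (f(theta + c * delta) - f(theta - c * delta)) / (2.0 * c * delta)

f = lambda v: ((v - 1.0) ** 2).sum()   # stand-in for the few-shot loss
theta = np.zeros(5)
for _ in range(200):
    theta -= 0.1 * spsa_grad(f, theta)
print(np.round(theta, 2))              # ~= 1.0 in every coordinate
```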
2006.09979 | I know why you like this movie: Interpretable Efficient Multimodal
Recommender | Recently, the Efficient Manifold Density Estimator (EMDE) model has been introduced. The model exploits Locality-Sensitive Hashing and Count-Min Sketch algorithms, combining them with a neural network to achieve state-of-the-art results on multiple recommender datasets. However, this model ingests a compressed joint representation of all input items for each user/session, so calculating attributions for separate items via gradient-based methods does not seem applicable. We prove that interpreting this model in a white-box setting is nevertheless possible thanks to the properties of the EMDE item retrieval method. By exploiting the multimodal flexibility of this model, we obtain meaningful results showing the influence of multiple modalities (text, categorical features, and images) on movie recommendation output. | http://arxiv.org/pdf/2006.09979v1 | [
"Barbara Rychalska",
"Dominika Basaj",
"Jacek Dąbrowski",
"Michał Daniluk"
] | 2020-06-09T09:59:28Z | 2020-06-09T09:59:28Z |
2006.05164 | AR-DAE: Towards Unbiased Neural Entropy Gradient Estimation | Entropy is ubiquitous in machine learning, but it is in general intractable to compute the entropy of the distribution of an arbitrary continuous random variable. In this paper, we propose the amortized residual denoising autoencoder (AR-DAE) to approximate the gradient of the log density function, which can be used to estimate the gradient of entropy. Amortization allows us to significantly reduce the error of the gradient approximator by approaching asymptotic optimality of a regular DAE, in which case the estimation is in theory unbiased. We conduct theoretical and experimental analyses on the approximation error of the proposed method, as well as extensive studies on heuristics to ensure its robustness. Finally, using the proposed gradient approximator to estimate the gradient of entropy, we demonstrate state-of-the-art performance on density estimation with variational autoencoders and continuous control with soft actor-critic. | http://arxiv.org/pdf/2006.05164v1 | [
"Jae Hyun Lim",
"Aaron Courville",
"Christopher Pal",
"Chin-Wei Huang"
] | 2020-06-09T10:11:28Z | 2020-06-09T10:11:28Z |
2006.05169 | DyHGCN: A Dynamic Heterogeneous Graph Convolutional Network to Learn
Users' Dynamic Preferences for Information Diffusion Prediction | Information diffusion prediction is a fundamental task for understanding the information propagation process. It has wide applications, such as misinformation spread prediction and malicious account detection. Previous works either concentrate on utilizing the context of a single diffusion sequence or use the social network among users for information diffusion prediction. However, the diffusion paths of different messages naturally constitute a dynamic diffusion graph. For one thing, previous works cannot jointly utilize both the social network and the diffusion graph for prediction, which is insufficient to model the complexity of the diffusion process and results in unsatisfactory prediction performance. For another, they cannot learn users' dynamic preferences. Intuitively, users' preferences change as time goes on, and a user's personal preference determines whether the user will repost the information. Thus, it is beneficial to consider users' dynamic preferences in information diffusion prediction. In this paper, we propose a novel dynamic heterogeneous graph convolutional network (DyHGCN) to jointly learn the structural characteristics of the social graph and the dynamic diffusion graph. Then, we encode the temporal information into the heterogeneous graph to learn the users' dynamic preferences. Finally, we apply multi-head attention to capture the context dependency of the current diffusion path to facilitate the information diffusion prediction task. Experimental results show that DyHGCN significantly outperforms state-of-the-art models on three public datasets, which demonstrates the effectiveness of the proposed model. | http://arxiv.org/pdf/2006.05169v1 | [
"Chunyuan Yuan",
"Jiacheng Li",
"Wei Zhou",
"Yijun Lu",
"Xiaodan Zhang",
"Songlin Hu"
] | 2020-06-09T10:34:41Z | 2020-06-09T10:34:41Z |
2004.10430 | Policy Gradient from Demonstration and Curiosity | With reinforcement learning, an agent can learn complex behaviors from high-level abstractions of a task. However, exploration and reward shaping remain challenging for existing methods, especially in scenarios where the extrinsic feedback is sparse. Expert demonstrations have been investigated to address these difficulties, but a tremendous number of high-quality demonstrations is usually required. In this work, an integrated policy gradient algorithm is proposed to boost exploration and facilitate intrinsic reward learning from only a limited number of demonstrations. We achieve this by reformulating the original reward function with two additional terms, where the first term measures the Jensen-Shannon divergence between the current policy and the expert, and the second term estimates the agent's uncertainty about the environment. The presented algorithm is evaluated on a range of simulated tasks with sparse extrinsic reward signals, where only a single demonstrated trajectory is provided per task; superior exploration efficiency and high average return are demonstrated on all tasks. Furthermore, the agent is found to imitate the expert's behavior while sustaining a high return. | http://arxiv.org/pdf/2004.10430v2 | [
"Jie Chen",
"Wenjun Xu"
] | 2020-06-09T10:57:48Z | 2020-04-22T07:57:39Z |
2006.05183 | A Note on Deepfake Detection with Low Resources | Deepfakes are videos that include changes, quite often substituting the face of a portrayed individual with a different face using neural networks. Even though the technology gained its popularity as a carrier of jokes and parodies, it poses a serious threat to one's security via biometric impersonation or besmearing. In this paper we present two methods that allow a user without significant computational power to detect Deepfakes. In particular, we enhance MesoNet by replacing the original activation functions, allowing a nearly 1% improvement as well as increased consistency of the results. Moreover, we introduce and verify a new activation function, Pish, that at the cost of a slight time overhead allows even higher consistency. Additionally, we present preliminary results of a Deepfake detection method based on Local Feature Descriptors (LFD) that allows setting up the system even faster and without resorting to GPU computation. Our method achieves an Equal Error Rate of 0.28, with both accuracy and recall exceeding 0.7. | http://arxiv.org/pdf/2006.05183v1 | [
"Piotr Kawa",
"Piotr Syga"
] | 2020-06-09T11:07:08Z | 2020-06-09T11:07:08Z |
2006.05188 | Optimal Continual Learning has Perfect Memory and is NP-hard | Continual Learning (CL) algorithms incrementally learn a predictor or representation across multiple sequentially observed tasks. Designing CL algorithms that perform reliably and avoid so-called catastrophic forgetting has proven a persistent challenge. The current paper develops a theoretical approach that explains why. In particular, we derive the computational properties which CL algorithms would have to possess in order to avoid catastrophic forgetting. Our main finding is that such optimal CL algorithms generally solve an NP-hard problem and will require perfect memory to do so. The findings are of theoretical interest, but also explain the excellent performance of CL algorithms using experience replay, episodic memory and core sets relative to regularization-based approaches. | http://arxiv.org/pdf/2006.05188v1 | [
"Jeremias Knoblauch",
"Hisham Husain",
"Tom Diethe"
] | 2020-06-09T11:20:38Z | 2020-06-09T11:20:38Z |
2006.04720 | Host-Pathogen Co-evolution Inspired Algorithm Enables Robust GAN
Training | Generative adversarial networks (GANs) are pairs of artificial neural networks that are trained against each other. The outputs from a generator are mixed with the real-world inputs to the discriminator, and both networks are trained until an equilibrium is reached, where the discriminator cannot distinguish generated inputs from real ones. Since their introduction, GANs have allowed for the generation of impressive imitations of real-life films, images and texts, whose fakeness is barely noticeable to humans. Despite their impressive performance, training GANs remains to this day more of an art than a reliable procedure, in large part due to training process stability. Generators are susceptible to mode dropping and convergence to random patterns, which have to be mitigated by computationally expensive multiple restarts. Curiously, GANs bear an uncanny similarity to the co-evolution of a pathogen and its host's immune system in biology. In a biological context, the majority of potential pathogens indeed never make it and are kept at bay by the host's immune system. Yet some are efficient enough to present a risk of a serious condition and recurrent infections. Here, we explore that similarity to propose a more robust algorithm for GAN training. We empirically show increased stability and a better ability to generate high-quality images while using less computational power. | http://arxiv.org/pdf/2006.04720v2 | [
"Andrei Kucharavy",
"El Mahdi El Mhamdi",
"Rachid Guerraoui"
] | 2020-06-09T11:21:03Z | 2020-05-22T09:54:06Z |
1905.11136 | Provably Powerful Graph Networks | Recently, the Weisfeiler-Lehman (WL) graph isomorphism test was used to measure the expressive power of graph neural networks (GNNs). It was shown that popular message passing GNNs cannot distinguish between graphs that are indistinguishable by the 1-WL test (Morris et al. 2018; Xu et al. 2019). Unfortunately, many simple instances of graphs are indistinguishable by the 1-WL test. In search of more expressive graph learning models, we build upon the recent k-order invariant and equivariant graph neural networks (Maron et al. 2019a,b) and present two results: First, we show that such k-order networks can distinguish between non-isomorphic graphs as well as the k-WL tests, which are provably stronger than the 1-WL test for k>2. This makes these models strictly stronger than message passing models. Unfortunately, the higher expressiveness of these models comes with the computational cost of processing high-order tensors. Second, setting our goal at building a provably stronger, simple and scalable model, we show that a reduced 2-order network containing just a scaled identity operator, augmented with a single quadratic operation (matrix multiplication), has provable 3-WL expressive power. Put differently, we suggest a simple model that interleaves applications of a standard Multilayer Perceptron (MLP) applied to the feature dimension with matrix multiplication. We validate this model by presenting state-of-the-art results on popular graph classification and regression tasks. To the best of our knowledge, this is the first practical invariant/equivariant model with guaranteed 3-WL expressiveness, strictly stronger than message passing models. | http://arxiv.org/pdf/1905.11136v4 | [
"Haggai Maron",
"Heli Ben-Hamu",
"Hadar Serviansky",
"Yaron Lipman"
] | 2020-06-09T11:28:26Z | 2019-05-27T11:33:19Z |
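
The "MLP on the feature dimension interleaved with matrix multiplication" block admits a small numpy sketch. The sketch keeps only the skeleton: per-pair feature MLPs, a channel-wise matrix product over the node axes, and concatenation; the scaled identity operators and the full architecture are in the paper.

```python
import numpy as np

def mlp(X, W1, W2):
    """Two-layer MLP applied independently to each (i, j) pair's features."""
    return np.maximum(X @ W1, 0.0) @ W2

def block(X, m1, m2):
    """X has shape (n, n, d): d features per node pair. Two feature-wise
    MLPs are matrix-multiplied along the node axes, channel by channel."""
    A, B = mlp(X, *m1), mlp(X, *m2)            # each (n, n, h)
    M = np.einsum("ikh,kjh->ijh", A, B)        # matmul per feature channel
    return np.concatenate([X, M], axis=-1)     # (n, n, d + h)

rng = np.random.default_rng(0)
n, d, h = 5, 4, 8
m1 = (rng.normal(size=(d, 16)), rng.normal(size=(16, h)))
m2 = (rng.normal(size=(d, 16)), rng.normal(size=(16, h)))
print(block(rng.normal(size=(n, n, d)), m1, m2).shape)  # (5, 5, 12)
```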
2006.05213 | Graph-Aware Transformer: Is Attention All Graphs Need? | Graphs are the natural data structure for representing relational and structural information in many domains. To cover the broad range of graph-data applications, including graph classification as well as graph generation, it is desirable to have a general and flexible encoder-decoder model that can handle graph data. Although the representative encoder-decoder model, the Transformer, shows superior performance in various tasks, especially in natural language processing, it is not immediately available for graphs due to their non-sequential characteristics. To tackle this incompatibility, we propose the GRaph-Aware Transformer (GRAT), the first Transformer-based model which can encode and decode whole graphs in an end-to-end fashion. GRAT features a self-attention mechanism adaptive to the edge information and an auto-regressive decoding mechanism based on a two-path approach consisting of a sub-graph encoding path and a node-and-edge generation path for each decoding step. We empirically evaluate GRAT on multiple setups, including encoder-based tasks such as molecular property prediction on QM9 datasets and encoder-decoder-based tasks such as molecular graph generation in the organic molecule synthesis domain. GRAT shows very promising results, including state-of-the-art performance on 4 regression tasks in the QM9 benchmark. | http://arxiv.org/pdf/2006.05213v1 | [
"Sanghyun Yoo",
"Young-Seok Kim",
"Kang Hyun Lee",
"Kuhwan Jeong",
"Junhwi Choi",
"Hoshik Lee",
"Young Sang Choi"
] | 2020-06-09T12:13:56Z | 2020-06-09T12:13:56Z |
2006.04001 | Real-Time Model Calibration with Deep Reinforcement Learning | The dynamic, real-time, and accurate inference of model parameters from empirical data is of great importance in many scientific and engineering disciplines that use computational models (such as a digital twin) for the analysis and prediction of complex physical processes. However, fast and accurate inference for processes with large and high dimensional datasets cannot easily be achieved with state-of-the-art methods under noisy real-world conditions. The primary reason is that the inference of model parameters with traditional techniques based on optimisation or sampling often suffers from computational and statistical challenges, resulting in a trade-off between accuracy and deployment time. In this paper, we propose a novel framework for inference of model parameters based on reinforcement learning. The contribution of the paper is twofold: 1) We reformulate the inference problem as a tracking problem with the objective of learning a policy that forces the response of the physics-based model to follow the observations; 2) We propose the constrained Lyapunov-based actor-critic (CLAC) algorithm to enable the robust and accurate inference of physics-based model parameters in real time under noisy real-world conditions. The proposed methodology is demonstrated and evaluated on two model-based diagnostics test cases utilizing two different physics-based models of turbofan engines. The performance of the methodology is compared to that of two alternative approaches: a state update method (unscented Kalman filter) and a supervised end-to-end mapping with deep neural networks. The experimental results demonstrate that the proposed methodology outperforms all other tested methods in terms of speed and robustness, with high inference accuracy. | http://arxiv.org/pdf/2006.04001v2 | [
"Yuan Tian",
"Manuel Arias Chao",
"Chetan Kulkarni",
"Kai Goebel",
"Olga Fink"
] | 2020-06-09T12:25:49Z | 2020-06-07T00:11:42Z |
2001.04253 | Parameter-Efficient Transfer from Sequential Behaviors for User Modeling
and Recommendation | Inductive transfer learning has had a big impact on the computer vision and NLP domains but has not been used in the area of recommender systems. Even though there has been a large body of research on generating recommendations based on modeling user-item interaction sequences, few works attempt to represent and transfer these models for serving downstream tasks where only limited data exists. In this paper, we focus on the task of effectively learning a single user representation that can be applied to a diversity of tasks, from cross-domain recommendations to user profile predictions. Fine-tuning a large pre-trained network and adapting it to downstream tasks is an effective way to solve such tasks. However, fine-tuning is parameter-inefficient, considering that an entire model needs to be re-trained for every new task. To overcome this issue, we develop a parameter-efficient transfer learning architecture, termed PeterRec, which can be configured on the fly for various downstream tasks. Specifically, PeterRec allows the pre-trained parameters to remain unaltered during fine-tuning by injecting a series of re-learned neural networks, which are small but as expressive as learning the entire network. We perform extensive experimental ablations to show the effectiveness of the learned user representation on five downstream tasks. Moreover, we show that PeterRec performs efficient transfer learning in multiple domains, where it achieves comparable or sometimes better performance relative to fine-tuning the entire model's parameters. Codes and datasets are available at https://github.com/fajieyuan/sigir2020_peterrec. | http://arxiv.org/pdf/2001.04253v4 | [
"Fajie Yuan",
"Xiangnan He",
"Alexandros Karatzoglou",
"Liguang Zhang"
] | 2020-06-09T12:36:19Z | 2020-01-13T14:09:54Z |
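
The "inject small re-learned networks while the pre-trained parameters stay unaltered" idea is the adapter pattern, sketched below. PeterRec's concrete module and where it is grafted differ; the bottleneck-with-residual form, the sizes, and the zero initialization are our assumptions.

```python
import numpy as np

class Adapter:
    """A small bottleneck network with a residual connection; the frozen
    pre-trained layer's output h passes through, plus a learned correction."""
    def __init__(self, d, r, rng):
        self.W_down = rng.normal(0.0, 0.01, size=(d, r))  # project d -> r, r << d
        self.W_up = np.zeros((r, d))                      # project r -> d, zero init

    def __call__(self, h):
        # The zero-initialized up-projection makes the adapter an identity
        # map at the start, so grafting it does not perturb the pre-trained model.
        return h + np.maximum(h @ self.W_down, 0.0) @ self.W_up

rng = np.random.default_rng(0)
adapter = Adapter(d=256, r=16, rng=rng)
h = rng.normal(size=(32, 256))        # activations from a frozen layer
assert np.allclose(adapter(h), h)     # exact identity at initialization
```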
2006.03810 | An Empirical Analysis of the Impact of Data Augmentation on Knowledge
Distillation | Generalization Performance of Deep Learning models trained using Empirical Risk Minimization can be improved significantly by using Data Augmentation strategies such as simple transformations, or using Mixed Samples. We attempt to empirically analyze the impact of such strategies on the transfer of generalization between teacher and student models in a distillation setup. We observe that if a teacher is trained using any of the mixed sample augmentation strategies, such as MixUp or CutMix, the student model distilled from it is impaired in its generalization capabilities. We hypothesize that such strategies limit a model's capability to learn example-specific features, leading to a loss in quality of the supervision signal during distillation. We present a novel Class-Discrimination metric to quantitatively measure this dichotomy in performance and link it to the discriminative capacity induced by the different strategies on a network's latent space. | http://arxiv.org/pdf/2006.03810v2 | [
"Deepan Das",
"Haley Massa",
"Abhimanyu Kulkarni",
"Theodoros Rekatsinas"
] | 2020-06-09T13:01:00Z | 2020-06-06T08:20:48Z |
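
For concreteness, this is what the mixed-sample strategies under study do to a training pair: MixUp draws a Beta-distributed coefficient and interpolates both inputs and one-hot labels, so the teacher rarely sees a pure, example-specific target. A minimal sketch follows (the alpha value is a typical choice, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

def mixup(x1, y1, x2, y2, alpha=0.2):
    """MixUp: the same convex combination of two inputs and their labels."""
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

x1, x2 = np.ones((8, 8)), np.zeros((8, 8))           # two toy images
y1, y2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])  # one-hot labels
x, y = mixup(x1, y1, x2, y2)
print(y)   # a soft label such as [0.98, 0.02] instead of a hard one-hot
```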
2006.05232 | Detecting structural perturbations from time series with deep learning | Small disturbances can trigger functional breakdowns in complex systems. A challenging task is to infer the structural cause of a disturbance in a networked system, soon enough to prevent a catastrophe. We present a graph neural network approach, borrowed from the deep learning paradigm, to infer structural perturbations from functional time series. We show our data-driven approach outperforms typical reconstruction methods while meeting the accuracy of Bayesian inference. We validate the versatility and performance of our approach with epidemic spreading, population dynamics, and neural dynamics, on various network structures: random networks, scale-free networks, 25 real food-web systems, and the C. Elegans connectome. Moreover, we report that our approach is robust to data corruption. This work uncovers a practical avenue to study the resilience of real-world complex systems. | http://arxiv.org/pdf/2006.05232v1 | [
"Edward Laurence",
"Charles Murphy",
"Guillaume St-Onge",
"Xavier Roy-Pomerleau",
"Vincent Thibeault"
] | 2020-06-09T13:08:40Z | 2020-06-09T13:08:40Z |
2006.05252 | A bio-inspired bistable recurrent cell allows for long-lasting memory | Recurrent neural networks (RNNs) provide state-of-the-art performance in a wide variety of tasks that require memory. These performances can often be achieved thanks to gated recurrent cells such as gated recurrent units (GRU) and long short-term memory (LSTM). Standard gated cells share a layer-internal state to store information at the network level, and long-term memory is shaped by network-wide recurrent connection weights. Biological neurons, on the other hand, are capable of holding information at the cellular level for an arbitrarily long amount of time through a process called bistability. Through bistability, cells can stabilize to different stable states depending on their own past state and inputs, which permits the durable storing of past information in the neuron state. In this work, we take inspiration from biological neuron bistability to embed RNNs with long-lasting memory at the cellular level. This leads to the introduction of a new bistable biologically-inspired recurrent cell that is shown to strongly improve RNN performance on time series which require very long memory, despite using only cellular connections (all recurrent connections are from neurons to themselves, i.e. a neuron's state is not influenced by the states of other neurons). Furthermore, equipping this cell with recurrent neuromodulation permits linking it to standard GRU cells, taking a step towards the biological plausibility of GRU. | http://arxiv.org/abs/2006.05252v1 | [
"Nicolas Vecoven",
"Damien Ernst",
"Guillaume Drion"
] | 2020-06-09T13:36:31Z | 2020-06-09T13:36:31Z |
2006.05255 | DeepFair: Deep Learning for Improving Fairness in Recommender Systems | The lack of bias management in Recommender Systems leads to minority groups receiving unfair recommendations. Moreover, the trade-off between equity and precision makes it difficult to obtain recommendations that meet both criteria. Here we propose a Deep Learning based Collaborative Filtering algorithm that provides recommendations with an optimum balance between fairness and accuracy without knowing demographic information about the users. Experimental results show that it is possible to make fair recommendations without losing a significant proportion of accuracy. | http://arxiv.org/abs/2006.05255v1 | [
"Jesús Bobadilla",
"Raúl Lara-Cabrera",
"Ángel González-Prieto",
"Fernando Ortega"
] | 2020-06-09T13:39:38Z | 2020-06-09T13:39:38Z |
2006.05276 | A Flexible and Intelligent Framework for Remote Health Monitoring
Dashboards | Developing and maintaining monitoring panels is undoubtedly the main task in remote patient monitoring (RPM) systems. Due to significant variations in the desired functionalities, data sources, and objectives, designing an efficient dashboard that responds to the various needs of an RPM project is generally a cumbersome task. In this work, we present ViSierra, a framework for designing data monitoring dashboards in RPM projects. The abstractions and different components of this open-source project are explained, and examples are provided to support our claim concerning the effectiveness of this framework in preparing fast, efficient, and accurate monitoring platforms with minimal coding. These platforms cover all the necessary aspects of a traditional RPM project, combine them with novel functionalities such as machine learning solutions, and provide experts with better data analysis instruments for tracking the information. | http://arxiv.org/pdf/2006.05276v1 | [
"Shayan Fazeli",
"Majid Sarrafzadeh"
] | 2020-06-09T14:07:45Z | 2020-06-09T14:07:45Z |
2006.05281 | Extensive Error Analysis and a Learning-Based Evaluation of Medical
Entity Recognition Systems to Approximate User Experience | When comparing entities extracted by a medical entity recognition system with gold standard annotations over a test set, two types of mismatches might occur, label mismatch or span mismatch. Here we focus on span mismatch and show that its severity can vary from a serious error to a fully acceptable entity extraction due to the subjectivity of span annotations. For a domain-specific BERT-based NER system, we showed that 25% of the errors have the same labels and overlapping span with gold standard entities. We collected expert judgement which shows more than 90% of these mismatches are accepted or partially accepted by the user. Using the training set of the NER system, we built a fast and lightweight entity classifier to approximate the user experience of such mismatches through accepting or rejecting them. The decisions made by this classifier are used to calculate a learning-based F-score which is shown to be a better approximation of a forgiving user's experience than the relaxed F-score. We demonstrated the results of applying the proposed evaluation metric for a variety of deep learning medical entity recognition models trained with two datasets. | http://arxiv.org/pdf/2006.05281v1 | [
"Isar Nejadgholi",
"Kathleen C. Fraser",
"Berry De Bruijn"
] | 2020-06-09T14:15:33Z | 2020-06-09T14:15:33Z |
2002.08676 | Learning with Differentiable Perturbed Optimizers | Machine learning pipelines often rely on optimization procedures to make discrete decisions (e.g., sorting, picking closest neighbors, or shortest paths). Although these discrete decisions are easily computed, they break the back-propagation of computational graphs. In order to expand the scope of learning problems that can be solved in an end-to-end fashion, we propose a systematic method to transform optimizers into operations that are differentiable and never locally constant. Our approach relies on stochastically perturbed optimizers, and can be used readily together with existing solvers. Their derivatives can be evaluated efficiently, and smoothness tuned via the chosen noise amplitude. We also show how this framework can be connected to a family of losses developed in structured prediction, and give theoretical guarantees for their use in learning tasks. We demonstrate experimentally the performance of our approach on various tasks. | http://arxiv.org/pdf/2002.08676v2 | [
"Quentin Berthet",
"Mathieu Blondel",
"Olivier Teboul",
"Marco Cuturi",
"Jean-Philippe Vert",
"Francis Bach"
] | 2020-06-09T15:09:00Z | 2020-02-20T11:11:32Z |
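
The forward pass of a perturbed optimizer is easy to sketch: replace argmax(theta) by the Monte Carlo average of argmax over noisy copies of theta, which is smooth in theta. The paper's efficient derivative estimators and the link to structured losses are not shown; the noise scale and sample count below are arbitrary choices.

```python
import numpy as np

def perturbed_argmax(theta, sigma=0.5, n_samples=10000, seed=0):
    """Monte Carlo estimate of E[argmax(theta + sigma * Z)], Z ~ N(0, I):
    a smoothed, never locally constant version of the hard argmax."""
    rng = np.random.default_rng(seed)
    Z = rng.standard_normal((n_samples, theta.size))
    idx = np.argmax(theta + sigma * Z, axis=1)
    onehot = np.zeros((n_samples, theta.size))
    onehot[np.arange(n_samples), idx] = 1.0
    return onehot.mean(axis=0)

theta = np.array([1.0, 1.2, 0.4])
print(perturbed_argmax(theta))              # soft probabilities over argmax
print(perturbed_argmax(theta, sigma=0.01))  # small noise: nearly hard argmax
```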
2006.05336 | On the Effectiveness of Regularization Against Membership Inference
Attacks | Deep learning models often raise privacy concerns as they leak information about their training data. This enables an adversary to determine whether a data point was in a model's training set by conducting a membership inference attack (MIA). Prior work has conjectured that regularization techniques, which combat overfitting, may also mitigate the leakage. While many regularization mechanisms exist, their effectiveness against MIAs has not been studied systematically, and the resulting privacy properties are not well understood. We explore the lower bound for information leakage that practical attacks can achieve. First, we evaluate the effectiveness of 8 mechanisms in mitigating two recent MIAs, on three standard image classification tasks. We find that certain mechanisms, such as label smoothing, may inadvertently help MIAs. Second, we investigate the potential of improving the resilience to MIAs by combining complementary mechanisms. Finally, we quantify the opportunity of future MIAs to compromise privacy by designing a white-box `distance-to-confident' (DtC) metric, based on adversarial sample crafting. Our metric reveals that, even when existing MIAs fail, the training samples may remain distinguishable from test samples. This suggests that regularization mechanisms can provide a false sense of privacy, even when they appear effective against existing MIAs. | http://arxiv.org/pdf/2006.05336v1 | [
"Yigitcan Kaya",
"Sanghyun Hong",
"Tudor Dumitras"
] | 2020-06-09T15:17:21Z | 2020-06-09T15:17:21Z |
2006.05345 | Statistical Estimation of High-Dimensional Vector Autoregressive Models | High-dimensional vector autoregressive (VAR) models are important tools for the analysis of multivariate time series. This paper focuses on high-dimensional time series and on the different regularized estimation procedures proposed for fitting sparse VAR models to such time series. Attention is paid to the different sparsity assumptions imposed on the VAR parameters and to how these sparsity assumptions are related to the particular consistency properties established for the estimators. A sparsity scheme for high-dimensional VAR models is proposed which is found to be more appropriate for the time series setting considered. Furthermore, it is shown that, under this sparsity setting, thresholding extends the consistency properties of regularized estimators to a wide range of matrix norms. Among other things, this enables application of the VAR parameter estimators to different inference problems, like forecasting or estimating the second-order characteristics of the underlying VAR process. Extensive simulations compare the finite-sample behavior of the different regularized estimators proposed, using a variety of performance criteria. | http://arxiv.org/pdf/2006.05345v1 | [
"Jonas Krampe",
"Efstathios Paparoditis"
] | 2020-06-09T15:25:20Z | 2020-06-09T15:25:20Z |
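
One of the standard regularized estimators surveyed here, an l1-penalized least-squares fit of each VAR equation on lagged values, can be sketched directly. The paper's sparsity schemes and thresholding refinements go beyond this; the penalty level and the toy VAR(1) process are our own choices.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
d, T = 10, 500
A_true = np.diag(np.full(d, 0.5))              # sparse transition matrix
X = np.zeros((T, d))
for t in range(1, T):                          # simulate a stable VAR(1)
    X[t] = X[t - 1] @ A_true.T + 0.1 * rng.standard_normal(d)

Y, Z = X[1:], X[:-1]                           # responses and lagged predictors
A_hat = np.stack([Lasso(alpha=0.01, fit_intercept=False).fit(Z, Y[:, i]).coef_
                  for i in range(d)])          # one lasso per VAR equation
print(np.round(A_hat, 2))                      # ~= 0.5 diagonal, ~0 elsewhere
```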
1909.03267 | A Tree-based Dictionary Learning Framework | We propose a new outline for adaptive dictionary learning methods for sparse encoding based on a hierarchical clustering of the training data. Through recursive application of a clustering method, the data is organized into a binary partition tree representing a multiscale structure. The dictionary atoms are defined adaptively based on the data clusters in the partition tree. This approach can be interpreted as a generalization of a discrete Haar wavelet transform. Furthermore, any prior knowledge on the wanted structure of the dictionary elements can be simply incorporated. The computational complexity of our proposed algorithm depends on the employed clustering method and on the chosen similarity measure between data points. Thanks to the multiscale properties of the partition tree, our dictionary is structured: when using Orthogonal Matching Pursuit to reconstruct patches from a natural image, dictionary atoms corresponding to nodes being closer to the root node in the tree have a tendency to be used with greater coefficients. | http://arxiv.org/abs/1909.03267v2 | [
"Renato Budinich",
"Gerlind Plonka"
] | 2020-06-09T15:55:31Z | 2019-09-07T13:48:23Z |
2006.05385 | Interpretable Deep Graph Generation with Node-Edge Co-Disentanglement | Disentangled representation learning has recently attracted a significant amount of attention, particularly in the field of image representation learning. However, learning the disentangled representations behind a graph remains largely unexplored, especially for the attributed graph with both node and edge features. Disentanglement learning for graph generation has substantial new challenges including 1) the lack of graph deconvolution operations to jointly decode node and edge attributes; and 2) the difficulty in enforcing the disentanglement among latent factors that respectively influence: i) only nodes, ii) only edges, and iii) joint patterns between them. To address these challenges, we propose a new disentanglement enhancement framework for deep generative models for attributed graphs. In particular, a novel variational objective is proposed to disentangle the above three types of latent factors, with novel architecture for node and edge deconvolutions. Moreover, within each type, individual-factor-wise disentanglement is further enhanced, which is shown to be a generalization of the existing framework for images. Qualitative and quantitative experiments on both synthetic and real-world datasets demonstrate the effectiveness of the proposed model and its extensions. | http://arxiv.org/abs/2006.05385v1 | [
"Xiaojie Guo",
"Liang Zhao",
"Zhao Qin",
"Lingfei Wu",
"Amarda Shehu",
"Yanfang Ye"
] | 2020-06-09T16:33:49Z | 2020-06-09T16:33:49Z |
2006.05388 | End-to-end User Recognition using Touchscreen Biometrics | We study touchscreen data as a behavioural biometric. The goal is to create an end-to-end system that can transparently identify users using raw data from mobile devices. Touchscreen biometrics has been researched only a few times, in a series of works with disparate methodologies and databases. In the proposed system, data from the touchscreen goes directly, without any processing, to the input of a deep neural network, which decides on the identity of the user. No hand-crafted features are used; the classification algorithm finds patterns on its own from raw data. The achieved results show that the proposed deep model is sufficient for the given identification task. The performed tests indicate high user identification accuracy and better EER results compared to state-of-the-art systems. The best result achieved by our system is 0.65% EER. | http://arxiv.org/pdf/2006.05388v1 | [
"Michał Krzemiński",
"Javier Hernando"
] | 2020-06-09T16:38:09Z | 2020-06-09T16:38:09Z |
2006.05397 | Real-time Localization Using Radio Maps | This paper deals with the problem of localization in a cellular network in a dense urban scenario. Global Navigation Satellite System typically performs poorly in urban environments when there is no line-of-sight between the devices and the satellites, and thus alternative localization methods are often required. We present a simple yet effective method for localization based on pathloss. In our approach, the user to be localized reports the received signal strength from a set of base stations with known locations. For each base station we have a good approximation of the pathloss at each location in the map, provided by RadioUNet, an efficient deep learning-based simulator of pathloss functions in urban environment, akin to ray-tracing. Using the approximations of the pathloss functions of all base stations and the reported signal strengths, we are able to extract a very accurate approximation of the location of the user. | http://arxiv.org/pdf/2006.05397v1 | [
"Çağkan Yapar",
"Ron Levie",
"Gitta Kutyniok",
"Giuseppe Caire"
] | 2020-06-09T16:51:17Z | 2020-06-09T16:51:17Z |
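
The matching step this abstract describes reduces to a grid search: pick the map pixel whose predicted received signal strengths best fit the reported ones. In the paper the pathloss maps come from RadioUNet; here they are random stand-ins, and the simple dB link model and least-squares criterion are our assumptions.

```python
import numpy as np

def localize(pathloss_maps, tx_powers, reported_rss):
    """Return the pixel whose predicted RSS (tx power minus pathloss, in dB)
    best matches the reported RSS across all base stations."""
    predicted = tx_powers[:, None, None] - pathloss_maps     # (n_bs, H, W)
    err = ((predicted - reported_rss[:, None, None]) ** 2).sum(axis=0)
    return np.unravel_index(np.argmin(err), err.shape)

rng = np.random.default_rng(0)
n_bs, H, W = 4, 64, 64
maps = rng.uniform(60, 120, size=(n_bs, H, W))  # stand-in for RadioUNet output
power = np.full(n_bs, 30.0)                     # transmit powers, assumed known
true = (10, 42)
rss = power - maps[:, true[0], true[1]] + 0.5 * rng.standard_normal(n_bs)
print(localize(maps, power, rss))               # recovers ~(10, 42)
```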
2006.05398 | Deep Visual Reasoning: Learning to Predict Action Sequences for Task and
Motion Planning from an Initial Scene Image | In this paper, we propose a deep convolutional recurrent neural network that predicts action sequences for task and motion planning (TAMP) from an initial scene image. Typical TAMP problems are formalized by combining reasoning on a symbolic, discrete level (e.g. first-order logic) with continuous motion planning such as nonlinear trajectory optimization. Due to the great combinatorial complexity of possible discrete action sequences, a large number of optimization/motion planning problems have to be solved to find a solution, which limits the scalability of these approaches. To circumvent this combinatorial complexity, we develop a neural network which, based on an initial image of the scene, directly predicts promising discrete action sequences such that ideally only one motion planning problem has to be solved to find a solution to the overall TAMP problem. A key aspect is that our method generalizes to scenes with many objects and a varying number of objects, despite being trained on only two objects at a time. This is made possible by encoding the objects of the scene in images as input to the neural network, instead of a fixed feature vector. Results show runtime improvements of several orders of magnitude. Video: https://youtu.be/i8yyEbbvoEk | http://arxiv.org/pdf/2006.05398v1 | [
"Danny Driess",
"Jung-Su Ha",
"Marc Toussaint"
] | 2020-06-09T16:52:02Z | 2020-06-09T16:52:02Z |
2006.05403 | Distributed Learning on Heterogeneous Resource-Constrained Devices | We consider a distributed system, consisting of a heterogeneous set of devices, ranging from low-end to high-end. These devices have different profiles, e.g., different energy budgets, or different hardware specifications, determining their capabilities on performing certain learning tasks. We propose the first approach that enables distributed learning in such a heterogeneous system. Applying our approach, each device employs a neural network (NN) with a topology that fits its capabilities; however, part of these NNs share the same topology, so that their parameters can be jointly learned. This differs from current approaches, such as federated learning, which require all devices to employ the same NN, enforcing a trade-off between achievable accuracy and computational overhead of training. We evaluate heterogeneous distributed learning for reinforcement learning (RL) and observe that it greatly improves the achievable reward on more powerful devices, compared to current approaches, while still maintaining a high reward on the weaker devices. We also explore supervised learning, observing similar gains. | http://arxiv.org/pdf/2006.05403v1 | [
"Martin Rapp",
"Ramin Khalili",
"Jörg Henkel"
] | 2020-06-09T16:58:49Z | 2020-06-09T16:58:49Z |
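A minimal PyTorch sketch of the parameter-sharing idea from the heterogeneous distributed learning abstract above, assuming only that part of each device's network shares a topology (the trunk); the layer sizes and names are hypothetical, not the paper's architecture.

```python
import torch
import torch.nn as nn

# One trunk instance reused in both models: its parameters are shared and
# can be jointly learned, while each device keeps a head sized to its budget.
shared_trunk = nn.Sequential(nn.Linear(16, 32), nn.ReLU())

head_low_end = nn.Linear(32, 4)                    # low-end device
head_high_end = nn.Sequential(nn.Linear(32, 64),   # high-end device
                              nn.ReLU(), nn.Linear(64, 4))

low_model = nn.Sequential(shared_trunk, head_low_end)
high_model = nn.Sequential(shared_trunk, head_high_end)

x = torch.randn(8, 16)
print(low_model(x).shape, high_model(x).shape)     # both torch.Size([8, 4])
```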
2006.05409 | Fast Modeling and Understanding Fluid Dynamics Systems with
Encoder-Decoder Networks | Is a deep learning model capable of understanding systems governed by certain first principle laws by only observing the system's output? Can deep learning learn the underlying physics and honor the physics when making predictions? The answers are both positive. In an effort to simulate two-dimensional subsurface fluid dynamics in porous media, we found that an accurate deep-learning-based proxy model can be taught efficiently by a computationally expensive finite-volume-based simulator. We pose the problem as an image-to-image regression, running the simulator with different input parameters to furnish a synthetic training dataset upon which we fit the deep learning models. Since the data is spatiotemporal, we compare the performance of two alternative treatments of time; a convolutional LSTM versus an autoencoder network that treats time as a direct input. Adversarial methods are adopted to address the sharp spatial gradient in the fluid dynamic problems. Compared to traditional simulation, the proposed deep learning approach enables much faster forward computation, which allows us to explore more scenarios with a much larger parameter space given the same time. It is shown that the improved forward computation efficiency is particularly valuable in solving inversion problems, where the physics model has unknown parameters to be determined by history matching. By computing the pixel-level attention of the trained model, we quantify the sensitivity of the deep learning model to key physical parameters and hence demonstrate that the inversion problems can be solved with great acceleration. We assess the efficacy of the machine learning surrogate in terms of its training speed and accuracy. The network can be trained within minutes using limited training data and achieve accuracy that scales desirably with the amount of training data supplied. | http://arxiv.org/pdf/2006.05409v1 | [
"Rohan Thavarajah",
"Xiang Zhai",
"Zheren Ma",
"David Castineira"
] | 2020-06-09T17:14:08Z | 2020-06-09T17:14:08Z |
1910.03225 | NGBoost: Natural Gradient Boosting for Probabilistic Prediction | We present Natural Gradient Boosting (NGBoost), an algorithm for generic probabilistic prediction via gradient boosting. Typical regression models return a point estimate, conditional on covariates, but probabilistic regression models output a full probability distribution over the outcome space, conditional on the covariates. This allows for predictive uncertainty estimation -- crucial in applications like healthcare and weather forecasting. NGBoost generalizes gradient boosting to probabilistic regression by treating the parameters of the conditional distribution as targets for a multiparameter boosting algorithm. Furthermore, we show how the Natural Gradient is required to correct the training dynamics of our multiparameter boosting approach. NGBoost can be used with any base learner, any family of distributions with continuous parameters, and any scoring rule. NGBoost matches or exceeds the performance of existing methods for probabilistic prediction while offering additional benefits in flexibility, scalability, and usability. An open-source implementation is available at github.com/stanfordmlgroup/ngboost. | http://arxiv.org/pdf/1910.03225v4 | [
"Tony Duan",
"Anand Avati",
"Daisy Yi Ding",
"Khanh K. Thai",
"Sanjay Basu",
"Andrew Y. Ng",
"Alejandro Schuler"
] | 2020-06-09T17:25:09Z | 2019-10-08T06:07:13Z |
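Since the NGBoost abstract above points to an open-source implementation, a short usage sketch is possible (assuming the package is installed and that its public API matches the documented `NGBRegressor`/`pred_dist` interface):

```python
# Requires: pip install ngboost scikit-learn
from ngboost import NGBRegressor
from sklearn.datasets import load_diabetes
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# NGBoost fits the parameters of a conditional distribution (Normal by
# default) via natural-gradient boosting.
ngb = NGBRegressor(n_estimators=200).fit(X_tr, y_tr)

dist = ngb.pred_dist(X_te)            # full predictive distribution
print(dist.params["loc"][:3])         # per-point predicted mean
print(dist.params["scale"][:3])       # per-point predictive std. dev.
```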
2006.05415 | Neuroevolution in Deep Neural Networks: Current Trends and Future
Challenges | A variety of methods have been applied to the architectural configuration and learning or training of artificial deep neural networks (DNN). These methods play a crucial role in the success or failure of the DNN for most problems and applications. Evolutionary Algorithms (EAs) are gaining momentum as a computationally feasible method for the automated optimisation and training of DNNs. Neuroevolution is a term which describes these processes of automated configuration and training of DNNs using EAs. While many works exist in the literature, no comprehensive surveys currently exist focusing exclusively on the strengths and limitations of using neuroevolution approaches in DNNs. The prolonged absence of such surveys can lead to a disjointed and fragmented field, preventing DNN researchers from adopting neuroevolutionary methods in their own research and resulting in lost opportunities for improving performance and wider application within real-world deep learning problems. This paper presents a comprehensive survey, discussion and evaluation of the state-of-the-art works on using EAs for architectural configuration and training of DNNs. Based on this survey, the paper highlights the most pertinent current issues and challenges in neuroevolution and identifies multiple promising future research directions. | http://arxiv.org/abs/2006.05415v1 | [
"Edgar Galván",
"Peter Mooney"
] | 2020-06-09T17:28:25Z | 2020-06-09T17:28:25Z |
2006.05419 | Cost-effective Interactive Attention Learning with Neural Attention
Processes | We propose a novel interactive learning framework which we refer to as Interactive Attention Learning (IAL), in which human supervisors interactively manipulate the allocated attentions to correct the model's behavior by updating the attention-generating network. However, such a model is prone to overfitting due to the scarcity of human annotations, and requires costly retraining. Moreover, it is almost infeasible for human annotators to examine the attentions on large numbers of instances and features. We tackle these challenges by proposing a sample-efficient attention mechanism and a cost-effective reranking algorithm for instances and features. First, we propose the Neural Attention Process (NAP), which is an attention generator that can update its behavior by incorporating new attention-level supervision without any retraining. Second, we propose an algorithm which prioritizes the instances and the features by their negative impacts, such that the model can yield large improvements with minimal human feedback. We validate IAL on various time-series datasets from multiple domains (healthcare, real estate, and computer vision), on which it significantly outperforms baselines with conventional attention mechanisms or without cost-effective reranking, with substantially less retraining and human-model interaction cost. | http://arxiv.org/pdf/2006.05419v1 | [
"Jay Heo",
"Junhyeon Park",
"Hyewon Jeong",
"Kwang Joon Kim",
"Juho Lee",
"Eunho Yang",
"Sung Ju Hwang"
] | 2020-06-09T17:36:41Z | 2020-06-09T17:36:41Z |
2002.12499 | On Catastrophic Interference in Atari 2600 Games | Model-free deep reinforcement learning is sample inefficient. One hypothesis -- speculated, but not confirmed -- is that catastrophic interference within an environment inhibits learning. We test this hypothesis through a large-scale empirical study in the Arcade Learning Environment (ALE) and, indeed, find supporting evidence. We show that interference causes performance to plateau; the network cannot train on segments beyond the plateau without degrading the policy used to reach there. By synthetically controlling for interference, we demonstrate performance boosts across architectures, learning algorithms and environments. A more refined analysis shows that learning one segment of a game often increases prediction errors elsewhere. Our study provides a clear empirical link between catastrophic interference and sample efficiency in reinforcement learning. | http://arxiv.org/pdf/2002.12499v2 | [
"William Fedus",
"Dibya Ghosh",
"John D. Martin",
"Marc G. Bellemare",
"Yoshua Bengio",
"Hugo Larochelle"
] | 2020-06-09T17:36:46Z | 2020-02-28T00:55:03Z |
2006.01482 | Multi-Agent Determinantal Q-Learning | Centralized training with decentralized execution has become an important paradigm in multi-agent learning. Though practical, current methods rely on restrictive assumptions to decompose the centralized value function across agents for execution. In this paper, we eliminate this restriction by proposing multi-agent determinantal Q-learning. Our method is established on Q-DPP, an extension of determinantal point process (DPP) with partition-matroid constraint to multi-agent setting. Q-DPP promotes agents to acquire diverse behavioral models; this allows a natural factorization of the joint Q-functions with no need for \emph{a priori} structural constraints on the value function or special network architectures. We demonstrate that Q-DPP generalizes major solutions including VDN, QMIX, and QTRAN on decentralizable cooperative tasks. To efficiently draw samples from Q-DPP, we adopt an existing sample-by-projection sampler with theoretical approximation guarantee. The sampler also benefits exploration by coordinating agents to cover orthogonal directions in the state space during multi-agent training. We evaluate our algorithm on various cooperative benchmarks; its effectiveness has been demonstrated when compared with the state-of-the-art. | http://arxiv.org/pdf/2006.01482v4 | [
"Yaodong Yang",
"Ying Wen",
"Liheng Chen",
"Jun Wang",
"Kun Shao",
"David Mguni",
"Weinan Zhang"
] | 2020-06-09T17:50:25Z | 2020-06-02T09:32:48Z |
2003.08040 | Differential Treatment for Stuff and Things: A Simple Unsupervised
Domain Adaptation Method for Semantic Segmentation | We consider the problem of unsupervised domain adaptation for semantic segmentation by easing the domain shift between the source domain (synthetic data) and the target domain (real data) in this work. State-of-the-art approaches prove that performing semantic-level alignment is helpful in tackling the domain shift issue. Based on the observation that stuff categories usually share similar appearances across images of different domains while things (i.e. object instances) have much larger differences, we propose to improve the semantic-level alignment with different strategies for stuff regions and for things: 1) for the stuff categories, we generate feature representation for each class and conduct the alignment operation from the target domain to the source domain; 2) for the thing categories, we generate feature representation for each individual instance and encourage the instance in the target domain to align with the most similar one in the source domain. In this way, the individual differences within thing categories will also be considered to alleviate over-alignment. In addition to our proposed method, we further reveal the reason why the current adversarial loss is often unstable in minimizing the distribution discrepancy and show that our method can help ease this issue by minimizing the most similar stuff and instance features between the source and the target domains. We conduct extensive experiments in two unsupervised domain adaptation tasks, i.e. GTA5 to Cityscapes and SYNTHIA to Cityscapes, and achieve the new state-of-the-art segmentation accuracy. | http://arxiv.org/pdf/2003.08040v3 | [
"Zhonghao Wang",
"Mo Yu",
"Yunchao Wei",
"Rogerio Feris",
"Jinjun Xiong",
"Wen-mei Hwu",
"Thomas S. Huang",
"Humphrey Shi"
] | 2020-06-09T17:56:27Z | 2020-03-18T04:43:25Z |
2006.05441 | Faster PAC Learning and Smaller Coresets via Smoothed Analysis | PAC-learning usually aims to compute a small subset ($\varepsilon$-sample/net) from $n$ items, that provably approximates a given loss function for every query (model, classifier, hypothesis) from a given set of queries, up to an additive error $\varepsilon\in(0,1)$. Coresets generalize this idea to support multiplicative error $1\pm\varepsilon$. Inspired by smoothed analysis, we suggest a natural generalization: approximate the \emph{average} (instead of the worst-case) error over the queries, in the hope of getting smaller subsets. The dependency between errors of different queries implies that we may no longer apply the Chernoff-Hoeffding inequality for a fixed query, and then use the VC-dimension or union bound. This paper provides deterministic and randomized algorithms for computing such coresets and $\varepsilon$-samples of size independent of $n$, for any finite set of queries and loss function. Example applications include new and improved coreset constructions for e.g. streaming vector summarization [ICML'17] and $k$-PCA [NIPS'16]. Experimental results with open source code are provided. | http://arxiv.org/pdf/2006.05441v1 | [
"Alaa Maalouf",
"Ibrahim Jubran",
"Murad Tukan",
"Dan Feldman"
] | 2020-06-09T18:25:34Z | 2020-06-09T18:25:34Z |
2006.05442 | Tensor train decompositions on recurrent networks | Recurrent neural networks (RNN) such as long-short-term memory (LSTM) networks are essential in a multitude of daily life tasks such as speech, language, video, and multimodal learning. The shift from cloud to edge computation intensifies the need to contain the growth of RNN parameters. Current research on RNNs shows that, despite the compression performance obtained on convolutional neural networks (CNNs), maintaining good performance in compressed RNNs is still a challenge. Most of the literature on compression focuses on CNNs using matrix product operator (MPO) tensor trains. However, matrix product state (MPS) tensor trains have more attractive features than MPOs, in terms of storage reduction and computing time at inference. We show that MPS tensor trains should be at the forefront of LSTM network compression through a theoretical analysis and practical experiments on an NLP task. | http://arxiv.org/pdf/2006.05442v1 | [
"Alejandro Murua",
"Ramchalam Ramakrishnan",
"Xinlin Li",
"Rui Heng Yang",
"Vahid Partovi Nia"
] | 2020-06-09T18:25:39Z | 2020-06-09T18:25:39Z |
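A generic TT-SVD sketch illustrating the tensor-train (MPS) format discussed in the abstract above: an LSTM-style weight matrix is reshaped into a higher-order tensor and factored by sequential truncated SVDs. This is a standard construction, not the paper's exact compression pipeline.

```python
import numpy as np

def tt_svd(tensor, max_rank):
    """Decompose a d-way tensor into tensor-train (MPS) cores via
    sequential truncated SVDs."""
    cores, r, t = [], 1, tensor
    dims = tensor.shape
    for k in range(len(dims) - 1):
        mat = t.reshape(r * dims[k], -1)
        u, s, vt = np.linalg.svd(mat, full_matrices=False)
        r_new = min(max_rank, len(s))
        cores.append(u[:, :r_new].reshape(r, dims[k], r_new))
        t = s[:r_new, None] * vt[:r_new]      # carry the rest forward
        r = r_new
    cores.append(t.reshape(r, dims[-1], 1))
    return cores

# Compress a 256x256 weight matrix reshaped as a 4-way tensor.
w = np.random.randn(256, 256).reshape(16, 16, 16, 16)
cores = tt_svd(w, max_rank=8)
print([c.shape for c in cores])
print(sum(c.size for c in cores), "parameters vs", w.size)
```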
2006.04655 | Random Hypervolume Scalarizations for Provable Multi-Objective Black Box
Optimization | Single-objective black box optimization (also known as zeroth-order optimization) is the process of minimizing a scalar objective $f(x)$, given evaluations at adaptively chosen inputs $x$. In this paper, we consider multi-objective optimization, where $f(x)$ outputs a vector of possibly competing objectives and the goal is to converge to the Pareto frontier. Quantitatively, we wish to maximize the standard hypervolume indicator metric, which measures the dominated hypervolume of the entire set of chosen inputs. In this paper, we introduce a novel scalarization function, which we term the hypervolume scalarization, and show that drawing random scalarizations from an appropriately chosen distribution can be used to efficiently approximate the hypervolume indicator metric. We utilize this connection to show that Bayesian optimization with our scalarization via common acquisition functions, such as Thompson Sampling or Upper Confidence Bound, provably converges to the whole Pareto frontier by deriving tight hypervolume regret bounds on the order of $\widetilde{O}(\sqrt{T})$. Furthermore, we highlight the general utility of our scalarization framework by showing that any provably convergent single-objective optimization process can be effortlessly converted to a multi-objective optimization process with provable convergence guarantees. | http://arxiv.org/pdf/2006.04655v2 | [
"Daniel Golovin",
"Qiuyi Zhang"
] | 2020-06-09T18:29:23Z | 2020-06-08T15:00:30Z |
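A sketch of how random scalarizations can estimate a dominated hypervolume, following the construction described in the abstract above; the exact form of the scalarization and its normalizing constant are assumptions here, so treat the snippet as illustrative only.

```python
import numpy as np

def hv_scalarization(y, lam, ref=0.0):
    """One draw: s_lam(y) = min_j max(0, (y_j - ref) / lam_j) ** k for a
    k-objective vector y (to maximize) and direction lam on the positive
    unit sphere. Averaging max_y s_lam(y) over random lam is proportional
    to the hypervolume dominated by the chosen points."""
    return np.min(np.maximum(0.0, (np.asarray(y) - ref) / lam)) ** len(y)

rng = np.random.default_rng(0)
pareto_set = np.array([[1.0, 0.2], [0.7, 0.7], [0.2, 1.0]])
estimates = []
for _ in range(10000):
    lam = np.abs(rng.normal(size=2))
    lam /= np.linalg.norm(lam)                 # uniform on positive sphere
    estimates.append(max(hv_scalarization(y, lam) for y in pareto_set))
print(np.mean(estimates))                      # ~ c * dominated hypervolume
```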
2006.05444 | Hierarchical regularization networks for sparsification based learning
on noisy datasets | We propose a hierarchical learning strategy aimed at generating sparse representations and associated models for large noisy datasets. The hierarchy follows from approximation spaces identified at successively finer scales. For promoting model generalization at each scale, we also introduce a novel, projection based penalty operator across multiple dimensions, using permutation operators for incorporating proximity and ordering information. The paper presents a detailed analysis of approximation properties in the reconstruction Reproducing Kernel Hilbert Spaces (RKHS) with emphasis on optimality and consistency of predictions and behavior of error functionals associated with the produced sparse representations. Results show the performance of the approach as a data reduction and modeling strategy on both synthetic (univariate and multivariate) and real datasets (time series). The sparse model for the test datasets, generated by the presented approach, is also shown to efficiently reconstruct the underlying process and preserve generalizability. | http://arxiv.org/pdf/2006.05444v1 | [
"Prashant Shekhar",
"Abani Patra"
] | 2020-06-09T18:32:24Z | 2020-06-09T18:32:24Z |
2006.05469 | Examination and Extension of Strategies for Improving Personalized
Language Modeling via Interpolation | In this paper, we detail novel strategies for interpolating personalized language models and methods to handle out-of-vocabulary (OOV) tokens, both aimed at improving personalized language models. Using publicly available data from Reddit, we demonstrate improvements in offline metrics at the user level by interpolating a global LSTM-based authoring model with a user-personalized n-gram model. By optimizing this approach with a back-off to a uniform OOV penalty and the interpolation coefficient, we observe that over 80% of users receive a lift in perplexity, with an average perplexity lift of 5.2% per user. In doing this research we extend previous work in building NLIs and improve the robustness of metrics for downstream tasks. | http://arxiv.org/pdf/2006.05469v1 | [
"Liqun Shao",
"Sahitya Mantravadi",
"Tom Manzini",
"Alejandro Buendia",
"Manon Knoertzer",
"Soundar Srinivasan",
"Chris Quirk"
] | 2020-06-09T19:29:41Z | 2020-06-09T19:29:41Z |
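A toy sketch of the per-user interpolation described above, with both language models abstracted to (context, token) -> probability lookups; the LSTM global model, the choice of alpha, and the OOV penalty value are all stand-ins.

```python
def interpolated_prob(token, context, global_lm, user_ngram,
                      alpha=0.3, oov_penalty=1e-7):
    """Interpolate a global model with a personal n-gram model, backing
    off to a uniform OOV penalty when either model lacks the n-gram."""
    p_global = global_lm.get((context, token), oov_penalty)
    p_user = user_ngram.get((context, token), oov_penalty)
    return alpha * p_user + (1 - alpha) * p_global

# Toy models: the user's own history boosts "was" after "i".
global_lm = {(("i",), "am"): 0.2, (("i",), "was"): 0.1}
user_ngram = {(("i",), "was"): 0.5}
print(interpolated_prob("was", ("i",), global_lm, user_ngram))  # 0.22
```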
2006.05477 | Unsupervised Paraphrase Generation using Pre-trained Language Models | Large-scale pre-trained language models have proven to be a very powerful approach for various natural language tasks. OpenAI's GPT-2 \cite{radford2019language} is notable for its capability to generate fluent, well-formulated, grammatically consistent text and for phrase completions. In this paper we leverage this generation capability of GPT-2 to generate paraphrases without any supervision from labelled data. We examine how the results compare with other supervised and unsupervised approaches and the effect of using paraphrases for data augmentation on downstream tasks such as classification. Our experiments show that paraphrases generated with our model are of good quality, are diverse, and improve downstream task performance when used for data augmentation. | http://arxiv.org/pdf/2006.05477v1 | [
"Chaitra Hegde",
"Shrikumar Patil"
] | 2020-06-09T19:40:19Z | 2020-06-09T19:40:19Z |
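A plausible unsupervised setup in the spirit of the paraphrasing abstract above, using the Hugging Face `transformers` GPT-2 interface; the paper's exact prompting and filtering scheme is not specified here, so the prompt format is an assumption.

```python
# Requires: pip install transformers torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Feed the sentence in a prompt and sample continuations as candidate
# paraphrases; diversity comes from nucleus sampling.
prompt = "Original: The meeting was moved to Friday.\nParaphrase:"
ids = tok(prompt, return_tensors="pt").input_ids
out = model.generate(ids, do_sample=True, top_p=0.9, max_new_tokens=20,
                     num_return_sequences=3, pad_token_id=tok.eos_token_id)
for seq in out:
    print(tok.decode(seq[ids.shape[1]:], skip_special_tokens=True))
```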
2005.08323 | TG-GAN: Continuous-time Temporal Graph Generation with Deep Generative
Models | The deep generative models for static graphs that are now being actively developed have achieved significant success in areas such as molecule design. However, many real-world problems involve temporal graphs whose topology and attribute values evolve dynamically over time, including important applications such as protein folding, human mobility networks, and social network growth. Deep generative models for temporal graphs are not yet well understood, and existing techniques for static graphs are not adequate for temporal graphs since they cannot 1) encode and decode continuously-varying graph topology chronologically, 2) enforce validity via temporal constraints, or 3) ensure efficiency for information-lossless temporal resolution. To address these challenges, we propose a new model, called ``Temporal Graph Generative Adversarial Network'' (TG-GAN), for continuous-time temporal graph generation, by modeling the deep generative process for truncated temporal random walks and their compositions. Specifically, we first propose a novel temporal graph generator that jointly models truncated edge sequences, time budgets, and node attributes, with novel activation functions that enforce temporal validity constraints under a recurrent architecture. In addition, a new temporal graph discriminator is proposed, which combines time and node encoding operations over a recurrent architecture to distinguish the generated sequences from the real ones sampled by a newly-developed truncated temporal random walk sampler. Extensive experiments on both synthetic and real-world datasets demonstrate that TG-GAN significantly outperforms the comparison methods in efficiency and effectiveness. | http://arxiv.org/abs/2005.08323v2 | [
"Liming Zhang",
"Liang Zhao",
"Shan Qin",
"Dieter Pfoser"
] | 2020-06-09T19:47:40Z | 2020-05-17T17:59:12Z |
2006.05485 | Off-the-shelf sensor vs. experimental radar -- How much resolution is
necessary in automotive radar classification? | Radar-based road user detection is an important topic in the context of autonomous driving applications. The resolution of conventional automotive radar sensors results in a sparse data representation which is tough to refine during subsequent signal processing. On the other hand, a new sensor generation is waiting in the wings for its application in this challenging field. In this article, two sensors of different radar generations are evaluated against each other. The evaluation criterion is the performance on moving road user object detection and classification tasks. To this end, two data sets originating from an off-the-shelf radar and a high resolution next generation radar are compared. Special attention is given to how the two data sets are assembled in order to make them comparable. The utilized object detector consists of a clustering algorithm, a feature extraction module, and a recurrent neural network ensemble for classification. For the assessment, all components are evaluated both individually and, for the first time, as a whole. This allows for indicating where overall performance improvements have their origin in the pipeline. Furthermore, the generalization capabilities of both data sets are evaluated and important comparison metrics for automotive radar object detection are discussed. Results show clear benefits of the next generation radar. Interestingly, those benefits do not actually occur due to better performance at the classification stage, but rather because of the vast improvements at the clustering stage. | http://arxiv.org/abs/2006.05485v1 | [
"Nicolas Scheiner",
"Ole Schumann",
"Florian Kraus",
"Nils Appenrodt",
"Jürgen Dickmann",
"Bernhard Sick"
] | 2020-06-09T19:51:34Z | 2020-06-09T19:51:34Z |
2005.02191 | Localized active learning of Gaussian process state space models | The performance of learning-based control techniques crucially depends on how effectively the system is explored. While most exploration techniques aim to achieve a globally accurate model, such approaches are generally unsuited for systems with unbounded state spaces. Furthermore, a globally accurate model is not required to achieve good performance in many common control applications, e.g., local stabilization tasks. In this paper, we propose an active learning strategy for Gaussian process state space models that aims to obtain an accurate model on a bounded subset of the state-action space. Our approach aims to maximize the mutual information of the exploration trajectories with respect to a discretization of the region of interest. By employing model predictive control, the proposed technique integrates information collected during exploration and adaptively improves its exploration strategy. To enable computational tractability, we decouple the choice of most informative data points from the model predictive control optimization step. This yields two optimization problems that can be solved in parallel. We apply the proposed method to explore the state space of various dynamical systems and compare our approach to a commonly used entropy-based exploration strategy. In all experiments, our method yields a better model within the region of interest than the entropy-based method. | http://arxiv.org/pdf/2005.02191v3 | [
"Alexandre Capone",
"Jonas Umlauft",
"Thomas Beckers",
"Armin Lederer",
"Sandra Hirche"
] | 2020-06-09T19:57:11Z | 2020-05-04T05:35:02Z |
2006.05491 | Regret Balancing for Bandit and RL Model Selection | We consider model selection in stochastic bandit and reinforcement learning problems. Given a set of base learning algorithms, an effective model selection strategy adapts to the best learning algorithm in an online fashion. We show that by estimating the regret of each algorithm and playing the algorithms such that all empirical regrets are ensured to be of the same order, the overall regret balancing strategy achieves a regret that is close to the regret of the optimal base algorithm. Our strategy requires an upper bound on the optimal base regret as input, and the performance of the strategy depends on the tightness of the upper bound. We show that having this prior knowledge is necessary in order to achieve a near-optimal regret. Further, we show that any near-optimal model selection strategy implicitly performs a form of regret balancing. | http://arxiv.org/pdf/2006.05491v1 | [
"Yasin Abbasi-Yadkori",
"Aldo Pacchiano",
"My Phan"
] | 2020-06-09T20:11:19Z | 2020-06-09T20:11:19Z |
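A stylized sketch of the balancing rule described above: estimate each base learner's empirical regret against the best empirical mean and always play the base whose estimated regret is smallest, keeping all empirical regrets of the same order. The paper's actual algorithm additionally uses a putative upper bound on the optimal base regret; that refinement is omitted here.

```python
import random

def regret_balance(plays, horizon):
    """plays: list of zero-argument functions, each returning one reward
    from the corresponding base learner."""
    n = [0] * len(plays)
    cum = [0.0] * len(plays)
    for t in range(horizon):
        if t < len(plays):
            i = t                                  # one warm-up pull each
        else:
            best_mean = max(cum[j] / n[j] for j in range(len(plays)))
            # Estimated regret of base j so far: best_mean * n_j - cum_j.
            i = min(range(len(plays)),
                    key=lambda j: best_mean * n[j] - cum[j])
        cum[i] += plays[i]()
        n[i] += 1
    return n

bases = [lambda: float(random.random() < 0.6),   # better base learner
         lambda: float(random.random() < 0.4)]
print(regret_balance(bases, 2000))               # most pulls go to base 0
```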
2006.05493 | Predicting and Analyzing Law-Making in Kenya | Modelling and analyzing parliamentary legislation, roll-call votes and order of proceedings in developed countries have received significant attention in recent years. In this paper, we focused on understanding the bills introduced in a developing democracy, the Kenyan bicameral parliament. We developed and trained machine learning models on a combination of features extracted from the bills to predict the outcome: whether a bill will be enacted or not. We observed that the text of a bill is not as predictive as the year and month the bill was introduced and the category the bill belongs to. | http://arxiv.org/pdf/2006.05493v1 | [
"Oyinlola Babafemi",
"Adewale Akinfaderin"
] | 2020-06-09T20:21:50Z | 2020-06-09T20:21:50Z |
1905.10474 | A view of Estimation of Distribution Algorithms through the lens of
Expectation-Maximization | We show that a large class of Estimation of Distribution Algorithms, including, but not limited to, Covariance Matrix Adaption, can be written as a Monte Carlo Expectation-Maximization algorithm, and as exact EM in the limit of infinite samples. Because EM sits on a rigorous statistical foundation and has been thoroughly analyzed, this connection provides a new coherent framework with which to reason about EDAs. | http://arxiv.org/pdf/1905.10474v11 | [
"David H. Brookes",
"Akosua Busia",
"Clara Fannjiang",
"Kevin Murphy",
"Jennifer Listgarten"
] | 2022-06-10T22:19:09Z | 2019-05-24T23:06:02Z |
2004.08957 | Reconstruction of high-resolution 6x6-mm OCT angiograms using deep
learning | Typical optical coherence tomographic angiography (OCTA) acquisition areas on commercial devices are 3x3- or 6x6-mm. Compared to 3x3-mm angiograms with proper sampling density, 6x6-mm angiograms have significantly lower scan quality, with reduced signal-to-noise ratio and worse shadow artifacts due to undersampling. Here, we propose a deep-learning-based high-resolution angiogram reconstruction network (HARNet) to generate enhanced 6x6-mm superficial vascular complex (SVC) angiograms. The network was trained on data from 3x3-mm and 6x6-mm angiograms from the same eyes. The reconstructed 6x6-mm angiograms have significantly lower noise intensity and better vascular connectivity than the original images. The algorithm did not generate false flow signal at the noise level presented by the original angiograms. The image enhancement produced by our algorithm may improve biomarker measurements and qualitative clinical assessment of 6x6-mm OCTA. | http://arxiv.org/pdf/2004.08957v2 | [
"Min Gao",
"Yukun Guo",
"Tristan T. Hormel",
"Jiande Sun",
"Thomas Hwang",
"Yali Jia"
] | 2020-06-09T21:18:03Z | 2020-04-19T20:43:13Z |
2006.05514 | A Machine Learning Early Warning System: Multicenter Validation in
Brazilian Hospitals | Early recognition of clinical deterioration is one of the main steps for reducing inpatient morbidity and mortality. The challenging task of clinical deterioration identification in hospitals lies in the intense daily routines of healthcare practitioners, in the unconnected patient data stored in the Electronic Health Records (EHRs) and in the usage of low accuracy scores. Since hospital wards are given less attention compared to the Intensive Care Unit (ICU), we hypothesized that when a platform is connected to a stream of EHR, there would be a drastic improvement in awareness of dangerous situations, which could thus assist the healthcare team. With the application of machine learning, the system is capable of considering the patient's entire history, and through the use of high-performing predictive models, an intelligent early warning system is enabled. In this work we used 121,089 medical encounters from six different hospitals and 7,540,389 data points, and we compared popular ward protocols with six different scalable machine learning methods (three classic machine learning models, logistic and probabilistic-based, and three gradient boosted models). The results showed an advantage in AUC (Area Under the Receiver Operating Characteristic Curve) of 25 percentage points for the best machine learning model compared to the current state-of-the-art protocols. This is shown by the generalization of the algorithm with leave-one-group-out (AUC of 0.949) and the robustness through cross-validation (AUC of 0.961). We also perform experiments to compare several window sizes to justify the use of five patient timestamps. A sample dataset, experiments, and code are available for replicability purposes. | http://arxiv.org/pdf/2006.05514v1 | [
"Jhonatan Kobylarz",
"Henrique D. P. dos Santos",
"Felipe Barletta",
"Mateus Cichelero da Silva",
"Renata Vieira",
"Hugo M. P. Morales",
"Cristian da Costa Rocha"
] | 2020-06-09T21:21:38Z | 2020-06-09T21:21:38Z |
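The leave-one-group-out generalization check reported above can be reproduced in outline with scikit-learn; the synthetic features, labels, and hospital assignments below are placeholders for the (non-public) EHR data.

```python
# Requires: pip install scikit-learn
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 10))
y = (X[:, 0] + rng.normal(scale=0.5, size=600) > 0).astype(int)
hospital = rng.integers(0, 6, size=600)      # stand-in for the 6 hospitals

# Train on five hospitals, evaluate AUC on the held-out sixth.
aucs = []
for tr, te in LeaveOneGroupOut().split(X, y, groups=hospital):
    clf = GradientBoostingClassifier().fit(X[tr], y[tr])
    aucs.append(roc_auc_score(y[te], clf.predict_proba(X[te])[:, 1]))
print(np.mean(aucs))
```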
2001.02728 | Learning Generative Models using Denoising Density Estimators | Learning probabilistic models that can estimate the density of a given set of samples, and generate samples from that density, is one of the fundamental challenges in unsupervised machine learning. We introduce a new generative model based on denoising density estimators (DDEs), which are scalar functions parameterized by neural networks, that are efficiently trained to represent kernel density estimators of the data. Leveraging DDEs, our main contribution is a novel technique to obtain generative models by minimizing the KL-divergence directly. We prove that our algorithm for obtaining generative models is guaranteed to converge to the correct solution. Our approach does not require specific network architecture as in normalizing flows, nor use ordinary differential equation solvers as in continuous normalizing flows. Experimental results demonstrate substantial improvement in density estimation and competitive performance in generative model training. | http://arxiv.org/pdf/2001.02728v2 | [
"Siavash A. Bigdeli",
"Geng Lin",
"Tiziano Portenier",
"L. Andrea Dunbar",
"Matthias Zwicker"
] | 2020-06-09T21:26:44Z | 2020-01-08T20:30:40Z |
2006.11374 | Bombus Species Image Classification | Entomologists, ecologists and others struggle to rapidly and accurately identify the species of bumble bees they encounter in their field work and research. The current process requires the bees to be mounted, then physically shipped to a taxonomic expert for proper categorization. We investigated whether an image classification system derived from transfer learning can do this task. We used Google Inception, Oxford VGG16 and VGG19 and Microsoft ResNet 50. We found the Inception and VGG classifiers were able to make some progress at identifying bumble bee species from the available data, whereas ResNet was not. Individual classifiers achieved accuracies of up to 23% for single-species identification and 44% for top-3 labels, whereas a composite model performed better, at 27% and 50%. We feel the performance was most hampered by our limited data set of 5,000-plus labeled images of 29 species, with individual species represented by 59-315 images. | http://arxiv.org/pdf/2006.11374v1 | [
"Venkat Margapuri",
"George Lavezzi",
"Robert Stewart",
"Dan Wagner"
] | 2020-06-09T21:28:32Z | 2020-06-09T21:28:32Z |
2005.10349 | Adversarial Canonical Correlation Analysis | Canonical Correlation Analysis (CCA) is a statistical technique used to extract common information from multiple data sources or views. It has been used in various representation learning problems, such as dimensionality reduction, word embedding, and clustering. Recent work has given CCA a probabilistic footing in a deep learning context and uses a variational lower bound for the data log likelihood to estimate model parameters. Alternatively, adversarial techniques have arisen in recent years as a powerful alternative to variational Bayesian methods in autoencoders. In this work, we explore straightforward adversarial alternatives to recent work in Deep Variational CCA (VCCA and VCCA-Private), which we call ACCA and ACCA-Private, and show how these approaches offer a stronger and more flexible way to match the approximate posteriors coming from encoders to much larger classes of priors than the VCCA and VCCA-Private models. This allows new priors for what constitutes a good representation, such as disentangling underlying factors of variation, to be more directly pursued. We offer further analysis on the multi-level disentangling properties of VCCA-Private and ACCA-Private through the use of a newly designed dataset we call Tangled MNIST. We also design a validation criterion for these models that is theoretically grounded, task-agnostic, and works well in practice. Lastly, we fill a minor research gap by deriving an additional variational lower bound for VCCA that allows the representation to use view-specific information from both input views. | http://arxiv.org/pdf/2005.10349v2 | [
"Benjamin Dutton"
] | 2020-06-09T21:31:21Z | 2020-05-20T20:46:35Z |
2006.01981 | Training End-to-End Analog Neural Networks with Equilibrium Propagation | We introduce a principled method to train end-to-end analog neural networks by stochastic gradient descent. In these analog neural networks, the weights to be adjusted are implemented by the conductances of programmable resistive devices such as memristors [Chua, 1971], and the nonlinear transfer functions (or `activation functions') are implemented by nonlinear components such as diodes. We show mathematically that a class of analog neural networks (called nonlinear resistive networks) are energy-based models: they possess an energy function as a consequence of Kirchhoff's laws governing electrical circuits. This property enables us to train them using the Equilibrium Propagation framework [Scellier and Bengio, 2017]. Our update rule for each conductance, which is local and relies solely on the voltage drop across the corresponding resistor, is shown to compute the gradient of the loss function. Our numerical simulations, which use the SPICE-based Spectre simulation framework to simulate the dynamics of electrical circuits, demonstrate training on the MNIST classification task, performing comparably or better than equivalent-size software-based neural networks. Our work can guide the development of a new generation of ultra-fast, compact and low-power neural networks supporting on-chip learning. | http://arxiv.org/pdf/2006.01981v2 | [
"Jack Kendall",
"Ross Pantone",
"Kalpana Manickavasagam",
"Yoshua Bengio",
"Benjamin Scellier"
] | 2020-06-09T22:26:05Z | 2020-06-02T23:38:35Z |
1905.04579 | Are Powerful Graph Neural Nets Necessary? A Dissection on Graph
Classification | Graph Neural Nets (GNNs) have received increasing attention, partially due to their superior performance in many node and graph classification tasks. However, there is a lack of understanding of what they are learning and how sophisticated the learned graph functions are. In this work, we propose a dissection of GNNs on graph classification into two parts: 1) the graph filtering, where graph-based neighbor aggregations are performed, and 2) the set function, where a set of hidden node features are composed for prediction. To study the importance of both parts, we propose to linearize them separately. We first linearize the graph filtering function, resulting in the Graph Feature Network (GFN), which is a simple lightweight neural net defined on a \textit{set} of graph augmented features. Further linearization of GFN's set function results in the Graph Linear Network (GLN), which is a linear function. Empirically we perform evaluations on common graph classification benchmarks. To our surprise, we find that, despite the simplification, GFN could match or exceed the best accuracies produced by recently proposed GNNs (with a fraction of computation cost), while GLN underperforms significantly. Our results demonstrate the importance of the non-linear set function, and suggest that linear graph filtering with a non-linear set function is an efficient and powerful scheme for modeling existing graph classification benchmarks. | http://arxiv.org/pdf/1905.04579v3 | [
"Ting Chen",
"Song Bian",
"Yizhou Sun"
] | 2020-06-09T22:32:19Z | 2019-05-11T19:47:19Z |
2004.00794 | Alleviating Semantic-level Shift: A Semi-supervised Domain Adaptation
Method for Semantic Segmentation | Learning segmentation from synthetic data and adapting to real data can significantly relieve human efforts in labelling pixel-level masks. A key challenge of this task is how to alleviate the data distribution discrepancy between the source and target domains, i.e. reducing domain shift. The common approach to this problem is to minimize the discrepancy between feature distributions from different domains through adversarial training. However, directly aligning the feature distribution globally cannot guarantee consistency from a local view (i.e. semantic-level), which prevents certain semantic knowledge learned on the source domain from being applied to the target domain. To tackle this issue, we propose a semi-supervised approach named Alleviating Semantic-level Shift (ASS), which can successfully promote the distribution consistency from both global and local views. Specifically, leveraging a small number of labeled data from the target domain, we directly extract semantic-level feature representations from both the source and the target domains by averaging the features corresponding to same categories advised by pixel-level masks. We then feed the produced features to the discriminator to conduct semantic-level adversarial learning, which collaborates with the adversarial learning from the global view to better alleviate the domain shift. We apply our ASS to two domain adaptation tasks, from GTA5 to Cityscapes and from Synthia to Cityscapes. Extensive experiments demonstrate that: (1) ASS can significantly outperform the current unsupervised state-of-the-arts by employing a small number of annotated samples from the target domain; (2) ASS can beat the oracle model trained on the whole target dataset by over 3 points by augmenting the synthetic source data with annotated samples from the target domain without suffering from the prevalent problem of overfitting to the source domain. | http://arxiv.org/pdf/2004.00794v2 | [
"Zhonghao Wang",
"Yunchao Wei",
"Rogerior Feris",
"Jinjun Xiong",
"Wen-Mei Hwu",
"Thomas S. Huang",
"Humphrey Shi"
] | 2020-06-09T22:38:27Z | 2020-04-02T03:25:05Z |
2006.09978 | Directional Multivariate Ranking | User-provided multi-aspect evaluations manifest users' detailed feedback on the recommended items and enable fine-grained understanding of their preferences. Extensive studies have shown that modeling such data greatly improves the effectiveness and explainability of the recommendations. However, as ranking is essential in recommendation, there is no principled solution yet for collectively generating multiple item rankings over different aspects. In this work, we propose a directional multi-aspect ranking criterion to enable a holistic ranking of items with respect to multiple aspects. Specifically, we view multi-aspect evaluation as an integral effort from a user that forms a vector of his/her preferences over aspects. Our key insight is that the direction of the difference vector between two multi-aspect preference vectors reveals the pairwise order of comparison. Hence, it is necessary for a multi-aspect ranking criterion to preserve the observed directions from such pairwise comparisons. We further derive a complete solution for the multi-aspect ranking problem based on a probabilistic multivariate tensor factorization model. Comprehensive experimental analysis on a large TripAdvisor multi-aspect rating dataset and a Yelp review text dataset confirms the effectiveness of our solution. | http://arxiv.org/abs/2006.09978v1 | [
"Nan Wang",
"Hongning Wang"
] | 2020-06-09T22:43:03Z | 2020-06-09T22:43:03Z |
2006.05538 | Dual-stream Maximum Self-attention Multi-instance Learning | Multi-instance learning (MIL) is a form of weakly supervised learning where a single class label is assigned to a bag of instances while the instance-level labels are not available. Training classifiers to accurately determine the bag label and instance labels is a challenging but critical task in many practical scenarios, such as computational histopathology. Recently, MIL models fully parameterized by neural networks have become popular due to their high flexibility and superior performance. Most of these models rely on attention mechanisms that assign attention scores across the instance embeddings in a bag and produce the bag embedding using an aggregation operator. In this paper, we propose a dual-stream maximum self-attention MIL model (DSMIL) parameterized by neural networks. The first stream deploys a simple MIL max-pooling while the top-activated instance embedding is determined and used to obtain self-attention scores across instance embeddings in the second stream. Different from most of the previous methods, the proposed model jointly learns an instance classifier and a bag classifier based on the same instance embeddings. The experimental results show that our method achieves superior performance compared to the best MIL methods and demonstrates state-of-the-art performance on benchmark MIL datasets. | http://arxiv.org/pdf/2006.05538v1 | [
"Bin Li",
"Kevin W. Eliceiri"
] | 2020-06-09T22:44:58Z | 2020-06-09T22:44:58Z |
2006.05543 | Machine Learning for Imaging Cherenkov Detectors | Imaging Cherenkov detectors are widely used in modern nuclear and particle physics experiments, where cutting-edge solutions are needed to meet ever-growing computing demands. This is fertile ground for AI-based approaches, and at present we are witnessing the onset of new, highly efficient and fast applications. This paper focuses on novel directions with applications to Cherenkov detectors. In particular, recent advances on detector design and calibration, as well as particle identification, are presented. | http://arxiv.org/abs/2006.05543v1 | [
"Cristiano Fanelli"
] | 2020-06-09T22:57:14Z | 2020-06-09T22:57:14Z |
2006.05547 | Deep Adversarial Koopman Model for Reaction-Diffusion systems | Reaction-diffusion systems are ubiquitous in nature and in engineering applications, and are often modeled using a non-linear system of governing equations. While robust numerical methods exist to solve them, deep learning-based reduced order models (ROMs) are gaining traction as they use linearized dynamical models to advance the solution in time. One such family of algorithms is based on Koopman theory, and this paper applies this numerical simulation strategy to reaction-diffusion systems. Adversarial and gradient losses are introduced, and are found to robustify the predictions. The proposed model is extended to handle missing training data as well as recasting the problem from a control perspective. The efficacy of these developments is demonstrated for two different reaction-diffusion problems: (1) the Kuramoto-Sivashinsky equation of chaos and (2) the Turing instability using the Gray-Scott model. | http://arxiv.org/pdf/2006.05547v1 | [
"Kaushik Balakrishnan",
"Devesh Upadhyay"
] | 2020-06-09T23:12:12Z | 2020-06-09T23:12:12Z |
2006.05554 | Causal Discovery from Incomplete Data using An Encoder and Reinforcement
Learning | Discovering causal structure among a set of variables is a fundamental problem in many domains. However, state-of-the-art methods seldom consider the possibility that the observational data has missing values (incomplete data), which is ubiquitous in many real-world situations. Missing values significantly impair performance and can even make causal discovery algorithms fail. In this paper, we propose an approach to discover causal structures from incomplete data by using a novel encoder and reinforcement learning (RL). The encoder is designed for missing data imputation as well as feature extraction. In particular, it learns to encode the currently available information (with missing values) into a robust feature representation which is then used to determine where to search for the best graph. The encoder is integrated into a RL framework that can be optimized using the actor-critic algorithm. Our method takes the incomplete observational data as input and generates a causal structure graph. Experimental results on synthetic and real data demonstrate that our method can robustly generate causal structures from incomplete data. Compared with the direct combination of data imputation and causal discovery methods, our method performs generally better and can even obtain a performance gain of as much as 43.2%. | http://arxiv.org/pdf/2006.05554v1 | [
"Xiaoshui Huang",
"Fujin Zhu",
"Lois Holloway",
"Ali Haidar"
] | 2020-06-09T23:33:47Z | 2020-06-09T23:33:47Z |
1912.05014 | Hybrid Style Siamese Network: Incorporating style loss in complementary
apparels retrieval | Image retrieval has grown to be an integral part of the fashion e-commerce ecosystem as the industry keeps expanding. Other than the retrieval of visually similar items, the retrieval of visually compatible or complementary items is also an important aspect of it. Standard Siamese networks tend to work well on complementary items retrieval, but they fail to identify the low-level style features which make items compatible to the human eye. These low-level style features are captured to a large extent in techniques used in neural style transfer. This paper proposes a mechanism for utilising those methods in this retrieval task and capturing the low-level style features through a hybrid Siamese network coupled with a hybrid loss. The experimental results indicate that the proposed method outperforms traditional Siamese networks in retrieval tasks for complementary items. | http://arxiv.org/pdf/1912.05014v2 | [
"Mayukh Bhattacharyya",
"Sayan Nag"
] | 2020-06-09T23:48:47Z | 2019-11-23T05:56:50Z |
1906.08044 | Robust End-to-End Speaker Verification Using EEG | In this paper we demonstrate that the performance of a speaker verification system can be improved by concatenating electroencephalography (EEG) signal features with speech signal features, or by using only EEG signal features. We use a state-of-the-art end-to-end deep learning model for performing speaker verification and we demonstrate our results for noisy speech. Our results indicate that EEG signals can improve the robustness of speaker verification systems, especially in noisier environments. | http://arxiv.org/pdf/1906.08044v5 | [
"Yan Han",
"Gautam Krishna",
"Co Tran",
"Mason Carnahan",
"Ahmed H Tewfik"
] | 2020-06-10T00:42:49Z | 2019-06-17T20:11:24Z |
1910.07099 | Entire Space Multi-Task Modeling via Post-Click Behavior Decomposition
for Conversion Rate Prediction | Recommender system, as an essential part of modern e-commerce, consists of two fundamental modules, namely Click-Through Rate (CTR) and Conversion Rate (CVR) prediction. While CVR has a direct impact on the purchasing volume, its prediction is well-known challenging due to the Sample Selection Bias (SSB) and Data Sparsity (DS) issues. Although existing methods, typically built on the user sequential behavior path ``impression$\to$click$\to$purchase'', are effective for dealing with the SSB issue, they still struggle to address the DS issue due to rare purchase training samples. Observing that users always take several purchase-related actions after clicking, we propose a novel idea of post-click behavior decomposition. Specifically, disjoint purchase-related Deterministic Action (DAction) and Other Action (OAction) are inserted between click and purchase in parallel, forming a novel user sequential behavior graph ``impression$\to$click$\to$D(O)Action$\to$purchase''. Defining the model on this graph enables leveraging all the impression samples over the entire space and extra abundant supervised signals from D(O)Action, which will effectively address the SSB and DS issues together. To this end, we devise a novel deep recommendation model named Elaborated Entire Space Supervised Multi-task Model ($ESM^{2}$). According to the conditional probability rule defined on the graph, it employs multi-task learning to predict some decomposed sub-targets in parallel and compose them sequentially to formulate the final CVR. Extensive experiments on both offline and online environments demonstrate the superiority of $ESM^{2}$ over state-of-the-art models. The source code and dataset will be released. | http://arxiv.org/pdf/1910.07099v2 | [
"Hong Wen",
"Jing Zhang",
"Yuan Wang",
"Fuyu Lv",
"Wentian Bao",
"Quan Lin",
"Keping Yang"
] | 2020-06-10T00:44:54Z | 2019-10-15T23:15:42Z |
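A stylized PyTorch sketch of the probability composition along the behavior graph described in the $ESM^{2}$ abstract above; the tower sizes are arbitrary and the OAction branch and multi-task losses are omitted, so this is a schematic, not the paper's model.

```python
import torch
import torch.nn as nn

class ESM2Sketch(nn.Module):
    """Predict pCTR, p(DAction|click) and p(purchase|DAction) with towers
    over a shared embedding, then compose them along
    impression -> click -> DAction -> purchase."""
    def __init__(self, dim=16):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(32, dim), nn.ReLU())
        self.ctr = nn.Sequential(nn.Linear(dim, 1), nn.Sigmoid())
        self.daction = nn.Sequential(nn.Linear(dim, 1), nn.Sigmoid())
        self.cvr = nn.Sequential(nn.Linear(dim, 1), nn.Sigmoid())

    def forward(self, x):
        h = self.shared(x)
        p_click = self.ctr(h)
        p_da = p_click * self.daction(h)       # click AND DAction
        p_buy = p_da * self.cvr(h)             # ... AND purchase
        # All three factors can be supervised over the whole impression
        # space, which is how the SSB and DS issues are addressed.
        return p_click, p_da, p_buy

model = ESM2Sketch()
print([t.shape for t in model(torch.randn(4, 32))])
```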
2006.05582 | Contrastive Multi-View Representation Learning on Graphs | We introduce a self-supervised approach for learning node and graph level representations by contrasting structural views of graphs. We show that unlike visual representation learning, increasing the number of views to more than two or contrasting multi-scale encodings do not improve performance, and the best performance is achieved by contrasting encodings from first-order neighbors and a graph diffusion. We achieve new state-of-the-art results in self-supervised learning on 8 out of 8 node and graph classification benchmarks under the linear evaluation protocol. For example, on Cora (node) and Reddit-Binary (graph) classification benchmarks, we achieve 86.8% and 84.5% accuracy, which are 5.5% and 2.4% relative improvements over previous state-of-the-art. When compared to supervised baselines, our approach outperforms them in 4 out of 8 benchmarks. Source code is released at: https://github.com/kavehhassani/mvgrl | http://arxiv.org/pdf/2006.05582v1 | [
"Kaveh Hassani",
"Amir Hosein Khasahmadi"
] | 2020-06-10T00:49:15Z | 2020-06-10T00:49:15Z |
2006.05583 | Variational Optimization for the Submodular Maximum Coverage Problem | We examine the \emph{submodular maximum coverage problem} (SMCP), which is related to a wide range of applications. We provide the first variational approximation for this problem based on the Nemhauser divergence, and show that it can be solved efficiently using variational optimization. The algorithm alternates between two steps: (1) an E step that estimates a variational parameter to maximize a parameterized \emph{modular} lower bound; and (2) an M step that updates the solution by solving the local approximate problem. We provide theoretical analysis on the performance of the proposed approach and its curvature-dependent approximate factor, and empirically evaluate it on a number of public data sets and several application tasks. | http://arxiv.org/pdf/2006.05583v1 | [
"Jian Du",
"Zhigang Hua",
"Shuang Yang"
] | 2020-06-10T00:50:25Z | 2020-06-10T00:50:25Z |
2006.05584 | Exploring Quality and Generalizability in Parameterized Neural Audio
Effects | Deep neural networks have shown promise for music audio signal processing applications, often surpassing prior approaches, particularly as end-to-end models in the waveform domain. Yet results to date have tended to be constrained by low sample rates, noise, narrow domains of signal types, and/or lack of parameterized controls (i.e. "knobs"), leaving them not yet suitable for professional audio engineering workflows. This work expands on prior research published on modeling nonlinear time-dependent signal processing effects associated with music production by means of a deep neural network, one which includes the ability to emulate the parameterized settings you would see on an analog piece of equipment, with the goal of eventually producing commercially viable, high quality audio, i.e. 44.1 kHz sampling rate at 16-bit resolution. The results in this paper highlight progress in modeling these effects through architecture and optimization changes, towards increasing computational efficiency, lowering signal-to-noise ratio, and extending to a larger variety of nonlinear audio effects. Toward these ends, the strategies employed involved a three-pronged approach: model speed, model accuracy, and model generalizability. Most of the presented methods provide marginal or no increase in output accuracy over the original model, with the exception of dataset manipulation. We found that limiting the audio content of the dataset, for example using datasets of just a single instrument, provided a significant improvement in model accuracy over models trained on more general datasets. | http://arxiv.org/pdf/2006.05584v1 | [
"William Mitchell",
"Scott H. Hawley"
] | 2020-06-10T00:52:08Z | 2020-06-10T00:52:08Z |
2006.05594 | Adversarial Attacks on Brain-Inspired Hyperdimensional Computing-Based
Classifiers | Being an emerging class of in-memory computing architecture, brain-inspired hyperdimensional computing (HDC) mimics brain cognition and leverages random hypervectors (i.e., vectors with a dimensionality of thousands or even more) to represent features and to perform classification tasks. The unique hypervector representation enables HDC classifiers to exhibit high energy efficiency, low inference latency and strong robustness against hardware-induced bit errors. Consequently, they have been increasingly recognized as an appealing alternative to or even replacement of traditional deep neural networks (DNNs) for local on-device classification, especially on low-power Internet of Things devices. Nonetheless, unlike their DNN counterparts, state-of-the-art designs for HDC classifiers are mostly security-oblivious, casting doubt on their safety and immunity to adversarial inputs. In this paper, we study for the first time adversarial attacks on HDC classifiers and highlight that HDC classifiers can be vulnerable to even minimally-perturbed adversarial samples. Concretely, using handwritten digit classification as an example, we construct a HDC classifier and formulate a grey-box attack problem, where an attacker's goal is to mislead the target HDC classifier to produce erroneous prediction labels while keeping the amount of added perturbation noise as little as possible. Then, we propose a modified genetic algorithm to generate adversarial samples within a reasonably small number of queries. Our results show that adversarial images generated by our algorithm can successfully mislead the HDC classifier to produce wrong prediction labels with a high probability (i.e., 78% when the HDC classifier uses a fixed majority rule for decision). Finally, we also present two defense strategies -- adversarial training and retraining -- to strengthen the security of HDC classifiers. | http://arxiv.org/pdf/2006.05594v1 | [
"Fangfang Yang",
"Shaolei Ren"
] | 2020-06-10T01:09:30Z | 2020-06-10T01:09:30Z |
2006.05595 | Fitted Q-Learning for Relational Domains | We consider the problem of Approximate Dynamic Programming in relational domains. Inspired by the success of fitted Q-learning methods in propositional settings, we develop the first relational fitted Q-learning algorithms by representing the value function and Bellman residuals. When we fit the Q-functions, we show how the two steps of the Bellman operator, application and projection, can be performed using a gradient-boosting technique. Our proposed framework performs reasonably well on standard domains without using domain models and using fewer training trajectories. | http://arxiv.org/pdf/2006.05595v1 | [
"Srijita Das",
"Sriraam Natarajan",
"Kaushik Roy",
"Ronald Parr",
"Kristian Kersting"
] | 2020-06-10T01:18:47Z | 2020-06-10T01:18:47Z |
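A propositional stand-in for the fitted Q-learning loop above, with a gradient-boosted regressor performing the projection of each Bellman backup; the relational representation that is the paper's actual contribution is not reproduced here.

```python
# Requires: pip install scikit-learn
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def fitted_q_iteration(s, a, r, s2, n_actions, gamma=0.9, iters=15):
    """Alternate the Bellman backup (application) with a gradient-boosted
    regression fit (projection)."""
    X = np.column_stack([s, a])
    q = None
    for _ in range(iters):
        if q is None:
            target = r                                    # Q_0 = reward
        else:
            nxt = np.stack([q.predict(np.column_stack(
                [s2, np.full(len(s2), b)])) for b in range(n_actions)])
            target = r + gamma * nxt.max(axis=0)          # Bellman backup
        q = GradientBoostingRegressor(n_estimators=50).fit(X, target)
    return q

rng = np.random.default_rng(0)
s = rng.uniform(0, 1, size=(500, 1))
a = rng.integers(0, 2, size=500)
r = (a == (s[:, 0] > 0.5)).astype(float)       # reward for matching action
s2 = rng.uniform(0, 1, size=(500, 1))
q = fitted_q_iteration(s, a, r, s2, n_actions=2)
print(q.predict([[0.9, 1], [0.9, 0]]))         # action 1 valued higher
```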
2006.05601 | Robust Estimation of Tree Structured Ising Models | We consider the task of learning Ising models when the signs of different random variables are flipped independently with possibly unequal, unknown probabilities. In this paper, we focus on the problem of robust estimation of tree-structured Ising models. Without any additional assumption of side information, this is an open problem. We first prove that this problem is unidentifiable, however, this unidentifiability is limited to a small equivalence class of trees formed by leaf nodes exchanging positions with their neighbors. Next, we propose an algorithm to solve the above problem with logarithmic sample complexity in the number of nodes and polynomial run-time complexity. Lastly, we empirically demonstrate that, as expected, existing algorithms are not inherently robust in the proposed setting whereas our algorithm correctly recovers the underlying equivalence class. | http://arxiv.org/pdf/2006.05601v1 | [
"Ashish Katiyar",
"Vatsal Shah",
"Constantine Caramanis"
] | 2020-06-10T01:32:45Z | 2020-06-10T01:32:45Z |
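For orientation, the classical non-robust baseline that the abstract reports failing under sign flips is a Chow-Liu-style maximum-weight spanning tree over pairwise statistics. The sketch below uses |correlation| as the edge weight, a standard simplification for zero-field Ising models and an assumption here, not the paper's robust algorithm.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def chow_liu_tree(X):
    # X: (n_samples, n_nodes) array of +/-1 spins. For zero-field Ising models,
    # |correlation| is a monotone proxy for mutual information, so the
    # maximum-weight spanning tree on |corr| recovers the tree structure.
    C = np.abs(np.corrcoef(X, rowvar=False))
    np.fill_diagonal(C, 0.0)
    mst = minimum_spanning_tree(-C)   # negate: SciPy computes a *minimum* tree
    return [(int(i), int(j)) for i, j in zip(*mst.nonzero())]
```

Independent sign flips attenuate the empirical correlations unevenly across nodes, which is precisely what makes leaf/neighbor configurations confusable for this baseline.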
2006.05603 | Using an expert deviation carrying the knowledge of climate data in
usual clustering algorithms | In order to help physicists expand their knowledge of the climate in the Lesser Antilles, we aim to identify spatio-temporal configurations by applying clustering analysis to wind speed and cumulative rainfall datasets. We show that using the L2 norm in conventional clustering methods such as K-Means (KMS) and Hierarchical Agglomerative Clustering (HAC) can induce undesirable effects, so we propose to replace the Euclidean distance (L2) with a dissimilarity measure named Expert Deviation (ED). Based on the symmetrized Kullback-Leibler divergence, the ED integrates the properties of the observed physical parameters and climate knowledge. This measure helps compare histograms of four patches, corresponding to geographical zones, that are influenced by atmospheric structures. We jointly evaluated the internal homogeneity and the separation of the clusters obtained using ED and L2. The results, compared using the silhouette index, show five clusters with high indexes. For the two available datasets one can see that, unlike KMS-L2, KMS-ED discriminates the daily situations well, giving more physical meaning to the clusters discovered by the algorithm. The effect of patches is observed in the spatial analysis of representative elements for KMS-ED. The ED is able to produce different configurations, which makes the usual atmospheric structures clearly identifiable. Atmospheric physicists can interpret the locations of the impact of each cluster on a specific zone according to atmospheric structures. KMS-L2 does not offer such interpretability, because the situations it represents are spatially quite smooth. This climatological study illustrates the advantage of using ED as a new approach. (A code sketch follows this entry.) | http://arxiv.org/pdf/2006.05603v1 | [
"Emmanuel Biabiany",
"Vincent Page",
"Didier Bernard",
"Hélène Paugam-Moisy"
] | 2020-06-10T01:42:40Z | 2020-06-10T01:42:40Z |
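The core of the proposed dissimilarity is the symmetrized Kullback-Leibler divergence between histograms, aggregated over geographical patches. A toy stand-in is sketched below; the real Expert Deviation additionally folds in the observed physical parameters and climate knowledge, which this sketch omits.

```python
import numpy as np

def sym_kl(p, q, eps=1e-12):
    # Symmetrized Kullback-Leibler divergence between two histograms.
    p, q = p / p.sum(), q / q.sum()
    p, q = np.clip(p, eps, None), np.clip(q, eps, None)
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

def expert_deviation(day_a, day_b):
    # day_a / day_b: lists of histograms, one per geographical patch.
    # Averaging the per-patch divergences is an illustrative aggregation choice.
    return float(np.mean([sym_kl(p, q) for p, q in zip(day_a, day_b)]))
```

Plugged into K-Means in place of the L2 norm, such a divergence changes the center-update step as well (one simple option is medoid-style centers, since the arithmetic mean no longer minimizes the divergence), yielding a KMS-ED-style variant.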
2006.05604 | Machine Learning and Control Theory | In this article we survey the connections between Machine Learning and Control Theory. Control Theory provides useful concepts and tools for Machine Learning; conversely, Machine Learning can be used to solve large control problems. In the first part of the paper, we develop the connections between reinforcement learning and Markov Decision Processes, which are discrete-time control problems. In the second part, we review the concept of supervised learning and its relation to static optimization. Deep learning, which extends supervised learning, can be viewed as a control problem. In the third part, we present the links between stochastic gradient descent and mean-field theory. Conversely, in the fourth and fifth parts, we review machine learning approaches to stochastic control problems, focusing on the deterministic case to explain the numerical algorithms more easily. | http://arxiv.org/pdf/2006.05604v1 | [
"Alain Bensoussan",
"Yiqun Li",
"Dinh Phan Cao Nguyen",
"Minh-Binh Tran",
"Sheung Chi Phillip Yam",
"Xiang Zhou"
] | 2020-06-10T01:47:34Z | 2020-06-10T01:47:34Z |
2006.05616 | Regret Minimization for Causal Inference on Large Treatment Space | Predicting which action (treatment) will lead to a better outcome is a central task in decision support systems. To build a prediction model in real situations, learning from biased observational data is a critical issue due to the lack of randomized controlled trial (RCT) data. To handle such biased observational data, recent efforts in causal inference and counterfactual machine learning have focused on debiased estimation of the potential outcomes on a binary action space and of the difference between them, namely, the individual treatment effect. When it comes to a large action space (e.g., selecting an appropriate combination of medicines for a patient), however, the regression accuracy of the potential outcomes is no longer sufficient in practical terms to achieve good decision-making performance. This is because mean accuracy on the large action space does not guarantee the nonexistence of a single potential-outcome misestimation that might mislead the whole decision. Our proposed loss minimizes the classification error of whether or not an action is relatively good for the individual target among all feasible actions, which, as we prove, further improves decision-making performance. We also propose a network architecture and a regularizer that extract a debiased representation not only from the individual features but also from the biased actions, for better generalization in large action spaces. Extensive experiments on synthetic and semi-synthetic datasets demonstrate the superiority of our method for large combinatorial action spaces. | http://arxiv.org/pdf/2006.05616v1 | [
"Akira Tanimoto",
"Tomoya Sakai",
"Takashi Takenouchi",
"Hisashi Kashima"
] | 2020-06-10T02:19:48Z | 2020-06-10T02:19:48Z |
2006.05623 | Training with Multi-Layer Embeddings for Model Reduction | Modern recommendation systems rely on real-valued embeddings of categorical features. Increasing the dimension of embedding vectors improves model accuracy but comes at a high cost in model size. We introduce a multi-layer embedding training (MLET) architecture that trains embeddings via a sequence of linear layers to derive a superior trade-off between embedding accuracy and model size. Our approach is fundamentally based on the ability of factorized linear layers to produce superior embeddings to those of a single linear layer. We focus on the analysis and implementation of a two-layer scheme. Harnessing recent results on the dynamics of backpropagation in linear neural networks, we explain the ability to get superior multi-layer embeddings via their tendency to have lower effective rank. We show that substantial advantages are obtained in the regime where the width of the hidden layer is much larger than that of the final embedding (d). Crucially, at the conclusion of training, we convert the two-layer solution into a single-layer one: as a result, the inference-time model size scales as d. We prototype the MLET scheme within Facebook's PyTorch-based open-source Deep Learning Recommendation Model. We show that it allows reducing d by 4-8X, with a corresponding improvement in memory footprint, at a given model accuracy. The average training-time overhead of MLET is 25%. (A code sketch follows this entry.) | http://arxiv.org/pdf/2006.05623v1 | [
"Benjamin Ghaemmaghami",
"Zihao Deng",
"Benjamin Cho",
"Leo Orshansky",
"Ashish Kumar Singh",
"Mattan Erez",
"Michael Orshansky"
] | 2020-06-10T02:47:40Z | 2020-06-10T02:47:40Z |
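The trick that keeps MLET free at inference is that two stacked linear maps collapse into one matrix product after training. A minimal PyTorch sketch, with illustrative sizes (k = 256 hidden width, d = 16 final dimension):

```python
import torch
import torch.nn as nn

class MLETEmbedding(nn.Module):
    """Two-layer embedding: a wide inner table followed by a linear projection."""
    def __init__(self, n_rows, d, k):
        super().__init__()
        self.inner = nn.Embedding(n_rows, k)      # wide hidden layer, k >> d
        self.proj = nn.Linear(k, d, bias=False)

    def forward(self, idx):
        return self.proj(self.inner(idx))

    def collapse(self):
        # Fold the projection into the table, so inference-time size scales as d.
        W = (self.inner.weight @ self.proj.weight.T).detach()   # (n_rows, d)
        return nn.Embedding.from_pretrained(W, freeze=False)

emb = MLETEmbedding(n_rows=1000, d=16, k=256)
idx = torch.tensor([3, 7])
# The collapsed single-layer table reproduces the two-layer outputs.
assert torch.allclose(emb(idx), emb.collapse()(idx), atol=1e-5)
```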
2006.05627 | A survey on deep hashing for image retrieval | Hashing has been widely used in approximate nearest neighbor search for large-scale database retrieval because of its computation and storage efficiency. Deep hashing, which devises convolutional neural network architectures to exploit and extract the semantic information or features of images, has received increasing attention recently. In this survey, several deep supervised hashing methods for image retrieval are evaluated, and I identify three main directions for deep supervised hashing methods, with several comments made at the end. Moreover, to break through the bottleneck of existing hashing methods, I propose a Shadow Recurrent Hashing (SRH) method as a first attempt. Specifically, I devise a CNN architecture to extract the semantic features of images and design a loss function to encourage similar images to be projected close to each other. To this end, I propose a concept: the shadow of the CNN output. During the optimization process, the CNN output and its shadow guide each other so as to approach the optimal solution as closely as possible. Several experiments on the CIFAR-10 dataset show the satisfying performance of SRH. (A code sketch follows this entry.) | http://arxiv.org/pdf/2006.05627v1 | [
"Xiaopeng Zhang"
] | 2020-06-10T03:01:59Z | 2020-06-10T03:01:59Z |
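The "similar images projected close" objective underlying most deep supervised hashing can be written as a contrastive pairwise loss over the real-valued codes before binarization. The sketch below is that generic objective, not SRH itself; the shadow mechanism is omitted, and the margin value is an assumption.

```python
import torch
import torch.nn.functional as F

def pairwise_hash_loss(codes, labels, margin=2.0):
    # codes: (batch, n_bits) real-valued CNN outputs before sign-binarization.
    # Pull same-label codes together; push different-label codes at least
    # `margin` apart.
    d = torch.cdist(codes, codes)
    same = (labels[:, None] == labels[None, :]).float()
    pull = same * d.pow(2)
    push = (1.0 - same) * F.relu(margin - d).pow(2)
    return (pull + push).mean()
```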
2006.05635 | Data Augmentation for Training Dialog Models Robust to Speech
Recognition Errors | Speech-based virtual assistants, such as Amazon Alexa, Google Assistant, and Apple Siri, typically convert users' audio signals to text data through automatic speech recognition (ASR) and feed the text to downstream dialog models for natural language understanding and response generation. The ASR output is error-prone; the downstream dialog models, however, are often trained on error-free text data, making them sensitive to ASR errors during inference. To bridge the gap and make dialog models more robust to ASR errors, we leverage an ASR error simulator to inject noise into the error-free text data, and subsequently train the dialog models with the augmented data. Compared to other approaches for handling ASR errors, such as using ASR lattices or end-to-end methods, our data augmentation approach does not require any modification to the ASR or the downstream dialog models; our approach also does not introduce any additional latency during inference. We perform extensive experiments on benchmark data and show that our approach improves the performance of downstream dialog models in the presence of ASR errors, and that it is particularly effective in low-resource situations where there are constraints on model size or the training data is scarce. (A code sketch follows this entry.) | http://arxiv.org/pdf/2006.05635v1 | [
"Longshaokan Wang",
"Maryam Fazel-Zarandi",
"Aditya Tiwari",
"Spyros Matsoukas",
"Lazaros Polymenakos"
] | 2020-06-10T03:18:15Z | 2020-06-10T03:18:15Z |
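The augmentation itself reduces to sampling ASR-like substitutions and deletions into clean training text. A toy version follows, with a hand-made confusion table standing in for the paper's learned error simulator; real systems derive substitution candidates and probabilities from ASR decoding statistics.

```python
import random

# Illustrative confusion table; an ASR error simulator would learn these pairs.
CONFUSIONS = {"weather": ["whether"], "four": ["for", "fore"], "to": ["two", "too"]}

def inject_asr_noise(sentence, sub_prob=0.15, del_prob=0.05, seed=0):
    rng = random.Random(seed)
    out = []
    for word in sentence.split():
        if rng.random() < del_prob:
            continue                                   # simulated deletion
        if word in CONFUSIONS and rng.random() < sub_prob:
            word = rng.choice(CONFUSIONS[word])        # simulated substitution
        out.append(word)
    return " ".join(out)

print(inject_asr_noise("what is the weather like today"))
```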
1904.06194 | Compressing deep neural networks by matrix product operators | A deep neural network is a parametrization of a multilayer mapping of signals in terms of many alternately arranged linear and nonlinear transformations. The linear transformations, which are generally used in the fully connected as well as convolutional layers, contain most of the variational parameters that are trained and stored. Compressing a deep neural network to reduce its number of variational parameters, but not its prediction power, is an important but challenging problem toward establishing an optimized scheme for efficiently training these parameters and for lowering the risk of overfitting. Here we show that this problem can be effectively solved by representing linear transformations with matrix product operators (MPOs), a tensor network originally proposed in physics to characterize the short-range entanglement in one-dimensional quantum states. We have tested this approach on five typical neural networks, including FC2, LeNet-5, VGG, ResNet, and DenseNet, on two widely used datasets, namely MNIST and CIFAR-10, and found that this MPO representation indeed sets up a faithful and efficient mapping between input and output signals, which can keep or even improve the prediction accuracy with a dramatically reduced number of parameters. Our method greatly simplifies the representations used in deep learning and opens a possible route toward establishing a framework of modern neural networks that might be simpler and cheaper, but more efficient. (A code sketch follows this entry.) | http://arxiv.org/abs/1904.06194v2 | [
"Ze-Feng Gao",
"Song Cheng",
"Rong-Qiang He",
"Z. Y. Xie",
"Hui-Hai Zhao",
"Zhong-Yi Lu",
"Tao Xiang"
] | 2020-06-10T03:26:01Z | 2019-04-11T17:59:00Z |
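At its core, the MPO representation factorizes a dense weight matrix into a chain of small tensor cores; the two-core case is one index reshuffle plus one truncated SVD, sketched below. The factor sizes and rank are illustrative assumptions, and a full MPO repeats the same split along a longer chain.

```python
import numpy as np

def mpo_two_cores(W, m=(16, 16), n=(16, 16), rank=8):
    # Split a (m1*m2) x (n1*n2) matrix into two cores. Storage drops from
    # m1*m2*n1*n2 entries to m1*n1*rank + rank*m2*n2.
    (m1, m2), (n1, n2) = m, n
    # Pair input/output indices per core, then matricize and truncate.
    T = W.reshape(m1, m2, n1, n2).transpose(0, 2, 1, 3).reshape(m1 * n1, m2 * n2)
    U, S, Vt = np.linalg.svd(T, full_matrices=False)
    A = (U[:, :rank] * S[:rank]).reshape(m1, n1, rank)   # first core
    B = Vt[:rank].reshape(rank, m2, n2)                  # second core
    return A, B

def mpo_apply(A, B, x, n=(16, 16)):
    # Contract the cores with the input without rebuilding the full matrix.
    X = x.reshape(*n)
    tmp = np.einsum('ijr,jk->irk', A, X)                 # (m1, rank, n2)
    return np.einsum('irk,rlk->il', tmp, B).reshape(-1)

W, x = np.random.randn(256, 256), np.random.randn(256)
A, B = mpo_two_cores(W)
approx = mpo_apply(A, B, x)      # approximates W @ x; exact when rank = 256
```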
2006.06434 | TableQA: a Large-Scale Chinese Text-to-SQL Dataset for Table-Aware SQL
Generation | Parsing natural language to corresponding SQL (NL2SQL) with data-driven approaches like deep neural networks has attracted much attention in recent years. Existing NL2SQL datasets assume that condition values appear exactly in natural language questions and that the queries are answerable given the table. However, these assumptions may fail in practical scenarios, because users may use different expressions for the same content in the table, and may query information outside the table without the full picture of its contents. Therefore we present TableQA, a large-scale cross-domain natural language to SQL dataset in Chinese consisting of 64,891 questions and 20,311 unique SQL queries on over 6,000 tables. Different from existing NL2SQL datasets, TableQA requires models to generalize well not only to SQL skeletons of different questions and table schemas, but also to various expressions for condition values. Experiment results show that the state-of-the-art model with 95.1% condition value accuracy on WikiSQL achieves only 46.8% condition value accuracy and 43.0% logic form accuracy on TableQA, indicating that the proposed dataset is challenging and necessary to handle. Two table-aware approaches are proposed to alleviate the problem; the end-to-end approaches obtain 51.3% and 47.4% accuracy on the condition value and logic form tasks, improvements of 4.7% and 3.4%, respectively. | http://arxiv.org/pdf/2006.06434v1 | [
"Ningyuan Sun",
"Xuefeng Yang",
"Yunfeng Liu"
] | 2020-06-10T03:49:08Z | 2020-06-10T03:49:08Z |
2002.09518 | Memory-Based Graph Networks | Graph neural networks (GNNs) are a class of deep models that operate on data with arbitrary topology represented as graphs. We introduce an efficient memory layer for GNNs that can jointly learn node representations and coarsen the graph. We also introduce two new networks based on this layer: memory-based GNN (MemGNN) and graph memory network (GMN), which can learn hierarchical graph representations. The experimental results show that the proposed models achieve state-of-the-art results in eight out of nine graph classification and regression benchmarks. We also show that the learned representations can correspond to chemical features in the molecule data. Code and reference implementations are released at: https://github.com/amirkhas/GraphMemoryNet | http://arxiv.org/pdf/2002.09518v2 | [
"Amir Hosein Khasahmadi",
"Kaveh Hassani",
"Parsa Moradi",
"Leo Lee",
"Quaid Morris"
] | 2020-06-10T04:50:41Z | 2020-02-21T19:26:31Z |
1911.06479 | On Model Robustness Against Adversarial Examples | We study model robustness against adversarial examples, i.e., slightly perturbed inputs that may nevertheless fool many state-of-the-art deep learning models. Unlike previous research, we establish a novel theory addressing the robustness issue from the perspective of the stability of the loss function in a small neighborhood of natural examples. We propose to exploit an energy function to describe this stability and prove that reducing such energy guarantees robustness against adversarial examples. We also show that traditional training methods, including adversarial training with the $l_2$ norm constraint (AT) and Virtual Adversarial Training (VAT), tend to minimize a lower bound of our proposed energy function. Our analysis shows that minimizing this lower bound can, however, lead to insufficient robustness within the neighborhood of an input sample. Furthermore, we design a more principled method with energy regularization, which proves to achieve better robustness than previous methods. Through a series of experiments, we demonstrate the superiority of our model on both supervised and semi-supervised tasks. In particular, our proposed adversarial framework achieves the best performance compared with previous adversarial training methods on the benchmark datasets MNIST, CIFAR-10, and SVHN, and demonstrates much better robustness against adversarial examples than all the other comparison methods. (A code sketch follows this entry.) | http://arxiv.org/pdf/1911.06479v2 | [
"Shufei Zhang",
"Kaizhu Huang",
"Zenglin Xu"
] | 2020-06-10T05:26:51Z | 2019-11-15T05:02:25Z |
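One hedged way to make the energy idea concrete: measure how much the loss moves under small random perturbations of a natural example and penalize that movement during training. This is a generic stand-in for the paper's energy function, not the authors' exact formulation.

```python
import torch

def energy_regularizer(model, loss_fn, x, y, eps=0.1, n_samples=4):
    # Expected squared change of the loss in a small neighborhood of x.
    # Adding this term to the training objective flattens the loss locally,
    # which is the intuition behind the robustness guarantee.
    base = loss_fn(model(x), y)
    energy = 0.0
    for _ in range(n_samples):
        delta = eps * torch.randn_like(x)
        energy = energy + (loss_fn(model(x + delta), y) - base).pow(2)
    return energy / n_samples
```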
2006.05669 | Interpretable Multimodal Learning for Intelligent Regulation in Online
Payment Systems | With the explosive growth of transaction activity in online payment systems, effective and real-time regulation becomes a critical problem for payment service providers. Thanks to the rapid development of artificial intelligence (AI), AI-enabled regulation emerges as a promising solution. One main challenge of AI-enabled regulation is how to utilize multimedia information, i.e., multimodal signals, in Financial Technology (FinTech). Inspired by the attention mechanism in natural language processing, we propose a novel cross-modal and intra-modal attention network (CIAN) to investigate the relation between text and transactions. More specifically, we integrate the text and transaction information to enhance text-trade joint-embedding learning, which clusters positive pairs and pushes negative pairs away from each other. Another challenge of intelligent regulation is the interpretability of complicated machine learning models. To meet the requirements of financial regulation, we design a CIAN-Explainer to interpret how the attention mechanism interacts with the original features, which is formulated as a low-rank matrix approximation problem. With real datasets from the largest online payment system, WeChat Pay of Tencent, we conduct experiments to validate the practical application value of CIAN, where our method outperforms the state-of-the-art methods. (A code sketch follows this entry.) | http://arxiv.org/pdf/2006.05669v1 | [
"Shuoyao Wang",
"Diwei Zhu"
] | 2020-06-10T06:08:20Z | 2020-06-10T06:08:20Z |
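The cross-modal half of CIAN is, at heart, attention in which text tokens query transaction features (and symmetrically in the other direction). A generic single-head sketch, with dimensions and naming as assumptions rather than the paper's architecture:

```python
import torch
import torch.nn as nn

class CrossModalAttention(nn.Module):
    # Text attends to transaction fields; a generic block, not the exact CIAN.
    def __init__(self, d_text, d_txn, d_model=64):
        super().__init__()
        self.q = nn.Linear(d_text, d_model)
        self.k = nn.Linear(d_txn, d_model)
        self.v = nn.Linear(d_txn, d_model)

    def forward(self, text, txn):
        # text: (batch, n_tokens, d_text), txn: (batch, n_fields, d_txn)
        scores = self.q(text) @ self.k(txn).transpose(1, 2)
        attn = torch.softmax(scores / self.q.out_features ** 0.5, dim=-1)
        return attn @ self.v(txn)   # text tokens enriched with transaction info

block = CrossModalAttention(d_text=32, d_txn=12)
out = block(torch.randn(4, 20, 32), torch.randn(4, 6, 12))   # -> (4, 20, 64)
```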
1902.06562 | Intra- and Inter-epoch Temporal Context Network (IITNet) Using Sub-epoch
Features for Automatic Sleep Scoring on Raw Single-channel EEG | A deep learning model, named IITNet, is proposed to learn intra- and inter-epoch temporal contexts from raw single-channel EEG for automatic sleep scoring. To classify the sleep stage from a half-minute of EEG, called an epoch, sleep experts investigate sleep-related events and consider the transition rules between the found events. Similarly, IITNet extracts representative features at a sub-epoch level with a residual neural network and captures intra- and inter-epoch temporal contexts from the sequence of features via a bidirectional LSTM. The performance was investigated for three datasets as the sequence length (L) increased from one to ten. IITNet achieved performance comparable to other state-of-the-art results. The best accuracy, MF1, and Cohen's kappa ($\kappa$) were 83.9%, 77.6%, and 0.78 for SleepEDF (L=10); 86.5%, 80.7%, and 0.80 for MASS (L=9); and 86.7%, 79.8%, and 0.81 for SHHS (L=10), respectively. Even when using only four epochs, the performance was still comparable. Compared to using a single epoch, on average, accuracy and MF1 increased by 2.48%p and 4.90%p, and the F1 of N1, N2, and REM increased by 16.1%p, 1.50%p, and 6.42%p, respectively. Above four epochs, the performance improvement was not significant. The results support that considering the latest two minutes of raw single-channel EEG can be a reasonable choice for sleep scoring via deep neural networks, with efficiency and reliability. Furthermore, the experiments with the baselines showed that introducing intra-epoch temporal context learning with a deep residual network contributes to the improvement in overall performance and has a positive synergy effect with inter-epoch temporal context learning. | http://arxiv.org/abs/1902.06562v2 | [
"Hogeon Seo",
"Seunghyeok Back",
"Seongju Lee",
"Deokhwan Park",
"Tae Kim",
"Kyoobin Lee"
] | 2020-06-10T06:38:33Z | 2019-02-18T13:32:21Z |
1907.07035 | Structured Variational Inference in Unstable Gaussian Process State
Space Models | We propose a new variational inference algorithm for learning in Gaussian Process State-Space Models (GPSSMs). Our algorithm enables learning of unstable and partially observable systems, where previous algorithms fail. Our main algorithmic contribution is a novel approximate posterior that can be calculated efficiently using a single forward and backward pass along the training trajectories. The forward-backward pass is inspired by Kalman smoothing for linear dynamical systems but generalizes to GPSSMs. Our second contribution is a modification of the conditioning step that effectively lowers the Kalman gain. This modification is crucial to attaining good test performance where no measurements are available. Finally, we show experimentally that our learning algorithm performs well on stable and unstable real systems with hidden states. | http://arxiv.org/pdf/1907.07035v3 | [
"Silvan Melchior",
"Sebastian Curi",
"Felix Berkenkamp",
"Andreas Krause"
] | 2020-06-10T07:06:25Z | 2019-07-16T14:34:47Z |
2006.05698 | Rendering Natural Camera Bokeh Effect with Deep Learning | Bokeh is an important artistic effect used to highlight the main object of interest in a photo by blurring all out-of-focus areas. While DSLR and system camera lenses can render this effect naturally, mobile cameras are unable to produce shallow depth-of-field photos due to the very small aperture diameter of their optics. Unlike current solutions that simulate bokeh by applying Gaussian blur to the image background, in this paper we propose to learn a realistic shallow-focus technique directly from photos produced by DSLR cameras. For this, we present a large-scale bokeh dataset consisting of 5K shallow / wide depth-of-field image pairs captured using a Canon 7D DSLR with a 50mm f/1.8 lens. We use these images to train a deep learning model to reproduce a natural bokeh effect based on a single narrow-aperture image. The experimental results show that the proposed approach is able to render a plausible non-uniform bokeh even in the case of complex input data with multiple objects. The dataset, pre-trained models and code used in this paper are available on the project website. | http://arxiv.org/pdf/2006.05698v1 | [
"Andrey Ignatov",
"Jagruti Patel",
"Radu Timofte"
] | 2020-06-10T07:28:06Z | 2020-06-10T07:28:06Z |
2006.03860 | Do RNN and LSTM have Long Memory? | The LSTM network was proposed to overcome the difficulty of learning long-term dependence and has made significant advancements in applications. With its success and drawbacks in mind, this paper raises the question: do RNN and LSTM have long memory? We answer it in part by proving that RNN and LSTM do not have long memory from a statistical perspective. A new definition of long memory networks is further introduced, and it requires the model weights to decay at a polynomial rate. To verify our theory, we convert RNN and LSTM into long memory networks by making a minimal modification, and their superiority in modeling long-term dependence is illustrated on various datasets. | http://arxiv.org/pdf/2006.03860v2 | [
"Jingyu Zhao",
"Feiqing Huang",
"Jia Lv",
"Yanjie Duan",
"Zhen Qin",
"Guodong Li",
"Guangjian Tian"
] | 2020-06-10T07:28:18Z | 2020-06-06T13:30:03Z |
2006.05702 | Few-shot Slot Tagging with Collapsed Dependency Transfer and
Label-enhanced Task-adaptive Projection Network | In this paper, we explore slot tagging with only a few labeled support sentences (a.k.a. few-shot slot tagging). Few-shot slot tagging faces a unique challenge compared to other few-shot classification problems, as it calls for modeling the dependencies between labels. However, it is hard to apply previously learned label dependencies to an unseen domain due to the discrepancy between label sets. To tackle this, we introduce a collapsed dependency transfer mechanism into the conditional random field (CRF) to transfer abstract label dependency patterns as transition scores. In the few-shot setting, the emission score of the CRF can be calculated as a word's similarity to the representation of each label. To calculate such similarity, we propose a Label-enhanced Task-Adaptive Projection Network (L-TapNet) based on the state-of-the-art few-shot classification model, TapNet, leveraging label name semantics in representing labels. Experimental results show that our model significantly outperforms the strongest few-shot learning baseline by 14.64 F1 points in the one-shot setting. (A code sketch follows this entry.) | http://arxiv.org/pdf/2006.05702v1 | [
"Yutai Hou",
"Wanxiang Che",
"Yongkui Lai",
"Zhihan Zhou",
"Yijia Liu",
"Han Liu",
"Ting Liu"
] | 2020-06-10T07:50:44Z | 2020-06-10T07:50:44Z |
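The emission computation described in the abstract, a word's similarity to each label representation, is a one-liner once those representations exist. Cosine similarity below is an assumed choice; the paper builds the label representations with a label-enhanced task-adaptive projection rather than using raw embeddings.

```python
import torch
import torch.nn.functional as F

def emission_scores(word_reprs, label_reprs):
    # word_reprs: (seq_len, d) contextual word vectors.
    # label_reprs: (n_labels, d) label representations (e.g., from label names).
    # Returns (seq_len, n_labels) emission scores for the CRF; the collapsed
    # transition scores are learned separately and transferred across domains.
    w = F.normalize(word_reprs, dim=-1)
    l = F.normalize(label_reprs, dim=-1)
    return w @ l.T
```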
1906.05212 | Is Deep Learning a Renormalization Group Flow? | Although there has been a rapid development of practical applications, theoretical explanations of deep learning are in their infancy. Deep learning performs a sophisticated coarse graining. Since coarse graining is a key ingredient of the renormalization group (RG), RG may provide a useful theoretical framework directly relevant to deep learning. In this study we pursue this possibility. A statistical mechanics model for a magnet, the Ising model, is used to train an unsupervised restricted Boltzmann machine (RBM). The patterns generated by the trained RBM are compared to the configurations generated through an RG treatment of the Ising model. Although we are motivated by the connection between deep learning and RG flow, in this study we focus mainly on comparing a single layer of a deep network to a single step in the RG flow. We argue that correlation functions between hidden and visible neurons are capable of diagnosing RG-like coarse graining. Numerical experiments show the presence of RG-like patterns in correlators computed using the trained RBMs. The observables we consider are also able to exhibit important differences between RG and deep learning. | http://arxiv.org/abs/1906.05212v2 | [
"Ellen de Mello Koch",
"Robert de Mello Koch",
"Ling Cheng"
] | 2020-06-10T07:51:51Z | 2019-06-12T15:33:43Z |
2006.07115 | Simulating Tariff Impact in Electrical Energy Consumption Profiles with
Conditional Variational Autoencoders | The implementation of efficient demand response (DR) programs for household electricity consumption would benefit from data-driven methods capable of simulating the impact of different tariff schemes. This paper proposes a novel method based on conditional variational autoencoders (CVAE) to generate, from an electricity tariff profile combined with exogenous weather and calendar variables, daily consumption profiles of consumers segmented into different clusters. First, a large set of consumers is gathered into clusters according to their consumption behavior and price-responsiveness, with a clustering method based on a causality model that measures the effect of a specific tariff on the consumption level. Then, daily electrical energy consumption profiles are generated for each cluster with the CVAE. This non-parametric approach is compared to a semi-parametric data generator based on generalized additive models that uses prior knowledge of energy consumption. Experiments on a publicly available dataset show that the proposed method presents performance comparable to the semi-parametric one when it comes to generating the average value of the original data. The main contribution of this new method is the capacity to reproduce rebound and side effects in the generated consumption profiles: the application of a special electricity tariff over a time window may also affect consumption outside that window. Another contribution is that the clustering approach segments consumers according to their daily consumption profile and their elasticity to tariff changes. These two results combined are very relevant for ex-ante testing of future DR policies by system operators, retailers and energy regulators. (A code sketch follows this entry.) | http://arxiv.org/pdf/2006.07115v1 | [
"Margaux Brégère",
"Ricardo J. Bessa"
] | 2020-06-10T08:05:35Z | 2020-06-10T08:05:35Z |
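A minimal conditional VAE makes the generation mechanism explicit: both the encoder and the decoder receive the conditioning vector (tariff, weather, calendar), so sampling the latent under a new tariff yields new daily profiles. All dimensions and layer sizes below are illustrative assumptions, including the 48 half-hourly values per day.

```python
import torch
import torch.nn as nn

class CVAE(nn.Module):
    def __init__(self, x_dim=48, c_dim=10, z_dim=8, h=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim + c_dim, h), nn.ReLU())
        self.mu = nn.Linear(h, z_dim)
        self.logvar = nn.Linear(h, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim + c_dim, h), nn.ReLU(),
                                 nn.Linear(h, x_dim))

    def forward(self, x, c):
        hid = self.enc(torch.cat([x, c], dim=-1))
        mu, logvar = self.mu(hid), self.logvar(hid)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterization
        return self.dec(torch.cat([z, c], dim=-1)), mu, logvar

    @torch.no_grad()
    def sample(self, c):
        # Draw a synthetic daily profile for a given tariff/weather condition.
        z = torch.randn(c.shape[0], self.mu.out_features)
        return self.dec(torch.cat([z, c], dim=-1))
```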