Columns: arxiv_id (string, length 7–11) · title (string, length 7–243) · abstract (string, length 3–2.79k) · link (string, length 21–49) · authors (sequence, length 1–451) · updated (string, length 20) · published (string, length 20)
2003.05311
A Safety Framework for Critical Systems Utilising Deep Neural Networks
Increasingly sophisticated mathematical modelling processes from Machine Learning are being used to analyse complex data. However, the performance and explainability of these models within practical critical systems require rigorous and continuous verification of their safe utilisation. Working towards addressing this challenge, this paper presents a principled, novel safety argument framework for critical systems that utilise deep neural networks. The approach allows various forms of predictions, e.g., future reliability of passing some demands, or confidence in a required reliability level. It is supported by a Bayesian analysis using operational data and recent verification and validation techniques for deep learning. The prediction is conservative -- it starts with partial prior knowledge obtained from lifecycle activities and then determines the worst-case prediction. Open challenges are also identified.
http://arxiv.org/abs/2003.05311v3
[ "Xingyu Zhao", "Alec Banks", "James Sharp", "Valentin Robu", "David Flynn", "Michael Fisher", "Xiaowei Huang" ]
2020-06-06T10:49:23Z
2020-03-07T23:35:05Z
2002.09594
One-Class Graph Neural Networks for Anomaly Detection in Attributed Networks
Nowadays, graph-structured data are increasingly used to model complex systems. Meanwhile, detecting anomalies in graphs has become a vital research problem of pressing societal concern. Anomaly detection is an unsupervised learning task of identifying rare data that differ from the majority. As one of the dominant anomaly detection algorithms, the One-Class Support Vector Machine has been widely used to detect outliers. However, these traditional anomaly detection methods lose their effectiveness on graph data. Since traditional anomaly detection methods are stable, robust and easy to use, it is vitally important to generalize them to graph data. In this work, we propose the One-Class Graph Neural Network (OCGNN), a one-class classification framework for graph anomaly detection. OCGNN is designed to combine the powerful representation ability of Graph Neural Networks with the classical one-class objective. Compared with other baselines, OCGNN achieves significant improvements in extensive experiments.
http://arxiv.org/abs/2002.09594v2
[ "Xuhong Wang", "Baihong Jin", "Ying Du", "Ping Cui", "Yupu Yang" ]
2020-06-06T11:28:03Z
2020-02-22T01:25:49Z
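A minimal, hypothetical sketch of the kind of objective OCGNN combines: a graph encoder producing node embeddings, trained with a soft-boundary hypersphere (Deep SVDD style) one-class loss. Layer design, loss form, and hyperparameters here are our assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class TinyGNNLayer(nn.Module):
    """Mean-aggregation graph layer: h_v <- ReLU(W * mean(h_u, u in N(v) + {v}))."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.lin = nn.Linear(d_in, d_out)

    def forward(self, x, adj):
        # adj: dense (n, n) adjacency with self-loops, row-normalised
        return torch.relu(self.lin(adj @ x))

def ocgnn_loss(z, center, radius, nu=0.1):
    # Soft-boundary hypersphere loss: penalise node embeddings that fall
    # outside a ball of learnable radius around `center`.
    dist = torch.sum((z - center) ** 2, dim=1)
    return radius ** 2 + (1.0 / nu) * torch.relu(dist - radius ** 2).mean()

# Usage sketch on a random attributed graph
n, d = 8, 5
x = torch.randn(n, d)
adj = torch.eye(n) + (torch.rand(n, n) > 0.7).float()
adj = adj / adj.sum(1, keepdim=True)
enc = TinyGNNLayer(d, 3)
z = enc(x, adj)
center = z.mean(0).detach()          # typically fixed after an initial pass
radius = torch.tensor(1.0, requires_grad=True)
loss = ocgnn_loss(z, center, radius)
loss.backward()
```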
1811.00974
Neural Likelihoods via Cumulative Distribution Functions
We leverage neural networks as universal approximators of monotonic functions to build a parameterization of conditional cumulative distribution functions (CDFs). By the application of automatic differentiation with respect to response variables and then to parameters of this CDF representation, we are able to build black box CDF and density estimators. A suite of families is introduced as alternative constructions for the multivariate case. At one extreme, the simplest construction is a competitive density estimator against state-of-the-art deep learning methods, although it does not provide an easily computable representation of multivariate CDFs. At the other extreme, we have a flexible construction from which multivariate CDF evaluations and marginalizations can be obtained by a simple forward pass in a deep neural net, but where the computation of the likelihood scales exponentially with dimensionality. Alternatives in between the extremes are discussed. We evaluate the different representations empirically on a variety of tasks involving tail area probabilities, tail dependence and (partial) density estimation.
http://arxiv.org/pdf/1811.00974v2
[ "Pawel Chilinski", "Ricardo Silva" ]
2020-06-06T12:08:43Z
2018-11-02T16:40:21Z
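A minimal sketch, under our own assumptions, of the core construction described in the abstract above: a network made monotone in the response $y$ by constraining its weights to be positive parameterises the CDF, and the density is recovered by automatic differentiation with respect to $y$. The architecture below is illustrative, not the paper's.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MonotoneCDF(nn.Module):
    def __init__(self, hidden=16):
        super().__init__()
        self.w1 = nn.Parameter(torch.randn(1, hidden))
        self.b1 = nn.Parameter(torch.zeros(hidden))
        self.w2 = nn.Parameter(torch.randn(hidden, 1))
        self.b2 = nn.Parameter(torch.zeros(1))

    def forward(self, y):
        # softplus keeps effective weights positive => output monotone in y;
        # sigmoid maps the output into (0, 1), as a CDF requires.
        h = torch.tanh(y @ F.softplus(self.w1) + self.b1)
        return torch.sigmoid(h @ F.softplus(self.w2) + self.b2)

model = MonotoneCDF()
y = torch.linspace(-3, 3, 50).unsqueeze(1).requires_grad_(True)
cdf = model(y)
# density = dF/dy via automatic differentiation
pdf = torch.autograd.grad(cdf.sum(), y, create_graph=True)[0]
```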
2005.12359
Path Imputation Strategies for Signature Models of Irregular Time Series
The signature transform is a 'universal nonlinearity' on the space of continuous vector-valued paths, and has received attention for use in machine learning on time series. However, real-world temporal data is typically observed at discrete points in time, and must first be transformed into a continuous path before signature techniques can be applied. We make this step explicit by characterising it as an imputation problem, and empirically assess the impact of various imputation strategies when applying signature-based neural nets to irregular time series data. For one of these strategies, Gaussian process (GP) adapters, we propose an extension (GP-PoM) that makes uncertainty information directly available to the subsequent classifier while at the same time preventing costly Monte-Carlo (MC) sampling. In our experiments, we find that the choice of imputation drastically affects shallow signature models, whereas deeper architectures are more robust. Next, we observe that uncertainty-aware predictions (based on GP-PoM or indicator imputations) are beneficial for predictive performance, even compared to the uncertainty-aware training of conventional GP adapters. In conclusion, we have demonstrated that the path construction is indeed crucial for signature models and that our proposed strategy leads to competitive performance in general, while improving robustness of signature models in particular.
http://arxiv.org/pdf/2005.12359v2
[ "Michael Moor", "Max Horn", "Christian Bock", "Karsten Borgwardt", "Bastian Rieck" ]
2020-06-06T13:05:35Z
2020-05-25T19:31:21Z
1903.04003
Multinomial Random Forest: Toward Consistency and Privacy-Preservation
Despite the impressive performance of random forests (RF), their theoretical properties have not been thoroughly understood. In this paper, we propose a novel RF framework, dubbed multinomial random forest (MRF), to analyze its consistency and privacy-preservation. Instead of a deterministic greedy split rule or simple randomness, the MRF adopts two impurity-based multinomial distributions to randomly select a split feature and a split value, respectively. Theoretically, we prove the consistency of the proposed MRF and analyze its privacy-preservation within the framework of differential privacy. We also demonstrate with multiple datasets that its performance is on par with the standard RF. To the best of our knowledge, MRF is the first consistent RF variant that has comparable performance to the standard RF.
http://arxiv.org/pdf/1903.04003v3
[ "Yiming Li", "Jiawang Bai", "Jiawei Li", "Xue Yang", "Yong Jiang", "Chun Li", "Shutao Xia" ]
2020-06-06T13:35:28Z
2019-03-10T14:47:16Z
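An illustrative sketch (our assumption of the mechanism, not the authors' code) of the core MRF idea above: rather than taking the argmax split, weight each candidate split feature by its impurity reduction and sample from the resulting multinomial, an exponential-mechanism-style randomisation of the kind that underlies differential-privacy analyses.

```python
import numpy as np

def gini(y):
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def sample_split_feature(X, y, temperature=1.0, rng=np.random.default_rng(0)):
    n, d = X.shape
    gains = np.empty(d)
    for j in range(d):
        thr = np.median(X[:, j])                 # one candidate value per feature
        left, right = y[X[:, j] <= thr], y[X[:, j] > thr]
        if len(left) == 0 or len(right) == 0:
            gains[j] = 0.0
            continue
        gains[j] = gini(y) - (len(left) * gini(left) + len(right) * gini(right)) / n
    # impurity-based weights -> multinomial distribution over split features
    w = np.exp(gains / temperature)
    probs = w / w.sum()
    return rng.choice(d, p=probs), probs

X = np.random.default_rng(1).normal(size=(100, 4))
y = (X[:, 0] + 0.1 * np.random.default_rng(2).normal(size=100) > 0).astype(int)
j, probs = sample_split_feature(X, y)   # feature 0 is sampled most often
```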
2004.10019
Almost Optimal Model-Free Reinforcement Learning via Reference-Advantage Decomposition
We study the reinforcement learning problem in the setting of finite-horizon episodic Markov Decision Processes (MDPs) with $S$ states, $A$ actions, and episode length $H$. We propose a model-free algorithm UCB-Advantage and prove that it achieves $\tilde{O}(\sqrt{H^2SAT})$ regret, where $T = KH$ and $K$ is the number of episodes to play. Our regret bound improves upon the results of [Jin et al., 2018] and matches the best known model-based algorithms as well as the information-theoretic lower bound up to logarithmic factors. We also show that UCB-Advantage achieves low local switching cost and applies to concurrent reinforcement learning, improving upon the recent results of [Bai et al., 2019].
http://arxiv.org/pdf/2004.10019v2
[ "Zihan Zhang", "Yuan Zhou", "Xiangyang Ji" ]
2020-06-06T13:35:38Z
2020-04-21T14:00:06Z
1910.14072
Unsupervised inference approach to facial attractiveness
The perception of facial beauty is a complex phenomenon depending on many detailed and global facial features influencing each other. In the machine learning community this problem is typically tackled as a problem of supervised inference. However, it has been conjectured that this approach does not capture the complexity of the phenomenon. A recent original experiment (Ibáñez-Berganza et al., Scientific Reports 9, 8364, 2019) allowed different human subjects to navigate the face-space and "sculpt" their preferred modification of a reference facial portrait. Here we present an unsupervised inference study of the set of sculpted facial vectors in that experiment. We first infer minimal, interpretable, and faithful probabilistic models (through Maximum Entropy and artificial neural networks) of the preferred facial variations, which capture the origin of the observed inter-subject diversity in the sculpted faces. The application of such generative models to the supervised classification of the gender of the sculpting subjects reveals an astonishingly high prediction accuracy. This result suggests that much relevant information regarding the subjects may influence (and be elicited from) their facial preference criteria, in agreement with the multiple motive theory of attractiveness proposed in previous works.
http://arxiv.org/pdf/1910.14072v3
[ "Miguel Ibáñez-Berganza", "Ambra Amico", "Gian Luca Lancia", "Federico Maggiore", "Bernardo Monechi", "Vittorio Loreto" ]
2020-06-06T13:58:18Z
2019-10-30T18:23:56Z
2002.08665
Computationally Tractable Riemannian Manifolds for Graph Embeddings
Representing graphs as sets of node embeddings in certain curved Riemannian manifolds has recently gained momentum in machine learning due to their desirable geometric inductive biases, e.g., hierarchical structures benefit from hyperbolic geometry. However, going beyond embedding spaces of constant sectional curvature, while potentially more representationally powerful, proves to be challenging as one can easily lose the appeal of computationally tractable tools such as geodesic distances or Riemannian gradients. Here, we explore computationally efficient matrix manifolds, showcasing how to learn and optimize graph embeddings in these Riemannian spaces. Empirically, we demonstrate consistent improvements over Euclidean geometry while often outperforming hyperbolic and elliptical embeddings based on various metrics that capture different graph properties. Our results serve as new evidence for the benefits of non-Euclidean embeddings in machine learning pipelines.
http://arxiv.org/pdf/2002.08665v2
[ "Calin Cruceru", "Gary Bécigneul", "Octavian-Eugen Ganea" ]
2020-06-06T14:04:49Z
2020-02-20T10:55:47Z
2006.03873
Unique properties of adversarially trained linear classifiers on Gaussian data
Machine learning models are vulnerable to adversarial perturbations that, when added to an input, can cause high-confidence misclassifications. The adversarial learning research community has made remarkable progress in the understanding of the root causes of adversarial perturbations. However, most problems that one may consider important to solve for the deployment of machine learning in safety-critical tasks involve high-dimensional complex manifolds that are difficult to characterize and study. It is common to develop adversarially robust learning theory on simple problems, in the hope that insights will transfer to 'real world datasets'. In this work, we discuss a setting where this approach fails. In particular, we show that, with a linear classifier, it is always possible to solve a binary classification problem on Gaussian data under arbitrary levels of adversarial corruption during training, and that this property is not observed with non-linear classifiers on the CIFAR-10 dataset.
http://arxiv.org/pdf/2006.03873v1
[ "Jamie Hayes" ]
2020-06-06T14:06:38Z
2020-06-06T14:06:38Z
2006.03874
Knowledge-Based Learning through Feature Generation
Machine learning algorithms have difficulty generalizing from a small set of examples. Humans can perform such tasks by exploiting the vast amount of background knowledge they possess. One method for enhancing learning algorithms with external knowledge is through feature generation. In this paper, we introduce a new algorithm for generating features based on a collection of auxiliary datasets. We assume that, in addition to the training set, we have access to additional datasets. Unlike the transfer learning setup, we do not assume that the auxiliary datasets represent learning tasks that are similar to our original one. The algorithm finds features that are common to the training set and the auxiliary datasets. Based on these features and examples from the auxiliary datasets, it induces predictors for new features from the auxiliary datasets. The induced predictors are then added to the original training set as generated features. Our method was tested on a variety of learning tasks, including text classification and medical prediction, and showed a significant improvement over using just the given features.
http://arxiv.org/pdf/2006.03874v1
[ "Michal Badian", "Shaul Markovitch" ]
2020-06-06T14:13:36Z
2020-06-06T14:13:36Z
2005.07151
Towards Understanding the Adversarial Vulnerability of Skeleton-based Action Recognition
Skeleton-based action recognition has attracted increasing attention due to its strong adaptability to dynamic circumstances and potential for broad applications such as autonomous and anonymous surveillance. With the help of deep learning techniques, it has also witnessed substantial progress and currently achieves around 90% accuracy in benign environments. On the other hand, research on the vulnerability of skeleton-based action recognition under different adversarial settings remains scant, which may raise security concerns about deploying such techniques into real-world systems. However, filling this research gap is challenging due to the unique physical constraints of skeletons and human actions. In this paper, we attempt to conduct a thorough study towards understanding the adversarial vulnerability of skeleton-based action recognition. We first formulate the generation of adversarial skeleton actions as a constrained optimization problem by representing or approximating the physiological and physical constraints with mathematical formulations. Since the primal optimization problem with equality constraints is intractable, we propose to solve it by optimizing its unconstrained dual problem using ADMM. We then specify an efficient plug-in defense, inspired by recent theories and empirical observations, against the adversarial skeleton actions. Extensive evaluations demonstrate the effectiveness of the attack and defense method under different settings.
http://arxiv.org/pdf/2005.07151v2
[ "Tianhang Zheng", "Sheng Liu", "Changyou Chen", "Junsong Yuan", "Baochun Li", "Kui Ren" ]
2020-06-06T15:21:59Z
2020-05-14T17:12:52Z
2006.03899
A Multi-step and Resilient Predictive Q-learning Algorithm for IoT with Human Operators in the Loop: A Case Study in Water Supply Networks
We consider the problem of recommending resilient and predictive actions for an IoT network in the presence of faulty components, considering the presence of human operators who manipulate the information of the environment the agent sees for containment purposes. The IoT network is formulated as a directed graph with a known topology, whose objective is to maintain a constant and resilient flow between a source and a destination node. The optimal route through this network is evaluated via a predictive and resilient Q-learning algorithm which takes into account historical data about irregular operation, due to faults, as well as the feedback from the human operators, who are considered to have extra information about the status of the network concerning locations likely to be targeted by attacks. To showcase our method, we utilize anonymized data from Arlington County, Virginia, to compute predictive and resilient scheduling policies for a smart water supply system, while avoiding (i) all the locations indicated to be attacked according to human operators and (ii) as many neighborhoods as possible detected to have leaks or other faults. This method incorporates both the adaptability of the human and the computation capability of the machine to achieve optimal implementation of containment and recovery actions in water distribution.
http://arxiv.org/pdf/2006.03899v1
[ "Maria Grammatopoulou", "Aris Kanellopoulos", "Kyriakos G. ~Vamvoudakis", "Nathan Lau" ]
2020-06-06T15:51:52Z
2020-06-06T15:51:52Z
2006.03900
Doubly Robust Off-Policy Value and Gradient Estimation for Deterministic Policies
Offline reinforcement learning, wherein one uses off-policy data logged by a fixed behavior policy to evaluate and learn new policies, is crucial in applications where experimentation is limited such as medicine. We study the estimation of policy value and gradient of a deterministic policy from off-policy data when actions are continuous. Targeting deterministic policies, for which action is a deterministic function of state, is crucial since optimal policies are always deterministic (up to ties). In this setting, standard importance sampling and doubly robust estimators for policy value and gradient fail because the density ratio does not exist. To circumvent this issue, we propose several new doubly robust estimators based on different kernelization approaches. We analyze the asymptotic mean-squared error of each of these under mild rate conditions for nuisance estimators. Specifically, we demonstrate how to obtain a rate that is independent of the horizon length.
http://arxiv.org/pdf/2006.03900v1
[ "Nathan Kallus", "Masatoshi Uehara" ]
2020-06-06T15:52:05Z
2020-06-06T15:52:05Z
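A hedged sketch of the kernelisation idea the abstract above points to: for a deterministic target policy, the importance-sampling indicator has measure zero under a continuous behaviour policy, so it is smoothed with a kernel of bandwidth h. This simplified single-step, importance-sampling-only version is our own illustration; the paper's doubly robust estimators additionally incorporate a regression term.

```python
import numpy as np

def kernelised_value(states, actions, rewards, behaviour_density, pi, h=0.2):
    # Gaussian kernel smoothing of the indicator 1{a == pi(s)}
    k = np.exp(-0.5 * ((actions - pi(states)) / h) ** 2) / (h * np.sqrt(2 * np.pi))
    weights = k / behaviour_density(states, actions)
    return np.mean(weights * rewards)

rng = np.random.default_rng(0)
s = rng.normal(size=1000)
a = s + rng.normal(size=1000)                  # behaviour policy: a ~ N(s, 1)
r = -(a - 0.5 * s) ** 2                        # reward peaks at a = s / 2
behaviour_pdf = lambda s, a: np.exp(-0.5 * (a - s) ** 2) / np.sqrt(2 * np.pi)
v_hat = kernelised_value(s, a, r, behaviour_pdf, pi=lambda s: 0.5 * s)
```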
2006.03923
Learning to Model Opponent Learning
Multi-Agent Reinforcement Learning (MARL) considers settings in which a set of coexisting agents interact with one another and their environment. The adaptation and learning of other agents induces non-stationarity in the environment dynamics. This poses a great challenge for value function-based algorithms whose convergence usually relies on the assumption of a stationary environment. Policy search algorithms also struggle in multi-agent settings as the partial observability resulting from an opponent's actions not being known introduces high variance to policy training. Modelling an agent's opponent(s) is often pursued as a means of resolving the issues arising from the coexistence of learning opponents. An opponent model provides an agent with some ability to reason about other agents to aid its own decision making. Most prior works learn an opponent model by assuming the opponent is employing a stationary policy or switching between a set of stationary policies. Such an approach can reduce the variance of training signals for policy search algorithms. However, in the multi-agent setting, agents have an incentive to continually adapt and learn. This means that the assumptions concerning opponent stationarity are unrealistic. In this work, we develop a novel approach to modelling an opponent's learning dynamics which we term Learning to Model Opponent Learning (LeMOL). We show our structured opponent model is more accurate and stable than naive behaviour cloning baselines. We further show that opponent modelling can improve the performance of algorithmic agents in multi-agent settings.
http://arxiv.org/pdf/2006.03923v1
[ "Ian Davies", "Zheng Tian", "Jun Wang" ]
2020-06-06T17:19:04Z
2020-06-06T17:19:04Z
1905.13409
Bypassing Backdoor Detection Algorithms in Deep Learning
Deep learning models are vulnerable to various adversarial manipulations of their training data, parameters, and input sample. In particular, an adversary can modify the training data and model parameters to embed backdoors into the model, so the model behaves according to the adversary's objective if the input contains the backdoor features, referred to as the backdoor trigger (e.g., a stamp on an image). The poisoned model's behavior on clean data, however, remains unchanged. Many detection algorithms are designed to detect backdoors on input samples or model parameters, through the statistical difference between the latent representations of adversarial and clean input samples in the poisoned model. In this paper, we design an adversarial backdoor embedding algorithm that can bypass the existing detection algorithms including the state-of-the-art techniques. We design an adaptive adversarial training algorithm that optimizes the original loss function of the model, and also maximizes the indistinguishability of the hidden representations of poisoned data and clean data. This work calls for designing adversary-aware defense mechanisms for backdoor detection.
http://arxiv.org/pdf/1905.13409v2
[ "Te Juin Lester Tan", "Reza Shokri" ]
2020-06-06T17:56:42Z
2019-05-31T04:28:00Z
2006.03929
Sparse representation for damage identification of structural systems
Identifying damage of structural systems is typically characterized as an inverse problem which might be ill-conditioned due to aleatory and epistemic uncertainties induced by measurement noise and modeling error. Sparse representation can be used to perform inverse analysis for the case of sparse damage. In this paper, we propose a novel two-stage sensitivity analysis-based framework for both model updating and sparse damage identification. Specifically, an $\ell_2$ Bayesian learning method is first developed for updating the intact model and uncertainty quantification, so as to set forward a baseline for damage detection. A sparse representation pipeline built on a quasi-$\ell_0$ method, e.g., Sequential Threshold Least Squares (STLS) regression, is then presented for damage localization and quantification. Additionally, Bayesian optimization together with cross-validation is developed to heuristically learn hyperparameters from data, which saves the computational cost of hyperparameter tuning and produces more reliable identification results. The proposed framework is verified by three examples, including a 10-story shear-type building, a complex truss structure, and a shake table test of an eight-story steel frame. Results show that the proposed approach is capable of both localizing and quantifying structural damage with high accuracy.
http://arxiv.org/pdf/2006.03929v1
[ "Zhao Chen", "Hao Sun" ]
2020-06-06T18:04:35Z
2020-06-06T18:04:35Z
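A minimal sketch of Sequential Threshold Least Squares (STLS), the quasi-$\ell_0$ routine named in the abstract above: alternately solve least squares and zero out coefficients below a threshold. The threshold value and problem sizes below are illustrative assumptions.

```python
import numpy as np

def stls(A, b, threshold=0.1, n_iter=10):
    # Start from the ordinary least-squares solution
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    for _ in range(n_iter):
        small = np.abs(x) < threshold          # prune small coefficients
        x[small] = 0.0
        big = ~small
        if big.any():
            # Re-fit least squares on the surviving support
            x[big] = np.linalg.lstsq(A[:, big], b, rcond=None)[0]
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(200, 50))
x_true = np.zeros(50)
x_true[[3, 17, 40]] = [1.5, -2.0, 0.7]         # sparse "damage" pattern
b = A @ x_true + 0.01 * rng.normal(size=200)
x_hat = stls(A, b)                             # recovers the sparse support
```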
1812.00910
Comprehensive Privacy Analysis of Deep Learning: Passive and Active White-box Inference Attacks against Centralized and Federated Learning
Deep neural networks are susceptible to various inference attacks as they remember information about their training data. We design white-box inference attacks to perform a comprehensive privacy analysis of deep learning models. We measure the privacy leakage through parameters of fully trained models as well as the parameter updates of models during training. We design inference algorithms for both centralized and federated learning, with respect to passive and active inference attackers, and assuming different adversary prior knowledge. We evaluate our novel white-box membership inference attacks against deep learning algorithms to trace their training data records. We show that a straightforward extension of the known black-box attacks to the white-box setting (through analyzing the outputs of activation functions) is ineffective. We therefore design new algorithms tailored to the white-box setting by exploiting the privacy vulnerabilities of the stochastic gradient descent algorithm, which is the algorithm used to train deep neural networks. We investigate the reasons why deep learning models may leak information about their training data. We then show that even well-generalized models are significantly susceptible to white-box membership inference attacks, by analyzing state-of-the-art pre-trained and publicly available models for the CIFAR dataset. We also show how adversarial participants, in the federated learning setting, can successfully run active membership inference attacks against other participants, even when the global model achieves high prediction accuracies.
http://arxiv.org/abs/1812.00910v2
[ "Milad Nasr", "Reza Shokri", "Amir Houmansadr" ]
2020-06-06T18:22:55Z
2018-12-03T17:11:21Z
2006.03941
Learning and Optimization of Blackbox Combinatorial Solvers in Neural Networks
The use of blackbox solvers inside neural networks is a relatively new area which aims to improve neural network performance by including proven, efficient solvers for complex problems. Existing work has created methods for learning networks with these solvers as components while treating them as a blackbox. This work attempts to improve upon existing techniques by optimizing not only over the primary loss function, but also over the performance of the solver itself by using Time-cost Regularization. Additionally, we propose a method to learn blackbox parameters such as which blackbox solver to use or the heuristic function for a particular solver. We do this by introducing the idea of a hyper-blackbox which is a blackbox around one or more internal blackboxes.
http://arxiv.org/pdf/2006.03941v1
[ "T. J. Wilder" ]
2020-06-06T18:58:44Z
2020-06-06T18:58:44Z
2005.06706
MixML: A Unified Analysis of Weakly Consistent Parallel Learning
Parallelism is a ubiquitous method for accelerating machine learning algorithms. However, theoretical analysis of parallel learning is usually done in an algorithm- and protocol-specific setting, giving little insight about how changes in the structure of communication could affect convergence. In this paper we propose MixML, a general framework for analyzing convergence of weakly consistent parallel machine learning. Our framework includes: (1) a unified way of modeling the communication process among parallel workers; (2) a new parameter, the mixing time $t_{mix}$, that quantifies how the communication process affects convergence; and (3) a principled way of converting a convergence proof for a sequential algorithm into one for a parallel version that depends only on $t_{mix}$. We show MixML recovers and improves on known convergence bounds for asynchronous and/or decentralized versions of many algorithms, including SGD and AMSGrad. Our experiments substantiate the theory and show the dependency of convergence on the underlying mixing time.
http://arxiv.org/pdf/2005.06706v2
[ "Yucheng Lu", "Jack Nash", "Christopher De Sa" ]
2020-06-06T19:12:55Z
2020-05-14T03:38:20Z
2006.05274
UMLS-ChestNet: A deep convolutional neural network for radiological findings, differential diagnoses and localizations of COVID-19 in chest x-rays
In this work we present a method for the detection of radiological findings, their location and differential diagnoses from chest x-rays. Unlike prior works that focus on the detection of a few pathologies, we use a hierarchical taxonomy mapped to the Unified Medical Language System (UMLS) terminology to identify 189 radiological findings, 22 differential diagnoses and 122 anatomic locations, including ground glass opacities, infiltrates, consolidations and other radiological findings compatible with COVID-19. We train the system on one large database of 92,594 frontal chest x-rays (AP or PA, standing, supine or decubitus) and a second database of 2,065 frontal images of COVID-19 patients identified by at least one positive Polymerase Chain Reaction (PCR) test. The reference labels are obtained through natural language processing of the radiological reports. On 23,159 test images, the proposed neural network obtains an AUC of 0.94 for the diagnosis of COVID-19. To our knowledge, this work uses the largest chest x-ray dataset of COVID-19 positive cases to date and is the first one to use a hierarchical labeling schema and to provide interpretability of the results, not only by using network attention methods, but also by indicating the radiological findings that have led to the diagnosis.
http://arxiv.org/pdf/2006.05274v1
[ "Germán González", "Aurelia Bustos", "José María Salinas", "María de la Iglesia-Vaya", "Joaquín Galant", "Carlos Cano-Espinosa", "Xavier Barber", "Domingo Orozco-Beltrán", "Miguel Cazorla", "Antonio Pertusa" ]
2020-06-06T19:24:35Z
2020-06-06T19:24:35Z
2002.11863
GATCluster: Self-Supervised Gaussian-Attention Network for Image Clustering
We propose a self-supervised Gaussian ATtention network for image Clustering (GATCluster). Rather than extracting intermediate features first and then performing the traditional clustering algorithm, GATCluster directly outputs semantic cluster labels without further post-processing. Theoretically, we give a Label Feature Theorem to guarantee the learned features are one-hot encoded vectors, and the trivial solutions are avoided. To train the GATCluster in a completely unsupervised manner, we design four self-learning tasks with the constraints of transformation invariance, separability maximization, entropy analysis, and attention mapping. Specifically, the transformation invariance and separability maximization tasks learn the relationships between sample pairs. The entropy analysis task aims to avoid trivial solutions. To capture the object-oriented semantics, we design a self-supervised attention mechanism that includes a parameterized attention module and a soft-attention loss. All the guiding signals for clustering are self-generated during the training process. Moreover, we develop a two-step learning algorithm that is memory-efficient for clustering large-size images. Extensive experiments demonstrate the superiority of our proposed method in comparison with the state-of-the-art image clustering benchmarks. Our code has been made publicly available at https://github.com/niuchuangnn/GATCluster.
http://arxiv.org/pdf/2002.11863v2
[ "Chuang Niu", "Jun Zhang", "Ge Wang", "Jimin Liang" ]
2020-06-06T20:09:39Z
2020-02-27T00:57:18Z
2006.03960
Frank-Wolfe optimization for deep networks
Deep neural networks are today one of the most popular choices for classification, regression and function approximation. However, the training of such deep networks is far from trivial, as there are often millions of parameters to tune. Typically, one uses some optimization method that hopefully converges towards some minimum. The most popular and successful methods are based on gradient descent. In this paper, another optimization method, Frank-Wolfe optimization, is applied to a small deep network and compared to gradient descent. Although the optimization does converge, it does so slowly and not close to the speed of gradient descent. Further, in a stochastic setting, the optimization becomes very unstable and does not seem to converge unless one uses a line search approach.
http://arxiv.org/pdf/2006.03960v1
[ "Jakob Stigenberg" ]
2020-06-06T20:20:18Z
2020-06-06T20:20:18Z
2006.03962
Tuning a variational autoencoder for data accountability problem in the Mars Science Laboratory ground data system
The Mars Curiosity rover is frequently sending back engineering and science data that goes through a pipeline of systems before reaching its final destination at the mission operations center, making it prone to volume loss and data corruption. A ground data system analysis (GDSA) team is charged with the monitoring of this flow of information and the detection of anomalies in that data in order to request a re-transmission when necessary. This work presents $\Delta$-MADS, a derivative-free optimization method applied for tuning the architecture and hyperparameters of a variational autoencoder trained to detect data with missing patches, in order to assist the GDSA team in their mission.
http://arxiv.org/pdf/2006.03962v1
[ "Dounia Lakhmiri", "Ryan Alimo", "Sebastien Le Digabel" ]
2020-06-06T20:25:07Z
2020-06-06T20:25:07Z
2004.04582
DeepCOVIDExplainer: Explainable COVID-19 Diagnosis Based on Chest X-ray Images
Amid the coronavirus disease (COVID-19) pandemic, humanity experiences a rapid increase in infection numbers across the world. The challenge hospitals face in the fight against the virus is the effective screening of incoming patients. One methodology is the assessment of chest radiography (CXR) images, which usually requires expert radiologists' knowledge. In this paper, we propose an explainable deep neural network (DNN)-based method for automatic detection of COVID-19 symptoms from CXR images, which we call DeepCOVIDExplainer. We used 15,959 CXR images of 15,854 patients, covering normal, pneumonia, and COVID-19 cases. CXR images are first comprehensively preprocessed, before being augmented and classified with a neural ensemble method, followed by highlighting class-discriminating regions using gradient-guided class activation maps (Grad-CAM++) and layer-wise relevance propagation (LRP). Further, we provide human-interpretable explanations of the predictions. Evaluation results based on hold-out data show that our approach can identify COVID-19 confidently, with a positive predictive value (PPV) of 91.6%, 92.45%, and 96.12% and precision, recall, and F1 score of 94.6%, 94.3%, and 94.6% for normal, pneumonia, and COVID-19 cases, respectively, results comparable to or improving upon recent approaches. We hope that our findings will be a useful contribution to the fight against COVID-19 and, more generally, towards an increasing acceptance and adoption of AI-assisted applications in clinical practice.
http://arxiv.org/pdf/2004.04582v3
[ "Md. Rezaul Karim", "Till Döhmen", "Dietrich Rebholz-Schuhmann", "Stefan Decker", "Michael Cochez", "Oya Beyan" ]
2020-06-06T20:31:13Z
2020-04-09T15:03:58Z
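A hedged sketch of gradient-guided class activation mapping: the snippet below implements vanilla Grad-CAM (the paper above uses Grad-CAM++ and LRP; this simpler variant only illustrates the mechanism) on a toy CNN of our own design.

```python
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU())
        self.head = nn.Linear(8, 3)              # normal / pneumonia / COVID-19

    def forward(self, x):
        fmap = self.features(x)                  # (B, 8, H, W) feature maps
        pooled = fmap.mean(dim=(2, 3))           # global average pooling
        return self.head(pooled), fmap

model = TinyCNN()
x = torch.randn(1, 1, 32, 32)                    # stand-in for a CXR image
logits, fmap = model(x)
fmap.retain_grad()                               # keep gradients of the maps
logits[0, 2].backward()                          # gradient of the COVID-19 score
weights = fmap.grad.mean(dim=(2, 3), keepdim=True)   # GAP of the gradients
cam = torch.relu((weights * fmap).sum(dim=1))        # (B, H, W) heat map
```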
2006.03965
Generative Adversarial Phonology: Modeling unsupervised phonetic and phonological learning with neural networks
Training deep neural networks on well-understood dependencies in speech data can provide new insights into how they learn internal representations. This paper argues that acquisition of speech can be modeled as a dependency between random space and generated speech data in the Generative Adversarial Network architecture and proposes a methodology to uncover the network's internal representations that correspond to phonetic and phonological properties. The Generative Adversarial architecture is uniquely appropriate for modeling phonetic and phonological learning because the network is trained on unannotated raw acoustic data and learning is unsupervised without any language-specific assumptions or pre-assumed levels of abstraction. A Generative Adversarial Network was trained on an allophonic distribution in English. The network successfully learns the allophonic alternation: the network's generated speech signal contains the conditional distribution of aspiration duration. The paper proposes a technique for establishing the network's internal representations that identifies latent variables that correspond to, for example, presence of [s] and its spectral properties. By manipulating these variables, we actively control the presence of [s] and its frication amplitude in the generated outputs. This suggests that the network learns to use latent variables as an approximation of phonetic and phonological representations. Crucially, we observe that the dependencies learned in training extend beyond the training interval, which allows for additional exploration of learning representations. The paper also discusses how the network's architecture and innovative outputs resemble and differ from linguistic behavior in language acquisition, speech disorders, and speech errors, and how well-understood dependencies in speech data can help us interpret how neural networks learn their representations.
http://arxiv.org/abs/2006.03965v1
[ "Gašper Beguš" ]
2020-06-06T20:31:23Z
2020-06-06T20:31:23Z
2006.03969
Conditional Neural Architecture Search
Designing resource-efficient Deep Neural Networks (DNNs) is critical to deploying deep learning solutions over edge platforms due to diverse performance, power, and memory budgets. Unfortunately, it is often the case that a well-trained ML model does not fit the constraints of the deployment edge platform, causing a long iteration of model reduction and retraining. Moreover, an ML model optimized for platform-A often may not be suitable when deployed on another platform-B, causing another iteration of model retraining. We propose a conditional neural architecture search method using GAN, which produces feasible ML models for different platforms. We present a new workflow to generate constraint-optimized DNN models. This is the first work to bring condition and adversarial techniques into the Neural Architecture Search domain. We verify the method with regression problems and classification on CIFAR-10. The proposed workflow can successfully generate resource-optimized MLP- or CNN-based networks.
http://arxiv.org/pdf/2006.03969v1
[ "Sheng-Chun Kao", "Arun Ramamurthy", "Reed Williams", "Tushar Krishna" ]
2020-06-06T20:39:33Z
2020-06-06T20:39:33Z
2006.03970
An Efficient Semi-smooth Newton Augmented Lagrangian Method for Elastic Net
Feature selection is an important and active research area in statistics and machine learning. The Elastic Net is often used to perform selection when the features present non-negligible collinearity or practitioners wish to incorporate additional known structure. In this article, we propose a new Semi-smooth Newton Augmented Lagrangian Method to efficiently solve the Elastic Net in ultra-high dimensional settings. Our new algorithm exploits both the sparsity induced by the Elastic Net penalty and the sparsity due to the second order information of the augmented Lagrangian. This greatly reduces the computational cost of the problem. Using simulations on both synthetic and real datasets, we demonstrate that our approach outperforms its best competitors by at least an order of magnitude in terms of CPU time. We also apply our approach to a Genome Wide Association Study on childhood obesity.
http://arxiv.org/pdf/2006.03970v1
[ "Tobia Boschi", "Matthew Reimherr", "Francesca Chiaromonte" ]
2020-06-06T20:42:53Z
2020-06-06T20:42:53Z
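For reference, the Elastic Net estimator discussed above solves, in one standard parameterisation (notation ours):

$\hat{\beta} = \arg\min_{\beta} \; \frac{1}{2n}\|y - X\beta\|_2^2 + \lambda \left( \alpha\|\beta\|_1 + \frac{1-\alpha}{2}\|\beta\|_2^2 \right)$

The $\ell_1$ term induces the sparsity and the $\ell_2$ term stabilises selection under collinearity; these are the two structures the proposed semi-smooth Newton augmented Lagrangian method exploits.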
2006.03972
Regularization of Inverse Problems by Neural Networks
Inverse problems arise in a variety of imaging applications including computed tomography, non-destructive testing, and remote sensing. The characteristic features of inverse problems are the non-uniqueness and instability of their solutions. Therefore, any reasonable solution method requires the use of regularization tools that select specific solutions and at the same time stabilize the inversion process. Recently, data-driven methods using deep learning techniques and neural networks have been demonstrated to significantly outperform classical solution methods for inverse problems. In this chapter, we give an overview of inverse problems and demonstrate the necessity of regularization concepts for their solution. We show that neural networks can be used for the data-driven solution of inverse problems and review existing deep learning methods for inverse problems. In particular, we view these deep learning methods from the perspective of regularization theory, the mathematical foundation of stable solution methods for inverse problems. This chapter is more than just a review, as many of the presented theoretical results extend existing ones.
http://arxiv.org/pdf/2006.03972v1
[ "Markus Haltmeier", "Linh V. Nguyen" ]
2020-06-06T20:49:12Z
2020-06-06T20:49:12Z
2006.05314
Regularized Off-Policy TD-Learning
We present a novel $l_1$ regularized off-policy convergent TD-learning method (termed RO-TD), which is able to learn sparse representations of value functions with low computational complexity. The algorithmic framework underlying RO-TD integrates two key ideas: off-policy convergent gradient TD methods, such as TDC, and a convex-concave saddle-point formulation of non-smooth convex optimization, which enables first-order solvers and feature selection using online convex regularization. A detailed theoretical and experimental analysis of RO-TD is presented. A variety of experiments are presented to illustrate the off-policy convergence, sparse feature selection capability and low computational cost of the RO-TD algorithm.
http://arxiv.org/pdf/2006.05314v1
[ "Bo Liu", "Sridhar Mahadevan", "Ji Liu" ]
2020-06-06T20:49:53Z
2020-06-06T20:49:53Z
2006.03976
Proximal Gradient Temporal Difference Learning: Stable Reinforcement Learning with Polynomial Sample Complexity
In this paper, we introduce proximal gradient temporal difference learning, which provides a principled way of designing and analyzing true stochastic gradient temporal difference learning algorithms. We show how gradient TD (GTD) reinforcement learning methods can be formally derived, not by starting from their original objective functions, as previously attempted, but rather from a primal-dual saddle-point objective function. We also conduct a saddle-point error analysis to obtain finite-sample bounds on their performance. Previous analyses of this class of algorithms use stochastic approximation techniques to prove asymptotic convergence, and do not provide any finite-sample analysis. We also propose an accelerated algorithm, called GTD2-MP, that uses proximal "mirror maps" to yield an improved convergence rate. The results of our theoretical analysis imply that the GTD family of algorithms are comparable and may indeed be preferred over existing least squares TD methods for off-policy learning, due to their linear complexity. We provide experimental results showing the improved performance of our accelerated gradient TD methods.
http://arxiv.org/pdf/2006.03976v1
[ "Bo Liu", "Ian Gemp", "Mohammad Ghavamzadeh", "Ji Liu", "Sridhar Mahadevan", "Marek Petrik" ]
2020-06-06T21:04:21Z
2020-06-06T21:04:21Z
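For reference, one member of the GTD family analysed above is GTD2 (Sutton et al., 2009); to the best of our understanding, its two-time-scale updates, with features $\phi_t$, discount $\gamma$, and TD error $\delta_t = r_t + \gamma\,\theta_t^\top \phi_{t+1} - \theta_t^\top \phi_t$, read

$\theta_{t+1} = \theta_t + \alpha_t\,(\phi_t - \gamma\,\phi_{t+1})\,(\phi_t^\top w_t), \qquad w_{t+1} = w_t + \beta_t\,(\delta_t - \phi_t^\top w_t)\,\phi_t,$

with the secondary iterate $w_t$ estimating the projected TD error, the quantity that the paper's saddle-point formulation makes explicit.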
2006.03979
Visual Prediction of Priors for Articulated Object Interaction
Exploration in novel settings can be challenging without prior experience in similar domains. However, humans are able to build on prior experience quickly and efficiently. Children exhibit this behavior when playing with toys. For example, given a toy with a yellow and blue door, a child will explore with no clear objective, but once they have discovered how to open the yellow door, they will most likely be able to open the blue door much faster. Adults also exhibit this behavior when entering new spaces such as kitchens. We develop a method, Contextual Prior Prediction, which provides a means of transferring knowledge between interactions in similar domains through vision. We develop agents that exhibit exploratory behavior with increasing efficiency, by learning visual features that are shared across environments, and how they correlate to actions. Our problem is formulated as a Contextual Multi-Armed Bandit where the contexts are images, and the robot has access to a parameterized action space. Given a novel object, the objective is to maximize reward with few interactions. A domain which strongly exhibits correlations between visual features and motion is kinematically constrained mechanisms. We evaluate our method on simulated prismatic and revolute joints.
http://arxiv.org/pdf/2006.03979v1
[ "Caris Moses", "Michael Noseworthy", "Leslie Pack Kaelbling", "Tomás Lozano-Pérez", "Nicholas Roy" ]
2020-06-06T21:17:03Z
2020-06-06T21:17:03Z
1909.11625
Deep Predictive Motion Tracking in Magnetic Resonance Imaging: Application to Fetal Imaging
Fetal magnetic resonance imaging (MRI) is challenged by uncontrollable, large, and irregular fetal movements. It is, therefore, performed through visual monitoring of fetal motion and repeated acquisitions to ensure diagnostic-quality images are acquired. Nevertheless, visual monitoring of fetal motion based on displayed slices, and navigation at the level of stacks-of-slices is inefficient. The current process is highly operator-dependent, increases scanner usage and cost, and significantly increases the length of fetal MRI scans which makes them hard to tolerate for pregnant women. To help build automatic MRI motion tracking and navigation systems to overcome the limitations of the current process and improve fetal imaging, we have developed a new real time image-based motion tracking method based on deep learning that learns to predict fetal motion directly from acquired images. Our method is based on a recurrent neural network, composed of spatial and temporal encoder-decoders, that infers motion parameters from anatomical features extracted from sequences of acquired slices. We compared our trained network on held out test sets (including data with different characteristics, e.g. different fetuses scanned at different ages, and motion trajectories recorded from volunteer subjects) with networks designed for estimation as well as methods adopted to make predictions. The results show that our method outperformed alternative techniques, and achieved real-time performance with average errors of 3.5 and 8 degrees for the estimation and prediction tasks, respectively. Our real-time deep predictive motion tracking technique can be used to assess fetal movements, to guide slice acquisitions, and to build navigation systems for fetal MRI.
http://arxiv.org/abs/1909.11625v3
[ "Ayush Singh", "Seyed Sadegh Mohseni Salehi", "Ali Gholipour" ]
2020-06-06T23:15:28Z
2019-09-25T17:12:40Z
1911.05797
AI-optimized detector design for the future Electron-Ion Collider: the dual-radiator RICH case
Advanced detector R&D requires performing computationally intensive and detailed simulations as part of the detector-design optimization process. We propose a general approach to this process based on Bayesian optimization and machine learning that encodes detector requirements. As a case study, we focus on the design of the dual-radiator Ring Imaging Cherenkov (dRICH) detector under development as part of the particle-identification system at the future Electron-Ion Collider (EIC). The EIC is a US-led frontier accelerator project for nuclear physics, which has been proposed to further explore the structure and interactions of nuclear matter at the scale of sea quarks and gluons. We show that the detector design obtained with our automated and highly parallelized framework outperforms the baseline dRICH design within the assumptions of the current model. Our approach can be applied to any detector R&D, provided that realistic simulations are available.
http://arxiv.org/abs/1911.05797v2
[ "E. Cisbani", "A. Del Dotto", "C. Fanelli", "M. Williams", "M. Alfred", "F. Barbosa", "L. Barion", "V. Berdnikov", "W. Brooks", "T. Cao", "M. Contalbrigo", "S. Danagoulian", "A. Datta", "M. Demarteau", "A. Denisov", "M. Diefenthaler", "A. Durum", "D. Fields", "Y. Furletova", "C. Gleason", "M. Grosse-Perdekamp", "M. Hattawy", "X. He", "H. van Hecke", "D. Higinbotham", "T. Horn", "C. Hyde", "Y. Ilieva", "G. Kalicy", "A. Kebede", "B. Kim", "M. Liu", "J. McKisson", "R. Mendez", "P. Nadel-Turonski", "I. Pegg", "D. Romanov", "M. Sarsour", "C. L. da Silva", "J. Stevens", "X. Sun", "S. Syed", "R. Towell", "J. Xie", "Z. W. Zhao", "B. Zihlmann", "C. Zorn" ]
2020-06-06T23:24:06Z
2019-11-13T20:12:49Z
2006.04016
A Multitask Learning Approach for Diacritic Restoration
In many languages like Arabic, diacritics are used to specify pronunciations as well as meanings. Such diacritics are often omitted in written text, increasing the number of possible pronunciations and meanings for a word. This results in a more ambiguous text making computational processing on such text more difficult. Diacritic restoration is the task of restoring missing diacritics in the written text. Most state-of-the-art diacritic restoration models are built on character level information which helps generalize the model to unseen data, but presumably lose useful information at the word level. Thus, to compensate for this loss, we investigate the use of multi-task learning to jointly optimize diacritic restoration with related NLP problems namely word segmentation, part-of-speech tagging, and syntactic diacritization. We use Arabic as a case study since it has sufficient data resources for tasks that we consider in our joint modeling. Our joint models significantly outperform the baselines and are comparable to the state-of-the-art models that are more complex relying on morphological analyzers and/or a lot more data (e.g. dialectal data).
http://arxiv.org/pdf/2006.04016v1
[ "Sawsan Alqahtani", "Ajay Mishra", "Mona Diab" ]
2020-06-07T01:20:40Z
2020-06-07T01:20:40Z
2002.02061
Mitigating Query-Flooding Parameter Duplication Attack on Regression Models with High-Dimensional Gaussian Mechanism
Public intelligent services enabled by machine learning algorithms are vulnerable to model extraction attacks that can steal confidential information of the learning models through public queries. Differential privacy (DP) has been considered a promising technique to mitigate this attack. However, we find that the vulnerability persists when regression models are being protected by current DP solutions. We show that the adversary can launch a query-flooding parameter duplication (QPD) attack to infer the model information by repeated queries. To defend against the QPD attack on logistic and linear regression models, we propose a novel High-Dimensional Gaussian (HDG) mechanism to prevent unauthorized information disclosure without interrupting the intended services. In contrast to prior work, the proposed HDG mechanism will dynamically generate the privacy budget and random noise for different queries and their results to enhance the obfuscation. Besides, for the first time, HDG enables an optimal privacy budget allocation that automatically determines the minimum amount of noise to be added per user-desired privacy level on each dimension. We comprehensively evaluate the performance of HDG using real-world datasets and show that HDG effectively mitigates the QPD attack while satisfying the privacy requirements. We also plan to open-source the relevant code to the community for further research.
http://arxiv.org/pdf/2002.02061v3
[ "Xiaoguang Li", "Hui Li", "Haonan Yan", "Zelei Cheng", "Wenhai Sun", "Hui Zhu" ]
2020-06-07T01:40:09Z
2020-02-06T01:47:08Z
2006.04021
Skill Discovery of Coordination in Multi-agent Reinforcement Learning
Unsupervised skill discovery drives intelligent agents to explore the unknown environment without a task-specific reward signal, and the agents acquire various skills which may be useful when the agents adapt to new tasks. In this paper, we propose "Multi-agent Skill Discovery" (MASD), a method for discovering skills for coordination patterns of multiple agents. The proposed method aims to maximize the mutual information between a latent code Z representing skills and the combination of the states of all agents. Meanwhile, it suppresses the empowerment of Z on the state of any single agent by adversarial training. In other words, it sets an information bottleneck to avoid empowerment degeneracy. First, we show the emergence of various skills on the level of coordination in a general particle multi-agent environment. Second, we reveal that the "bottleneck" prevents skills from collapsing to a single agent and enhances the diversity of learned skills. Finally, we show the pretrained policies have better performance on supervised RL tasks.
http://arxiv.org/pdf/2006.04021v1
[ "Shuncheng He", "Jianzhun Shao", "Xiangyang Ji" ]
2020-06-07T02:04:15Z
2020-06-07T02:04:15Z
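One way to write the objective sketched above (our paraphrase, not necessarily the authors' exact formulation), with skill code $Z$, joint state $s = (s^1, \ldots, s^N)$, and an adversarial penalty suppressing single-agent empowerment, is

$\max_{\pi} \; I(Z; s) - \lambda \max_{i} I(Z; s^i),$

so that the skill must be encoded in the coordination among agents rather than in any single agent's state.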
2006.04033
A Comparative Analysis of E-Scooter and E-Bike Usage Patterns: Findings from the City of Austin, TX
E-scooter-sharing and e-bike-sharing systems are accommodating and easing the increased traffic in dense cities and are expanding considerably. However, these new micro-mobility transportation modes raise numerous operational and safety concerns. This study analyzes e-scooter and dockless e-bike sharing system user behavior. We investigate how average trip speed changes depending on the day of the week and the time of the day. We used a dataset from the city of Austin, TX from December 2018 to May 2019. Our results generally show that the trip average speed for e-bikes ranges between 3.01 and 3.44 m/s, which is higher than that for e-scooters (2.19 to 2.78 m/s). Results also show a similar usage pattern for the average speed of e-bikes and e-scooters throughout the days of the week and a different usage pattern for the average speed of e-bikes and e-scooters over the hours of the day. We found that users tend to ride e-bikes and e-scooters at a slower average speed for recreational purposes compared to when they are ridden for commuting purposes. This first-of-its-kind study is a building block in this field and sheds light on a significant new understanding of this emerging class of shared-road users.
http://arxiv.org/pdf/2006.04033v1
[ "Mohammed Hamad Almannaa", "Huthaifa I. Ashqar", "Mohammed Elhenawy", "Mahmoud Masoud", "Andry Rakotonirainy", "Hesham Rakha" ]
2020-06-07T03:27:44Z
2020-06-07T03:27:44Z
2006.05312
Feature Interaction based Neural Network for Click-Through Rate Prediction
Click-Through Rate (CTR) prediction is one of the most important and challenging tasks in computational advertising and recommendation systems. To build a machine learning system with these data, it is important to properly model the interaction among features. However, many current works calculate the feature interactions in a simple way such as inner product and element-wise product. This paper aims to fully utilize the information between features and improve the performance of deep neural networks in the CTR prediction task. In this paper, we propose a Feature Interaction based Neural Network (FINN) which is able to model feature interaction via a 3-dimensional relation tensor. FINN provides representations for the feature interactions on the bottom layer and the non-linearity of neural networks in modelling higher-order feature interactions. We evaluate our models on CTR prediction tasks compared with classical baselines and show that our deep FINN model outperforms other state-of-the-art deep models such as PNN and DeepFM. Evaluation results demonstrate that feature interaction contains significant information for better CTR prediction. It also indicates that our models can effectively learn the feature interactions, and achieve better performance on real-world datasets.
http://arxiv.org/pdf/2006.05312v1
[ "Dafang Zou", "Leiming Zhang", "Jiafa Mao", "Weiguo Sheng" ]
2020-06-07T03:53:24Z
2020-06-07T03:53:24Z
2006.04037
Reinforcement Learning for Multi-Product Multi-Node Inventory Management in Supply Chains
This paper describes the application of reinforcement learning (RL) to multi-product inventory management in supply chains. The problem description and solution are both adapted from a real-world business solution. The novelty of this problem with respect to supply chain literature is (i) we consider concurrent inventory management of a large number (50 to 1000) of products with shared capacity, (ii) we consider a multi-node supply chain consisting of a warehouse which supplies three stores, (iii) the warehouse, stores, and transportation from warehouse to stores have finite capacities, (iv) warehouse and store replenishment happen at different time scales and with realistic time lags, and (v) demand for products at the stores is stochastic. We describe a novel formulation in a multi-agent (hierarchical) reinforcement learning framework that can be used for parallelised decision-making, and use the advantage actor critic (A2C) algorithm with quantised action spaces to solve the problem. Experiments show that the proposed approach is able to handle a multi-objective reward comprised of maximising product sales and minimising wastage of perishable products.
http://arxiv.org/pdf/2006.04037v1
[ "Nazneen N Sultana", "Hardik Meisheri", "Vinita Baniwal", "Somjit Nath", "Balaraman Ravindran", "Harshad Khadilkar" ]
2020-06-07T04:02:59Z
2020-06-07T04:02:59Z
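An illustrative sketch, under our own assumptions, of what the quantised action spaces mentioned above can look like: each product's continuous replenishment quantity is mapped onto a small grid of discrete levels so that a categorical A2C policy head can be used per product.

```python
import numpy as np

K = 11                                   # quantisation levels per product
max_order = 100.0                        # illustrative per-product capacity (units)
levels = np.linspace(0.0, max_order, K)  # 0, 10, ..., 100

def action_to_order(action_indices):
    """Map a vector of categorical actions (one per product) to quantities."""
    return levels[np.asarray(action_indices)]

orders = action_to_order([0, 5, 10])     # -> [0., 50., 100.]
```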
2006.04041
What needles do sparse neural networks find in nonlinear haystacks
Using a sparsity-inducing penalty in artificial neural networks (ANNs) avoids over-fitting, especially in situations where noise is high and the training set is small in comparison to the number of features. For linear models, such an approach provably also recovers the important features with high probability in certain regimes, for a well-chosen penalty parameter. The typical way of setting the penalty parameter is by splitting the data set and performing cross-validation, which is (1) computationally expensive and (2) not desirable when the data set is already too small to be further split (for example, whole-genome sequence data). In this study, we establish the theoretical foundation to select the penalty parameter without cross-validation, based on bounding, with high probability, the infinity norm of the gradient of the loss function at zero under the zero-feature assumption. Our approach is a generalization of the universal threshold of Donoho and Johnstone (1994) to nonlinear ANN learning. We perform a set of comprehensive Monte Carlo simulations on a simple model, and the numerical results show the effectiveness of the proposed approach.
http://arxiv.org/pdf/2006.04041v1
[ "Sylvain Sardy", "Nicolas W Hengartner", "Nikolai Bonenko", "Yen Ting Lin" ]
2020-06-07T04:46:55Z
2020-06-07T04:46:55Z
2006.04046
On Suboptimality of Least Squares with Application to Estimation of Convex Bodies
We develop a technique for establishing lower bounds on the sample complexity of Least Squares (or, Empirical Risk Minimization) for large classes of functions. As an application, we settle an open problem regarding optimality of Least Squares in estimating a convex set from noisy support function measurements in dimension $d \geq 6$. Specifically, we establish that Least Squares is minimax sub-optimal, and achieves a rate of $\tilde{\Theta}_d(n^{-2/(d-1)})$ whereas the minimax rate is $\Theta_d(n^{-4/(d+3)})$.
http://arxiv.org/pdf/2006.04046v1
[ "Gil Kur", "Alexander Rakhlin", "Adityanand Guntuboyina" ]
2020-06-07T05:19:00Z
2020-06-07T05:19:00Z
2003.10045
Architectural Resilience to Foreground-and-Background Adversarial Noise
Adversarial attacks in the form of imperceptible perturbations of normal images have been extensively studied, and for every new defense methodology created, multiple adversarial attacks are found to counteract it. In particular, a popular style of attack, exemplified in recent years by DeepFool and Carlini-Wagner, relies solely on white-box scenarios in which full access to the predictive model and its weights is required. In this work, we instead propose distinct model-agnostic benchmark perturbations of images in order to investigate the resilience and robustness of different network architectures. Results empirically determine that increasing depth within most types of Convolutional Neural Networks typically improves model resilience towards general attacks, with the improvement steadily decreasing as the model becomes deeper. Additionally, we find that a notable difference in adversarial robustness exists between residual architectures with skip connections and non-residual architectures of similar complexity. Our findings provide direction for future understanding of residual connections and depth on network robustness.
http://arxiv.org/pdf/2003.10045v2
[ "Carl Cheng", "Evan Hu" ]
2020-06-07T05:28:09Z
2020-03-23T01:38:20Z
2006.04050
Growing Together: Modeling Human Language Learning With n-Best Multi-Checkpoint Machine Translation
We describe our submission to the 2020 Duolingo Shared Task on Simultaneous Translation And Paraphrase for Language Education (STAPLE) (Mayhew et al., 2020). We view MT models at various training stages (i.e., checkpoints) as human learners at different levels. Hence, we employ an ensemble of multi-checkpoints from the same model to generate translation sequences with various levels of fluency. From each checkpoint, for our best model, we sample n-best sequences (n=10) with a beam width of 100. We achieve 37.57 macro F1 with a 6-checkpoint model ensemble on the official English to Portuguese shared task test data, outperforming a baseline Amazon translation system of 21.30 macro F1 and ultimately demonstrating the utility of our intuitive method.
http://arxiv.org/pdf/2006.04050v1
[ "El Moatez Billah Nagoudi", "Muhammad Abdul-Mageed", "Hasan Cavusoglu" ]
2020-06-07T05:46:15Z
2020-06-07T05:46:15Z
2002.11385
When Do Drivers Concentrate? Attention-based Driver Behavior Modeling With Deep Reinforcement Learning
Driver distraction is a significant risk to driving safety. Apart from the spatial domain, research on temporal inattention is also necessary. This paper aims to figure out the pattern of drivers' temporal attention allocation. In this paper, we propose an actor-critic method - the Attention-based Twin Delayed Deep Deterministic policy gradient (ATD3) algorithm - to approximate a driver's action according to observations and measure the driver's attention allocation for consecutive time steps in a car-following model. Considering reaction time, we construct the attention mechanism in the actor network to capture temporal dependencies of consecutive observations. In the critic network, we employ the Twin Delayed Deep Deterministic policy gradient (TD3) algorithm to address the overestimated value estimates that persist in actor-critic algorithms. We conduct experiments on real-world vehicle trajectory datasets and show that the accuracy of our proposed approach outperforms seven baseline algorithms. Moreover, the results reveal that the attention of drivers in smoothly moving vehicles is uniformly distributed over previous observations, while they shift their attention to recent observations when sudden decreases of relative speeds occur. This study is the first contribution on drivers' temporal attention and provides scientific support for safety measures in transportation systems from the perspective of data mining.
http://arxiv.org/pdf/2002.11385v2
[ "Xingbo Fu", "Feng Gao", "Jiang Wu" ]
2020-06-07T06:09:55Z
2020-02-26T09:56:36Z
2006.04059
Soft Gradient Boosting Machine
Gradient Boosting Machine (GBM) has proven to be a successful function approximator and has been widely used in a variety of areas. However, since the training procedure of each base learner has to follow a sequential order, it is infeasible to parallelize the training process among base learners for speed-up. In addition, under online or incremental learning settings, GBMs achieve sub-optimal performance due to the fact that previously trained base learners cannot adapt to the environment once trained. In this work, we propose the soft Gradient Boosting Machine (sGBM), which wires multiple differentiable base learners together; by injecting both local and global objectives inspired by gradient boosting, all base learners can then be jointly optimized with linear speed-up. When using differentiable soft decision trees as base learners, such a device can be regarded as an alternative version of (hard) gradient boosting decision trees with extra benefits. Experimental results show that sGBM enjoys much higher time efficiency with better accuracy, given the same base learner, in both on-line and off-line settings.
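A rough way to see the "wiring" described above is a chain of differentiable learners trained in one backward pass: each stage carries a local objective (fitting the residual of the running prediction, i.e. the negative gradient of the squared loss) while the summed output carries the global objective. The MLP base learner and equal loss weighting below are assumptions for illustration, not the paper's exact design.

```python
# Hedged sketch of jointly optimizing soft "boosting" stages with local + global losses.
import torch
import torch.nn as nn

class SoftGBM(nn.Module):
    def __init__(self, in_dim, n_stages=5):
        super().__init__()
        self.stages = nn.ModuleList(
            nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU(), nn.Linear(32, 1))
            for _ in range(n_stages))

    def forward(self, x, y=None):
        pred, local_loss = 0.0, 0.0
        for stage in self.stages:
            out = stage(x)
            if y is not None:
                residual = (y - pred).detach()       # negative gradient of squared loss
                local_loss = local_loss + ((out - residual) ** 2).mean()
            pred = pred + out                        # boosting-style additive prediction
        return pred, local_loss

model = SoftGBM(in_dim=10)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(64, 10), torch.randn(64, 1)
pred, local = model(x, y)
loss = ((pred - y) ** 2).mean() + local              # global + local objectives, one backward
loss.backward(); opt.step()
```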
http://arxiv.org/pdf/2006.04059v1
[ "Ji Feng", "Yi-Xuan Xu", "Yuan Jiang", "Zhi-Hua Zhou" ]
2020-06-07T06:43:23Z
2020-06-07T06:43:23Z
2006.04061
Dual Policy Distillation
Policy distillation, which transfers a teacher policy to a student policy, has achieved great success in challenging tasks of deep reinforcement learning. This teacher-student framework requires a well-trained teacher model, which is computationally expensive. Moreover, the performance of the student model could be limited by the teacher model if the teacher model is not optimal. In light of collaborative learning, we study the feasibility of involving joint intellectual efforts from diverse perspectives of student models. In this work, we introduce dual policy distillation (DPD), a student-student framework in which two learners operate in the same environment to explore different perspectives of the environment and extract knowledge from each other to enhance their learning. The key challenge in developing this dual learning framework is to identify the beneficial knowledge from the peer learner for contemporary learning-based reinforcement learning algorithms, since it is unclear whether the knowledge distilled from an imperfect and noisy peer learner would be helpful. To address the challenge, we theoretically justify that distilling knowledge from a peer learner leads to policy improvement and propose a disadvantageous distillation strategy based on the theoretical results. The conducted experiments on several continuous control tasks show that the proposed framework achieves superior performance with a learning-based agent and function approximation without the use of expensive teacher models.
http://arxiv.org/pdf/2006.04061v1
[ "Kwei-Herng Lai", "Daochen Zha", "Yuening Li", "Xia Hu" ]
2020-06-07T06:49:47Z
2020-06-07T06:49:47Z
1905.03125
Batch-Size Independent Regret Bounds for the Combinatorial Multi-Armed Bandit Problem
We consider the combinatorial multi-armed bandit (CMAB) problem, where the reward function is nonlinear. In this setting, the agent chooses a batch of arms on each round and receives feedback from each arm of the batch. The reward that the agent aims to maximize is a function of the selected arms and their expectations. In many applications, the reward function is highly nonlinear, and the performance of existing algorithms relies on a global Lipschitz constant to encapsulate the function's nonlinearity. This may lead to loose regret bounds, since by itself, a large gradient does not necessarily cause a large regret, but only in regions where the uncertainty in the reward's parameters is high. To overcome this problem, we introduce a new smoothness criterion, which we term \emph{Gini-weighted smoothness}, that takes into account both the nonlinearity of the reward and concentration properties of the arms. We show that a linear dependence of the regret on the batch size in existing algorithms can be replaced by this smoothness parameter. This, in turn, leads to much tighter regret bounds when the smoothness parameter is batch-size independent. For example, in the probabilistic maximum coverage (PMC) problem, which has many applications, including influence maximization, diverse recommendations and more, we achieve dramatic improvements in the upper bounds. We also prove matching lower bounds for the PMC problem and show that our algorithm is tight, up to a logarithmic factor in the problem's parameters.
http://arxiv.org/pdf/1905.03125v4
[ "Nadav Merlis", "Shie Mannor" ]
2020-06-07T07:22:57Z
2019-05-08T14:58:24Z
2006.04069
Fusion Recurrent Neural Network
Considering deep sequence learning for practical applications, two representative RNNs - LSTM and GRU - may come to mind first. Nevertheless, is there no chance for other RNNs? Will there be a better RNN in the future? In this work, we propose a novel, succinct and promising RNN - the Fusion Recurrent Neural Network (Fusion RNN). Fusion RNN is composed of a Fusion module and a Transport module at every time step. The Fusion module realizes multi-round fusion of the input and the hidden state vector. The Transport module, which mainly resembles a simple recurrent network, calculates the hidden state and passes it to the next time step. Furthermore, in order to evaluate Fusion RNN's sequence feature extraction capability, we choose a representative data mining task for sequence data, estimated time of arrival (ETA), and present a novel model based on Fusion RNN. We contrast our method with other RNN variants for ETA on massive vehicle travel data from DiDi Chuxing. The results demonstrate that for ETA, Fusion RNN is comparable to the state-of-the-art LSTM and GRU, which are more complicated than Fusion RNN.
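The abstract names the two modules but not their equations, so the cell below is a speculative sketch: a Fusion module that mixes input and hidden state over several gated rounds, and a Transport module that produces the next hidden state from the fused pair. The gating choices and dimensions are assumptions.

```python
# Interpretive sketch of a Fusion-RNN-style cell (fusion rounds + transport update).
import torch
import torch.nn as nn

class FusionRNNCell(nn.Module):
    def __init__(self, in_dim, hid_dim, rounds=2):
        super().__init__()
        self.x2h = nn.Linear(in_dim, hid_dim)
        self.fuse_x = nn.ModuleList(nn.Linear(hid_dim, hid_dim) for _ in range(rounds))
        self.fuse_h = nn.ModuleList(nn.Linear(hid_dim, hid_dim) for _ in range(rounds))
        self.transport = nn.Linear(2 * hid_dim, hid_dim)

    def forward(self, x, h):
        x = torch.tanh(self.x2h(x))
        for fx, fh in zip(self.fuse_x, self.fuse_h):   # multi-round fusion of x and h
            x, h = torch.sigmoid(fh(h)) * x, torch.sigmoid(fx(x)) * h
        return torch.tanh(self.transport(torch.cat([x, h], dim=-1)))

cell = FusionRNNCell(8, 16)
h = torch.zeros(4, 16)
for t in range(10):                                    # unroll over a toy sequence
    h = cell(torch.randn(4, 8), h)
```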
http://arxiv.org/pdf/2006.04069v1
[ "Yiwen Sun", "Yulu Wang", "Kun Fu", "Zheng Wang", "Changshui Zhang", "Jieping Ye" ]
2020-06-07T07:39:49Z
2020-06-07T07:39:49Z
2006.04072
Implications of Human Irrationality for Reinforcement Learning
Recent work in the behavioural sciences has begun to overturn the long-held belief that human decision making is irrational, suboptimal and subject to biases. This turn to the rational suggests that human decision making may be a better source of ideas for constraining how machine learning problems are defined than would otherwise be the case. One promising idea concerns human decision making that is dependent on apparently irrelevant aspects of the choice context. Previous work has shown that by taking into account choice context and making relational observations, people can maximize expected value. Other work has shown that Partially observable Markov decision processes (POMDPs) are a useful way to formulate human-like decision problems. Here, we propose a novel POMDP model for contextual choice tasks and show that, despite the apparent irrationalities, a reinforcement learner can take advantage of the way that humans make decisions. We suggest that human irrationalities may offer a productive source of inspiration for improving the design of AI architectures and machine learning methods.
http://arxiv.org/pdf/2006.04072v1
[ "Haiyang Chen", "Hyung Jin Chang", "Andrew Howes" ]
2020-06-07T07:44:53Z
2020-06-07T07:44:53Z
2002.02917
Data augmentation with Mobius transformations
Data augmentation has led to substantial improvements in the performance and generalization of deep models, and remains a highly adaptable method for evolving model architectures and varying amounts of data---in particular, extremely scarce amounts of available training data. In this paper, we present a novel method of applying Mobius transformations to augment input images during training. Mobius transformations are bijective conformal maps that generalize image translation to operate over complex inversion in pixel space. As a result, Mobius transformations can operate on the sample level and preserve data labels. We show that the inclusion of Mobius transformations during training enables improved generalization over prior sample-level data augmentation techniques such as cutout and standard crop-and-flip transformations, most notably in low data regimes.
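For concreteness, a Mobius transform f(z) = (az + b)/(cz + d) can be applied to an image by mapping each output pixel through the inverse transform f^{-1}(w) = (dw - b)/(-cw + a) and sampling the input. The sketch below uses nearest-neighbour sampling and an arbitrary coefficient choice; the paper constrains the parameters so the augmentation stays label-preserving, which this toy version does not enforce.

```python
# Illustrative Mobius-transformation augmentation via inverse mapping.
import numpy as np

def mobius_augment(img, a, b, c, d):
    H, W = img.shape[:2]
    ys, xs = np.mgrid[0:H, 0:W]
    w = (xs - W / 2) + 1j * (ys - H / 2)          # output pixel grid as complex plane
    z = (d * w - b) / (-c * w + a)                # inverse Mobius map back to the source
    sx = np.clip(np.round(z.real + W / 2), 0, W - 1).astype(int)
    sy = np.clip(np.round(z.imag + H / 2), 0, H - 1).astype(int)
    return img[sy, sx]                            # nearest-neighbour resampling

img = np.random.rand(64, 64, 3)
out = mobius_augment(img, a=1.0, b=5 + 5j, c=0.002j, d=1.0)   # ad - bc must be nonzero
```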
http://arxiv.org/pdf/2002.02917v2
[ "Sharon Zhou", "Jiequan Zhang", "Hang Jiang", "Torbjorn Lundh", "Andrew Y. Ng" ]
2020-06-07T08:00:04Z
2020-02-07T17:45:39Z
2006.04077
FMA-ETA: Estimating Travel Time Entirely Based on FFN With Attention
Estimated time of arrival (ETA) is one of the most important services in intelligent transportation systems and has become a challenging spatial-temporal (ST) data mining task in recent years. Nowadays, deep learning based methods, specifically recurrent neural network (RNN) based ones, are adapted to model the ST patterns from massive data for ETA and have become the state-of-the-art. However, RNNs suffer from slow training and inference speed, as their structure is unfriendly to parallel computing. To solve this problem, we propose a novel, brief and effective framework mainly based on a feed-forward network (FFN) for ETA, FFN with Multi-factor self-Attention (FMA-ETA). The novel multi-factor self-attention mechanism is proposed to deal with different category features and aggregate the information purposefully. Extensive experimental results on the real-world vehicle travel dataset show FMA-ETA is competitive with state-of-the-art methods in terms of prediction accuracy, with significantly better inference speed.
http://arxiv.org/pdf/2006.04077v1
[ "Yiwen Sun", "Yulu Wang", "Kun Fu", "Zheng Wang", "Ziang Yan", "Changshui Zhang", "Jieping Ye" ]
2020-06-07T08:10:47Z
2020-06-07T08:10:47Z
2002.02753
Translating Diffusion, Wavelets, and Regularisation into Residual Networks
Convolutional neural networks (CNNs) often perform well, but their stability is poorly understood. To address this problem, we consider the simple prototypical problem of signal denoising, where classical approaches such as nonlinear diffusion, wavelet-based methods and regularisation offer provable stability guarantees. To transfer such guarantees to CNNs, we interpret numerical approximations of these classical methods as a specific residual network (ResNet) architecture. This leads to a dictionary which allows one to translate diffusivities, shrinkage functions, and regularisers into activation functions, and enables direct communication between the four research communities. On the CNN side, it not only inspires new families of nonmonotone activation functions, but also introduces intrinsically stable architectures for an arbitrary number of layers.
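To make the translation concrete, the sketch below writes one explicit nonlinear diffusion step as a residual block, u <- u - tau * K^T phi(K u), where K is a discrete derivative convolution and phi is a diffusivity-derived nonlinearity playing the role of the activation. The Perona-Malik-style phi and the step size are assumptions; the paper's dictionary covers a broader family.

```python
# One diffusion step as a residual block; stacking steps corresponds to stacking layers.
import torch
import torch.nn.functional as F

def diffusion_residual_step(u, tau=0.2, lam=1.0):
    k = torch.tensor([[[-1.0, 1.0]]])              # forward-difference kernel K
    du = F.conv1d(u, k)                            # K u: discrete signal gradient
    flux = du / (1.0 + (du / lam) ** 2)            # phi(K u): shrinkage nonlinearity
    return u - tau * F.conv_transpose1d(flux, k)   # residual update with adjoint K^T

u = torch.randn(1, 1, 64).cumsum(dim=-1)           # noisy piecewise-smooth 1-D signal
for _ in range(50):
    u = diffusion_residual_step(u)                 # progressively denoised signal
```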
http://arxiv.org/pdf/2002.02753v3
[ "Tobias Alt", "Joachim Weickert", "Pascal Peter" ]
2020-06-07T08:51:13Z
2020-02-07T13:07:34Z
2006.04094
Average Sensitivity of Spectral Clustering
Spectral clustering is one of the most popular clustering methods for finding clusters in a graph, and it has found many applications in data mining. However, the input graph in those applications may have many missing edges due to errors in measurement, withholding for privacy reasons, or arbitrariness in data conversion. To make reliable and efficient decisions based on spectral clustering, we assess the stability of spectral clustering against edge perturbations in the input graph using the notion of average sensitivity, which is the expected size of the symmetric difference of the output clusters before and after we randomly remove edges. We first prove that the average sensitivity of spectral clustering is proportional to $\lambda_2/\lambda_3^2$, where $\lambda_i$ is the $i$-th smallest eigenvalue of the (normalized) Laplacian. We also prove an analogous bound for $k$-way spectral clustering, which partitions the graph into $k$ clusters. Then, we empirically confirm our theoretical bounds by conducting experiments on synthetic and real networks. Our results suggest that spectral clustering is stable against edge perturbations when there is a cluster structure in the input graph.
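The quantity being bounded is easy to probe empirically: delete a random edge, recluster, and count the symmetric difference. The toy experiment below does this for 2-way clustering via the sign of the Fiedler vector on a planted 2-block graph; both the graph model and the estimator are simplifying assumptions rather than the paper's exact setup.

```python
# Empirical average sensitivity of 2-way spectral clustering vs. lambda_2/lambda_3^2.
import numpy as np

rng = np.random.default_rng(1)

def two_way_spectral(A):
    d = np.maximum(A.sum(1), 1e-12)
    Ln = (np.diag(d) - A) / np.sqrt(np.outer(d, d))   # normalized Laplacian
    vals, vecs = np.linalg.eigh(Ln)
    return vecs[:, 1] >= 0, vals                      # sign of the Fiedler vector

n = 40                                                # planted 2-block graph
A = (rng.random((n, n)) < 0.1).astype(float)          # sparse background edges
A[:n//2, :n//2] = rng.random((n//2, n//2)) < 0.5      # dense intra-block edges
A[n//2:, n//2:] = rng.random((n//2, n//2)) < 0.5
A = np.triu(A, 1); A = A + A.T

base, vals = two_way_spectral(A)
edges = np.argwhere(np.triu(A, 1) > 0)
diffs = []
for i, j in edges[rng.choice(len(edges), 30, replace=False)]:
    B = A.copy(); B[i, j] = B[j, i] = 0.0             # remove one edge
    pert, _ = two_way_spectral(B)
    diffs.append(min((base != pert).sum(), (base == pert).sum()))  # up to label flip
print("avg sensitivity:", np.mean(diffs), "lambda2/lambda3^2:", vals[1] / vals[2] ** 2)
```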
http://arxiv.org/pdf/2006.04094v1
[ "Pan Peng", "Yuichi Yoshida" ]
2020-06-07T09:14:44Z
2020-06-07T09:14:44Z
2006.04096
Robust Learning Through Cross-Task Consistency
Visual perception entails solving a wide set of tasks, e.g., object detection, depth estimation, etc. The predictions made for multiple tasks from the same image are not independent, and therefore, are expected to be consistent. We propose a broadly applicable and fully computational method for augmenting learning with Cross-Task Consistency. The proposed formulation is based on inference-path invariance over a graph of arbitrary tasks. We observe that learning with cross-task consistency leads to more accurate predictions and better generalization to out-of-distribution inputs. This framework also leads to an informative unsupervised quantity, called Consistency Energy, based on measuring the intrinsic consistency of the system. Consistency Energy correlates well with the supervised error (r=0.67), thus it can be employed as an unsupervised confidence metric as well as for detection of out-of-distribution inputs (ROC-AUC=0.95). The evaluations are performed on multiple datasets, including Taskonomy, Replica, CocoDoom, and ApolloScape, and they benchmark cross-task consistency versus various baselines including conventional multi-task learning, cycle consistency, and analytical consistency.
http://arxiv.org/pdf/2006.04096v1
[ "Amir Zamir", "Alexander Sax", "Teresa Yeo", "Oğuzhan Kar", "Nikhil Cheerla", "Rohan Suri", "Zhangjie Cao", "Jitendra Malik", "Leonidas Guibas" ]
2020-06-07T09:24:33Z
2020-06-07T09:24:33Z
2006.04097
Optimally Combining Classifiers for Semi-Supervised Learning
This paper considers semi-supervised learning for tabular data. It is widely known that XGBoost, based on tree models, works well on heterogeneous features, while the transductive support vector machine can exploit the low-density separation assumption. However, little work has been done to combine them for end-to-end semi-supervised learning. In this paper, we find that these two methods have complementary properties and larger diversity, which motivates us to propose a new semi-supervised learning method that is able to adaptively combine the strengths of XGBoost and the transductive support vector machine. Instead of the majority vote rule, an optimization problem in terms of ensemble weights is established, which helps to obtain more accurate pseudo labels for unlabeled data. The experimental results on the UCI data sets and a real commercial data set demonstrate the superior classification performance of our method over five state-of-the-art algorithms, improving test accuracy by about $3\%-4\%$. Partial code can be found at https://github.com/hav-cam-mit/CTO.
http://arxiv.org/pdf/2006.04097v1
[ "Zhiguo Wang", "Liusha Yang", "Feng Yin", "Ke Lin", "Qingjiang Shi", "Zhi-Quan Luo" ]
2020-06-07T09:28:34Z
2020-06-07T09:28:34Z
1909.07729
K-TanH: Efficient TanH For Deep Learning
We propose K-TanH, a novel, highly accurate, hardware efficient approximation of the popular activation function TanH for Deep Learning. K-TanH consists of parameterized low-precision integer operations, such as shift and add/subtract (no floating point operation needed), where parameters are stored in very small look-up tables that can fit in CPU registers. K-TanH can work on various numerical formats, such as Float32 and BFloat16. High quality approximations to other activation functions, e.g., Sigmoid, Swish and GELU, can be derived from K-TanH. Our AVX512 implementation of K-TanH demonstrates $>5\times$ speedup over Intel SVML, and it is consistently superior in efficiency to other approximations that use floating point arithmetic. Finally, we achieve state-of-the-art BLEU score and convergence results for training the language translation model GNMT on WMT16 data sets with approximate TanH obtained via K-TanH on BFloat16 inputs.
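The flavour of the approach can be imitated in a few lines: split the input range into intervals and store, per interval, a power-of-two slope (a shift in integer arithmetic) and an additive constant. The per-bin least-squares table construction below is my illustration in float arithmetic, not the paper's actual algorithm or its BFloat16 bit manipulation.

```python
# Simplified K-TanH-flavoured lookup approximation: y ~= x * 2^-s + b per |x| bin.
import numpy as np

bins = np.linspace(0.0, 4.0, 17)                   # magnitude intervals for |x|
table = []
for lo, hi in zip(bins[:-1], bins[1:]):
    xs = np.linspace(lo, hi, 64)
    best = None
    for s in range(8):                             # slope restricted to 2^-s (a shift)
        a = 2.0 ** -s
        b = (np.tanh(xs) - a * xs).mean()          # additive constant for this bin
        err = np.mean((np.tanh(xs) - (a * xs + b)) ** 2)
        if best is None or err < best[0]:
            best = (err, a, b)
    table.append(best[1:])

def k_tanh_like(x):
    idx = np.clip(np.digitize(np.abs(x), bins) - 1, 0, len(table) - 1)
    a = np.array([table[i][0] for i in idx])
    b = np.array([table[i][1] for i in idx])
    return np.sign(x) * (a * np.abs(x) + b)        # tanh is odd, so fold in the sign

x = np.linspace(-4.0, 4.0, 9)
print(np.abs(k_tanh_like(x) - np.tanh(x)).max())   # small approximation error
```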
http://arxiv.org/pdf/1909.07729v3
[ "Abhisek Kundu", "Alex Heinecke", "Dhiraj Kalamkar", "Sudarshan Srinivasan", "Eric C. Qin", "Naveen K. Mellempudi", "Dipankar Das", "Kunal Banerjee", "Bharat Kaul", "Pradeep Dubey" ]
2020-06-07T10:02:50Z
2019-09-17T11:43:23Z
2006.04117
Sharp Thresholds of the Information Cascade Fragility Under a Mismatched Model
We analyze a sequential decision making model in which decision makers (or, players) take their decisions based on their own private information as well as the actions of previous decision makers. Such decision making processes often lead to what is known as the \emph{information cascade} or \emph{herding} phenomenon. Specifically, a cascade develops when it seems rational for some players to abandon their own private information and imitate the actions of earlier players. The risk, however, is that if the initial decisions were wrong, then the whole cascade will be wrong. Nonetheless, information cascades are known to be fragile: there exists a sequence of \emph{revealing} probabilities $\{p_{\ell}\}_{\ell\geq1}$, such that if with probability $p_{\ell}$ player $\ell$ ignores the decisions of previous players and relies on his private information only, then wrong cascades can be avoided. Previous related papers which study the fragility of information cascades always assume that the revealing probabilities are known to all players perfectly, which might be unrealistic in practice. Accordingly, in this paper we study a mismatch model where players believe that the revealing probabilities are $\{q_\ell\}_{\ell\in\mathbb{N}}$ when they truly are $\{p_\ell\}_{\ell\in\mathbb{N}}$, and study the effect of this mismatch on information cascades. We consider both adversarial and probabilistic sequential decision making models, and derive closed-form expressions for the optimal learning rates at which the error probability associated with a certain decision maker goes to zero. We prove several novel phase transitions in the behaviour of the asymptotic learning rate.
http://arxiv.org/pdf/2006.04117v1
[ "Wasim Huleihel", "Ofer Shayevitz" ]
2020-06-07T11:15:08Z
2020-06-07T11:15:08Z
2006.04125
BUDS: Balancing Utility and Differential Privacy by Shuffling
Balancing utility and differential privacy by shuffling, or \textit{BUDS}, is an approach towards crowd-sourced, statistical databases, with a strong balance of privacy and utility using differential privacy theory. Here, a novel algorithm is proposed using one-hot encoding and iterative shuffling with loss estimation and risk minimization techniques to balance both utility and privacy. In this work, after collecting one-hot encoded data from different sources and clients, a novel attribute shuffling technique using iterative shuffling (based on the query asked by the analyst), together with loss estimation with an update function and risk minimization, produces a utility- and privacy-balanced differentially private report. In empirical tests of balanced utility and privacy, BUDS produces $\epsilon = 0.02$, which is a very promising result. Our algorithm maintains a privacy bound of $\epsilon = \ln[t/((n_1 - 1)^S)]$ and a loss bound of $c' \bigg|e^{\ln[t/((n_1 - 1)^S)]} - 1\bigg|$.
http://arxiv.org/pdf/2006.04125v1
[ "Poushali Sengupta", "Sudipta Paul", "Subhankar Mishra" ]
2020-06-07T11:39:13Z
2020-06-07T11:39:13Z
2006.04127
ADMP: An Adversarial Double Masks Based Pruning Framework For Unsupervised Cross-Domain Compression
Despite the recent progress of network pruning, directly applying it to Internet of Things (IoT) applications still faces two challenges, i.e., the distribution divergence between end and cloud data and the absence of data labels on end devices. One straightforward solution is to combine the unsupervised domain adaptation (UDA) technique with pruning. For example, the model is first pruned on the cloud and then transferred from cloud to end by UDA. However, such a naive combination suffers high performance degradation. Hence, this work proposes Adversarial Double Masks based Pruning (ADMP) for such cross-domain compression. In ADMP, we construct a knowledge distillation framework not only to produce pseudo labels but also to provide a measurement of domain divergence as the output difference between the full-size teacher and the pruned student. Unlike existing mask-based pruning works, two adversarial masks, i.e., soft and hard masks, are adopted in ADMP. So ADMP can prune the model effectively while still allowing the model to extract strong domain-invariant features and robust classification boundaries. During training, the Alternating Direction Method of Multipliers is used to overcome the binary constraint of {0,1}-masks. On the Office31 and ImageCLEF-DA datasets, the proposed ADMP can prune 60% of channels with only 0.2% and 0.3% average accuracy loss respectively. Compared with the state of the art, we achieve about 1.63x parameter reduction and 4.1% and 5.1% accuracy improvement.
http://arxiv.org/pdf/2006.04127v1
[ "Xiaoyu Feng", "Zhuqing Yuan", "Guijin Wang", "Yongpan Liu" ]
2020-06-07T11:44:43Z
2020-06-07T11:44:43Z
2005.06216
DAugNet: Unsupervised, Multi-source, Multi-target, and Life-long Domain Adaptation for Semantic Segmentation of Satellite Images
The domain adaptation of satellite images has recently gained increasing attention to overcome the limited generalization abilities of machine learning models when segmenting large-scale satellite images. Most of the existing approaches seek to adapt the model from one domain to another. However, such a single-source and single-target setting prevents the methods from being scalable solutions, since nowadays multiple source and target domains having different data distributions are usually available. Besides, the continuous proliferation of satellite images necessitates the classifiers to adapt to continuously increasing data. We propose a novel approach, coined DAugNet, for unsupervised, multi-source, multi-target, and life-long domain adaptation of satellite images. It consists of a classifier and a data augmentor. The data augmentor, which is a shallow network, is able to perform style transfer between multiple satellite images in an unsupervised manner, even when new data are added over time. In each training iteration, it provides the classifier with diversified data, which makes the classifier robust to the large data distribution differences between the domains. Our extensive experiments prove that DAugNet generalizes to new geographic locations significantly better than the existing approaches.
http://arxiv.org/abs/2005.06216v2
[ "Onur Tasar", "Alain Giros", "Yuliya Tarabalka", "Pierre Alliez", "Sébastien Clerc" ]
2020-06-07T11:48:20Z
2020-05-13T09:11:22Z
2005.00329
CDL: Curriculum Dual Learning for Emotion-Controllable Response Generation
Emotion-controllable response generation is an attractive and valuable task that aims to make open-domain conversations more empathetic and engaging. Existing methods mainly enhance the emotion expression by adding regularization terms to the standard cross-entropy loss and thus influence the training process. However, due to the lack of further consideration of content consistency, the common problem of response generation tasks, namely safe responses, is intensified. Besides, query emotions that can help model the relationship between query and response are simply ignored in previous models, which would further hurt coherence. To alleviate these problems, we propose a novel framework named Curriculum Dual Learning (CDL) which extends emotion-controllable response generation to a dual task that generates emotional responses and emotional queries alternately. CDL utilizes two rewards focusing on emotion and content to improve the duality. Additionally, it applies curriculum learning to gradually generate high-quality responses based on the difficulty of expressing various emotions. Experimental results show that CDL significantly outperforms the baselines in terms of coherence, diversity, and relation to emotion factors.
http://arxiv.org/pdf/2005.00329v5
[ "Lei Shen", "Yang Feng" ]
2020-06-07T12:54:28Z
2020-05-01T12:16:44Z
2006.05308
Ensemble Learning with Statistical and Structural Models
Statistical and structural modeling represent two distinct approaches to data analysis. In this paper, we propose a set of novel methods for combining statistical and structural models for improved prediction and causal inference. Our first proposed estimator has the double robustness property in that it only requires the correct specification of either the statistical or the structural model. Our second proposed estimator is a weighted ensemble that has the ability to outperform both models when they are both misspecified. Experiments demonstrate the potential of our estimators in various settings, including first-price auctions, dynamic models of entry and exit, and demand estimation with instrumental variables.
http://arxiv.org/pdf/2006.05308v1
[ "Jiaming Mao", "Jingzhi Xu" ]
2020-06-07T13:36:50Z
2020-06-07T13:36:50Z
1902.06965
DEDPUL: Difference-of-Estimated-Densities-based Positive-Unlabeled Learning
Positive-Unlabeled (PU) learning is an analog of supervised binary classification for the case when only the positive sample is clean, while the negative sample is contaminated with latent instances of the positive class and hence can be considered an unlabeled mixture. The objectives are to classify the unlabeled sample and train an unbiased PN classifier, which generally requires identifying the mixing proportions of positives and negatives first. Recently, the unbiased risk estimation framework has achieved state-of-the-art performance in PU learning. This approach, however, exhibits two major bottlenecks. First, the mixing proportions are assumed to be identified, i.e., known in the domain or estimated with additional methods. Second, the approach relies on the classifier being a neural network. In this paper, we propose DEDPUL, a method that solves PU learning without the aforementioned issues. The mechanism behind DEDPUL is to apply a computationally cheap post-processing procedure to the predictions of any classifier trained to distinguish positive and unlabeled data. Instead of assuming the proportions to be identified, DEDPUL estimates them alongside classifying the unlabeled sample. Experiments show that DEDPUL outperforms the current state-of-the-art in both proportion estimation and PU classification.
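The "difference of estimated densities" idea can be sketched compactly: train any P-vs-U classifier, estimate the densities of its scores on the positive and unlabeled samples, and use the ratio f_u(s)/f_p(s), which is lower-bounded by the positive proportion alpha, to estimate alpha and per-point posteriors. The Gaussian KDE and the quantile-based alpha proxy below are my simplifications, not the paper's exact estimator.

```python
# Loose DEDPUL-style post-processing sketch on a synthetic PU problem.
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
pos = rng.normal(2, 1, (500, 1))                       # clean positive sample
unl = np.vstack([rng.normal(2, 1, (300, 1)),           # unlabeled mixture: positives...
                 rng.normal(-1, 1, (700, 1))])         # ...and latent negatives

X = np.vstack([pos, unl])
y = np.r_[np.ones(len(pos)), np.zeros(len(unl))]
clf = LogisticRegression().fit(X, y)                   # any P-vs-U classifier works
s_pos = clf.predict_proba(pos)[:, 1]
s_unl = clf.predict_proba(unl)[:, 1]

f_pos, f_unl = gaussian_kde(s_pos), gaussian_kde(s_unl)
ratio = f_unl(s_unl) / np.maximum(f_pos(s_unl), 1e-12) # f_u(s) / f_p(s) >= alpha
alpha = np.quantile(ratio, 0.05)                       # robust proxy for inf_s ratio
posterior = np.clip(alpha / ratio, 0.0, 1.0)           # P(positive | unlabeled point)
print(f"estimated alpha ~= {alpha:.2f} (true value 0.3)")
```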
http://arxiv.org/pdf/1902.06965v5
[ "Dmitry Ivanov" ]
2020-06-07T13:40:20Z
2019-02-19T09:30:53Z
2006.04154
VQVC+: One-Shot Voice Conversion by Vector Quantization and U-Net architecture
Voice conversion (VC) is a task that transforms the source speaker's timbre, accent, and tones in audio into another speaker's while preserving the linguistic content. It remains challenging, especially in the one-shot setting. Auto-encoder-based VC methods disentangle the speaker and the content in input speech without being given the speaker's identity, so these methods can further generalize to unseen speakers. The disentangling capability is achieved by vector quantization (VQ), adversarial training, or instance normalization (IN). However, imperfect disentanglement may harm the quality of output speech. In this work, to further improve audio quality, we use the U-Net architecture within an auto-encoder-based VC system. We find that to leverage the U-Net architecture, a strong information bottleneck is necessary. The VQ-based method, which quantizes the latent vectors, can serve this purpose. Objective and subjective evaluations show that the proposed method performs well in both audio naturalness and speaker similarity.
http://arxiv.org/pdf/2006.04154v1
[ "Da-Yi Wu", "Yen-Hao Chen", "Hung-Yi Lee" ]
2020-06-07T14:01:16Z
2020-06-07T14:01:16Z
2006.04156
Analogy as Nonparametric Bayesian Inference over Relational Systems
Much of human learning and inference can be framed within the computational problem of relational generalization. In this project, we propose a Bayesian model that generalizes relational knowledge to novel environments by analogically weighting predictions from previously encountered relational structures. First, we show that this learner outperforms a naive, theory-based learner on relational data derived from random- and Wikipedia-based systems when experience with the environment is small. Next, we show how our formalization of analogical similarity translates to the selection and weighting of analogies. Finally, we combine the analogy- and theory-based learners in a single nonparametric Bayesian model, and show that optimal relational generalization transitions from relying on analogies to building a theory of the novel system with increasing experience in it. Beyond predicting unobserved interactions better than either baseline, this formalization gives a computational-level perspective on the formation and abstraction of analogies themselves.
http://arxiv.org/pdf/2006.04156v1
[ "Ruairidh M. Battleday", "Thomas L. Griffiths" ]
2020-06-07T14:07:46Z
2020-06-07T14:07:46Z
2006.04164
Single-Layer Graph Convolutional Networks For Recommendation
Graph Convolutional Networks (GCNs) and their variants have received significant attention and achieved state-of-the-art performance on various recommendation tasks. However, many existing GCN models tend to perform recursive aggregations among all related nodes, which incurs a severe computational burden. Moreover, they favor multi-layer architectures in conjunction with complicated modeling techniques. Though effective, the excessive amount of model parameters largely hinders their application in real-world recommender systems. To this end, in this paper, we propose a single-layer GCN model which is able to achieve superior performance with remarkably less complexity compared with existing models. Our main contribution is three-fold. First, we propose a principled similarity metric named distribution-aware similarity (DA similarity), which can guide the neighbor sampling process and evaluate the quality of the input graph explicitly. We also prove that DA similarity has a positive correlation with the final performance, through both theoretical analysis and empirical simulations. Second, we propose a simplified GCN architecture which employs a single GCN layer to aggregate information from the neighbors filtered by DA similarity and then generates the node representations. Moreover, the aggregation step is a parameter-free operation, so it can be done in a pre-processing manner to further reduce the training and inference costs. Third, we conduct extensive experiments on four datasets. The results verify that the proposed model outperforms existing GCN models considerably in recommendation performance and yields up to a few orders of magnitude speedup in training.
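The efficiency argument hinges on the aggregation being parameter-free and therefore precomputable. The sketch below condenses that recipe: a one-hop average over similarity-filtered neighbours computed once, with a single trainable layer afterwards. Cosine similarity stands in for the paper's DA similarity, and the top-k filter is an assumption.

```python
# Parameter-free, precomputable single-layer aggregation for recommendation features.
import numpy as np

def preaggregate(X, A, k=5):
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    S = Xn @ Xn.T                                  # stand-in for DA similarity
    H = np.zeros_like(X)
    for i in range(len(X)):
        nbrs = np.flatnonzero(A[i])
        if len(nbrs) == 0:
            H[i] = X[i]
            continue
        top = nbrs[np.argsort(-S[i, nbrs])[:k]]    # keep the k most similar neighbours
        H[i] = np.vstack([X[[i]], X[top]]).mean(0) # one-hop, parameter-free aggregation
    return H                                       # H then feeds one trainable layer

X = np.random.randn(100, 16)
A = np.random.rand(100, 100) < 0.05
A = (A | A.T).astype(float)
np.fill_diagonal(A, 0)
H = preaggregate(X, A)                             # computed once, reused every epoch
```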
http://arxiv.org/pdf/2006.04164v1
[ "Yue Xu", "Hao Chen", "Zengde Deng", "Junxiong Zhu", "Yanghua Li", "Peng He", "Wenyao Gao", "Wenjun Xu" ]
2020-06-07T14:38:47Z
2020-06-07T14:38:47Z
2006.04180
RoeNets: Predicting Discontinuity of Hyperbolic Systems from Continuous Data
We introduce Roe Neural Networks (RoeNets) that can predict the discontinuity of hyperbolic conservation laws (HCLs) based on short-term discontinuous and even continuous training data. Our methodology is inspired by the Roe approximate Riemann solver (P. L. Roe, J. Comput. Phys., vol. 43, 1981, pp. 357--372), which is one of the most fundamental HCL numerical solvers. In order to accurately solve the HCLs, Roe argues the need to construct a Roe matrix that fulfills "Property U", including being diagonalizable with real eigenvalues, consistent with the exact Jacobian, and preserving conserved quantities. However, the construction of such a matrix cannot be achieved by any general numerical method. Our model makes a breakthrough improvement in solving the HCLs by applying the Roe solver from a neural network perspective. To enhance the expressiveness of our model, we incorporate pseudoinverses into a novel context to enable a hidden dimension so that we are flexible with the number of parameters. The ability of our model to predict long-term discontinuity from a short window of continuous training data is in general considered impossible using traditional machine learning approaches. We demonstrate that our model can generate highly accurate predictions of the evolution of convection without dissipation and the discontinuity of hyperbolic systems from smooth training data.
http://arxiv.org/pdf/2006.04180v1
[ "Shiying Xiong", "Xingzhe He", "Yunjin Tong", "Runze Liu", "Bo Zhu" ]
2020-06-07T15:28:00Z
2020-06-07T15:28:00Z
2006.04183
Uncertainty-Aware Deep Classifiers using Generative Models
Deep neural networks are often ignorant about what they do not know and overconfident when they make uninformed predictions. Some recent approaches quantify classification uncertainty directly by training the model to output high uncertainty for the data samples close to class boundaries or from the outside of the training distribution. These approaches use an auxiliary data set during training to represent out-of-distribution samples. However, selection or creation of such an auxiliary data set is non-trivial, especially for high dimensional data such as images. In this work we develop a novel neural network model that is able to express both aleatoric and epistemic uncertainty to distinguish decision boundary and out-of-distribution regions of the feature space. To this end, variational autoencoders and generative adversarial networks are incorporated to automatically generate out-of-distribution exemplars for training. Through extensive analysis, we demonstrate that the proposed approach provides better estimates of uncertainty for in- and out-of-distribution samples, and adversarial examples on well-known data sets against state-of-the-art approaches including recent Bayesian approaches for neural networks and anomaly detection methods.
http://arxiv.org/pdf/2006.04183v1
[ "Murat Sensoy", "Lance Kaplan", "Federico Cerutti", "Maryam Saleki" ]
2020-06-07T15:38:35Z
2020-06-07T15:38:35Z
1901.03440
Undirected Graphical Models as Approximate Posteriors
The representation of the approximate posterior is a critical aspect of effective variational autoencoders (VAEs). Poor choices for the approximate posterior have a detrimental impact on the generative performance of VAEs due to the mismatch with the true posterior. We extend the class of posterior models that may be learned by using undirected graphical models. We develop an efficient method to train undirected approximate posteriors by showing that the gradient of the training objective with respect to the parameters of the undirected posterior can be computed by backpropagation through Markov chain Monte Carlo updates. We apply these gradient estimators for training discrete VAEs with Boltzmann machines as approximate posteriors and demonstrate that undirected models outperform previous results obtained using directed graphical models. Our implementation is available at https://github.com/QuadrantAI/dvaess .
http://arxiv.org/pdf/1901.03440v2
[ "Arash Vahdat", "Evgeny Andriyash", "William G. Macready" ]
2020-06-07T16:02:23Z
2019-01-11T00:32:21Z
2006.04198
EnK: Encoding time-information in convolution
Recent developments in deep learning techniques have attracted attention for decoding and classification of EEG signals. Despite several efforts utilizing different features of EEG signals, a significant research challenge is to use time-dependent features in combination with local and global features. There have been several efforts to remodel deep learning convolutional neural networks (CNNs) to capture time-dependency information by incorporating hand-crafted features, slicing the input data into smaller time windows, and using recurrent convolutions. However, these approaches partially solve the problem but simultaneously hinder the CNN's capability to learn from unknown information that might be present in the data. To solve this, we propose a novel time encoding kernel (EnK) approach, which introduces increasing time information during the convolution operation in a CNN. The information encoded by EnK lets the CNN learn time-dependent features in addition to local and global features. We performed extensive experiments on several EEG datasets: cognitive conflict (CC), physical human-robot collaboration (pHRC), P300 visual-evoked potentials, and movement-related cortical potentials (MRCP). EnK outperforms the state-of-the-art by 12% (F1 score). Moreover, the EnK approach requires only one additional parameter to learn and can be applied to virtually any CNN architecture with minimal effort. These results support our methodology and show high potential to improve CNN performance in the context of time-series data in general.
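The "one additional parameter" claim suggests a very small modification to a standard convolution. The sketch below is an interpretive guess at the mechanism: inject a monotonically increasing time code into the feature maps around the convolution, scaled by a single learnable scalar. Where exactly the code enters is an assumption here.

```python
# Interpretive EnK-style convolution: add a learnable, increasing time code.
import torch
import torch.nn as nn

class EnKConv1d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel):
        super().__init__()
        self.conv = nn.Conv1d(in_ch, out_ch, kernel, padding=kernel // 2)
        self.alpha = nn.Parameter(torch.tensor(0.1))   # the single extra parameter

    def forward(self, x):                              # x: (batch, channels, time)
        t = torch.linspace(0, 1, x.size(-1), device=x.device)
        return self.conv(x + self.alpha * t)           # increasing time information

eeg = torch.randn(8, 32, 256)                          # batch of 32-channel EEG epochs
feat = EnKConv1d(32, 64, kernel=7)(eeg)
```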
http://arxiv.org/pdf/2006.04198v1
[ "Avinash Kumar Singh", "Chin-Teng Lin" ]
2020-06-07T16:43:07Z
2020-06-07T16:43:07Z
2006.04201
Learning Behaviors with Uncertain Human Feedback
Human feedback is widely used to train agents in many domains. However, previous works rarely consider the uncertainty when humans provide feedback, especially in cases where the optimal actions are not obvious to the trainers. For example, the reward of a sub-optimal action can be stochastic and sometimes exceeds that of the optimal action, which is common in games and the real world. Trainers are likely to provide positive feedback to sub-optimal actions, negative feedback to optimal actions, or even no feedback in some confusing situations. Existing works, which utilize the Expectation Maximization (EM) algorithm and treat the feedback model as hidden parameters, do not consider uncertainties in the learning environment and human feedback. To address this challenge, we introduce a novel feedback model that considers the uncertainty of human feedback. However, this incurs intractable calculations in the EM algorithm. To this end, we propose a novel approximate EM algorithm, in which we approximate the expectation step with the gradient descent method. Experimental results in both synthetic scenarios and two real-world scenarios with human participants demonstrate the superior performance of our proposed approach.
http://arxiv.org/pdf/2006.04201v1
[ "Xu He", "Haipeng Chen", "Bo An" ]
2020-06-07T16:51:48Z
2020-06-07T16:51:48Z
2006.04205
Machine learning dynamics of phase separation in correlated electron magnets
We demonstrate machine-learning enabled large-scale dynamical simulations of electronic phase separation in the double-exchange system. This model, also known as the ferromagnetic Kondo lattice model, is believed to be relevant to the colossal magnetoresistance phenomenon. Real-space simulations of such inhomogeneous states with exchange forces computed from the electron Hamiltonian can be prohibitively expensive for large systems. Here we show that linear-scaling exchange field computation can be achieved using neural networks trained on datasets from exact calculations on small lattices. Our Landau-Lifshitz dynamics simulations based on machine-learning potentials nicely reproduce not only the nonequilibrium relaxation process, but also correlation functions that agree quantitatively with exact simulations. Our work paves the way for large-scale dynamical simulations of correlated electron systems using machine-learning models.
http://arxiv.org/pdf/2006.04205v1
[ "Puhan Zhang", "Preetha Saha", "Gia-Wei Chern" ]
2020-06-07T17:01:06Z
2020-06-07T17:01:06Z
2006.04208
Extensions and limitations of randomized smoothing for robustness guarantees
Randomized smoothing, a method to certify that a classifier's decision on an input is invariant under adversarial noise, offers attractive advantages over other certification methods. It operates in a black-box manner, so certification is not constrained by the size of the classifier's architecture. Here, we extend the work of Li et al. \cite{li2018second}, studying how the choice of divergence between smoothing measures affects the final robustness guarantee, and how the choice of smoothing measure itself can lead to guarantees in differing threat models. To this end, we develop a method to certify robustness against any $\ell_p$ ($p\in\mathbb{N}_{>0}$) minimized adversarial perturbation. We then demonstrate a negative result: randomized smoothing suffers from the curse of dimensionality; as $p$ increases, the effective radius around an input that one can certify vanishes.
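As background for the certificates discussed above, the sketch below shows the standard Monte Carlo certification loop for a Gaussian-smoothed classifier (the Cohen et al. style l2 certificate). `base_classifier` is a placeholder assumption; the paper's contribution concerns other smoothing measures, divergences, and threat models beyond this baseline.

```python
# Monte Carlo certification of a Gaussian-smoothed classifier.
import numpy as np
from scipy.stats import beta, norm

def certify(base_classifier, x, sigma=0.25, n=1000, alpha=0.001):
    noisy = x[None] + sigma * np.random.randn(n, *x.shape)
    preds = base_classifier(noisy)                 # (n,) predicted labels under noise
    top = np.bincount(preds).argmax()              # majority class of the smoothed model
    k = int((preds == top).sum())
    p_lower = beta.ppf(alpha, k, n - k + 1)        # Clopper-Pearson lower bound on p_top
    if p_lower <= 0.5:
        return None, 0.0                           # abstain: no certificate
    return top, sigma * norm.ppf(p_lower)          # certified l2 radius

toy = lambda z: (z.reshape(len(z), -1).sum(1) > 0).astype(int)  # stand-in classifier
label, radius = certify(toy, np.full((3, 8, 8), 0.1))
print(label, radius)
```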
http://arxiv.org/pdf/2006.04208v1
[ "Jamie Hayes" ]
2020-06-07T17:22:32Z
2020-06-07T17:22:32Z
1910.06000
Second-Order Convergence of Asynchronous Parallel Stochastic Gradient Descent: When Is the Linear Speedup Achieved?
In machine learning, asynchronous parallel stochastic gradient descent (APSGD) is broadly used to speed up the training process through multiple workers. Meanwhile, the time delay of stale gradients in asynchronous algorithms is generally proportional to the total number of workers, which brings additional deviation from the accurate gradient due to using delayed gradients. This may have a negative influence on the convergence of the algorithm. One may ask: how many workers can we use at most to achieve good convergence and the linear speedup? In this paper, we consider the second-order convergence of asynchronous algorithms in non-convex optimization. We investigate the behavior of APSGD with consistent read near strictly saddle points and provide a theoretical guarantee that if the total number of workers is bounded by $\widetilde{O}(K^{1/3}M^{-1/3})$ ($K$ is the total number of steps and $M$ is the mini-batch size), APSGD will converge to good stationary points ($\|\nabla f(x)\| \leq \epsilon$, $\nabla^2 f(x) \succeq -\sqrt{\epsilon}\,\bm{I}$, $\epsilon^2 \leq O(\sqrt{\frac{1}{MK}})$) and the linear speedup is achieved. Our work gives the first theoretical guarantee on the second-order convergence of asynchronous algorithms. The technique we provide can be generalized to analyze other types of asynchronous algorithms to understand their behavior in distributed asynchronous parallel training.
http://arxiv.org/pdf/1910.06000v6
[ "Lifu Wang", "Bo Shen", "Ning Zhao" ]
2020-06-07T17:23:20Z
2019-10-14T09:14:55Z
2006.08331
Probing Neural Dialog Models for Conversational Understanding
The predominant approach to open-domain dialog generation relies on end-to-end training of neural models on chat datasets. However, this approach provides little insight as to what these models learn (or do not learn) about engaging in dialog. In this study, we analyze the internal representations learned by neural open-domain dialog systems and evaluate the quality of these representations for learning basic conversational skills. Our results suggest that standard open-domain dialog systems struggle with answering questions, inferring contradiction, and determining the topic of conversation, among other tasks. We also find that the dyadic, turn-taking nature of dialog is not fully leveraged by these models. By exploring these limitations, we highlight the need for additional research into architectures and training methods that can better capture high-level information about dialog.
http://arxiv.org/abs/2006.08331v1
[ "Abdelrhman Saleh", "Tovly Deutsch", "Stephen Casper", "Yonatan Belinkov", "Stuart Shieber" ]
2020-06-07T17:32:00Z
2020-06-07T17:32:00Z
2006.04212
Generating Realistic Stock Market Order Streams
We propose an approach to generate realistic and high-fidelity stock market data based on generative adversarial networks (GANs). Our Stock-GAN model employs a conditional Wasserstein GAN to capture history dependence of orders. The generator design includes specially crafted aspects including components that approximate the market's auction mechanism, augmenting the order history with order-book constructions to improve the generation task. We perform an ablation study to verify the usefulness of aspects of our network structure. We provide a mathematical characterization of distribution learned by the generator. We also propose statistics to measure the quality of generated orders. We test our approach with synthetic and actual market data, compare to many baseline generative models, and find the generated data to be close to real data.
http://arxiv.org/pdf/2006.04212v1
[ "Junyi Li", "Xitong Wang", "Yaoyang Lin", "Arunesh Sinha", "Micheal P. Wellman" ]
2020-06-07T17:32:42Z
2020-06-07T17:32:42Z
2002.10435
Learning Structured Distributions From Untrusted Batches: Faster and Simpler
We revisit the problem of learning from untrusted batches introduced by Qiao and Valiant [QV17]. Recently, Jain and Orlitsky [JO19] gave a simple semidefinite programming approach based on the cut-norm that achieves essentially information-theoretically optimal error in polynomial time. Concurrently, Chen et al. [CLM19] considered a variant of the problem where $\mu$ is assumed to be structured, e.g. log-concave, monotone hazard rate, $t$-modal, etc. In this case, it is possible to achieve the same error with sample complexity sublinear in $n$, and they exhibited a quasi-polynomial time algorithm for doing so using Haar wavelets. In this paper, we find an appealing way to synthesize the techniques of [JO19] and [CLM19] to give the best of both worlds: an algorithm which runs in polynomial time and can exploit structure in the underlying distribution to achieve sublinear sample complexity. Along the way, we simplify the approach of [JO19] by avoiding the need for SDP rounding and giving a more direct interpretation of it through the lens of soft filtering, a powerful recent technique in high-dimensional robust estimation. We validate the usefulness of our algorithms in preliminary experimental evaluations.
http://arxiv.org/pdf/2002.10435v2
[ "Sitan Chen", "Jerry Li", "Ankur Moitra" ]
2020-06-07T17:50:33Z
2020-02-24T18:32:10Z
2002.03847
Making Logic Learnable With Neural Networks
While neural networks are good at learning unspecified functions from training samples, they cannot be directly implemented in hardware and are often not interpretable or formally verifiable. On the other hand, logic circuits are implementable, verifiable, and interpretable but are not able to learn from training data in a generalizable way. We propose a novel logic learning pipeline that combines the advantages of neural networks and logic circuits. Our pipeline first trains a neural network on a classification task, and then translates this, first to random forests, and then to AND-Inverter logic. We show that our pipeline maintains greater accuracy than naive translations to logic, and minimizes the logic such that it is more interpretable and has decreased hardware cost. We show the utility of our pipeline on a network that is trained on biomedical data. This approach could be applied to patient care to provide risk stratification and guide clinical decision-making.
http://arxiv.org/pdf/2002.03847v3
[ "Tobias Brudermueller", "Dennis L. Shung", "Adrian J. Stanley", "Johannes Stegmaier", "Smita Krishnaswamy" ]
2020-06-07T18:07:58Z
2020-02-10T15:11:40Z
2006.04216
Efficient AutoML Pipeline Search with Matrix and Tensor Factorization
Data scientists seeking a good supervised learning model on a new dataset have many choices to make: they must preprocess the data, select features, possibly reduce the dimension, select an estimation algorithm, and choose hyperparameters for each of these pipeline components. With new pipeline components comes a combinatorial explosion in the number of choices! In this work, we design a new AutoML system to address this challenge: an automated system to design a supervised learning pipeline. Our system uses matrix and tensor factorization as surrogate models to model the combinatorial pipeline search space. Under these models, we develop greedy experiment design protocols to efficiently gather information about a new dataset. Experiments on large corpora of real-world classification problems demonstrate the effectiveness of our approach.
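The surrogate-model idea admits a tiny demonstration: factorize a partially observed (pipelines x datasets) performance matrix and rank untried pipelines on a dataset by their predicted scores. The rank, step size, and synthetic data below are assumptions; the paper additionally covers tensor surrogates and greedy experiment-design protocols.

```python
# Low-rank matrix completion as an AutoML pipeline-performance surrogate.
import numpy as np

rng = np.random.default_rng(0)
P, D, r = 50, 20, 4
true = rng.random((P, r)) @ rng.random((r, D))     # hidden low-rank performance matrix
mask = rng.random((P, D)) < 0.3                    # only 30% of runs were executed

U = rng.standard_normal((P, r)) * 0.1
V = rng.standard_normal((r, D)) * 0.1
for _ in range(3000):                              # gradient descent on observed cells
    err = mask * (U @ V - true)
    U, V = U - 0.02 * err @ V.T, V - 0.02 * U.T @ err

pred = (U @ V)[:, 0]                               # predicted scores on dataset 0
pred[mask[:, 0]] = -np.inf                         # skip pipelines already evaluated
print("next pipeline to try:", int(np.argmax(pred)))
```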
http://arxiv.org/pdf/2006.04216v1
[ "Chengrun Yang", "Jicong Fan", "Ziyang Wu", "Madeleine Udell" ]
2020-06-07T18:08:48Z
2020-06-07T18:08:48Z
1907.02098
Preserving physically important variables in optimal event selections: A case study in Higgs physics
Analyses of collider data, often assisted by modern Machine Learning methods, condense a number of observables into a few powerful discriminants for the separation of the targeted signal process from the contributing backgrounds. These discriminants are highly correlated with important physical observables; using them in the event selection thus leads to the distortion of physically relevant distributions. We present a novel method based on a differentiable estimate of mutual information, a measure of non-linear dependency between variables, to construct a discriminant that is statistically independent of a number of selected observables, and so manages to preserve their distributions in the event selection. Our strategy is evaluated in a realistic setting, the analysis of the Standard Model Higgs boson decaying into a pair of bottom quarks. Using the distribution of the invariant mass of the di-b-jet system to extract the Higgs boson signal strength, our method achieves state-of-the-art performance compared to other decorrelation techniques, while significantly improving the sensitivity of a similar, cut-based, analysis published by ATLAS.
http://arxiv.org/abs/1907.02098v2
[ "Philipp Windischhofer", "Miha Zgubic", "Daniela Bortoletto" ]
2020-06-07T18:23:06Z
2019-07-03T18:49:10Z
2005.07347
Towards Assessment of Randomized Smoothing Mechanisms for Certifying Adversarial Robustness
As a certified defensive technique, randomized smoothing has received considerable attention due to its scalability to large datasets and neural networks. However, several important questions remain unanswered, such as (i) whether the Gaussian mechanism is an appropriate option for certifying $\ell_2$-norm robustness, and (ii) whether there is an appropriate randomized (smoothing) mechanism to certify $\ell_\infty$-norm robustness. To shed light on these questions, we argue that the main difficulty is how to assess the appropriateness of each randomized mechanism. In this paper, we propose a generic framework that connects the existing frameworks in \cite{lecuyer2018certified, li2019certified} to assess randomized mechanisms. Under our framework, for a randomized mechanism that can certify a certain extent of robustness, we define the magnitude of its required additive noise as the metric for assessing its appropriateness. We also prove lower bounds on this metric for the $\ell_2$-norm and $\ell_\infty$-norm cases as the criteria for assessment. Based on our framework, we assess the Gaussian and Exponential mechanisms by comparing the magnitude of additive noise required by these mechanisms and the lower bounds (criteria). We first conclude that the Gaussian mechanism is indeed an appropriate option to certify $\ell_2$-norm robustness. Surprisingly, we show that the Gaussian mechanism is also an appropriate option for certifying $\ell_\infty$-norm robustness, instead of the Exponential mechanism. Finally, we generalize our framework to the $\ell_p$-norm for any $p\geq2$. Our theoretical findings are verified by evaluations on CIFAR10 and ImageNet.
http://arxiv.org/pdf/2005.07347v3
[ "Tianhang Zheng", "Di Wang", "Baochun Li", "Jinhui Xu" ]
2020-06-07T18:39:33Z
2020-05-15T03:54:53Z
1910.09094
Self-supervised classification of dynamic obstacles using the temporal information provided by videos
Nowadays, autonomous driving systems can detect, segment, and classify the surrounding obstacles using a monocular camera. However, state-of-the-art methods solving these tasks generally perform a fully supervised learning process and require a large amount of labeled training data. On another note, some self-supervised learning approaches can deal with detection and segmentation of dynamic obstacles using the temporal information available in video sequences. In this work, we propose to classify the detected obstacles depending on their motion pattern. We present a novel self-supervised framework consisting of learning offline clusters from temporal patch sequences and considering these clusters as labeled sets to train a real-time image classifier. The presented model outperforms state-of-the-art unsupervised image classification methods on the large-scale, diverse driving video dataset BDD100K.
http://arxiv.org/pdf/1910.09094v2
[ "Sid Ali Hamideche", "Florent Chiaroni", "Mohamed-Cherif Rahal" ]
2020-06-07T18:41:56Z
2019-10-21T00:48:14Z
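A rough sketch of the pipeline this abstract describes, under the assumption that descriptors for the temporal patch sequences are already extracted; the clustering method, feature dimension, and classifier are illustrative stand-ins.

```python
# Sketch: cluster temporal patch-sequence descriptors offline, then treat
# the cluster ids as pseudo-labels for training an image classifier.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

def build_pseudo_labels(patch_features, n_clusters=4):
    """patch_features: (N, D) descriptors of tracked obstacle patch sequences."""
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(patch_features)
    return km.labels_                       # e.g. motion-pattern clusters

features = np.random.randn(500, 128)        # stand-in for real descriptors
pseudo = build_pseudo_labels(features)
clf = LogisticRegression(max_iter=1000).fit(features, pseudo)  # "real-time" classifier
```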
2006.04228
Bayesian Hidden Physics Models: Uncertainty Quantification for Discovery of Nonlinear Partial Differential Operators from Data
What do data tell us about physics, and what don't they tell us? There has been a surge of interest in using machine learning models to discover governing physical laws such as differential equations from data, but current methods lack uncertainty quantification to communicate their credibility. This work addresses this shortcoming from a Bayesian perspective. We introduce a novel model comprising "leaf" modules that learn to represent distinct experiments' spatiotemporal functional data as neural networks and a single "root" module that expresses a nonparametric distribution over their governing nonlinear differential operator as a Gaussian process. Automatic differentiation is used to compute the required partial derivatives from the leaf functions as inputs to the root. Our approach quantifies the reliability of the learned physics in terms of a posterior distribution over operators and propagates this uncertainty to solutions of novel initial-boundary value problem instances. Numerical experiments demonstrate the method on several nonlinear PDEs.
http://arxiv.org/pdf/2006.04228v1
[ "Steven Atkinson" ]
2020-06-07T18:48:43Z
2020-06-07T18:48:43Z
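The automatic-differentiation step described above can be sketched as follows: partial derivatives of a "leaf" network u(x, t) are obtained by autodiff and would be passed as inputs to the Gaussian-process "root". The network and inputs below are placeholders, not the paper's code.

```python
# Sketch: compute u, u_x, u_t, u_xx from a neural representation of one
# experiment's data via torch.autograd, as inputs to an operator model.
import torch

u = torch.nn.Sequential(torch.nn.Linear(2, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1))

xt = torch.rand(32, 2, requires_grad=True)       # columns: x, t
out = u(xt)
grads = torch.autograd.grad(out.sum(), xt, create_graph=True)[0]
u_x, u_t = grads[:, 0], grads[:, 1]              # first-order partials
u_xx = torch.autograd.grad(u_x.sum(), xt, create_graph=True)[0][:, 0]
# (u, u_x, u_xx, ...) would become inputs to the GP over differential operators.
```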
1909.13014
FedPAQ: A Communication-Efficient Federated Learning Method with Periodic Averaging and Quantization
Federated learning is a distributed framework according to which a model is trained over a set of devices, while keeping data localized. This framework faces several systems-oriented challenges, which include (i) a communication bottleneck, since a large number of devices upload their local updates to a parameter server, and (ii) scalability, as the federated network consists of millions of devices. Due to these systems challenges, as well as issues related to the statistical heterogeneity of data and privacy concerns, designing a provably efficient federated learning method is of significant importance yet remains challenging. In this paper, we present FedPAQ, a communication-efficient Federated Learning method with Periodic Averaging and Quantization. FedPAQ relies on three key features: (1) periodic averaging, where models are updated locally at devices and only periodically averaged at the server; (2) partial device participation, where only a fraction of devices participate in each round of the training; and (3) quantized message-passing, where the edge nodes quantize their updates before uploading to the parameter server. These features address the communications and scalability challenges in federated learning. We also show that FedPAQ achieves near-optimal theoretical guarantees for strongly convex and non-convex loss functions and empirically demonstrate the communication-computation tradeoff provided by our method.
http://arxiv.org/pdf/1909.13014v4
[ "Amirhossein Reisizadeh", "Aryan Mokhtari", "Hamed Hassani", "Ali Jadbabaie", "Ramtin Pedarsani" ]
2020-06-07T19:09:29Z
2019-09-28T03:10:53Z
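A schematic, single-process numpy rendering of FedPAQ's three ingredients (periodic averaging, partial participation, quantized updates). The toy quantizer and the client interface (`clients[c].grad(w)`) are assumptions made to keep the sketch self-contained.

```python
# Sketch of one FedPAQ communication round; not a distributed implementation.
import numpy as np

def quantize(v, levels=4):
    """Toy stochastic quantizer: snap each coordinate to a coarse random grid."""
    scale = np.abs(v).max() + 1e-12
    q = np.round(v / scale * levels + np.random.uniform(-0.5, 0.5, v.shape))
    return q / levels * scale

def fedpaq_round(w_server, clients, frac=0.3, local_steps=5, lr=0.1):
    chosen = np.random.choice(len(clients), max(1, int(frac * len(clients))),
                              replace=False)     # partial participation
    updates = []
    for c in chosen:
        w = w_server.copy()
        for _ in range(local_steps):             # periodic averaging: tau local steps
            w -= lr * clients[c].grad(w)         # local SGD step (assumed interface)
        updates.append(quantize(w - w_server))   # quantized message to the server
    return w_server + np.mean(updates, axis=0)   # server averages the updates
```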
2002.03080
Analysis of Random Perturbations for Robust Convolutional Neural Networks
Recent work has extensively shown that randomized perturbations of neural networks can improve robustness to adversarial attacks. The literature is, however, lacking a detailed compare-and-contrast of the latest proposals to understand what classes of perturbations work, when they work, and why they work. We contribute a detailed evaluation that elucidates these questions and benchmarks perturbation based defenses consistently. In particular, we show five main results: (1) all input perturbation defenses, whether random or deterministic, are equivalent in their efficacy, (2) attacks transfer between perturbation defenses so the attackers need not know the specific type of defense -- only that it involves perturbations, (3) a tuned sequence of noise layers across a network provides the best empirical robustness, (4) perturbation based defenses offer almost no robustness to adaptive attacks unless these perturbations are observed during training, and (5) adversarial examples in a close neighborhood of original inputs show an elevated sensitivity to perturbations in first and second-order analyses.
http://arxiv.org/pdf/2002.03080v4
[ "Adam Dziedzic", "Sanjay Krishnan" ]
2020-06-07T19:25:31Z
2020-02-08T03:46:07Z
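As a concrete instance of the perturbation defenses benchmarked above, here is a minimal noise layer that injects Gaussian noise at a hidden layer and stays active at test time, in line with finding (4) that perturbations must be observed during training; the architecture and noise scale are illustrative.

```python
# Sketch: a Gaussian noise layer inserted between convolutional layers.
import torch
import torch.nn as nn

class NoiseLayer(nn.Module):
    def __init__(self, sigma=0.1):
        super().__init__()
        self.sigma = sigma

    def forward(self, x):
        # Applied at both train and test time, unlike nn.Dropout.
        return x + self.sigma * torch.randn_like(x)

model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                      NoiseLayer(0.1), nn.Conv2d(16, 16, 3, padding=1))
```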
2006.04246
Self-Representation Based Unsupervised Exemplar Selection in a Union of Subspaces
Finding a small set of representatives from an unlabeled dataset is a core problem in a broad range of applications such as dataset summarization and information extraction. Classical exemplar selection methods such as $k$-medoids work under the assumption that the data points are close to a few cluster centroids, and cannot handle the case where data lie close to a union of subspaces. This paper proposes a new exemplar selection model that searches for a subset that best reconstructs all data points as measured by the $\ell_1$ norm of the representation coefficients. Geometrically, this subset best covers all the data points as measured by the Minkowski functional of the subset. To solve our model efficiently, we introduce a farthest first search algorithm that iteratively selects the worst represented point as an exemplar. When the dataset is drawn from a union of independent subspaces, our method is able to select sufficiently many representatives from each subspace. We further develop an exemplar based subspace clustering method that is robust to imbalanced data and efficient for large scale data. Moreover, we show that a classifier trained on the selected exemplars (when they are labeled) can correctly classify the rest of the data points.
http://arxiv.org/pdf/2006.04246v1
[ "Chong You", "Chi Li", "Daniel P. Robinson", "Rene Vidal" ]
2020-06-07T19:43:33Z
2020-06-07T19:43:33Z
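A rough sketch of the farthest-first search described here, with the $\ell_1$ self-representation step relaxed to a Lasso fit; the penalty weight, solver, and initialization are assumptions, not the paper's exact formulation.

```python
# Sketch: greedily add the point worst represented by the current exemplars.
import numpy as np
from sklearn.linear_model import Lasso

def residual(x, exemplars, alpha=0.01):
    """l1-regularized reconstruction residual of x over the exemplar set."""
    if exemplars.shape[0] == 0:
        return np.linalg.norm(x)
    lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
    lasso.fit(exemplars.T, x)                      # x ~ exemplars.T @ coef
    return np.linalg.norm(x - exemplars.T @ lasso.coef_)

def farthest_first(X, k):
    idx = [int(np.argmax(np.linalg.norm(X, axis=1)))]  # illustrative start point
    for _ in range(k - 1):
        E = X[idx]
        scores = [residual(x, E) for x in X]
        idx.append(int(np.argmax(scores)))         # worst represented point
    return idx
```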
1911.10538
Breaking the cycle -- Colleagues are all you need
This paper proposes a novel approach to performing image-to-image translation between unpaired domains. Rather than relying on a cycle constraint, our method takes advantage of collaboration between various GANs. This results in a multi-modal method, in which multiple optional and diverse images are produced for a given image. Our model addresses some of the shortcomings of classical GANs: (1) It is able to remove large objects, such as glasses. (2) Since it does not need to support the cycle constraint, no irrelevant traces of the input are left on the generated image. (3) It manages to translate between domains that require large shape modifications. Our results are shown to outperform those generated by state-of-the-art methods for several challenging applications on commonly-used datasets, both qualitatively and quantitatively.
http://arxiv.org/pdf/1911.10538v2
[ "Ori Nizan", "Ayellet Tal" ]
2020-06-07T20:23:38Z
2019-11-24T14:43:45Z
1910.01636
Self-supervised learning for autonomous vehicles perception: A conciliation between analytical and learning methods
Nowadays, supervised deep learning techniques yield the best state-of-the-art prediction performances for a wide variety of computer vision tasks. However, such supervised techniques generally require a large amount of manually labeled training data. In the context of autonomous vehicles perception, this requirement is critical, as the distribution of sensor data can continuously change and include several unexpected variations. It turns out that a category of learning techniques, referred to as self-supervised learning (SSL), consists of replacing the manual labeling effort by an automatic labeling process. Thanks to their ability to learn at application time and in varying environments, state-of-the-art SSL techniques provide a valid alternative to supervised learning for a variety of tasks, including long-range traversable area segmentation, moving obstacle instance segmentation, long-term moving obstacle tracking, and depth map prediction. In this tutorial-style article, we present an overview and a general formalization of the concept of self-supervised learning (SSL) for autonomous vehicles perception. This formalization provides helpful guidelines for developing novel frameworks based on generic SSL principles. Moreover, it makes it possible to point out significant challenges in the design of future SSL systems.
http://arxiv.org/pdf/1910.01636v2
[ "Florent Chiaroni", "Mohamed-Cherif Rahal", "Nicolas Hueber", "Frederic Dufaux" ]
2020-06-07T21:03:21Z
2019-10-03T17:56:18Z
2006.04271
Multi-Task Reinforcement Learning based Mobile Manipulation Control for Dynamic Object Tracking and Grasping
Agile control of a mobile manipulator is challenging because of the high complexity arising from the coupling of the robotic system and the unstructured working environment. Tracking and grasping a dynamic object with a random trajectory is even harder. In this paper, a multi-task reinforcement-learning-based mobile manipulation control framework is proposed to achieve general dynamic object tracking and grasping. Several basic types of dynamic trajectories are chosen as the task training set. To improve the policy's generalization in practice, random noise and dynamics randomization are introduced during the training process. Extensive experiments show that our trained policy can adapt to unseen random dynamic trajectories with about 0.1 m tracking error and a 75% grasping success rate for dynamic objects. The trained policy can also be successfully deployed on a real mobile manipulator.
http://arxiv.org/pdf/2006.04271v1
[ "Cong Wang", "Qifeng Zhang", "Qiyan Tian", "Shuo Li", "Xiaohui Wang", "David Lane", "Yvan Petillot", "Ziyang Hong", "Sen Wang" ]
2020-06-07T21:18:36Z
2020-06-07T21:18:36Z
2002.01053
Deep-URL: A Model-Aware Approach To Blind Deconvolution Based On Deep Unfolded Richardson-Lucy Network
The lack of interpretability in current deep learning models causes serious concerns as they are extensively used for various life-critical applications. Hence, it is of paramount importance to develop interpretable deep learning models. In this paper, we consider the problem of blind deconvolution and propose a novel model-aware deep architecture that allows for the recovery of both the blur kernel and the sharp image from the blurred image. In particular, we propose the Deep Unfolded Richardson-Lucy (Deep-URL) framework -- an interpretable deep-learning architecture that can be seen as an amalgamation of a classical estimation technique and a deep neural network, and consequently leads to improved performance. Our numerical investigations demonstrate significant improvement compared to state-of-the-art algorithms.
http://arxiv.org/abs/2002.01053v3
[ "Chirag Agarwal", "Shahin Khobahi", "Arindam Bose", "Mojtaba Soltanalian", "Dan Schonfeld" ]
2020-06-07T21:19:09Z
2020-02-03T23:43:08Z
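For reference, the classical Richardson-Lucy multiplicative update that Deep-URL unfolds into network layers fits in a few lines of numpy; unlike Deep-URL, this plain iteration assumes the blur kernel is known rather than jointly recovered.

```python
# Sketch: classical Richardson-Lucy deconvolution; each iteration corresponds
# to one unfolded "layer" in a Deep-URL-style architecture.
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(y, k, n_iters=30, eps=1e-12):
    """y: blurred image, k: blur kernel (assumed known in this sketch)."""
    x = np.full_like(y, y.mean())               # flat initial estimate
    k_flip = k[::-1, ::-1]                      # adjoint of the convolution
    for _ in range(n_iters):
        ratio = y / (fftconvolve(x, k, mode="same") + eps)
        x = x * fftconvolve(ratio, k_flip, mode="same")
    return x
```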
2006.00697
Translating Natural Language Instructions for Behavioral Robot Navigation with a Multi-Head Attention Mechanism
We propose a multi-head attention mechanism as a blending layer in a neural network model that translates natural language to a high-level behavioral language for indoor robot navigation. We follow the framework established by (Zang et al., 2018a), which proposes the use of a navigation graph as a knowledge base for the task. Our results show significant performance gains when translating instructions on previously unseen environments, thereby improving the generalization capabilities of the model.
http://arxiv.org/pdf/2006.00697v3
[ "Patricio Cerda-Mardini", "Vladimir Araujo", "Alvaro Soto" ]
2020-06-07T23:00:47Z
2020-06-01T03:49:43Z
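A minimal sketch of a multi-head attention blending layer of the kind described, using PyTorch's built-in module; the dimensions and the encoding of the navigation-graph memory are assumptions.

```python
# Sketch: instruction tokens attend over navigation-graph entries; the blended
# representation would feed a decoder emitting the behavioral language.
import torch
import torch.nn as nn

d_model, heads = 128, 4
attn = nn.MultiheadAttention(d_model, heads, batch_first=True)

instr = torch.randn(2, 12, d_model)   # encoded instruction tokens (batch, len, dim)
graph = torch.randn(2, 20, d_model)   # encoded navigation-graph entries
blended, weights = attn(query=instr, key=graph, value=graph)
```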
2006.04292
Achieving Equalized Odds by Resampling Sensitive Attributes
We present a flexible framework for learning predictive models that approximately satisfy the equalized odds notion of fairness. This is achieved by introducing a general discrepancy functional that rigorously quantifies violations of this criterion. This differentiable functional is used as a penalty driving the model parameters towards equalized odds. To rigorously evaluate fitted models, we develop a formal hypothesis test to detect whether a prediction rule violates this property, the first such test in the literature. Both the model fitting and hypothesis testing leverage a resampled version of the sensitive attribute obeying equalized odds, by construction. We demonstrate the applicability and validity of the proposed framework both in regression and multi-class classification problems, reporting improved performance over state-of-the-art methods. Lastly, we show how to incorporate techniques for equitable uncertainty quantification (unbiased for each group under study) to communicate the results of the data analysis in exact terms.
http://arxiv.org/pdf/2006.04292v1
[ "Yaniv Romano", "Stephen Bates", "Emmanuel J. Candès" ]
2020-06-08T00:18:34Z
2020-06-08T00:18:34Z
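The resampling construction can be illustrated in a few lines: draw a synthetic sensitive attribute from its empirical distribution conditional on the label, so the resampled attribute satisfies equalized odds by construction. The paper's discrepancy functional and hypothesis test are not reproduced here.

```python
# Sketch: resample the sensitive attribute a within each label group of y,
# making the resampled attribute independent of any prediction given y.
import numpy as np

def resample_attribute(a, y, seed=0):
    rng = np.random.default_rng(seed)
    a_tilde = np.empty_like(a)
    for label in np.unique(y):
        mask = (y == label)
        a_tilde[mask] = rng.choice(a[mask], size=mask.sum(), replace=True)
    return a_tilde
```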
2006.04296
Randomised Gaussian Process Upper Confidence Bound for Bayesian Optimisation
In order to improve the performance of Bayesian optimisation, we develop a modified Gaussian process upper confidence bound (GP-UCB) acquisition function. This is done by sampling the exploration-exploitation trade-off parameter from a distribution. We prove that this allows the expected trade-off parameter to be altered to better suit the problem without compromising a bound on the function's Bayesian regret. We also provide results showing that our method achieves better performance than GP-UCB in a range of real-world and synthetic problems.
http://arxiv.org/pdf/2006.04296v1
[ "Julian Berk", "Sunil Gupta", "Santu Rana", "Svetha Venkatesh" ]
2020-06-08T00:28:41Z
2020-06-08T00:28:41Z
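A minimal sketch of the modified acquisition: standard GP-UCB, but with the exploration-exploitation trade-off parameter sampled from a distribution at each step. The exponential sampling distribution below is an illustrative assumption, not necessarily the paper's choice.

```python
# Sketch: randomized GP-UCB acquisition over a fixed candidate set.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def randomized_ucb(gp: GaussianProcessRegressor, X_cand, rng):
    """gp: a fitted GP surrogate; X_cand: (n, d) candidate points."""
    mu, std = gp.predict(X_cand, return_std=True)
    beta = rng.exponential(scale=1.0)           # sampled trade-off parameter
    return X_cand[np.argmax(mu + np.sqrt(beta) * std)]
```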
2006.04298
Multi-step Estimation for Gradient-based Meta-learning
Gradient-based meta-learning approaches have been successful in few-shot learning, transfer learning, and a wide range of other domains. Despite their efficacy and simplicity, the burden of calculating the Hessian matrix with its large memory footprint is the critical challenge in large-scale applications. To tackle this issue, we propose a simple and straightforward method that reduces the cost by reusing the same gradient within a window of inner steps. We describe the dynamics of this multi-step estimation in the Lagrangian formalism and discuss how to reduce the evaluation of second-order derivatives when estimating the dynamics. To validate our method, we experiment on meta-transfer learning and few-shot learning tasks in multiple settings. The experiment on meta-transfer learning emphasizes the applicability of our approach to training meta-networks, where other approximations are limited. For few-shot learning, we evaluate time and memory complexities compared with popular baselines. We show that our method significantly reduces training time and memory usage while maintaining competitive accuracy, and even outperforms the baselines in some cases.
http://arxiv.org/pdf/2006.04298v1
[ "Jin-Hwa Kim", "Junyoung Park", "Yongseok Choi" ]
2020-06-08T00:37:01Z
2020-06-08T00:37:01Z
2006.04300
Machine Learning Interpretability and Its Impact on Smart Campus Projects
Machine learning (ML) has shown increasing abilities for predictive analytics over the last decades. It is becoming ubiquitous in fields such as healthcare, criminal justice, finance, and smart cities. For instance, the University of Northampton is building a smart system with multiple layers of IoT and software-defined networks (SDN) on its new Waterside Campus. The system can be used to optimize smart buildings' energy efficiency, improve the health and safety of tenants and visitors, assist crowd management and way-finding, and improve Internet connectivity.
http://arxiv.org/pdf/2006.04300v1
[ "Raghad Zenki", "Mu Mu" ]
2020-06-08T00:48:53Z
2020-06-08T00:48:53Z
1503.05671
Optimizing Neural Networks with Kronecker-factored Approximate Curvature
We propose an efficient method for approximating natural gradient descent in neural networks which we call Kronecker-Factored Approximate Curvature (K-FAC). K-FAC is based on an efficiently invertible approximation of a neural network's Fisher information matrix which is neither diagonal nor low-rank, and in some cases is completely non-sparse. It is derived by approximating various large blocks of the Fisher (corresponding to entire layers) as being the Kronecker product of two much smaller matrices. While only several times more expensive to compute than the plain stochastic gradient, the updates produced by K-FAC make much more progress optimizing the objective, which results in an algorithm that can be much faster than stochastic gradient descent with momentum in practice. Unlike some previously proposed approximate natural-gradient/Newton methods that use high-quality non-diagonal curvature matrices (such as Hessian-free optimization), K-FAC works very well in highly stochastic optimization regimes. This is because the cost of storing and inverting K-FAC's approximation to the curvature matrix does not depend on the amount of data used to estimate it, which is a feature typically associated only with diagonal or low-rank approximations to the curvature matrix.
http://arxiv.org/pdf/1503.05671v7
[ "James Martens", "Roger Grosse" ]
2020-06-08T01:28:58Z
2015-03-19T08:30:24Z
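The core K-FAC computation for a single fully connected layer fits in a few lines: the layer's Fisher block is approximated as a Kronecker product of two small covariance matrices, so only those two small matrices ever need inverting. The damping value below is illustrative.

```python
# Sketch: K-FAC preconditioned update for one fully connected layer.
import numpy as np

def kfac_update(grad_W, acts, grad_out, damping=1e-2):
    """grad_W: (out, in) gradient; acts: (n, in) inputs; grad_out: (n, out)."""
    A = acts.T @ acts / len(acts)               # input covariance  (in, in)
    G = grad_out.T @ grad_out / len(grad_out)   # output covariance (out, out)
    A_inv = np.linalg.inv(A + damping * np.eye(A.shape[0]))
    G_inv = np.linalg.inv(G + damping * np.eye(G.shape[0]))
    # (A kron G)^(-1) applied to vec(grad_W), written in matrix form:
    return G_inv @ grad_W @ A_inv
```

The key property is visible in the code: the cost of forming and inverting A and G depends only on the layer's width, not on the number of data points used to estimate them.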
2006.11384
A Transductive Multi-Head Model for Cross-Domain Few-Shot Learning
In this paper, we present a new method, Transductive Multi-Head Few-Shot learning (TMHFS), to address the Cross-Domain Few-Shot Learning (CD-FSL) challenge. The TMHFS method extends the Meta-Confidence Transduction (MCT) and Dense Feature-Matching Networks (DFMN) method [2] by introducing a new prediction head, i.e., an instance-wise global classification network based on semantic information, after the common feature embedding network. We train the embedding network with the multiple heads, i.e., the MCT loss, the DFMN loss, and the semantic classifier loss, simultaneously in the source domain. For the few-shot learning in the target domain, we first perform fine-tuning on the embedding network with only the semantic global classifier and the support instances, and then use the MCT part to predict labels of the query set with the fine-tuned embedding network. Moreover, we further exploit data augmentation techniques during the fine-tuning and test stages to improve the prediction performance. The experimental results demonstrate that the proposed methods greatly outperform the strong baseline, fine-tuning, on four different target domains.
http://arxiv.org/pdf/2006.11384v1
[ "Jianan Jiang", "Zhenpeng Li", "Yuhong Guo", "Jieping Ye" ]
2020-06-08T02:39:59Z
2020-06-08T02:39:59Z
2006.04330
Eigen-GNN: A Graph Structure Preserving Plug-in for GNNs
Graph Neural Networks (GNNs) are emerging machine learning models on graphs. Although sufficiently deep GNNs are shown theoretically capable of fully preserving graph structures, most existing GNN models in practice are shallow and essentially feature-centric. We show empirically and analytically that the existing shallow GNNs cannot preserve graph structures well. To overcome this fundamental challenge, we propose Eigen-GNN, a simple yet effective and general plug-in module to boost GNNs' ability to preserve graph structures. Specifically, we integrate the eigenspace of graph structures with GNNs by treating GNNs as a type of dimensionality reduction and expanding the initial dimensionality reduction bases. Without needing to increase depth, Eigen-GNN possesses more flexibility in handling both feature-driven and structure-driven tasks, since the initial bases contain both node features and graph structures. We present extensive experimental results to demonstrate the effectiveness of Eigen-GNN for tasks including node classification, link prediction, and graph isomorphism tests.
http://arxiv.org/pdf/2006.04330v1
[ "Ziwei Zhang", "Peng Cui", "Jian Pei", "Xin Wang", "Wenwu Zhu" ]
2020-06-08T02:47:38Z
2020-06-08T02:47:38Z
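The plug-in itself is simple to sketch: append leading eigenvectors of the graph structure to the node features before the first GNN layer. The choice of structure matrix (plain adjacency here) and the eigen-dimension are free parameters in this illustration.

```python
# Sketch: Eigen-GNN-style feature augmentation with structural eigenvectors.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

def eigen_augment(adj: sp.csr_matrix, X: np.ndarray, d: int = 8):
    """adj: symmetric sparse adjacency; X: (n, f) node features."""
    vals, vecs = eigsh(adj.asfptype(), k=d, which="LA")  # leading eigenvectors
    return np.hstack([X, vecs])                          # augmented (n, f + d) input

# The augmented matrix replaces X as input to any standard GNN.
```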
1912.10382
Deep Learning via Dynamical Systems: An Approximation Perspective
We build on the dynamical systems approach to deep learning, where deep residual networks are idealized as continuous-time dynamical systems, from the approximation perspective. In particular, we establish general sufficient conditions for universal approximation using continuous-time deep residual networks, which can also be understood as approximation theories in $L^p$ using flow maps of dynamical systems. In specific cases, rates of approximation in terms of the time horizon are also established. Overall, these results reveal that composition function approximation through flow maps presents a new paradigm in approximation theory and contributes to building a useful mathematical framework for investigating deep learning.
http://arxiv.org/pdf/1912.10382v2
[ "Qianxiao Li", "Ting Lin", "Zuowei Shen" ]
2020-06-08T03:21:43Z
2019-12-22T04:19:33Z
2006.04340
The Strength of Nesterov's Extrapolation in the Individual Convergence of Nonsmooth Optimization
The extrapolation strategy introduced by Nesterov, which can accelerate the convergence rate of gradient descent methods by orders of magnitude on smooth convex objectives, has led to tremendous success in training machine learning models. In this article, the convergence of the individual iterates of projected subgradient (PSG) methods for nonsmooth convex optimization problems is studied theoretically on the basis of Nesterov's extrapolation, which we name individual convergence. We prove that Nesterov's extrapolation has the strength to make the individual convergence of PSG optimal for nonsmooth problems. In light of this consideration, a direct modification of the subgradient evaluation suffices to achieve optimal individual convergence for strongly convex problems, which can be regarded as an interesting step toward the open question about stochastic gradient descent (SGD) posed by Shamir. Furthermore, we extend the derived algorithms to solve regularized learning tasks with nonsmooth losses in stochastic settings. Compared with other state-of-the-art nonsmooth methods, the derived algorithms can serve as an alternative to basic SGD, especially for machine learning problems where an individual output is needed to guarantee the regularization structure while keeping an optimal rate of convergence. In particular, our method is applicable as an efficient tool for solving large-scale $\ell_1$-regularized hinge-loss learning problems. Several comparison experiments demonstrate that our individual output not only achieves an optimal convergence rate but also guarantees better sparsity than the averaged solution.
http://arxiv.org/abs/2006.04340v1
[ "W. Tao", "Z. Pan", "G. Wu", "Q. Tao" ]
2020-06-08T03:35:41Z
2020-06-08T03:35:41Z
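A schematic numpy rendering of projected subgradient descent with Nesterov's extrapolation step, the object of the individual-convergence analysis above; the step-size and momentum schedules shown are common textbook choices, not necessarily those analyzed in the paper.

```python
# Sketch: projected subgradient method with Nesterov extrapolation; the method
# returns the individual (last) iterate rather than an averaged solution.
import numpy as np

def psg_extrapolated(subgrad, project, w0, T=1000):
    """subgrad(z): a subgradient of the objective at z; project: onto the
    constraint set (identity if unconstrained)."""
    w_prev, w = w0.copy(), w0.copy()
    for t in range(1, T + 1):
        z = w + (t - 1) / (t + 2) * (w - w_prev)    # Nesterov extrapolation
        w_prev, w = w, project(z - subgrad(z) / np.sqrt(t))
    return w

# Example use: an l1-regularized hinge loss would supply `subgrad`, with
# `project = lambda w: w` in the unconstrained case.
```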