arxiv_id | title | abstract | link | authors | updated | published |
---|---|---|---|---|---|---|
1908.08036 | Deep Reinforcement Learning for Foreign Exchange Trading | Reinforcement learning can interact with the environment and is suitable for applications in decision control systems. Therefore, we used the reinforcement learning method to build a foreign exchange trading system, avoiding the long-standing problem of unstable trends in deep learning predictions. In the system design, we optimized the Sure-Fire statistical arbitrage policy, set three different actions, encoded the continuous price over a period of time into a heat-map view using the Gramian Angular Field (GAF), and compared the Deep Q-Learning (DQN) and Proximal Policy Optimization (PPO) algorithms. To test feasibility, we analyzed three currency pairs, namely EUR/USD, GBP/USD, and AUD/USD. We trained on data in units of four hours from 1 August 2018 to 30 November 2018 and tested model performance using data between 1 December 2018 and 31 December 2018. The test results of the various models indicated that favorable investment performance was achieved as long as the model was able to handle complex and random processes and the state was able to describe the environment, validating the feasibility of reinforcement learning in the development of trading strategies. | http://arxiv.org/pdf/1908.08036v2 | [
"Yun-Cheng Tsai",
"Chun-Chieh Wang"
] | 2020-06-03T12:54:33Z | 2019-08-21T01:55:36Z |
2001.07522 | Engineering AI Systems: A Research Agenda | Artificial intelligence (AI) and machine learning (ML) are increasingly broadly adopted in industry. However, based on well over a dozen case studies, we have learned that deploying industry-strength, production-quality ML models in systems proves to be challenging. Companies experience challenges related to data quality, design methods and processes, performance of models as well as deployment and compliance. We learned that a new, structured engineering approach is required to construct and evolve systems that contain ML/DL components. In this paper, we provide a conceptualization of the typical evolution patterns that companies experience when employing ML, as well as an overview of the key problems experienced by the companies that we have studied. The main contribution of the paper is a research agenda for AI engineering that provides an overview of the key engineering challenges surrounding ML solutions and an overview of open items that need to be addressed by the research community at large. | http://arxiv.org/pdf/2001.07522v2 | [
"Jan Bosch",
"Ivica Crnkovic",
"Helena Holmström Olsson"
] | 2020-06-03T12:59:36Z | 2020-01-16T20:29:48Z |
1906.06854 | Differentiated Backprojection Domain Deep Learning for Conebeam Artifact
Removal | Conebeam CT using a circular trajectory is quite often used for various applications due to its relatively simple geometry. For conebeam geometry, the Feldkamp, Davis and Kress algorithm is regarded as the standard reconstruction method, but this algorithm suffers from so-called conebeam artifacts as the cone angle increases. Various model-based iterative reconstruction methods have been developed to reduce the cone-beam artifacts, but these algorithms usually require multiple applications of computationally expensive forward and backprojections. In this paper, we develop a novel deep learning approach for accurate conebeam artifact removal. In particular, our deep network, designed on the differentiated backprojection domain, performs a data-driven inversion of an ill-posed deconvolution problem associated with the Hilbert transform. The reconstruction results along the coronal and sagittal directions are then combined using a spectral blending technique to minimize the spectral leakage. Experimental results show that our method outperforms the existing iterative methods despite significantly reduced runtime complexity. | http://arxiv.org/pdf/1906.06854v2 | [
"Yoseob Han",
"Junyoung Kim",
"Jong Chul Ye"
] | 2020-06-03T13:05:48Z | 2019-06-17T05:59:33Z |
2006.02251 | A review of smartphones based indoor positioning: challenges and
applications | The continual proliferation of mobile devices has encouraged much effort in using smartphones for indoor positioning. This article is dedicated to reviewing the most recent and interesting smartphone-based indoor navigation systems, ranging from electromagnetic to inertial to visible-light ones, with an emphasis on their unique challenges and potential real-world applications. A taxonomy of smartphone sensors will be introduced, which serves as the basis to categorise the different positioning systems for review. A set of criteria to be used for the evaluation purpose will be devised. For each sensor category, the most recent, interesting and practical systems will be examined, with detailed discussion on the open research questions for the academics, and the practicality for the potential clients. | http://arxiv.org/pdf/2006.02251v1 | [
"Khuong An Nguyen",
"Zhiyuan Luo",
"Guang Li",
"Chris Watkins"
] | 2020-06-03T13:10:22Z | 2020-06-03T13:10:22Z |
2005.04788 | Distributed Fine-Grained Traffic Speed Prediction for Large-Scale
Transportation Networks based on Automatic LSTM Customization and Sharing | Short-term traffic speed prediction has been an important research topic in the past decade, and many approaches have been introduced. However, providing fine-grained, accurate, and efficient traffic-speed prediction for large-scale transportation networks where numerous traffic detectors are deployed has not been well studied. In this paper, we propose DistPre, which is a distributed fine-grained traffic speed prediction scheme for large-scale transportation networks. To achieve fine-grained and accurate traffic-speed prediction, DistPre customizes a Long Short-Term Memory (LSTM) model with an appropriate hyperparameter configuration for a detector. To make such a customization process efficient and applicable to large-scale transportation networks, DistPre conducts LSTM customization on a cluster of computation nodes and allows any trained LSTM model to be shared between different detectors. If a detector observes a similar traffic pattern to another one, DistPre directly shares the existing LSTM model between the two detectors rather than customizing an LSTM model per detector. Experiments based on traffic data collected from freeway I5-N in California are conducted to evaluate the performance of DistPre. The results demonstrate that DistPre provides time-efficient LSTM customization and accurate fine-grained traffic-speed prediction for large-scale transportation networks. | http://arxiv.org/pdf/2005.04788v2 | [
"Ming-Chang Lee",
"Jia-Chun Lin",
"Ernst Gunnar Gran"
] | 2020-06-03T13:17:42Z | 2020-05-10T21:24:23Z |
2003.04745 | Spitzoid Lesions Diagnosis based on GA feature selection and Random
Forest | Spitzoid lesions are broadly categorized into Spitz Nevus (SN), Atypical Spitz Tumors (AST), and Spitz Melanomas (SM). The accurate diagnosis of these lesions is one of the biggest challenges for dermatopathologists, due to the high similarities between them. Data mining techniques are successfully applied to situations like these where complexity exists. This study aims to develop an artificial intelligence model to support the diagnosis of Spitzoid lesions. A private spitzoid lesions dataset has been used to evaluate the system proposed in this study. The proposed system has three stages. In the first stage, the SMOTE method is applied to solve the imbalanced data problem; in the second stage, a genetic algorithm is used to select significant features and eliminate irrelevant ones. The latter reduces the computational complexity and speeds up the data mining process. In the third stage, a random forest classifier is employed to make a decision for two different categories of lesions (Spitz nevus or Atypical Spitz Tumors). The performance of our proposed scheme is evaluated using accuracy, sensitivity, specificity, G-mean, F-measure, ROC and AUC. Results obtained with our SMOTE-GA-RF model with 16 GA-selected features show great performance, with accuracy 0.97, F-measure 0.98, AUC 0.98, and G-mean 0.97. Results obtained in this study have the potential to open new opportunities in the diagnosis of spitzoid lesions. | http://arxiv.org/pdf/2003.04745v2 | [
"Abir Belaala",
"Labib Sadek",
"Noureddine Zerhouni",
"Christine Devalland"
] | 2020-06-03T13:23:16Z | 2020-03-10T14:03:28Z |
2006.02267 | FastONN -- Python based open-source GPU implementation for Operational
Neural Networks | Operational Neural Networks (ONNs) have recently been proposed as a special class of artificial neural networks for grid-structured data. They enable heterogeneous non-linear operations to generalize the widely adopted convolution-based neuron model. This work introduces a fast GPU-enabled library for training operational neural networks, FastONN, which is based on a novel vectorized formulation of the operational neurons. Leveraging automatic reverse-mode differentiation for backpropagation, FastONN enables increased flexibility with the incorporation of new operator sets and customized gradient flows. Additionally, bundled auxiliary modules offer interfaces for performance tracking and checkpointing across different data partitions and customized metrics. | http://arxiv.org/pdf/2006.02267v1 | [
"Junaid Malik",
"Serkan Kiranyaz",
"Moncef Gabbouj"
] | 2020-06-03T13:33:35Z | 2020-06-03T13:33:35Z |
2006.04509 | IterefinE: Iterative KG Refinement Embeddings using Symbolic Knowledge | Knowledge Graphs (KGs) extracted from text sources are often noisy and lead to poor performance in downstream application tasks such as KG-based question answering. While much of the recent activity is focused on addressing the sparsity of KGs by using embeddings for inferring new facts, the issue of cleaning up noise in KGs through the KG refinement task is not as actively studied. Most successful techniques for KG refinement make use of inference rules and reasoning over ontologies. Barring a few exceptions, embeddings do not make use of ontological information, and their performance on the KG refinement task is not well understood. In this paper, we present a KG refinement framework called IterefinE which iteratively combines two techniques: one which uses ontological information and inference rules, PSL-KGI, and KG embeddings such as ComplEx and ConvE which do not. As a result, IterefinE is able to exploit not only the ontological information to improve the quality of predictions, but also the power of KG embeddings which (implicitly) perform longer chains of reasoning. The IterefinE framework operates in a co-training mode and results in explicit type-supervised embedding of the refined KG from PSL-KGI, which we call TypeE-X. Our experiments over a range of KG benchmarks show that the embeddings that we produce are able to reject noisy facts from the KG and at the same time infer higher-quality new facts, resulting in up to 9% improvement of the overall weighted F1 score. | http://arxiv.org/pdf/2006.04509v1 | [
"Siddhant Arora",
"Srikanta Bedathur",
"Maya Ramanath",
"Deepak Sharma"
] | 2020-06-03T14:05:54Z | 2020-06-03T14:05:54Z |
2004.09140 | Recurrent Convolutional Neural Networks help to predict location of
Earthquakes | We examine the applicability of modern neural network architectures to the midterm prediction of earthquakes. Our data-based classification model aims to predict if an earthquake with the magnitude above a threshold takes place at a given area of size $10 \times 10$ kilometers in $10$-$60$ days from a given moment. Our deep neural network model has a recurrent part (LSTM) that accounts for time dependencies between earthquakes and a convolutional part that accounts for spatial dependencies. Obtained results show that neural networks-based models beat baseline feature-based models that also account for spatio-temporal dependencies between different earthquakes. For historical data on Japan earthquakes our model predicts occurrence of an earthquake in $10$ to $60$ days from a given moment with magnitude $M_c > 5$ with quality metrics ROC AUC $0.975$ and PR AUC $0.0890$, making $1.18 \cdot 10^3$ correct predictions, while missing $2.09 \cdot 10^3$ earthquakes and making $192 \cdot 10^3$ false alarms. The baseline approach has similar ROC AUC $0.992$, number of correct predictions $1.19 \cdot 10^3$, and missing $2.07 \cdot 10^3$ earthquakes, but significantly worse PR AUC $0.00911$, and number of false alarms $1004 \cdot 10^3$. | http://arxiv.org/pdf/2004.09140v3 | [
"Roman Kail",
"Alexey Zaytsev",
"Evgeny Burnaev"
] | 2020-06-03T14:13:17Z | 2020-04-20T09:05:13Z |
2006.01475 | Light-in-the-loop: using a photonics co-processor for scalable training
of neural networks | As neural networks grow larger, more complex, and more data-hungry, training costs are skyrocketing. Especially when lifelong learning is necessary, such as in recommender systems or self-driving cars, this might soon become unsustainable. In this study, we present the first optical co-processor able to accelerate the training phase of digitally-implemented neural networks. We rely on direct feedback alignment as an alternative to backpropagation, and perform the error projection step optically. Leveraging the optical random projections delivered by our co-processor, we demonstrate its use to train a neural network for handwritten digit recognition. | http://arxiv.org/pdf/2006.01475v2 | [
"Julien Launay",
"Iacopo Poli",
"Kilian Müller",
"Igor Carron",
"Laurent Daudet",
"Florent Krzakala",
"Sylvain Gigan"
] | 2020-06-03T14:42:49Z | 2020-06-02T09:19:45Z |
2002.04317 | Machine Learning Approaches For Motor Learning: A Short Review | Machine learning approaches have seen considerable applications in human movement modeling, but remain limited for motor learning. Motor learning requires accounting for motor variability, and poses new challenges as the algorithms need to be able to differentiate between new movements and variation of known ones. In this short review, we outline existing machine learning models for motor learning and their adaptation capabilities. We identify and describe three types of adaptation: Parameter adaptation in probabilistic models, Transfer and meta-learning in deep neural networks, and Planning adaptation by reinforcement learning. To conclude, we discuss challenges for applying these models in the domain of motor learning support systems. | http://arxiv.org/pdf/2002.04317v4 | [
"Baptiste Caramiaux",
"Jules Françoise",
"Wanyu Liu",
"Téo Sanchez",
"Frédéric Bevilacqua"
] | 2020-06-03T15:00:42Z | 2020-02-11T11:11:26Z |
2006.02338 | Flexible Bayesian Modelling for Nonlinear Image Registration | We describe a diffeomorphic registration algorithm that allows groups of images to be accurately aligned to a common space, which we intend to incorporate into the SPM software. The idea is to perform inference in a probabilistic graphical model that accounts for variability in both shape and appearance. The resulting framework is general and entirely unsupervised. The model is evaluated on inter-subject registration of 3D human brain scans. Here, the main modeling assumption is that individual anatomies can be generated by deforming a latent 'average' brain. The method is agnostic to imaging modality and can be applied with no prior processing. We evaluate the algorithm using freely available, manually labelled datasets. In this validation we achieve state-of-the-art results, within reasonable runtimes, compared with widely used, state-of-the-art inter-subject registration algorithms. On the unprocessed dataset, the increase in overlap score is over 17%. These results demonstrate the benefits of using informative computational anatomy frameworks for nonlinear registration. | http://arxiv.org/abs/2006.02338v1 | [
"Mikael Brudfors",
"Yaël Balbastre",
"Guillaume Flandin",
"Parashkev Nachev",
"John Ashburner"
] | 2020-06-03T15:33:14Z | 2020-06-03T15:33:14Z |
2006.02355 | Learning Robust Decision Policies from Observational Data | We address the problem of learning a decision policy from observational data of past decisions in contexts with features and associated outcomes. The past policy may be unknown, and in safety-critical applications, such as medical decision support, it is of interest to learn robust policies that reduce the risk of outcomes with high costs. In this paper, we develop a method for learning policies that reduce tails of the cost distribution at a specified level and, moreover, provide a statistically valid bound on the cost of each decision. These properties are valid under finite samples -- even in scenarios with uneven or no overlap between features for different decisions in the observed data -- by building on recent results in conformal prediction. The performance and statistical properties of the proposed method are illustrated using both real and synthetic data. | http://arxiv.org/pdf/2006.02355v1 | [
"Muhammad Osama",
"Dave Zachariah",
"Peter Stoica"
] | 2020-06-03T16:02:57Z | 2020-06-03T16:02:57Z |
1912.11460 | Characterizing the Decision Boundary of Deep Neural Networks | Deep neural networks, and in particular deep neural classifiers, have become an integral part of many modern applications. Despite their practical success, we still have limited knowledge of how they work, and the demand for such an understanding is ever-growing. In this regard, one crucial aspect of deep neural network classifiers that can help us deepen our knowledge about their decision-making behavior is to investigate their decision boundaries. Nevertheless, this is contingent upon having access to samples populating the areas near the decision boundary. To achieve this, we propose a novel approach we call Deep Decision boundary Instance Generation (DeepDIG). DeepDIG utilizes a method based on adversarial example generation as an effective way of generating samples near the decision boundary of any deep neural network model. Then, we introduce a set of important principled characteristics that take advantage of the generated instances near the decision boundary to provide multifaceted understandings of deep neural networks. We have performed extensive experiments on multiple representative datasets across various deep neural network models and characterized their decision boundaries. The code is publicly available at https://github.com/hamidkarimi/DeepDIG/. | http://arxiv.org/pdf/1912.11460v3 | [
"Hamid Karimi",
"Tyler Derr",
"Jiliang Tang"
] | 2020-06-03T16:18:25Z | 2019-12-24T18:30:11Z |
2006.02909 | Assessing Intelligence in Artificial Neural Networks | The purpose of this work was to develop metrics to assess network architectures that balance neural network size and task performance. To this end, the concept of neural efficiency is introduced to measure neural layer utilization, and a second metric called the artificial intelligence quotient (aIQ) was created to balance neural network performance and neural network efficiency. To study aIQ and neural efficiency, two simple neural networks were trained on MNIST: a fully connected network (LeNet-300-100) and a convolutional neural network (LeNet-5). The LeNet-5 network with the highest aIQ was 2.32% less accurate but contained 30,912 times fewer parameters than the highest-accuracy network. Both batch normalization and dropout layers were found to increase neural efficiency. Finally, high-aIQ networks are shown to be memorization and overtraining resistant, capable of learning proper digit classification with an accuracy of 92.51% even when 75% of the class labels are randomized. These results demonstrate the utility of aIQ and neural efficiency as metrics for balancing network performance and size. | http://arxiv.org/pdf/2006.02909v1 | [
"Nicholas J. Schaub",
"Nathan Hotaling"
] | 2020-06-03T16:45:42Z | 2020-06-03T16:45:42Z |
2006.02377 | RODE-Net: Learning Ordinary Differential Equations with Randomness from
Data | Random ordinary differential equations (RODEs), i.e. ODEs with random parameters, are often used to model complex dynamics. Most existing methods to identify unknown governing RODEs from observed data rely on strong prior knowledge. Extracting the governing equations from data with less prior knowledge remains a great challenge. In this paper, we propose a deep neural network, called RODE-Net, to tackle such a challenge by fitting a symbolic expression of the differential equation and the distribution of parameters simultaneously. To train the RODE-Net, we first estimate the parameters of the unknown RODE using the symbolic networks \cite{long2019pde} by solving a set of deterministic inverse problems based on the measured data, and use a generative adversarial network (GAN) to estimate the true distribution of the RODE's parameters. Then, we use the trained GAN as a regularization to further improve the estimation of the ODE's parameters. The two steps are operated alternately. Numerical results show that the proposed RODE-Net can well estimate the distribution of model parameters using simulated data and can make reliable predictions. It is worth noting that GAN serves as a data-driven regularization in RODE-Net and is more effective than the $\ell_1$-based regularization that is often used in system identification. | http://arxiv.org/pdf/2006.02377v1 | [
"Junyu Liu",
"Zichao Long",
"Ranran Wang",
"Jie Sun",
"Bin Dong"
] | 2020-06-03T16:49:49Z | 2020-06-03T16:49:49Z |
2005.08837 | When and How to Lift the Lockdown? Global COVID-19 Scenario Analysis and
Policy Assessment using Compartmental Gaussian Processes | The coronavirus disease 2019 (COVID-19) global pandemic has led many countries to impose unprecedented lockdown measures in order to slow down the outbreak. Questions on whether governments have acted promptly enough, and whether lockdown measures can be lifted soon have since been central in public discourse. Data-driven models that predict COVID-19 fatalities under different lockdown policy scenarios are essential for addressing these questions and informing governments on future policy directions. To this end, this paper develops a Bayesian model for predicting the effects of COVID-19 lockdown policies in a global context -- we treat each country as a distinct data point, and exploit variations of policies across countries to learn country-specific policy effects. Our model utilizes a two-layer Gaussian process (GP) prior -- the lower layer uses a compartmental SEIR (Susceptible, Exposed, Infected, Recovered) model as a prior mean function with "country-and-policy-specific" parameters that capture fatality curves under "counterfactual" policies within each country, whereas the upper layer is shared across all countries, and learns lower-layer SEIR parameters as a function of a country's features and its policy indicators. Our model combines the solid mechanistic foundations of SEIR models (Bayesian priors) with the flexible data-driven modeling and gradient-based optimization routines of machine learning (Bayesian posteriors) -- i.e., the entire model is trained end-to-end via stochastic variational inference. We compare the projections of COVID-19 fatalities by our model with other models listed by the Centers for Disease Control and Prevention (CDC), and provide scenario analyses for various lockdown and reopening strategies highlighting their impact on COVID-19 fatalities. | http://arxiv.org/pdf/2005.08837v2 | [
"Zhaozhi Qian",
"Ahmed M. Alaa",
"Mihaela van der Schaar"
] | 2020-06-03T16:55:22Z | 2020-05-13T18:21:50Z |
1805.01352 | Spiking Deep Residual Network | Spiking neural networks (SNNs) have received significant attention for their biological plausibility. SNNs theoretically have at least the same computational power as traditional artificial neural networks (ANNs). They possess the potential to achieve energy efficiency while keeping performance comparable to deep neural networks (DNNs). However, it is still a big challenge to train a very deep SNN. In this paper, we propose an efficient approach to build a spiking version of the deep residual network (ResNet). ResNet is considered a state-of-the-art convolutional neural network (CNN) architecture. We employ the idea of converting a trained ResNet into a network of spiking neurons, named Spiking ResNet (S-ResNet). We propose a shortcut conversion model to appropriately scale continuous-valued activations to match firing rates in the SNN, and a compensation mechanism to reduce the error caused by discretisation. Experimental results demonstrate that, compared with the state-of-the-art SNN approaches, the proposed Spiking ResNet achieves the best performance on CIFAR-10, CIFAR-100, and ImageNet 2012. Our work is the first to build an SNN deeper than 40 layers with performance comparable to ANNs on a large-scale dataset. | http://arxiv.org/pdf/1805.01352v2 | [
"Yangfan Hu",
"Huajin Tang",
"Gang Pan"
] | 2020-06-03T16:55:37Z | 2018-04-28T06:44:13Z |
2006.04556 | Unveiling Relations in the Industry 4.0 Standards Landscape based on
Knowledge Graph Embeddings | Industry 4.0 (I4.0) standards and standardization frameworks have been proposed with the goal of \emph{empowering interoperability} in smart factories. These standards enable the description and interaction of the main components, systems, and processes inside of a smart factory. Due to the growing number of frameworks and standards, there is an increasing need for approaches that automatically analyze the landscape of I4.0 standards. Standardization frameworks classify standards according to their functions into layers and dimensions. However, similar standards can be classified differently across the frameworks, thus producing interoperability conflicts among them. Semantic-based approaches that rely on ontologies and knowledge graphs have been proposed to represent standards, known relations among them, as well as their classification according to existing frameworks. Albeit informative, the structured modeling of the I4.0 landscape only provides the foundations for detecting interoperability issues. Thus, graph-based analytical methods able to exploit knowledge encoded by these approaches are required to uncover alignments among standards. We study the relatedness among standards and frameworks based on community analysis to discover knowledge that helps to cope with interoperability conflicts between standards. We use knowledge graph embeddings to automatically create these communities exploiting the meaning of the existing relationships. In particular, we focus on the identification of similar standards, i.e., communities of standards, and analyze their properties to detect unknown relations. We empirically evaluate our approach on a knowledge graph of I4.0 standards using the Trans$^*$ family of embedding models for knowledge graph entities. Our results are promising and suggest that relations among standards can be detected accurately. | http://arxiv.org/pdf/2006.04556v1 | [
"Ariam Rivas",
"Irlán Grangel-González",
"Diego Collarana",
"Jens Lehmann",
"Maria-Esther Vidal"
] | 2020-06-03T17:37:08Z | 2020-06-03T17:37:08Z |
2006.02411 | Explaining Multi-stage Tasks by Learning Temporal Logic Formulas from
Suboptimal Demonstrations | We present a method for learning multi-stage tasks from demonstrations by learning the logical structure and atomic propositions of a consistent linear temporal logic (LTL) formula. The learner is given successful but potentially suboptimal demonstrations, where the demonstrator is optimizing a cost function while satisfying the LTL formula, and the cost function is uncertain to the learner. Our algorithm uses the Karush-Kuhn-Tucker (KKT) optimality conditions of the demonstrations together with a counterexample-guided falsification strategy to learn the atomic proposition parameters and logical structure of the LTL formula, respectively. We provide theoretical guarantees on the conservativeness of the recovered atomic proposition sets, as well as completeness in the search for finding an LTL formula consistent with the demonstrations. We evaluate our method on high-dimensional nonlinear systems by learning LTL formulas explaining multi-stage tasks on 7-DOF arm and quadrotor systems and show that it outperforms competing methods for learning LTL formulas from positive examples. | http://arxiv.org/pdf/2006.02411v1 | [
"Glen Chou",
"Necmiye Ozay",
"Dmitry Berenson"
] | 2020-06-03T17:40:14Z | 2020-06-03T17:40:14Z |
2006.02456 | A Distributed Trust Framework for Privacy-Preserving Machine Learning | When training a machine learning model, it is standard procedure for the researcher to have full knowledge of both the data and model. However, this engenders a lack of trust between data owners and data scientists. Data owners are justifiably reluctant to relinquish control of private information to third parties. Privacy-preserving techniques distribute computation in order to ensure that data remains in the control of the owner while learning takes place. However, architectures distributed amongst multiple agents introduce an entirely new set of security and trust complications. These include data poisoning and model theft. This paper outlines a distributed infrastructure which is used to facilitate peer-to-peer trust between distributed agents; collaboratively performing a privacy-preserving workflow. Our outlined prototype sets industry gatekeepers and governance bodies as credential issuers. Before participating in the distributed learning workflow, malicious actors must first negotiate valid credentials. We detail a proof of concept using Hyperledger Aries, Decentralised Identifiers (DIDs) and Verifiable Credentials (VCs) to establish a distributed trust architecture during a privacy-preserving machine learning experiment. Specifically, we utilise secure and authenticated DID communication channels in order to facilitate a federated learning workflow related to mental health care data. | http://arxiv.org/abs/2006.02456v1 | [
"Will Abramson",
"Adam James Hall",
"Pavlos Papadopoulos",
"Nikolaos Pitropakis",
"William J Buchanan"
] | 2020-06-03T18:06:13Z | 2020-06-03T18:06:13Z |
2006.02460 | Shallow Neural Hawkes: Non-parametric kernel estimation for Hawkes
processes | The multi-dimensional Hawkes process (MHP) is a class of self- and mutually-exciting point processes that find a wide range of applications -- from prediction of earthquakes to modelling of order books in high-frequency trading. This paper makes two major contributions: first, we find an unbiased estimator for the log-likelihood of the Hawkes process to enable efficient use of the stochastic gradient descent method for maximum likelihood estimation. Second, we propose a specific single-hidden-layer neural network for the non-parametric estimation of the underlying kernels of the MHP. We evaluate the proposed model on both synthetic and real datasets, and find the method has comparable or better performance than existing estimation methods. The use of a shallow neural network ensures that we do not compromise on the interpretability of the Hawkes model, while at the same time having the flexibility to estimate any non-standard Hawkes excitation kernel. | http://arxiv.org/pdf/2006.02460v1 | [
"Sobin Joseph",
"Lekhapriya Dheeraj Kashyap",
"Shashi Jain"
] | 2020-06-03T18:15:38Z | 2020-06-03T18:15:38Z |
2003.04377 | Automatic segmentation of spinal multiple sclerosis lesions: How to
generalize across MRI contrasts? | Despite recent improvements in medical image segmentation, the ability to generalize across imaging contrasts remains an open issue. To tackle this challenge, we implement Feature-wise Linear Modulation (FiLM) to leverage physics knowledge within the segmentation model and learn the characteristics of each contrast. Interestingly, a well-optimised U-Net reached the same performance as our FiLMed-Unet on a multi-contrast dataset (0.72 of Dice score), which suggests that there is a bottleneck in spinal MS lesion segmentation different from the generalization across varying contrasts. This bottleneck likely stems from inter-rater variability, which is estimated at 0.61 of Dice score in our dataset. | http://arxiv.org/pdf/2003.04377v3 | [
"Olivier Vincent",
"Charley Gros",
"Joseph Paul Cohen",
"Julien Cohen-Adad"
] | 2020-06-03T18:33:24Z | 2020-03-09T19:29:45Z |
2004.13824 | Pyramid Attention Networks for Image Restoration | Self-similarity refers to the image prior, widely used in image restoration algorithms, that small but similar patterns tend to occur at different locations and scales. However, recent advanced deep convolutional neural network based methods for image restoration do not take full advantage of self-similarities, relying on self-attention neural modules that only process information at the same scale. To solve this problem, we present a novel Pyramid Attention module for image restoration, which captures long-range feature correspondences from a multi-scale feature pyramid. Inspired by the fact that corruptions, such as noise or compression artifacts, drop drastically at coarser image scales, our attention module is designed to be able to borrow clean signals from their "clean" correspondences at the coarser levels. The proposed pyramid attention module is a generic building block that can be flexibly integrated into various neural architectures. Its effectiveness is validated through extensive experiments on multiple image restoration tasks: image denoising, demosaicing, compression artifact reduction, and super-resolution. Without any bells and whistles, our PANet (pyramid attention module with simple network backbones) can produce state-of-the-art results with superior accuracy and visual quality. Our code will be available at https://github.com/SHI-Labs/Pyramid-Attention-Networks | http://arxiv.org/pdf/2004.13824v4 | [
"Yiqun Mei",
"Yuchen Fan",
"Yulun Zhang",
"Jiahui Yu",
"Yuqian Zhou",
"Ding Liu",
"Yun Fu",
"Thomas S. Huang",
"Humphrey Shi"
] | 2020-06-03T18:47:11Z | 2020-04-28T21:12:36Z |
1909.04761 | MultiFiT: Efficient Multi-lingual Language Model Fine-tuning | Pretrained language models are particularly promising for low-resource languages as they only require unlabelled data. However, training existing models requires huge amounts of compute, while pretrained cross-lingual models often underperform on low-resource languages. We propose Multi-lingual language model Fine-Tuning (MultiFiT) to enable practitioners to train and fine-tune language models efficiently in their own language. In addition, we propose a zero-shot method using an existing pretrained cross-lingual model. We evaluate our methods on two widely used cross-lingual classification datasets where they outperform models pretrained on orders of magnitude more data and compute. We release all models and code. | http://arxiv.org/pdf/1909.04761v2 | [
"Julian Martin Eisenschlos",
"Sebastian Ruder",
"Piotr Czapla",
"Marcin Kardas",
"Sylvain Gugger",
"Jeremy Howard"
] | 2020-06-03T19:05:15Z | 2019-09-10T21:30:54Z |
1807.04222 | Make $\ell_1$ Regularization Effective in Training Sparse CNN | Compressed Sensing using $\ell_1$ regularization is among the most powerful and popular sparsification techniques in many applications, but why has it not been used to obtain sparse deep learning models such as convolutional neural networks (CNNs)? This paper aims to provide an answer to this question and to show how to make it work. We first demonstrate that the commonly used stochastic gradient descent (SGD) training algorithm and its variants are not an appropriate match with $\ell_1$ regularization, and then replace it with a different training algorithm based on a regularized dual averaging (RDA) method. RDA was originally designed specifically for convex problems, but with new theoretical insight and algorithmic modifications (using proper initialization and adaptivity), we have made it an effective match with $\ell_1$ regularization to achieve a state-of-the-art sparsity for CNN compared to other weight pruning methods without compromising accuracy (achieving 95% sparsity for ResNet18 on CIFAR-10, for example). | http://arxiv.org/abs/1807.04222v5 | [
"Juncai He",
"Xiaodong Jia",
"Jinchao Xu",
"Lian Zhang",
"Liang Zhao"
] | 2020-06-03T19:22:03Z | 2018-07-11T16:06:23Z |
2006.02493 | Adaptive Checkpoint Adjoint Method for Gradient Estimation in Neural ODE | Neural ordinary differential equations (NODEs) have recently attracted increasing attention; however, their empirical performance on benchmark tasks (e.g. image classification) is significantly inferior to discrete-layer models. We demonstrate that an explanation for their poorer performance is the inaccuracy of existing gradient estimation methods: the adjoint method has numerical errors in reverse-mode integration; the naive method directly back-propagates through ODE solvers, but suffers from a redundantly deep computation graph when searching for the optimal stepsize. We propose the Adaptive Checkpoint Adjoint (ACA) method: in automatic differentiation, ACA applies a trajectory checkpoint strategy which records the forward-mode trajectory as the reverse-mode trajectory to guarantee accuracy; ACA deletes redundant components for shallow computation graphs; and ACA supports adaptive solvers. On image classification tasks, compared with the adjoint and naive method, ACA achieves half the error rate in half the training time; NODE trained with ACA outperforms ResNet in both accuracy and test-retest reliability. On time-series modeling, ACA outperforms competing methods. Finally, in an example of the three-body problem, we show NODE with ACA can incorporate physical knowledge to achieve better accuracy. We provide the PyTorch implementation of ACA: \url{https://github.com/juntang-zhuang/torch-ACA}. | http://arxiv.org/pdf/2006.02493v1 | [
"Juntang Zhuang",
"Nicha Dvornek",
"Xiaoxiao Li",
"Sekhar Tatikonda",
"Xenophon Papademetris",
"James Duncan"
] | 2020-06-03T19:48:00Z | 2020-06-03T19:48:00Z |
2006.02509 | SVGD as a kernelized Wasserstein gradient flow of the chi-squared
divergence | Stein Variational Gradient Descent (SVGD), a popular sampling algorithm, is often described as the kernelized gradient flow for the Kullback-Leibler divergence in the geometry of optimal transport. We introduce a new perspective on SVGD that instead views SVGD as the (kernelized) gradient flow of the chi-squared divergence which, we show, exhibits a strong form of uniform exponential ergodicity under conditions as weak as a Poincaré inequality. This perspective leads us to propose an alternative to SVGD, called Laplacian Adjusted Wasserstein Gradient Descent (LAWGD), that can be implemented from the spectral decomposition of the Laplacian operator associated with the target density. We show that LAWGD exhibits strong convergence guarantees and good practical performance. | http://arxiv.org/pdf/2006.02509v1 | [
"Sinho Chewi",
"Thibaut Le Gouic",
"Chen Lu",
"Tyler Maunu",
"Philippe Rigollet"
] | 2020-06-03T20:20:21Z | 2020-06-03T20:20:21Z |
2005.06897 | Neural Networks Versus Conventional Filters for Inertial-Sensor-based
Attitude Estimation | Inertial measurement units are commonly used to estimate the attitude of moving objects. Numerous nonlinear filter approaches have been proposed for solving the inherent sensor fusion problem. However, when a large range of different dynamic and static rotational and translational motions is considered, the attainable accuracy is limited by the need for situation-dependent adjustment of accelerometer and gyroscope fusion weights. We investigate to what extent these limitations can be overcome by means of artificial neural networks and how much domain-specific optimization of the neural network model is required to outperform the conventional filter solution. A diverse set of motion recordings with a marker-based optical ground truth is used for performance evaluation and comparison. The proposed neural networks are found to outperform the conventional filter across all motions only if domain-specific optimizations are introduced. We conclude that they are a promising tool for inertial-sensor-based real-time attitude estimation, but both expert knowledge and rich datasets are required to achieve top performance. | http://arxiv.org/abs/2005.06897v2 | [
"Daniel Weber",
"Clemens Gühmann",
"Thomas Seel"
] | 2020-06-03T20:50:55Z | 2020-05-14T11:59:19Z |
2006.02528 | Learning across label confidence distributions using Filtered Transfer
Learning | Performance of neural network models relies on the availability of large datasets with minimal levels of uncertainty. Transfer Learning (TL) models have been proposed to resolve the issue of small dataset size by letting the model train on a bigger, task-related reference dataset and then fine-tune on a smaller, task-specific dataset. In this work, we apply a transfer learning approach to improve predictive power in noisy data systems with large datasets of variable label confidence. We propose a deep neural network method called Filtered Transfer Learning (FTL) that defines multiple tiers of data confidence as separate tasks in a transfer learning setting. The deep neural network is fine-tuned in a hierarchical process by iteratively removing (filtering) data points with lower label confidence, and retraining. In this report we use FTL for predicting the interaction of drugs and proteins. We demonstrate that using FTL to learn stepwise, across the label confidence distribution, results in higher performance compared to deep neural network models trained on a single confidence range. We anticipate that this approach will enable the machine learning community to benefit from large datasets with uncertain labels in fields such as biology and medicine. | http://arxiv.org/pdf/2006.02528v1 | [
"Seyed Ali Madani Tonekaboni",
"Andrew E. Brereton",
"Zhaleh Safikhani",
"Andreas Windemuth",
"Benjamin Haibe-Kains",
"Stephen MacKinnon"
] | 2020-06-03T21:00:11Z | 2020-06-03T21:00:11Z |
2006.02536 | Phasic dopamine release identification using ensemble of AlexNet | Dopamine (DA) is an organic chemical that influences several parts of behaviour and physical functions. Fast-scan cyclic voltammetry (FSCV) is a technique used for in vivo phasic dopamine release measurements. The analysis of such measurements, though, requires notable effort. In this paper, we present the use of convolutional neural networks (CNNs) for the identification of phasic dopamine releases. | http://arxiv.org/pdf/2006.02536v1 | [
"Luca Patarnello",
"Marco Celin",
"Loris Nanni"
] | 2020-06-03T21:13:05Z | 2020-06-03T21:13:05Z |
2006.00866 | You say Normalizing Flows I see Bayesian Networks | Normalizing flows have emerged as an important family of deep neural networks for modelling complex probability distributions. In this note, we revisit their coupling and autoregressive transformation layers as probabilistic graphical models and show that they reduce to Bayesian networks with a pre-defined topology and a learnable density at each node. From this new perspective, we provide three results. First, we show that stacking multiple transformations in a normalizing flow relaxes independence assumptions and entangles the model distribution. Second, we show that a fundamental leap of capacity emerges when the depth of affine flows exceeds 3 transformation layers. Third, we prove the non-universality of the affine normalizing flow, regardless of its depth. | http://arxiv.org/pdf/2006.00866v2 | [
"Antoine Wehenkel",
"Gilles Louppe"
] | 2020-06-03T21:43:28Z | 2020-06-01T11:54:50Z |
1911.02106 | Designing over uncertain outcomes with stochastic sampling Bayesian
optimization | Optimization is becoming increasingly common in scientific and engineering domains. Oftentimes, these problems involve various levels of stochasticity or uncertainty in generating proposed solutions. Therefore, optimization in these scenarios must consider this stochasticity to properly guide the design of future experiments. Here, we adapt Bayesian optimization to handle uncertain outcomes, proposing a new framework called stochastic sampling Bayesian optimization (SSBO). We show that the bounds on expected regret for an upper confidence bound search in SSBO resemble those of earlier Bayesian optimization approaches, with added penalties due to the stochastic generation of inputs. Additionally, we adapt existing batch optimization techniques to properly limit the myopic decision making that can arise when selecting multiple instances before feedback. Finally, we show that SSBO techniques properly optimize a set of standard optimization problems as well as an applied problem inspired by bioengineering. | http://arxiv.org/pdf/1911.02106v2 | [
"Peter D. Tonner",
"Daniel V. Samarov",
"A. Gilad Kusne"
] | 2020-06-03T21:55:42Z | 2019-11-05T22:21:07Z |
2006.02575 | Debiased Sinkhorn barycenters | Entropy regularization in optimal transport (OT) has been the driver of much recent interest in Wasserstein metrics and barycenters in machine learning. It allows one to keep the appealing geometrical properties of the unregularized Wasserstein distance while having a significantly lower complexity thanks to Sinkhorn's algorithm. However, entropy brings some inherent smoothing bias, resulting for example in blurred barycenters. This side effect has prompted an increasing temptation in the community to settle for a slower algorithm such as log-domain stabilized Sinkhorn, which breaks the parallel structure that can be leveraged on GPUs, or even to go back to unregularized OT. Here we show how this bias is tightly linked to the reference measure that defines the entropy regularizer and propose debiased Wasserstein barycenters that preserve the best of both worlds: fast Sinkhorn-like iterations without entropy smoothing. Theoretically, we prove that the entropic OT barycenter of univariate Gaussians is a Gaussian and quantify its variance bias. This result is obtained by extending the differentiability and convexity of entropic OT to sub-Gaussian measures with unbounded supports. Empirically, we illustrate the reduced blurring and the computational advantage on various applications. | http://arxiv.org/pdf/2006.02575v1 | [
"Hicham Janati",
"Marco Cuturi",
"Alexandre Gramfort"
] | 2020-06-03T23:06:02Z | 2020-06-03T23:06:02Z |
2005.02575 | Active Preference-Based Gaussian Process Regression for Reward Learning | Designing reward functions is a challenging problem in AI and robotics. Humans usually have a difficult time directly specifying all the desirable behaviors that a robot needs to optimize. One common approach is to learn reward functions from collected expert demonstrations. However, learning reward functions from demonstrations introduces many challenges: some methods require highly structured models, e.g. reward functions that are linear in some predefined set of features, while others adopt less structured reward functions that, on the other hand, require a tremendous amount of data. In addition, humans tend to have a difficult time providing demonstrations on robots with high degrees of freedom, or even quantifying reward values for given demonstrations. To address these challenges, we present a preference-based learning approach, where as an alternative, the human feedback is only in the form of comparisons between trajectories. Furthermore, we do not assume highly constrained structures on the reward function. Instead, we model the reward function using a Gaussian Process (GP) and propose a mathematical formulation to actively find a GP using only human preferences. Our approach enables us to tackle both inflexibility and data-inefficiency problems within a preference-based learning framework. Our results in simulations and a user study suggest that our approach can efficiently learn expressive reward functions for robotics tasks. | http://arxiv.org/pdf/2005.02575v2 | [
"Erdem Bıyık",
"Nicolas Huynh",
"Mykel J. Kochenderfer",
"Dorsa Sadigh"
] | 2020-06-03T23:08:00Z | 2020-05-06T03:29:27Z |
2006.02577 | An optimizable scalar objective value cannot be objective and should not
be the sole objective | This paper concerns the ethics and morality of algorithms and computational systems, and has been circulating internally at Facebook for the past couple of years. The paper reviews many Nobel laureates' work, as well as the work of other prominent scientists such as Richard Dawkins, Andrei Kolmogorov, Vilfredo Pareto, and John von Neumann. The paper draws conclusions based on such works, as summarized in the title. The paper argues that the standard approach to modern machine learning and artificial intelligence is bound to be biased and unfair, and that longstanding traditions in the professions of law, justice, politics, and medicine should help. | http://arxiv.org/pdf/2006.02577v1 | [
"Isabel Kloumann",
"Mark Tygert"
] | 2020-06-03T23:10:38Z | 2020-06-03T23:10:38Z |
2006.02578 | DFR-TSD: A Deep Learning Based Framework for Robust Traffic Sign
Detection Under Challenging Weather Conditions | Robust traffic sign detection and recognition (TSDR) is of paramount importance for the successful realization of autonomous vehicle technology. The importance of this task has led to a vast amount of research efforts, and many promising methods have been proposed in the existing literature. However, the state-of-the-art (SOTA) methods have been evaluated on clean and challenge-free datasets and have overlooked the performance deterioration associated with different challenging conditions (CCs) that obscure the traffic images captured in the wild. In this paper, we look at the TSDR problem under CCs and focus on the performance degradation associated with them. To overcome this, we propose a Convolutional Neural Network (CNN) based TSDR framework with prior enhancement. Our modular approach consists of a CNN-based challenge classifier, Enhance-Net, an encoder-decoder CNN architecture for image enhancement, and two separate CNN architectures for sign detection and classification. We propose a novel training pipeline for Enhance-Net that focuses on the enhancement of the traffic sign regions (instead of the whole image) in the challenging images, subject to their accurate detection. We used the CURE-TSD dataset consisting of traffic videos captured under different CCs to evaluate the efficacy of our approach. We experimentally show that our method obtains an overall precision and recall of 91.1% and 70.71%, which is a 7.58% and 35.90% improvement in precision and recall, respectively, compared to the current benchmark. Furthermore, we compare our approach with SOTA object detection networks, Faster-RCNN and R-FCN, and show that our approach outperforms them by a large margin. | http://arxiv.org/abs/2006.02578v1 | [
"Sabbir Ahmed",
"Uday Kamal",
"Md. Kamrul Hasan"
] | 2020-06-03T23:12:26Z | 2020-06-03T23:12:26Z |
2006.02579 | Causality and Batch Reinforcement Learning: Complementary Approaches To
Planning In Unknown Domains | Reinforcement learning algorithms have had tremendous successes in online learning settings. However, these successes have relied on low-stakes interactions between the algorithmic agent and its environment. In many settings where RL could be of use, such as health care and autonomous driving, the mistakes made by most online RL algorithms during early training come with unacceptable costs. These settings require developing reinforcement learning algorithms that can operate in the so-called batch setting, where the algorithms must learn from a set of data that is fixed, finite, and generated from some (possibly unknown) policy. Evaluating policies different from the one that collected the data is called off-policy evaluation, and naturally poses counterfactual questions. In this project we show how off-policy evaluation and the estimation of treatment effects in causal inference are two approaches to the same problem, and compare recent progress in these two areas. | http://arxiv.org/pdf/2006.02579v1 | [
"James Bannon",
"Brad Windsor",
"Wenbo Song",
"Tao Li"
] | 2020-06-03T23:14:14Z | 2020-06-03T23:14:14Z |
2006.02582 | Local SGD With a Communication Overhead Depending Only on the Number of
Workers | We consider speeding up stochastic gradient descent (SGD) by parallelizing it across multiple workers. We assume the same data set is shared among $n$ workers, who can take SGD steps and coordinate with a central server. Unfortunately, this could require a lot of communication between the workers and the server, which can dramatically reduce the gains from parallelism. The Local SGD method, proposed and analyzed in the earlier literature, suggests machines should make many local steps between such communications. While the initial analysis of Local SGD showed it needs $\Omega(\sqrt{T})$ communications for $T$ local gradient steps in order for the error to scale proportionately to $1/(nT)$, this has been successively improved in a string of papers, with the state-of-the-art requiring $\Omega\left( n \left( \mbox{polynomial in } \log(T) \right) \right)$ communications. In this paper, we give a new analysis of Local SGD. A consequence of our analysis is that Local SGD can achieve an error that scales as $1/(nT)$ with only a fixed number of communications independent of $T$: specifically, only $\Omega(n)$ communications are required. | http://arxiv.org/pdf/2006.02582v1 | [
"Artin Spiridonoff",
"Alex Olshevsky",
"Ioannis Ch. Paschalidis"
] | 2020-06-03T23:33:02Z | 2020-06-03T23:33:02Z |
2006.02587 | XGNN: Towards Model-Level Explanations of Graph Neural Networks | Graph neural networks (GNNs) learn node features by aggregating and combining neighbor information, and have achieved promising performance on many graph tasks. However, GNNs are mostly treated as black-boxes and lack human-intelligible explanations. Thus, they cannot be fully trusted and used in certain application domains if GNN models cannot be explained. In this work, we propose a novel approach, known as XGNN, to interpret GNNs at the model level. Our approach can provide high-level insights and a generic understanding of how GNNs work. In particular, we propose to explain GNNs by training a graph generator so that the generated graph patterns maximize a certain prediction of the model. We formulate the graph generation as a reinforcement learning task, where for each step, the graph generator predicts how to add an edge into the current graph. The graph generator is trained via a policy gradient method based on information from the trained GNNs. In addition, we incorporate several graph rules to encourage the generated graphs to be valid. Experimental results on both synthetic and real-world datasets show that our proposed methods help understand and verify the trained GNNs. Furthermore, our experimental results indicate that the generated graphs can provide guidance on how to improve the trained GNNs. | http://arxiv.org/abs/2006.02587v1 | [
"Hao Yuan",
"Jiliang Tang",
"Xia Hu",
"Shuiwang Ji"
] | 2020-06-03T23:52:43Z | 2020-06-03T23:52:43Z |
2006.02588 | Meta Dialogue Policy Learning | Dialog policy determines the next-step actions for agents and hence is central to a dialogue system. However, when migrated to novel domains with little data, a policy model can fail to adapt due to insufficient interactions with the new environment. We propose Deep Transferable Q-Network (DTQN) to utilize shareable low-level signals between domains, such as dialogue acts and slots. We decompose the state and action representation space into feature subspaces corresponding to these low-level components to facilitate cross-domain knowledge transfer. Furthermore, we embed DTQN in a meta-learning framework and introduce Meta-DTQN with a dual-replay mechanism to enable effective off-policy training and adaptation. In experiments, our model outperforms baseline models in terms of both success rate and dialogue efficiency on the multi-domain dialogue dataset MultiWOZ 2.0. | http://arxiv.org/pdf/2006.02588v1 | [
"Yumo Xu",
"Chenguang Zhu",
"Baolin Peng",
"Michael Zeng"
] | 2020-06-03T23:53:06Z | 2020-06-03T23:53:06Z |
2005.13458 | Fast Risk Assessment for Autonomous Vehicles Using Learned Models of
Agent Futures | This paper presents fast non-sampling based methods to assess the risk of trajectories for autonomous vehicles when probabilistic predictions of other agents' futures are generated by deep neural networks (DNNs). The presented methods address a wide range of representations for uncertain predictions including both Gaussian and non-Gaussian mixture models for predictions of both agent positions and controls. We show that the problem of risk assessment when Gaussian mixture models (GMMs) of agent positions are learned can be solved rapidly to arbitrary levels of accuracy with existing numerical methods. To address the problem of risk assessment for non-Gaussian mixture models of agent position, we propose finding upper bounds on risk using Chebyshev's Inequality and sums-of-squares (SOS) programming; they are both of interest as the former is much faster while the latter can be arbitrarily tight. These approaches only require statistical moments of agent positions to determine upper bounds on risk. To perform risk assessment when models are learned for agent controls as opposed to positions, we develop TreeRing, an algorithm analogous to tree search over the ring of polynomials that can be used to exactly propagate moments of control distributions into position distributions through nonlinear dynamics. The presented methods are demonstrated on realistic predictions from DNNs trained on the Argoverse and CARLA datasets and are shown to be effective for rapidly assessing the probability of low probability events. | http://arxiv.org/pdf/2005.13458v2 | [
"Allen Wang",
"Xin Huang",
"Ashkan Jasour",
"Brian Williams"
] | 2020-06-03T23:56:09Z | 2020-05-27T16:16:36Z |
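As a concrete illustration of the moment-based risk bounds in the entry above (2005.13458), the following sketch upper-bounds the probability that a predicted agent position lands inside a circular unsafe region using only its mean and covariance, via Chebyshev's inequality. This is a deliberate simplification; the paper's SOS programs can produce arbitrarily tighter bounds at higher cost.

```python
import numpy as np

def chebyshev_collision_bound(mean, cov, center, radius):
    """Upper bound on P(||X - center|| <= radius) for a random position X
    with known mean and covariance, via Chebyshev's inequality."""
    d = np.linalg.norm(mean - center)
    if d <= radius:
        return 1.0  # mean lies inside the unsafe set: bound is vacuous
    # ||X - center|| <= radius implies ||X - mean|| >= d - radius, hence
    # P(collision) <= P(||X - mean|| >= d - radius) <= tr(cov) / (d - radius)^2
    return min(1.0, np.trace(cov) / (d - radius) ** 2)
```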
2006.02595 | Image Augmentations for GAN Training | Data augmentations have been widely studied to improve the accuracy and robustness of classifiers. However, the potential of image augmentation in improving GAN models for image synthesis has not been thoroughly investigated in previous studies. In this work, we systematically study the effectiveness of various existing augmentation techniques for GAN training in a variety of settings. We provide insights and guidelines on how to augment images for both vanilla GANs and GANs with regularizations, improving the fidelity of the generated images substantially. Surprisingly, we find that vanilla GANs attain generation quality on par with recent state-of-the-art results if we use augmentations on both real and generated images. When this GAN training is combined with other augmentation-based regularization techniques, such as contrastive loss and consistency regularization, the augmentations further improve the quality of generated images. We provide new state-of-the-art results for conditional generation on CIFAR-10 with both consistency loss and contrastive loss as additional regularizations. | http://arxiv.org/pdf/2006.02595v1 | [
"Zhengli Zhao",
"Zizhao Zhang",
"Ting Chen",
"Sameer Singh",
"Han Zhang"
] | 2020-06-04T00:16:02Z | 2020-06-04T00:16:02Z |
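A minimal sketch of the central recipe in the entry above (2006.02595): apply the same kind of stochastic augmentation to both real and generated images before the discriminator sees them. `augment` stands in for any random image augmentation, and the non-saturating loss here is illustrative rather than the paper's exact training code.

```python
import torch
import torch.nn.functional as F

def discriminator_loss(D, G, real, z, augment):
    """Discriminator step where both real and fake batches are augmented."""
    fake = G(z).detach()                      # stop gradients into G here
    logits_real = D(augment(real))            # augmented real images
    logits_fake = D(augment(fake))            # augmented generated images
    return F.softplus(-logits_real).mean() + F.softplus(logits_fake).mean()
```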
2005.05456 | Learning to Slide Unknown Objects with Differentiable Physics
Simulations | We propose a new technique for pushing an unknown object from an initial configuration to a goal configuration with stability constraints. The proposed method leverages recent progress in differentiable physics models to learn unknown mechanical properties of pushed objects, such as their distributions of mass and coefficients of friction. The proposed learning technique computes the gradient of the distance between predicted poses of objects and their actual observed poses and utilizes that gradient to search for values of the mechanical properties that reduce the reality gap. The proposed approach is also utilized to optimize a policy to efficiently push an object toward the desired goal configuration. Experiments with real objects using a real robot to gather data show that the proposed approach can identify the mechanical properties of heterogeneous objects from a small number of pushing actions. | http://arxiv.org/pdf/2005.05456v2 | [
"Changkyu Song",
"Abdeslam Boularias"
] | 2020-06-04T01:07:23Z | 2020-05-11T21:53:33Z |
1907.04138 | Characterization of Overlap in Observational Studies | Overlap between treatment groups is required for non-parametric estimation of causal effects. If a subgroup of subjects always receives the same intervention, we cannot estimate the effect of intervention changes on that subgroup without further assumptions. When overlap does not hold globally, characterizing local regions of overlap can inform the relevance of causal conclusions for new subjects, and can help guide additional data collection. To have impact, these descriptions must be interpretable for downstream users who are not machine learning experts, such as policy makers. We formalize overlap estimation as a problem of finding minimum volume sets subject to coverage constraints and reduce this problem to binary classification with Boolean rule classifiers. We then generalize this method to estimate overlap in off-policy policy evaluation. In several real-world applications, we demonstrate that these rules have comparable accuracy to black-box estimators and provide intuitive and informative explanations that can inform policy making. | http://arxiv.org/pdf/1907.04138v3 | [
"Michael Oberst",
"Fredrik D. Johansson",
"Dennis Wei",
"Tian Gao",
"Gabriel Brat",
"David Sontag",
"Kush R. Varshney"
] | 2020-06-04T01:46:10Z | 2019-07-09T13:18:47Z |
2004.10390 | Representation Bayesian Risk Decompositions and Multi-Source Domain
Adaptation | We consider representation learning (hypothesis class $\mathcal{H} = \mathcal{F}\circ\mathcal{G}$) where training and test distributions can be different. Recent studies provide hints and failure examples for domain invariant representation learning, a common approach for this problem, but the explanations provided are somewhat different and do not provide a unified picture. In this paper, we provide new decompositions of risk which give finer-grained explanations and clarify potential generalization issues. For Single-Source Domain Adaptation, we give an exact decomposition (an equality) of the target risk, via a natural hybrid argument, as the sum of three factors: (1) source risk, (2) representation conditional label divergence, and (3) representation covariate shift. We derive a similar decomposition for the Multi-Source case. These decompositions reveal factors (2) and (3) as the precise reasons for failure to generalize. For example, we demonstrate that domain adversarial neural networks (DANN) attempt to regularize for (3) but miss (2), while a recent technique Invariant Risk Minimization (IRM) attempts to account for (2) but does not consider (3). We also verify our observations experimentally. | http://arxiv.org/pdf/2004.10390v2 | [
"Xi Wu",
"Yang Guo",
"Jiefeng Chen",
"Yingyu Liang",
"Somesh Jha",
"Prasad Chalasani"
] | 2020-06-04T02:25:37Z | 2020-04-22T04:09:21Z |
2004.02086 | Arbitrary Scale Super-Resolution for Brain MRI Images | Recent attempts at Super-Resolution for medical images used deep learning techniques such as Generative Adversarial Networks (GANs) to achieve perceptually realistic single image Super-Resolution. Yet, they are constrained by their inability to generalise to different scale factors. This involves high storage and energy costs as every integer scale factor involves a separate neural network. A recent paper has proposed a novel meta-learning technique that uses a Weight Prediction Network to enable Super-Resolution on arbitrary scale factors using only a single neural network. In this paper, we propose a new network that combines that technique with SRGAN, a state-of-the-art GAN-based architecture, to achieve arbitrary scale, high fidelity Super-Resolution for medical images. By using this network to perform arbitrary scale magnifications on images from the Multimodal Brain Tumor Segmentation Challenge (BraTS) dataset, we demonstrate that it is able to outperform traditional interpolation methods by up to 20% on SSIM scores whilst retaining generalisability on brain MRI images. We show that performance across scales is not compromised, and that it is able to achieve competitive results with other state-of-the-art methods such as EDSR whilst being fifty times smaller than them. Combining efficiency, performance, and generalisability, this can hopefully become a new foundation for tackling Super-Resolution on medical images. Check out the webapp here: https://metasrgan.herokuapp.com/ Check out the github tutorial here: https://github.com/pancakewaffles/metasrgan-tutorial | http://arxiv.org/pdf/2004.02086v2 | [
"Chuan Tan",
"Jin Zhu",
"Pietro Lio'"
] | 2020-06-04T02:34:44Z | 2020-04-05T03:53:28Z |
2006.02619 | Integrating Machine Learning with Physics-Based Modeling | Machine learning is poised as a very powerful tool that can drastically improve our ability to carry out scientific research. However, many issues need to be addressed before this becomes a reality. This article focuses on one particular issue of broad interest: How can we integrate machine learning with physics-based modeling to develop new interpretable and truly reliable physical models? After introducing the general guidelines, we discuss the two most important issues for developing machine learning-based physical models: Imposing physical constraints and obtaining optimal datasets. We also provide a simple and intuitive explanation for the fundamental reasons behind the success of modern machine learning, as well as an introduction to the concurrent machine learning framework needed for integrating machine learning with physics-based modeling. Molecular dynamics and moment closure of kinetic equations are used as examples to illustrate the main issues discussed. We end with a general discussion on where this integration will lead us to, and where the new frontier will be after machine learning is successfully integrated into scientific modeling. | http://arxiv.org/pdf/2006.02619v1 | [
"Weinan E",
"Jiequn Han",
"Linfeng Zhang"
] | 2020-06-04T02:35:10Z | 2020-06-04T02:35:10Z |
2006.02636 | The Importance of Prior Knowledge in Precise Multimodal Prediction | Roads have well defined geometries, topologies, and traffic rules. While this has been widely exploited in motion planning methods to produce maneuvers that obey the law, little work has been devoted to utilize these priors in perception and motion forecasting methods. In this paper we propose to incorporate these structured priors as a loss function. In contrast to imposing hard constraints, this approach allows the model to handle non-compliant maneuvers when those happen in the real world. Safe motion planning is the end goal, and thus a probabilistic characterization of the possible future developments of the scene is key to choose the plan with the lowest expected cost. Towards this goal, we design a framework that leverages REINFORCE to incorporate non-differentiable priors over sample trajectories from a probabilistic model, thus optimizing the whole distribution. We demonstrate the effectiveness of our approach on real-world self-driving datasets containing complex road topologies and multi-agent interactions. Our motion forecasts not only exhibit better precision and map understanding, but most importantly result in safer motion plans taken by our self-driving vehicle. We emphasize that despite the importance of this evaluation, it has been often overlooked by previous perception and motion forecasting works. | http://arxiv.org/pdf/2006.02636v1 | [
"Sergio Casas",
"Cole Gulino",
"Simon Suo",
"Raquel Urtasun"
] | 2020-06-04T03:56:11Z | 2020-06-04T03:56:11Z |
2006.02655 | Neuroevolutionary Transfer Learning of Deep Recurrent Neural Networks
through Network-Aware Adaptation | Transfer learning entails taking an artificial neural network (ANN) that is trained on a source dataset and adapting it to a new target dataset. While this has been shown to be quite powerful, its use has generally been restricted by architectural constraints. Previously, in order to reuse and adapt an ANN's internal weights and structure, the underlying topology of the ANN being transferred across tasks had to remain mostly the same while a new output layer was attached, discarding the old output layer's weights. This work introduces network-aware adaptive structure transfer learning (N-ASTL), an advancement over prior efforts to remove this restriction. N-ASTL utilizes statistical information related to the source network's topology and weight distribution in order to inform how new input and output neurons are to be integrated into the existing structure. Results show improvements over the prior state-of-the-art, including the ability to transfer in challenging real-world datasets not previously possible and improved generalization over RNNs trained without transfer. | http://arxiv.org/pdf/2006.02655v1 | [
"AbdElRahman ElSaid",
"Joshua Karns",
"Alexander Ororbia II",
"Daniel Krutz",
"Zimeng Lyu",
"Travis Desell"
] | 2020-06-04T06:07:30Z | 2020-06-04T06:07:30Z |
2006.02672 | Sample Efficient Graph-Based Optimization with Noisy Observations | We study sample complexity of optimizing "hill-climbing friendly" functions defined on a graph under noisy observations. We define a notion of convexity, and we show that a variant of best-arm identification can find a near-optimal solution after a small number of queries that is independent of the size of the graph. For functions that have local minima and are nearly convex, we show a sample complexity for the classical simulated annealing under noisy observations. We show effectiveness of the greedy algorithm with restarts and the simulated annealing on problems of graph-based nearest neighbor classification as well as a web document re-ranking application. | http://arxiv.org/pdf/2006.02672v1 | [
"Tan Nguyen",
"Ali Shameli",
"Yasin Abbasi-Yadkori",
"Anup Rao",
"Branislav Kveton"
] | 2020-06-04T07:22:28Z | 2020-06-04T07:22:28Z |
2006.11406 | Location, location, location: Satellite image-based real-estate
appraisal | Buying a home is one of the most important buying decisions people have to make in their life. The latest research on real-estate appraisal focuses on incorporating image data in addition to structured data into the modeling process. This research measures the prediction performance of satellite images and structured data by using convolutional neural networks. The resulting trained CNN model performs 7% better in MAE than the advanced baseline of a neural network trained on structured data. Moreover, a sliding-window heatmap provides visual interpretability of the satellite images, revealing that neighborhood structures are essential in the price estimation. | http://arxiv.org/pdf/2006.11406v1 | [
"Jan-Peter Kucklick",
"Oliver Müller"
] | 2020-06-04T07:25:02Z | 2020-06-04T07:25:02Z |
2005.00447 | Image fusion using symmetric skip autoencoder via an Adversarial
Regulariser | It is a challenging task to extract the best of both worlds by combining the spatial characteristics of a visible image and the spectral content of an infrared image. In this work, we propose a spatially constrained adversarial autoencoder that extracts deep features from the infrared and visible images to obtain a more exhaustive and global representation. In this paper, we propose a residual autoencoder architecture, regularised by a residual adversarial network, to generate a more realistic fused image. The residual module serves as primary building for the encoder, decoder and adversarial network, as an add on the symmetric skip connections perform the functionality of embedding the spatial characteristics directly from the initial layers of encoder structure to the decoder part of the network. The spectral information in the infrared image is incorporated by adding the feature maps over several layers in the encoder part of the fusion structure, which makes inference on both the visual and infrared images separately. In order to efficiently optimize the parameters of the network, we propose an adversarial regulariser network which would perform supervised learning on the fused image and the original visual image. | http://arxiv.org/pdf/2005.00447v2 | [
"Snigdha Bhagat",
"S. D. Joshi",
"Brejesh Lall"
] | 2020-06-04T07:33:25Z | 2020-05-01T15:31:45Z |
2006.02689 | Solving Hard AI Planning Instances Using Curriculum-Driven Deep
Reinforcement Learning | Despite significant progress in general AI planning, certain domains remain out of reach of current AI planning systems. Sokoban is a PSPACE-complete planning task and represents one of the hardest domains for current AI planners. Even domain-specific specialized search methods fail quickly due to the exponential search complexity on hard instances. Our approach based on deep reinforcement learning augmented with a curriculum-driven method is the first one to solve hard instances within one day of training while other modern solvers cannot solve these instances within any reasonable time limit. In contrast to prior efforts, which use carefully handcrafted pruning techniques, our approach automatically uncovers domain structure. Our results reveal that deep RL provides a promising framework for solving previously unsolved AI planning problems, provided a proper training curriculum can be devised. | http://arxiv.org/pdf/2006.02689v1 | [
"Dieqiao Feng",
"Carla P. Gomes",
"Bart Selman"
] | 2020-06-04T08:13:12Z | 2020-06-04T08:13:12Z |
2006.02724 | Characterizing the Weight Space for Different Learning Models | Deep Learning has become one of the primary research areas in developing intelligent machines. Most of the well-known applications (such as Speech Recognition, Image Processing and NLP) of AI are driven by Deep Learning. Deep Learning algorithms mimic the human brain using artificial neural networks and progressively learn to accurately solve a given problem. But there are significant challenges in Deep Learning systems. There have been many attempts to make deep learning models imitate the biological neural network. However, many deep learning models have performed poorly in the presence of adversarial examples. Poor performance on adversarial examples leaves models open to adversarial attacks, which in turn lead to safety and security concerns in most applications. In this paper we make an attempt to characterize the solution space of a deep neural network in terms of three different subsets, viz. weights belonging to exact trained patterns, weights belonging to the generalized pattern set, and weights belonging to adversarial pattern sets. We attempt to characterize the solution space with two seemingly different learning paradigms, viz. the Deep Neural Networks and the Dense Associative Memory Model, which try to achieve learning via quite different mechanisms. We also show that adversarial attacks are generally less successful against Associative Memory Models than Deep Neural Networks. | http://arxiv.org/pdf/2006.02724v1 | [
"Saurav Musunuru",
"Jay N. Paranjape",
"Rahul Kumar Dubey",
"Vijendran G. Venkoparao"
] | 2020-06-04T09:30:29Z | 2020-06-04T09:30:29Z |
2006.02750 | Constrained Reinforcement Learning for Dynamic Optimization under
Uncertainty | Dynamic real-time optimization (DRTO) is a challenging task due to the fact that optimal operating conditions must be computed in real time. The main bottleneck in the industrial application of DRTO is the presence of uncertainty. Many stochastic systems present the following obstacles: 1) plant-model mismatch, 2) process disturbances, 3) risks in violation of process constraints. To accommodate these difficulties, we present a constrained reinforcement learning (RL) based approach. RL naturally handles the process uncertainty by computing an optimal feedback policy. However, no state constraints can be introduced intuitively. To address this problem, we present a chance-constrained RL methodology. We use chance constraints to guarantee the probabilistic satisfaction of process constraints, which is accomplished by introducing backoffs, such that the optimal policy and backoffs are computed simultaneously. Backoffs are adjusted using the empirical cumulative distribution function to guarantee the satisfaction of a joint chance constraint. The advantage and performance of this strategy are illustrated through a stochastic dynamic bioprocess optimization problem, to produce sustainable high-value bioproducts. | http://arxiv.org/pdf/2006.02750v1 | [
"Panagiotis Petsagkourakis",
"Ilya Orson Sandoval",
"Eric Bradford",
"Dongda Zhang",
"Ehecatl Antonio del Río Chanona"
] | 2020-06-04T10:17:35Z | 2020-06-04T10:17:35Z |
2006.02765 | Rates of Convergence for Laplacian Semi-Supervised Learning with Low
Labeling Rates | We study graph-based Laplacian semi-supervised learning at low labeling rates. Laplacian learning uses harmonic extension on a graph to propagate labels. At very low label rates, Laplacian learning becomes degenerate and the solution is roughly constant with spikes at each labeled data point. Previous work has shown that this degeneracy occurs when the number of labeled data points is finite while the number of unlabeled data points tends to infinity. In this work we allow the number of labeled data points to grow to infinity with the number of unlabeled data points. Our results show that for a random geometric graph with length scale $\varepsilon>0$ and labeling rate $\beta>0$, if $\beta \ll \varepsilon^2$ then the solution becomes degenerate and spikes form, and if $\beta \gg \varepsilon^2$ then Laplacian learning is well-posed and consistent with a continuum Laplace equation. Furthermore, in the well-posed setting we prove quantitative error estimates of $O(\varepsilon\beta^{-1/2})$ for the difference between the solutions of the discrete problem and continuum PDE, up to logarithmic factors. We also study $p$-Laplacian regularization and show the same degeneracy result when $\beta \ll \varepsilon^p$. The proofs of our well-posedness results use the random walk interpretation of Laplacian learning and PDE arguments, while the proofs of the ill-posedness results use $\Gamma$-convergence tools from the calculus of variations. We also present numerical results on synthetic and real data to illustrate our results. | http://arxiv.org/pdf/2006.02765v1 | [
"Jeff Calder",
"Dejan Slepčev",
"Matthew Thorpe"
] | 2020-06-04T10:46:01Z | 2020-06-04T10:46:01Z |
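For reference, the harmonic extension underlying Laplacian learning in the entry above (2006.02765) can be written in a few lines: fix one-hot values on the labeled nodes and solve the graph Laplacian system on the rest. A small dense sketch (for a weight matrix `W`; sparse solvers would be used at scale):

```python
import numpy as np

def laplacian_learning(W, labeled_idx, labels, n_classes):
    """Harmonic extension: minimize sum_ij w_ij (u_i - u_j)^2 subject to
    u matching one-hot labels on labeled nodes; solve L_uu u_u = -L_ul u_l."""
    n = W.shape[0]
    L = np.diag(W.sum(axis=1)) - W                     # graph Laplacian
    unlabeled_idx = np.setdiff1d(np.arange(n), labeled_idx)
    Y = np.zeros((len(labeled_idx), n_classes))
    Y[np.arange(len(labeled_idx)), labels] = 1.0       # one-hot boundary data
    L_uu = L[np.ix_(unlabeled_idx, unlabeled_idx)]
    L_ul = L[np.ix_(unlabeled_idx, labeled_idx)]
    U = np.linalg.solve(L_uu, -L_ul @ Y)               # harmonic extension
    return unlabeled_idx, U.argmax(axis=1)
```

At very low labeling rates this solution develops the near-constant-with-spikes degeneracy the paper analyzes.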
2006.02767 | Seq2Seq AI Chatbot with Attention Mechanism | Intelligent Conversational Agent development using Artificial Intelligence or Machine Learning techniques is an interesting problem in the field of Natural Language Processing. With the rise of deep learning, earlier rule-based and retrieval-based dialogue models were quickly replaced by end-to-end trainable neural networks. | http://arxiv.org/pdf/2006.02767v1 | [
"Abonia Sojasingarayar"
] | 2020-06-04T10:54:43Z | 2020-06-04T10:54:43Z |
2006.02768 | Weight Pruning via Adaptive Sparsity Loss | Pruning neural networks has regained interest in recent years as a means to compress state-of-the-art deep neural networks and enable their deployment on resource-constrained devices. In this paper, we propose a robust compressive learning framework that efficiently prunes network parameters during training with minimal computational overhead. We incorporate fast mechanisms to prune individual layers and build upon these to automatically prune the entire network under a user-defined budget constraint. Key to our end-to-end network pruning approach is the formulation of an intuitive and easy-to-implement adaptive sparsity loss that is used to explicitly control sparsity during training, enabling efficient budget-aware optimization. Extensive experiments demonstrate the effectiveness of the proposed framework for image classification on the CIFAR and ImageNet datasets using different architectures, including AlexNet, ResNets and Wide ResNets. | http://arxiv.org/pdf/2006.02768v1 | [
"George Retsinas",
"Athena Elafrou",
"Georgios Goumas",
"Petros Maragos"
] | 2020-06-04T10:55:16Z | 2020-06-04T10:55:16Z |
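The exact adaptive sparsity loss of 2006.02768 is not given in the abstract; the sketch below is one plausible differentiable sparsity penalty of the kind the entry describes, where a soft count of near-zero weights is pushed toward a user-defined budget. The magnitude threshold and sigmoid sharpness are illustrative assumptions, not the paper's formulation.

```python
import torch

def sparsity_penalty(weights, target_sparsity, scale=100.0):
    """Hypothetical adaptive sparsity loss: penalize the gap between the
    tensor's current soft sparsity and a user-defined budget. A sharp
    sigmoid around a small magnitude threshold gives a differentiable
    approximation of the fraction of near-zero weights."""
    threshold = 1e-2
    soft_zero = torch.sigmoid(scale * (threshold - weights.abs()))
    current_sparsity = soft_zero.mean()
    return (current_sparsity - target_sparsity) ** 2
```

A term like this would be added to the task loss per layer, letting the optimizer trade accuracy against the sparsity budget during training.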
2004.09141 | Spatial Action Maps for Mobile Manipulation | Typical end-to-end formulations for learning robotic navigation involve predicting a small set of steering command actions (e.g., step forward, turn left, turn right, etc.) from images of the current state (e.g., a bird's-eye view of a SLAM reconstruction). Instead, we show that it can be advantageous to learn with dense action representations defined in the same domain as the state. In this work, we present "spatial action maps," in which the set of possible actions is represented by a pixel map (aligned with the input image of the current state), where each pixel represents a local navigational endpoint at the corresponding scene location. Using ConvNets to infer spatial action maps from state images, action predictions are thereby spatially anchored on local visual features in the scene, enabling significantly faster learning of complex behaviors for mobile manipulation tasks with reinforcement learning. In our experiments, we task a robot with pushing objects to a goal location, and find that policies learned with spatial action maps achieve much better performance than traditional alternatives. | http://arxiv.org/abs/2004.09141v2 | [
"Jimmy Wu",
"Xingyuan Sun",
"Andy Zeng",
"Shuran Song",
"Johnny Lee",
"Szymon Rusinkiewicz",
"Thomas Funkhouser"
] | 2020-06-04T10:56:49Z | 2020-04-20T09:06:10Z |
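The action representation in the spatial-action-maps entry above (2004.09141) reduces acting to picking a pixel in a Q-value map aligned with the state image. A minimal epsilon-greedy selection sketch (not the authors' code):

```python
import numpy as np

def select_spatial_action(q_map, epsilon=0.1, rng=None):
    """Pick a pixel from a dense per-pixel Q-value map; the chosen (row, col)
    is a local navigational endpoint at that scene location."""
    rng = rng or np.random.default_rng()
    if rng.random() < epsilon:
        idx = rng.integers(q_map.size)      # explore: random pixel
    else:
        idx = int(q_map.argmax())           # exploit: best-scored pixel
    return np.unravel_index(idx, q_map.shape)
```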
1909.12200 | Scaling data-driven robotics with reward sketching and batch
reinforcement learning | We present a framework for data-driven robotics that makes use of a large dataset of recorded robot experience and scales to several tasks using learned reward functions. We show how to apply this framework to accomplish three different object manipulation tasks on a real robot platform. Given demonstrations of a task together with task-agnostic recorded experience, we use a special form of human annotation as supervision to learn a reward function, which enables us to deal with real-world tasks where the reward signal cannot be acquired directly. Learned rewards are used in combination with a large dataset of experience from different tasks to learn a robot policy offline using batch RL. We show that using our approach it is possible to train agents to perform a variety of challenging manipulation tasks including stacking rigid objects and handling cloth. | http://arxiv.org/pdf/1909.12200v3 | [
"Serkan Cabi",
"Sergio Gómez Colmenarejo",
"Alexander Novikov",
"Ksenia Konyushkova",
"Scott Reed",
"Rae Jeong",
"Konrad Zolna",
"Yusuf Aytar",
"David Budden",
"Mel Vecerik",
"Oleg Sushkov",
"David Barker",
"Jonathan Scholz",
"Misha Denil",
"Nando de Freitas",
"Ziyu Wang"
] | 2020-06-04T11:00:06Z | 2019-09-26T15:45:23Z |
2006.02155 | MLOS: An Infrastructure for Automated Software Performance Engineering | Developing modern systems software is a complex task that combines business logic programming and Software Performance Engineering (SPE). The latter is an experimental and labor-intensive activity focused on optimizing the system for a given hardware, software, and workload (hw/sw/wl) context. Today's SPE is performed during build/release phases by specialized teams, and cursed by: 1) lack of standardized and automated tools, 2) significant repeated work as hw/sw/wl context changes, 3) fragility induced by a "one-size-fits-all" tuning (where improvements on one workload or component may impact others). The net result: despite costly investments, system software is often outside its optimal operating point - anecdotally leaving 30% to 40% of performance on the table. The recent developments in Data Science (DS) hint at an opportunity: combining DS tooling and methodologies with a new developer experience to transform the practice of SPE. In this paper we present MLOS, an ML-powered infrastructure and methodology to democratize and automate Software Performance Engineering. MLOS enables continuous, instance-level, robust, and trackable systems optimization. MLOS is being developed and employed within Microsoft to optimize SQL Server performance. Early results indicated that component-level optimizations can lead to 20%-90% improvements when custom-tuning for a specific hw/sw/wl, hinting at a significant opportunity. However, several research challenges remain that will require community involvement. To this end, we are in the process of open-sourcing the MLOS core infrastructure, and we are engaging with academic institutions to create an educational program around Software 2.0 and MLOS ideas. | http://arxiv.org/abs/2006.02155v2 | [
"Carlo Curino",
"Neha Godwal",
"Brian Kroth",
"Sergiy Kuryata",
"Greg Lapinski",
"Siqi Liu",
"Slava Oks",
"Olga Poppe",
"Adam Smiechowski",
"Ed Thayer",
"Markus Weimer",
"Yiwen Zhu"
] | 2020-06-04T11:10:53Z | 2020-06-01T22:38:30Z |
2006.02797 | Overcoming Overfitting and Large Weight Update Problem in Linear
Rectifiers: Thresholded Exponential Rectified Linear Units | In the past few years, rectified linear unit activation functions have shown their significance in neural networks, surpassing the performance of sigmoid activations. RELU (Nair & Hinton, 2010), ELU (Clevert et al., 2015), PRELU (He et al., 2015), LRELU (Maas et al., 2013), SRELU (Jin et al., 2016), and ThresholdedRELU each have their own significance over the others in some aspect. Most of the time these activation functions suffer from a bias shift problem due to non-zero output mean, and from a high weight update problem in deep complex networks due to unit gradient, which result in slower training and high variance in model prediction, respectively. In this paper, we propose the "Thresholded Exponential Rectified Linear Unit" (TERELU) activation function, which works better at alleviating the overfitting and large weight update problems. Along with alleviating the overfitting problem, this method also provides a good amount of non-linearity as compared to other linear rectifiers. We show better performance on various datasets using neural networks with the TERELU activation method compared to other activations. | http://arxiv.org/pdf/2006.02797v1 | [
"Vijay Pandey"
] | 2020-06-04T11:55:47Z | 2020-06-04T11:55:47Z |
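The TERELU formula is not spelled out in the abstract above, so the following is only an assumed form consistent with its description: zero below the origin, identity in a middle range, and exponential (ELU-like) saturation above a threshold so that very large pre-activations no longer receive unit gradients.

```python
import numpy as np

def terelu(x, theta=1.0, alpha=1.0):
    """Assumed thresholded-exponential rectifier: 0 for x <= 0, identity on
    (0, theta], and theta + alpha*(1 - exp(-(x - theta))) saturating above."""
    x = np.asarray(x, dtype=float)
    saturated = theta + alpha * (1.0 - np.exp(-(np.maximum(x, theta) - theta)))
    return np.where(x <= 0, 0.0, np.where(x > theta, saturated, x))
```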
2006.02818 | Refined Continuous Control of DDPG Actors via Parametrised Activation | In this paper, we propose enhancing actor-critic reinforcement learning agents by parameterising the final actor layer which produces the actions in order to accommodate the behaviour discrepancy of different actuators, under different load conditions during interaction with the environment. We propose branching the action producing layer in the actor to learn the tuning parameter controlling the activation layer (e.g. Tanh and Sigmoid). The learned parameters are then used to create tailored activation functions for each actuator. We ran experiments on three OpenAI Gym environments, i.e. Pendulum-v0, LunarLanderContinuous-v2 and BipedalWalker-v2. Results have shown an average of 23.15% and 33.80% increase in total episode reward of the LunarLanderContinuous-v2 and BipedalWalker-v2 environments, respectively. There was no significant improvement in Pendulum-v0 environment but the proposed method produces a more stable actuation signal compared to the state-of-the-art method. The proposed method allows the reinforcement learning actor to produce more robust actions that accommodate the discrepancy in the actuators' response functions. This is particularly useful for real life scenarios where actuators exhibit different response functions depending on the load and the interaction with the environment. This also simplifies the transfer learning problem by fine tuning the parameterised activation layers instead of retraining the entire policy every time an actuator is replaced. Finally, the proposed method would allow better accommodation to biological actuators (e.g. muscles) in biomechanical systems. | http://arxiv.org/pdf/2006.02818v1 | [
"Mohammed Hossny",
"Julie Iskander",
"Mohammed Attia",
"Khaled Saleh"
] | 2020-06-04T12:27:46Z | 2020-06-04T12:27:46Z |
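A sketch of the branched actor head described in the DDPG entry above (2006.02818): one branch produces pre-activations, another learns a per-actuator parameter that reshapes the final Tanh. Layer sizes and the softplus positivity constraint are illustrative choices, not the paper's.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ParametrisedTanhActor(nn.Module):
    """Actor head that also predicts a per-actuator slope k, producing
    actions tanh(k * z) so each actuator gets a tailored activation."""
    def __init__(self, state_dim, action_dim, hidden=256):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.pre_action = nn.Linear(hidden, action_dim)   # pre-activation z
        self.slope = nn.Linear(hidden, action_dim)        # learned-k branch
    def forward(self, state):
        h = self.body(state)
        k = F.softplus(self.slope(h))                     # keep slopes positive
        return torch.tanh(k * self.pre_action(h))
```

Fine-tuning only the slope branch when an actuator is swapped is the transfer-learning simplification the abstract alludes to.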
1910.02787 | Quantile QT-Opt for Risk-Aware Vision-Based Robotic Grasping | The distributional perspective on reinforcement learning (RL) has given rise to a series of successful Q-learning algorithms, resulting in state-of-the-art performance in arcade game environments. However, it has not yet been analyzed how these findings from a discrete setting translate to complex practical applications characterized by noisy, high dimensional and continuous state-action spaces. In this work, we propose Quantile QT-Opt (Q2-Opt), a distributional variant of the recently introduced distributed Q-learning algorithm for continuous domains, and examine its behaviour in a series of simulated and real vision-based robotic grasping tasks. The absence of an actor in Q2-Opt allows us to directly draw a parallel to the previous discrete experiments in the literature without the additional complexities induced by an actor-critic architecture. We demonstrate that Q2-Opt achieves a superior vision-based object grasping success rate, while also being more sample efficient. The distributional formulation also allows us to experiment with various risk distortion metrics that give us an indication of how robots can concretely manage risk in practice using a Deep RL control policy. As an additional contribution, we perform batch RL experiments in our virtual environment and compare them with the latest findings from discrete settings. Surprisingly, we find that the previous batch RL findings from the literature obtained on arcade game environments do not generalise to our setup. | http://arxiv.org/abs/1910.02787v3 | [
"Cristian Bodnar",
"Adrian Li",
"Karol Hausman",
"Peter Pastor",
"Mrinal Kalakrishnan"
] | 2020-06-04T12:56:16Z | 2019-10-01T22:12:00Z |
2006.02879 | Auto-decoding Graphs | We present an approach to synthesizing new graph structures from empirically specified distributions. The generative model is an auto-decoder that learns to synthesize graphs from latent codes. The graph synthesis model is learned jointly with an empirical distribution over the latent codes. Graphs are synthesized using self-attention modules that are trained to identify likely connectivity patterns. Graph-based normalizing flows are used to sample latent codes from the distribution learned by the auto-decoder. The resulting model combines accuracy and scalability. On benchmark datasets of large graphs, the presented model outperforms the state of the art by a factor of 1.5 in mean accuracy and average rank across at least three different graph statistics, with a 2x speedup during inference. | http://arxiv.org/pdf/2006.02879v1 | [
"Sohil Atul Shah",
"Vladlen Koltun"
] | 2020-06-04T14:23:01Z | 2020-06-04T14:23:01Z |
2006.03476 | COVID-19 diagnosis by routine blood tests using machine learning | Physicians taking care of patients with coronavirus disease (COVID-19) have described different changes in routine blood parameters. However, these changes hinder them from performing a COVID-19 diagnosis. We constructed a machine learning predictive model for COVID-19 diagnosis. The model was based and cross-validated on the routine blood tests of 5,333 patients with various bacterial and viral infections, and 160 COVID-19-positive patients. We selected an operational ROC point at a sensitivity of 81.9% and specificity of 97.9%. The cross-validated area under the curve (AUC) was 0.97. The five most useful routine blood parameters for COVID-19 diagnosis according to the feature importance scoring of the XGBoost algorithm were MCHC, eosinophil count, albumin, INR, and prothrombin activity percentage. t-SNE visualization showed that the blood parameters of the patients with a severe COVID-19 course are more like the parameters of bacterial than viral infection. The reported diagnostic accuracy is at least comparable and probably complementary to RT-PCR and chest CT studies. Patients with fever, cough, myalgia, and other symptoms can now have initial routine blood tests assessed by our diagnostic tool. All patients with a positive COVID-19 prediction would then undergo standard RT-PCR studies to confirm the diagnosis. We believe that our results present a significant contribution to improvements in COVID-19 diagnosis. | http://arxiv.org/abs/2006.03476v1 | [
"Matjaž Kukar",
"Gregor Gunčar",
"Tomaž Vovko",
"Simon Podnar",
"Peter Černelč",
"Miran Brvar",
"Mateja Zalaznik",
"Mateja Notar",
"Sašo Moškon",
"Marko Notar"
] | 2020-06-04T14:57:17Z | 2020-06-04T14:57:17Z |
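The modeling pipeline in the COVID-19 entry above has a simple shape: cross-validated gradient boosting on routine blood tests followed by feature-importance ranking. A hedged sketch with hypothetical inputs (`X`, `y`, and `feature_names` are assumed to come from the reader's own data loading; hyperparameters are placeholders, not the paper's):

```python
import numpy as np
import xgboost as xgb
from sklearn.model_selection import cross_val_score

def rank_blood_markers(X, y, feature_names):
    """Cross-validate an XGBoost classifier on blood-test features and
    rank markers by the model's feature importances."""
    model = xgb.XGBClassifier(n_estimators=300, max_depth=4)
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    model.fit(X, y)
    order = np.argsort(model.feature_importances_)[::-1]
    return auc, [feature_names[i] for i in order]
```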
1911.08252 | IC-Network: Efficient Structure for Convolutional Neural Networks | Neural networks have been widely used, and most networks achieve excellent performance by stacking certain types of basic units. Compared to increasing the depth and width of the network, designing more effective basic units has become an important research topic. Inspired by the elastic collision model in physics, we present a universal structure that could be integrated into the existing network structures to speed up the training process and increase their generalization abilities. We term this structure the "Inter-layer Collision" (IC) structure. We built two kinds of basic computational units (IC layer and IC block) that compose the convolutional neural networks (CNNs) by combining the IC structure with the convolution operation. Compared to traditional convolutions, both of the proposed computational units have a stronger non-linear representation ability and can filter features useful for a given task. Using these computational units to build networks, we bring significant improvements in performance for existing state-of-the-art CNNs. In the ImageNet experiment, we integrate the IC block into ResNet-50 and reduce the top-1 error from 22.85% to 21.49%, which is also lower than the top-1 error of ResNet-100 (21.75%). | http://arxiv.org/pdf/1911.08252v4 | [
"Junyi An",
"Fengshan Liu",
"Jian Zhao",
"Furao Shen"
] | 2020-06-04T15:05:47Z | 2019-11-19T13:26:06Z |
2006.02924 | Scaling Distributed Training with Adaptive Summation | Stochastic gradient descent (SGD) is an inherently sequential training algorithm--computing the gradient at batch $i$ depends on the model parameters learned from batch $i-1$. Prior approaches that break this dependence do not honor it (e.g., they sum the gradients for each batch, which is not what sequential SGD would do) and thus potentially suffer from poor convergence. This paper introduces a novel method to combine gradients called Adasum (for adaptive sum) that converges faster than prior work. Adasum is easy to implement, almost as efficient as simply summing gradients, and is integrated into the open-source toolkit Horovod. This paper first provides a formal justification for Adasum and then empirically demonstrates Adasum is more accurate than prior gradient accumulation methods. It then introduces a series of case-studies to show Adasum works with multiple frameworks (TensorFlow and PyTorch) and scales multiple optimizers (Momentum-SGD, Adam, and LAMB) to larger batch-sizes while still giving good downstream accuracy. Finally, it proves that Adasum converges. To summarize, Adasum scales Momentum-SGD on the MLPerf Resnet50 benchmark to 64K examples before communication (no MLPerf v0.5 entry converged with more than 16K), the Adam optimizer to 64K examples before communication on BERT-LARGE (prior work showed Adam stopped scaling at 16K), and the LAMB optimizer to 128K before communication on BERT-LARGE (prior work used 64K), all while maintaining downstream accuracy metrics. Finally, if a user does not need to scale, we show LAMB with Adasum on BERT-LARGE converges in 30% fewer steps than the baseline. | http://arxiv.org/pdf/2006.02924v1 | [
"Saeed Maleki",
"Madan Musuvathi",
"Todd Mytkowicz",
"Olli Saarikivi",
"Tianju Xu",
"Vadim Eksarevskiy",
"Jaliya Ekanayake",
"Emad Barsoum"
] | 2020-06-04T15:08:20Z | 2020-06-04T15:08:20Z |
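The Adasum entry above combines two gradients so that orthogonal gradients are summed while parallel ones are averaged. The sketch below implements the commonly cited form of that combination rule for a pair of flattened gradient vectors; the exact coefficients should be checked against the paper, and in a distributed setting the rule is applied pairwise in a reduction tree.

```python
import numpy as np

def adasum(g1, g2, eps=1e-12):
    """Adaptive summation of two gradient vectors: the projection of each
    gradient onto the other is halved, so identical gradients average
    while orthogonal gradients simply add."""
    dot = float(np.dot(g1, g2))
    return ((1.0 - dot / (2.0 * np.dot(g1, g1) + eps)) * g1
            + (1.0 - dot / (2.0 * np.dot(g2, g2) + eps)) * g2)
```

Sanity check: with g1 == g2 the result is g1 (an average); with g1 ⊥ g2 it is g1 + g2 (a sum), matching the abstract's motivation.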
1906.00588 | Quantifying Point-Prediction Uncertainty in Neural Networks via Residual
Estimation with an I/O Kernel | Neural Networks (NNs) have been extensively used for a wide spectrum of real-world regression tasks, where the goal is to predict a numerical outcome such as revenue, effectiveness, or a quantitative result. In many such tasks, the point prediction is not enough: the uncertainty (i.e. risk or confidence) of that prediction must also be estimated. Standard NNs, which are most often used in such tasks, do not provide uncertainty information. Existing approaches address this issue by combining Bayesian models with NNs, but these models are hard to implement, more expensive to train, and usually do not predict as accurately as standard NNs. In this paper, a new framework (RIO) is developed that makes it possible to estimate uncertainty in any pretrained standard NN. The behavior of the NN is captured by modeling its prediction residuals with a Gaussian Process, whose kernel includes both the NN's input and its output. The framework is evaluated in twelve real-world datasets, where it is found to (1) provide reliable estimates of uncertainty, (2) reduce the error of the point predictions, and (3) scale well to large datasets. Given that RIO can be applied to any standard NN without modifications to model architecture or training pipeline, it provides an important ingredient for building real-world NN applications. | http://arxiv.org/pdf/1906.00588v5 | [
"Xin Qiu",
"Elliot Meyerson",
"Risto Miikkulainen"
] | 2020-06-04T15:26:07Z | 2019-06-03T06:08:57Z |
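A compact approximation of the RIO idea from 1906.00588: fit a Gaussian Process to a pretrained network's residuals, with a kernel that sees both the input and the network's own output. Here the paper's composite I/O kernel is approximated by a single RBF over concatenated features, which is a simplifying assumption.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def fit_residual_gp(nn_predict, X_train, y_train):
    """Fit a GP to NN residuals over joint [input, NN-output] features."""
    f = nn_predict(X_train).reshape(-1, 1)       # the pretrained NN's outputs
    Z = np.hstack([X_train, f])                  # joint input/output features
    residuals = y_train - f.ravel()              # what the NN got wrong
    gp = GaussianProcessRegressor(RBF() + WhiteKernel(), normalize_y=True)
    gp.fit(Z, residuals)
    return gp
```

At test time, `gp.predict` with `return_std=True` on the same concatenated features yields a residual correction plus an uncertainty estimate around the NN's point prediction.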
1808.07647 | Machine Learning at the Edge: A Data-Driven Architecture with
Applications to 5G Cellular Networks | The fifth generation of cellular networks (5G) will rely on edge cloud deployments to satisfy the ultra-low latency demand of future applications. In this paper, we argue that such deployments can also be used to enable advanced data-driven and Machine Learning (ML) applications in mobile networks. We propose an edge-controller-based architecture for cellular networks and evaluate its performance with real data from hundreds of base stations of a major U.S. operator. In this regard, we will provide insights on how to dynamically cluster and associate base stations and controllers, according to the global mobility patterns of the users. Then, we will describe how the controllers can be used to run ML algorithms to predict the number of users in each base station, and a use case in which these predictions are exploited by a higher-layer application to route vehicular traffic according to network Key Performance Indicators (KPIs). We show that the prediction accuracy improves when based on machine learning algorithms that rely on the controllers' view and, consequently, on the spatial correlation introduced by the user mobility, with respect to when the prediction is based only on the local data of each single base station. | http://arxiv.org/abs/1808.07647v4 | [
"Michele Polese",
"Rittwik Jana",
"Velin Kounev",
"Ke Zhang",
"Supratim Deb",
"Michele Zorzi"
] | 2020-06-04T15:33:19Z | 2018-08-23T07:06:41Z |
2006.02954 | Handling missing data in model-based clustering | Gaussian Mixture models (GMMs) are a powerful tool for clustering, classification and density estimation when clustering structures are embedded in the data. The presence of missing values can largely impact the GMMs estimation process, thus handling missing data turns out to be a crucial point in clustering, classification and density estimation. Several techniques have been developed to impute the missing values before model estimation. Among these, multiple imputation is a simple and useful general approach to handle missing data. In this paper we propose two different methods to fit Gaussian mixtures in the presence of missing data. Both methods use a variant of the Monte Carlo Expectation-Maximisation (MCEM) algorithm for data augmentation. Thus, multiple imputations are performed during the E-step, followed by the standard M-step for a given eigen-decomposed component-covariance matrix. We show that the proposed methods outperform the multiple imputation approach, both in terms of clusters identification and density estimation. | http://arxiv.org/pdf/2006.02954v1 | [
"Alessio Serafini",
"Thomas Brendan Murphy",
"Luca Scrucca"
] | 2020-06-04T15:36:31Z | 2020-06-04T15:36:31Z |
2006.02957 | Sparsity in Reservoir Computing Neural Networks | Reservoir Computing (RC) is a well-known strategy for designing Recurrent Neural Networks that is characterized by strikingly efficient training. The crucial aspect of RC is to properly instantiate the hidden recurrent layer that serves as dynamical memory to the system. In this respect, the common recipe is to create a pool of randomly and sparsely connected recurrent neurons. While the aspect of sparsity in the design of RC systems has been debated in the literature, it is nowadays understood mainly as a way to enhance the efficiency of computation, exploiting sparse matrix operations. In this paper, we empirically investigate the role of sparsity in RC network design under the perspective of the richness of the developed temporal representations. We analyze both sparsity in the recurrent connections, and in the connections from the input to the reservoir. Our results point out that sparsity, in particular in input-reservoir connections, has a major role in developing internal temporal representations that have a longer short-term memory of past inputs and a higher dimension. | http://arxiv.org/pdf/2006.02957v1 | [
"Claudio Gallicchio"
] | 2020-06-04T15:38:17Z | 2020-06-04T15:38:17Z |
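As a reference point for the sparsity discussion above (2006.02957), this is how a sparse echo state reservoir with sparse input connections is typically instantiated; the density values and spectral-radius rescaling are conventional choices, not the paper's experimental settings.

```python
import numpy as np

def build_reservoir(n_in, n_res, density=0.1, in_density=0.1,
                    spectral_radius=0.9, seed=0):
    """Random sparse recurrent weights rescaled to a target spectral
    radius, plus a sparse input-to-reservoir map."""
    rng = np.random.default_rng(seed)
    W = rng.uniform(-1, 1, (n_res, n_res))
    W *= rng.random((n_res, n_res)) < density          # sparsify recurrences
    W *= spectral_radius / (np.max(np.abs(np.linalg.eigvals(W))) + 1e-12)
    W_in = rng.uniform(-1, 1, (n_res, n_in))
    W_in *= rng.random((n_res, n_in)) < in_density     # sparsify input map
    return W, W_in
```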
1803.03241 | Efficient Algorithms for Outlier-Robust Regression | We give the first polynomial-time algorithm for performing linear or polynomial regression resilient to adversarial corruptions in both examples and labels. Given a sufficiently large (polynomial-size) training set drawn i.i.d. from distribution D and subsequently corrupted on some fraction of points, our algorithm outputs a linear function whose squared error is close to the squared error of the best-fitting linear function with respect to D, assuming that the marginal distribution of D over the input space is \emph{certifiably hypercontractive}. This natural property is satisfied by many well-studied distributions such as Gaussian, strongly log-concave distributions, and the uniform distribution on the hypercube, among others. We also give a simple statistical lower bound showing that some distributional assumption is necessary to succeed in this setting. These results are the first of their kind and were not known to be even information-theoretically possible prior to our work. Our approach is based on the sum-of-squares (SoS) method and is inspired by the recent applications of the method for parameter recovery problems in unsupervised learning. Our algorithm can be seen as a natural convex relaxation of the following conceptually simple non-convex optimization problem: find a linear function and a large subset of the input corrupted sample such that the least squares loss of the function over the subset is minimized over all possible large subsets. | http://arxiv.org/pdf/1803.03241v3 | [
"Adam Klivans",
"Pravesh K. Kothari",
"Raghu Meka"
] | 2020-06-04T15:42:45Z | 2018-03-08T18:30:31Z |
1912.07398 | Accuracy comparison across face recognition algorithms: Where are we on
measuring race bias? | Previous generations of face recognition algorithms differ in accuracy for images of different races (race bias). Here, we present the possible underlying factors (data-driven and scenario modeling) and methodological considerations for assessing race bias in algorithms. We discuss data driven factors (e.g., image quality, image population statistics, and algorithm architecture), and scenario modeling factors that consider the role of the "user" of the algorithm (e.g., threshold decisions and demographic constraints). To illustrate how these issues apply, we present data from four face recognition algorithms (a previous-generation algorithm and three deep convolutional neural networks, DCNNs) for East Asian and Caucasian faces. First, dataset difficulty affected both overall recognition accuracy and race bias, such that race bias increased with item difficulty. Second, for all four algorithms, the degree of bias varied depending on the identification decision threshold. To achieve equal false accept rates (FARs), East Asian faces required higher identification thresholds than Caucasian faces, for all algorithms. Third, demographic constraints on the formulation of the distributions used in the test, impacted estimates of algorithm accuracy. We conclude that race bias needs to be measured for individual applications and we provide a checklist for measuring this bias in face recognition algorithms. | http://arxiv.org/pdf/1912.07398v2 | [
"Jacqueline G. Cavazos",
"P. Jonathon Phillips",
"Carlos D. Castillo",
"Alice J. O'Toole"
] | 2020-06-04T15:50:26Z | 2019-12-16T14:27:10Z |
2006.02983 | Median regression with differential privacy | Median regression analysis has robustness properties which make it attractive compared with regression based on the mean, while differential privacy can protect individual privacy during statistical analysis of certain datasets. In this paper, three privacy preserving methods are proposed for median regression. The first algorithm is based on a finite smoothing method, the second provides an iterative way and the last one further employs the greedy coordinate descent approach. Privacy preserving properties of these three methods are all proved. Accuracy bound or convergence properties of these algorithms are also provided. Numerical calculation shows that the first method has better accuracy than the others when the sample size is small. When the sample size becomes larger, the first method needs more time while the second method needs less time with well-matched accuracy. For the third method, it costs less time in both cases, while it highly depends on step size. | http://arxiv.org/pdf/2006.02983v1 | [
"E Chen",
"Ying Miao",
"Yu Tang"
] | 2020-06-04T16:14:39Z | 2020-06-04T16:14:39Z |
2006.02986 | A Novel Update Mechanism for Q-Networks Based On Extreme Learning
Machines | Reinforcement learning is a popular machine learning paradigm which can find near optimal solutions to complex problems. Most often, these procedures involve function approximation using neural networks with gradient based updates to optimise weights for the problem being considered. While this common approach generally works well, there are other update mechanisms which are largely unexplored in reinforcement learning. One such mechanism is Extreme Learning Machines. These were initially proposed to drastically improve the training speed of neural networks and have since seen many applications. Here we attempt to apply extreme learning machines to a reinforcement learning problem in the same manner as gradient based updates. This new algorithm is called Extreme Q-Learning Machine (EQLM). We compare its performance to a typical Q-Network on the cart-pole task - a benchmark reinforcement learning problem - and show EQLM has similar long-term learning performance to a Q-Network. | http://arxiv.org/pdf/2006.02986v1 | [
"Callum Wilson",
"Annalisa Riccardi",
"Edmondo Minisci"
] | 2020-06-04T16:16:13Z | 2020-06-04T16:16:13Z |
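The core ELM mechanic referenced in the EQLM entry above replaces gradient descent with a closed-form least-squares fit of the output layer over fixed random hidden features. A minimal Q-network sketch along those lines (hyperparameters illustrative):

```python
import numpy as np

class ELMQNetwork:
    """Extreme-learning-machine Q-network sketch: a frozen random hidden
    layer, with only the linear output weights fitted in closed form
    (regularized least squares) against bootstrapped Q-targets."""
    def __init__(self, state_dim, n_actions, n_hidden=100, reg=1e-3, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(state_dim, n_hidden))  # fixed random weights
        self.b = rng.normal(size=n_hidden)
        self.beta = np.zeros((n_hidden, n_actions))      # trainable output layer
        self.reg = reg
    def features(self, states):
        return np.tanh(states @ self.W + self.b)
    def predict(self, states):
        return self.features(states) @ self.beta
    def fit(self, states, q_targets):
        H = self.features(states)
        A = H.T @ H + self.reg * np.eye(H.shape[1])
        self.beta = np.linalg.solve(A, H.T @ q_targets)  # ridge solution
```

Calling `fit` on batches of (state, Q-target) pairs replaces the usual gradient update in the Q-learning loop.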
2006.02991 | MHVAE: a Human-Inspired Deep Hierarchical Generative Model for
Multimodal Representation Learning | Humans are able to create rich representations of their external reality. Their internal representations allow for cross-modality inference, where available perceptions can induce the perceptual experience of missing input modalities. In this paper, we contribute the Multimodal Hierarchical Variational Auto-encoder (MHVAE), a hierarchical multimodal generative model for representation learning. Inspired by human cognitive models, the MHVAE is able to learn modality-specific distributions, of an arbitrary number of modalities, and a joint-modality distribution, responsible for cross-modality inference. We formally derive the model's evidence lower bound and propose a novel methodology to approximate the joint-modality posterior based on modality-specific representation dropout. We evaluate the MHVAE on standard multimodal datasets. Our model performs on par with other state-of-the-art generative models regarding joint-modality reconstruction from arbitrary input modalities and cross-modality inference. | http://arxiv.org/pdf/2006.02991v1 | [
"Miguel Vasco",
"Francisco S. Melo",
"Ana Paiva"
] | 2020-06-04T16:24:00Z | 2020-06-04T16:24:00Z |
2006.03465 | Visual Transfer for Reinforcement Learning via Wasserstein Domain
Confusion | We introduce Wasserstein Adversarial Proximal Policy Optimization (WAPPO), a novel algorithm for visual transfer in Reinforcement Learning that explicitly learns to align the distributions of extracted features between a source and target task. WAPPO approximates and minimizes the Wasserstein-1 distance between the distributions of features from source and target domains via a novel Wasserstein Confusion objective. WAPPO outperforms the prior state-of-the-art in visual transfer and successfully transfers policies across Visual Cartpole and two instantiations of 16 OpenAI Procgen environments. | http://arxiv.org/pdf/2006.03465v1 | [
"Josh Roy",
"George Konidaris"
] | 2020-06-04T16:31:26Z | 2020-06-04T16:31:26Z |
2006.03000 | Differentiable Linear Bandit Algorithm | Upper Confidence Bound (UCB) is arguably the most commonly used method for linear multi-arm bandit problems. While conceptually and computationally simple, this method highly relies on the confidence bounds, failing to strike the optimal exploration-exploitation trade-off if these bounds are not properly set. In the literature, confidence bounds are typically derived from concentration inequalities based on assumptions on the reward distribution, e.g., sub-Gaussianity. The validity of these assumptions however is unknown in practice. In this work, we aim at learning the confidence bound in a data-driven fashion, making it adaptive to the actual problem structure. Specifically, noting that existing UCB-typed algorithms are not differentiable with respect to the confidence bound, we first propose a novel differentiable linear bandit algorithm. Then, we introduce a gradient estimator, which allows the confidence bound to be learned via gradient ascent. Theoretically, we show that the proposed algorithm achieves a $\tilde{\mathcal{O}}(\hat{\beta}\sqrt{dT})$ upper bound of $T$-round regret, where $d$ is the dimension of arm features and $\hat{\beta}$ is the learned size of confidence bound. Empirical results show that $\hat{\beta}$ is significantly smaller than its theoretical upper bound and the proposed algorithm outperforms baseline ones on both simulated and real-world datasets. | http://arxiv.org/pdf/2006.03000v1 | [
"Kaige Yang",
"Laura Toni"
] | 2020-06-04T16:43:55Z | 2020-06-04T16:43:55Z |
2006.03001 | A Siamese Neural Network with Modified Distance Loss For Transfer
Learning in Speech Emotion Recognition | Automatic emotion recognition plays a significant role in the process of human computer interaction and the design of Internet of Things (IOT) technologies. Yet, a common problem in emotion recognition systems lies in the scarcity of reliable labels. By modeling pairwise differences between samples of interest, a Siamese network can help to mitigate this challenge since it requires fewer samples than traditional deep learning methods. In this paper, we propose a distance loss, which can be applied to Siamese network fine-tuning, by optimizing the model based on the relevant distance between same- and different-class pairs. Our system uses samples from the source data to pre-train the weights of the proposed Siamese neural network, which are fine-tuned based on the target data. We present an emotion recognition task that uses speech, since it is one of the most ubiquitous and frequently used bio-behavioral signals. Our target data comes from the RAVDESS dataset, while the CREMA-D and eNTERFACE'05 are used as source data, respectively. Our results indicate that the proposed distance loss is able to greatly benefit the fine-tuning process of the Siamese network. Also, the selection of source data has more effect on the Siamese network performance compared to the number of frozen layers. These results suggest the great potential of applying the Siamese network and modelling pairwise differences in the field of transfer learning for automatic emotion recognition. | http://arxiv.org/pdf/2006.03001v1 | [
"Kexin Feng",
"Theodora Chaspari"
] | 2020-06-04T16:44:33Z | 2020-06-04T16:44:33Z |
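The abstract above does not give the modified distance loss in closed form; the sketch below is a contrastive-style stand-in that optimizes distances between same- and different-class embedding pairs, illustrating the pairwise mechanism being fine-tuned (the paper's exact loss may differ).

```python
import torch
import torch.nn.functional as F

def distance_loss(emb_a, emb_b, same_class, margin=1.0):
    """Pull same-class embedding pairs together; push different-class
    pairs apart until they are at least `margin` away."""
    d = F.pairwise_distance(emb_a, emb_b)
    same = same_class.float()
    pos = same * d.pow(2)
    neg = (1.0 - same) * torch.clamp(margin - d, min=0).pow(2)
    return (pos + neg).mean()
```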
2006.03005 | Learning DAGs without imposing acyclicity | We explore if it is possible to learn a directed acyclic graph (DAG) from data without imposing explicitly the acyclicity constraint. In particular, for Gaussian distributions, we frame structural learning as a sparse matrix factorization problem and we empirically show that solving an $\ell_1$-penalized optimization yields good recovery of the true graph and, in general, almost-DAG graphs. Moreover, this approach is computationally efficient and is not affected by the explosion of combinatorial complexity as in classical structural learning algorithms. | http://arxiv.org/pdf/2006.03005v1 | [
"Gherardo Varando"
] | 2020-06-04T16:52:01Z | 2020-06-04T16:52:01Z |
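A small sketch of the acyclicity-free recipe in the entry above (2006.03005): fit a linear structural model X ≈ XB with a zero diagonal under an $\ell_1$ penalty using proximal gradient steps. The optimizer and step sizes are illustrative; the near-acyclicity of the recovered graph is the paper's empirical finding, not something this code enforces.

```python
import numpy as np

def fit_dag_l1(X, lam=0.1, lr=1e-2, n_iter=2000):
    """l1-penalized least squares for X ~ X B with no self-loops,
    solved by proximal gradient descent (soft-thresholding)."""
    n, d = X.shape
    B = np.zeros((d, d))
    for _ in range(n_iter):
        grad = -X.T @ (X - X @ B) / n        # gradient of 0.5*||X - XB||^2 / n
        B -= lr * grad
        B = np.sign(B) * np.maximum(np.abs(B) - lr * lam, 0.0)  # soft-threshold
        np.fill_diagonal(B, 0.0)             # forbid self-loops
    return B
```

Nonzero entries of the returned B are read as candidate directed edges; no combinatorial acyclicity search is involved.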
2006.03015 | Quadruply Stochastic Gaussian Processes | We introduce a stochastic variational inference procedure for training scalable Gaussian process (GP) models whose per-iteration complexity is independent of both the number of training points, $n$, and the number of basis functions used in the kernel approximation, $m$. Our central contributions include an unbiased stochastic estimator of the evidence lower bound (ELBO) for a Gaussian likelihood, as well as a stochastic estimator that lower bounds the ELBO for several other likelihoods such as Laplace and logistic. Independence of the stochastic optimization update complexity from $n$ and $m$ enables inference on huge datasets using large capacity GP models. We demonstrate accurate inference on large classification and regression datasets using GPs and relevance vector machines with up to $m = 10^7$ basis functions. | http://arxiv.org/pdf/2006.03015v1 | [
"Trefor W. Evans",
"Prasanth B. Nair"
] | 2020-06-04T17:06:25Z | 2020-06-04T17:06:25Z |
2004.07584 | Reinforcement Learning for Safety-Critical Control under Model
Uncertainty, using Control Lyapunov Functions and Control Barrier Functions | In this paper, the issue of model uncertainty in safety-critical control is addressed with a data-driven approach. For this purpose, we utilize the structure of an input-output linearization controller based on a nominal model, along with a Control Barrier Function and Control Lyapunov Function based Quadratic Program (CBF-CLF-QP). Specifically, we propose a novel reinforcement learning framework which learns the model uncertainty present in the CBF and CLF constraints, as well as in other control-affine dynamic constraints in the quadratic program. The trained policy is combined with the nominal model-based CBF-CLF-QP, resulting in the Reinforcement Learning-based CBF-CLF-QP (RL-CBF-CLF-QP), which addresses the problem of model uncertainty in the safety constraints. The performance of the proposed method is validated by testing it on an underactuated nonlinear bipedal robot walking on randomly spaced stepping stones with one-step preview, obtaining stable and safe walking under model uncertainty. | http://arxiv.org/pdf/2004.07584v2 | [
"Jason Choi",
"Fernando Castañeda",
"Claire J. Tomlin",
"Koushil Sreenath"
] | 2020-06-04T17:07:52Z | 2020-04-16T10:51:33Z |
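A minimal sketch of the CBF-CLF-QP structure on a toy single-integrator system, using cvxpy; the dynamics, class-K gains, and slack penalty are illustrative placeholders for the nominal-model QP that the paper's learned policy augments.

```python
# CLF-CBF quadratic program for x' = u: stabilize to the origin while
# staying outside a circular obstacle (illustrative toy problem).
import numpy as np
import cvxpy as cp

def clf_cbf_qp(x, x_obs, r_obs, gamma=1.0, alpha=1.0):
    u = cp.Variable(2)
    delta = cp.Variable(nonneg=True)             # CLF relaxation slack
    V = x @ x                                    # CLF: V = ||x||^2, Vdot = 2 x.u
    h = (x - x_obs) @ (x - x_obs) - r_obs**2     # CBF: h >= 0 outside obstacle
    constraints = [
        2 * x @ u <= -gamma * V + delta,         # CLF decrease (soft)
        2 * (x - x_obs) @ u >= -alpha * h,       # CBF forward invariance (hard)
    ]
    prob = cp.Problem(cp.Minimize(cp.sum_squares(u) + 100 * delta), constraints)
    prob.solve()
    return u.value

x = np.array([2.0, 0.5])
print(clf_cbf_qp(x, x_obs=np.array([1.0, 0.0]), r_obs=0.5))
```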
2006.03022 | Response to LiveBot: Generating Live Video Comments Based on Visual and
Textual Contexts | Live video commenting systems are an emerging feature of online video sites. Recently, the Chinese video-sharing platform Bilibili has popularised a novel captioning system where user comments are displayed as streams of moving subtitles overlaid on the video playback screen and broadcast to all viewers in real time. LiveBot was recently introduced as a novel Automatic Live Video Commenting (ALVC) application, enabling the automatic generation of live video comments from both the existing video stream and existing viewers' comments. In seeking to reproduce the baseline results reported in the original LiveBot paper, we found differences between the results reproduced using the project codebase and the numbers reported in the paper. Further examination suggests that this may be caused by a number of small issues in the project code, including a non-obvious overlap between the training and test sets. In this paper, we study these discrepancies in detail and propose an alternative baseline implementation as a reference for other researchers in this field. | http://arxiv.org/pdf/2006.03022v1 | [
"Hao Wu",
"Gareth J. F. Jones",
"Francois Pitie"
] | 2020-06-04T17:16:22Z | 2020-06-04T17:16:22Z |
2006.03039 | The SOFC-Exp Corpus and Neural Approaches to Information Extraction in
the Materials Science Domain | This paper presents a new challenging information extraction task in the domain of materials science. We develop an annotation scheme for marking information on experiments related to solid oxide fuel cells in scientific publications, such as the materials involved and measurement conditions. With this paper, we publish our annotation guidelines, as well as our SOFC-Exp corpus, consisting of 45 open-access scholarly articles annotated by domain experts. Our corpus and an inter-annotator agreement study demonstrate the complexity of the suggested named entity recognition and slot filling tasks, as well as high annotation quality. We also present strong neural-network-based models for a variety of tasks that can be addressed on the basis of our new data set. On all tasks, using BERT embeddings leads to large performance gains, but with increasing task complexity, adding a recurrent neural network on top seems beneficial. Our models will serve as competitive baselines in future work, and analysis of their performance highlights difficult cases when modeling the data and suggests promising research directions. | http://arxiv.org/pdf/2006.03039v1 | [
"Annemarie Friedrich",
"Heike Adel",
"Federico Tomazic",
"Johannes Hingerl",
"Renou Benteau",
"Anika Maruscyk",
"Lukas Lange"
] | 2020-06-04T17:49:34Z | 2020-06-04T17:49:34Z |
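A minimal sketch of a BERT-based token-classification baseline of the kind reported in the paper; the checkpoint and label set below are placeholders, not the SOFC-Exp annotation scheme.

```python
# BERT token-classification baseline for entity mention extraction (illustrative).
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

labels = ["O", "B-MATERIAL", "I-MATERIAL", "B-VALUE", "I-VALUE"]  # placeholder set
tok = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-cased", num_labels=len(labels))

text = "The anode was made of Ni-YSZ and tested at 800 degrees C."
enc = tok(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**enc).logits               # (1, seq_len, num_labels)
pred = logits.argmax(-1)[0]
for t, p in zip(tok.convert_ids_to_tokens(enc["input_ids"][0]), pred):
    print(t, labels[int(p)])                   # untrained head: random tags
```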
2004.09506 | Revisiting Initialization of Neural Networks | The proper initialization of weights is crucial for the effective training and fast convergence of deep neural networks (DNNs). Prior work in this area has mostly focused on balancing the variance among weights per layer to maintain stability of (i) the input data propagated forwards through the network and (ii) the loss gradients propagated backwards, respectively. This prevalent heuristic, however, is agnostic of dependencies among gradients across the various layers and captures only first-order effects. In this paper, we propose and discuss an initialization principle that is based on a rigorous estimation of the global curvature of weights across layers by approximating and controlling the norm of their Hessian matrix. The proposed approach is more systematic and recovers previous results for DNN activations such as smooth functions, dropouts, and ReLU. Our experiments on Word2Vec and the MNIST/CIFAR image classification tasks confirm that tracking the Hessian norm is a useful diagnostic tool which helps to more rigorously initialize weights. | http://arxiv.org/pdf/2004.09506v3 | [
"Maciej Skorski",
"Alessandro Temperoni",
"Martin Theobald"
] | 2020-06-04T17:51:07Z | 2020-04-20T18:12:56Z |
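A minimal sketch of the diagnostic, assuming a Hutchinson-style estimate of the Hessian's Frobenius norm via Hessian-vector products; the network and scale grid are illustrative.

```python
# Estimate ||H||_F at initialization: E_v ||Hv||^2 = tr(H^T H) for v ~ N(0, I).
import torch
import torch.nn as nn

def hessian_norm_estimate(model, loss_fn, x, y, probes=10):
    params = [p for p in model.parameters() if p.requires_grad]
    loss = loss_fn(model(x), y)
    grads = torch.autograd.grad(loss, params, create_graph=True)
    est = 0.0
    for _ in range(probes):
        vs = [torch.randn_like(p) for p in params]
        hv = torch.autograd.grad(grads, params, grad_outputs=vs,
                                 retain_graph=True)       # Hessian-vector product
        est += sum((h * h).sum() for h in hv).item()
    return (est / probes) ** 0.5

x, y = torch.randn(64, 100), torch.randint(0, 10, (64,))
for scale in [0.5, 1.0, 2.0]:                             # rescale default init
    model = nn.Sequential(nn.Linear(100, 100), nn.ReLU(), nn.Linear(100, 10))
    with torch.no_grad():
        for p in model.parameters():
            if p.dim() > 1:
                p.mul_(scale)
    print(scale, hessian_norm_estimate(model, nn.CrossEntropyLoss(), x, y))
```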
2005.10406 | Training Keyword Spotting Models on Non-IID Data with Federated Learning | We demonstrate that a production-quality keyword-spotting model can be trained on-device using federated learning and achieve comparable false accept and false reject rates to a centrally trained model. To overcome the algorithmic constraints associated with fitting on-device data (which are inherently non-independent and identically distributed), we conduct thorough empirical studies of optimization algorithms and hyperparameter configurations using large-scale federated simulations. To overcome resource constraints, we replace memory-intensive MTR data augmentation with SpecAugment, which reduces the false reject rate by 56%. Finally, to label examples (given the zero visibility into on-device data), we explore teacher-student training. | http://arxiv.org/pdf/2005.10406v2 | [
"Andrew Hard",
"Kurt Partridge",
"Cameron Nguyen",
"Niranjan Subrahmanya",
"Aishanee Shah",
"Pai Zhu",
"Ignacio Lopez Moreno",
"Rajiv Mathews"
] | 2020-06-04T17:52:52Z | 2020-05-21T00:53:33Z |
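A minimal sketch of the federated averaging loop underlying such on-device training, with toy clients standing in for non-IID on-device data; the model and hyperparameters are placeholders.

```python
# One FedAvg round: local SGD on each client, then a size-weighted average.
import copy
import torch
import torch.nn as nn

def local_update(model, data, epochs=1, lr=0.1):
    model = copy.deepcopy(model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in data:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model.state_dict(), sum(len(y) for _, y in data)

def fedavg_round(global_model, clients):
    updates = [local_update(global_model, c) for c in clients]
    total = sum(n for _, n in updates)
    avg = {k: sum(sd[k] * (n / total) for sd, n in updates)
           for k in updates[0][0]}
    global_model.load_state_dict(avg)

model = nn.Linear(40, 2)   # toy keyword / non-keyword classifier
clients = [[(torch.randn(8, 40), torch.randint(0, 2, (8,)))] for _ in range(5)]
for _ in range(3):
    fedavg_round(model, clients)
```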
1902.06679 | Link Prediction via Higher-Order Motif Features | Link prediction requires predicting which new links are likely to appear in a graph. Being able to predict unseen links with good accuracy has important applications in several domains such as social media, security, transportation, and recommendation systems. A common approach is to use features based on the common neighbors of an unconnected pair of nodes to predict whether the pair will form a link in the future. In this paper, we present an approach for link prediction that relies on higher-order analysis of the graph topology, well beyond common neighbors. We treat the link prediction problem as a supervised classification problem, and we propose a set of features that depend on the patterns or motifs that a pair of nodes occurs in. By using motifs of sizes 3, 4, and 5, our approach captures a high level of detail about the graph topology within the neighborhood of the pair of nodes, which leads to a higher classification accuracy. In addition to proposing the use of motif-based features, we also propose two optimizations related to constructing the classification dataset from the graph. First, to ensure that positive and negative examples are treated equally when extracting features, we propose adding the negative examples to the graph as an alternative to the common approach of removing the positive ones. Second, we show that it is important to control for the shortest-path distance when sampling pairs of nodes to form negative examples, since the difficulty of prediction varies with the shortest-path distance. We experimentally demonstrate that using off-the-shelf classifiers with a well-constructed classification dataset results in an increase in accuracy of up to 10 percentage points over prior topology-based and feature-learning methods. | http://arxiv.org/pdf/1902.06679v2 | [
"Ghadeer Abuoda",
"Gianmarco De Francisci Morales",
"Ashraf Aboulnaga"
] | 2020-06-04T18:00:42Z | 2019-02-08T10:01:04Z |
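A minimal sketch of the pipeline: motif-participation counts for a node pair fed to an off-the-shelf classifier. Only size-3 patterns (triangles via common neighbors) are counted here; the paper uses motifs up to size 5 and a more careful negative-sampling scheme.

```python
# Pairwise motif features + off-the-shelf classifier for link prediction.
import networkx as nx
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def pair_motif_features(G, u, v):
    nu, nv = set(G[u]), set(G[v])
    common = nu & nv                     # each common neighbor closes a triangle
    return [len(common), len(nu | nv), len(nu) * len(nv)]

G = nx.karate_club_graph()
pos = list(G.edges())[:30]
non_edges = list(nx.non_edges(G))[:30]   # negatives (paper: add them to the graph)
X = np.array([pair_motif_features(G, u, v) for u, v in pos + non_edges])
y = np.array([1] * len(pos) + [0] * len(non_edges))
clf = RandomForestClassifier(n_estimators=100).fit(X, y)
print("train accuracy:", clf.score(X, y))
```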
2006.03089 | Towards Understanding Fast Adversarial Training | Current neural-network-based classifiers are susceptible to adversarial examples. The most empirically successful approach to defending against such adversarial examples is adversarial training, which incorporates a strong self-attack during training to enhance its robustness. This approach, however, is computationally expensive and hence is hard to scale up. A recent work, called fast adversarial training, has shown that it is possible to markedly reduce computation time without sacrificing significant performance. This approach incorporates simple self-attacks, yet it can only run for a limited number of training epochs, resulting in sub-optimal performance. In this paper, we conduct experiments to understand the behavior of fast adversarial training and show the key to its success is the ability to recover from overfitting to weak attacks. We then extend our findings to improve fast adversarial training, demonstrating superior robust accuracy to strong adversarial training, with much-reduced training time. | http://arxiv.org/pdf/2006.03089v1 | [
"Bai Li",
"Shiqi Wang",
"Suman Jana",
"Lawrence Carin"
] | 2020-06-04T18:19:43Z | 2020-06-04T18:19:43Z |
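A minimal sketch of one fast-adversarial-training step, i.e. a single random-start FGSM self-attack followed by a standard update; epsilon, step size, model, and data are illustrative placeholders.

```python
# One "fast" adversarial training step: random-start FGSM, then SGD update.
import torch
import torch.nn as nn

def fgsm_fast_train_step(model, opt, x, y, eps=8/255, alpha=10/255):
    loss_fn = nn.CrossEntropyLoss()
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    loss_fn(model(x + delta), y).backward()          # gradient of the self-attack
    with torch.no_grad():
        delta = (delta + alpha * delta.grad.sign()).clamp(-eps, eps)
    opt.zero_grad()                                  # discard attack gradients
    loss = loss_fn(model((x + delta).clamp(0, 1)), y)
    loss.backward()
    opt.step()
    return loss.item()

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.rand(16, 3, 32, 32), torch.randint(0, 10, (16,))
print(fgsm_fast_train_step(model, opt, x, y))
```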
2002.08973 | Affinity and Diversity: Quantifying Mechanisms of Data Augmentation | Though data augmentation has become a standard component of deep neural network training, the underlying mechanism behind the effectiveness of these techniques remains poorly understood. In practice, augmentation policies are often chosen using heuristics of either distribution shift or augmentation diversity. Inspired by these, we seek to quantify how data augmentation improves model generalization. To this end, we introduce interpretable and easy-to-compute measures: Affinity and Diversity. We find that augmentation performance is predicted not by either of these alone but by jointly optimizing the two. | http://arxiv.org/pdf/2002.08973v2 | [
"Raphael Gontijo-Lopes",
"Sylvia J. Smullin",
"Ekin D. Cubuk",
"Ethan Dyer"
] | 2020-06-04T19:04:48Z | 2020-02-20T19:02:02Z |
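A minimal sketch of computing the two measures, with simplified stand-ins for the paper's definitions: affinity as the accuracy shift of a clean-trained model under augmentation, diversity as that model's loss on augmented data; the augmentation and model are toys.

```python
# Simplified Affinity and Diversity measures for an augmentation policy.
import torch
import torch.nn as nn

def affinity(model, x_val, y_val, augment):
    acc = lambda x: (model(x).argmax(1) == y_val).float().mean().item()
    return acc(augment(x_val)) - acc(x_val)     # shift in clean-model accuracy

def diversity(model, x_train, y_train, augment):
    return nn.CrossEntropyLoss()(model(augment(x_train)), y_train).item()

augment = lambda x: x + 0.3 * torch.randn_like(x)         # toy noise policy
model = nn.Sequential(nn.Flatten(), nn.Linear(784, 10))   # assume clean-trained
x, y = torch.rand(256, 1, 28, 28), torch.randint(0, 10, (256,))
print("affinity:", affinity(model, x, y, augment))
print("diversity:", diversity(model, x, y, augment))
```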
2006.04515 | Deep Learning for Posture Control Nonlinear Model System and Noise
Identification | In this work we present a system identification procedure based on Convolutional Neural Networks (CNNs) for human posture control models. A common approach to the study of human posture control consists of identifying the parameters of a control system. In this context, linear models are particularly popular due to the relative simplicity of identifying the required parameters and analyzing the results. Nonlinear models, conversely, are required to predict the real behavior exhibited by human subjects, and hence it is desirable to use them in posture control analysis. The use of CNNs aims to overcome the heavy computational requirements for identifying nonlinear models, in order to make the analysis of experimental data less time-consuming and, in perspective, to make such analysis feasible in the context of clinical tests. Some potential implications of the method for humanoid robotics are also discussed. | http://arxiv.org/pdf/2006.04515v1 | [
"Vittorio Lippi",
"Thomas Mergner",
"Christoph Maurer"
] | 2020-06-04T19:34:24Z | 2020-06-04T19:34:24Z |
2006.03108 | A Linear Algebraic Approach to Model Parallelism in Deep Learning | Training deep neural networks (DNNs) in large-cluster computing environments is increasingly necessary, as networks grow in size and complexity. Local memory and processing limitations require robust data and model parallelism for crossing compute node boundaries. We propose a linear-algebraic approach to model parallelism in deep learning, which allows parallel distribution of any tensor in the DNN. Rather than rely on automatic differentiation tools, which do not universally support distributed memory parallelism models, we show that parallel data movement operations, e.g., broadcast, sum-reduce, and halo exchange, are linear operators, and by defining the relevant spaces and inner products, we manually develop the adjoint, or backward, operators required for gradient-based training of DNNs. We build distributed DNN layers using these parallel primitives, composed with sequential layer implementations, and demonstrate their application by building and training a distributed DNN using DistDL, a PyTorch and MPI-based distributed deep learning toolkit. | http://arxiv.org/pdf/2006.03108v1 | [
"Russell J. Hewett",
"Thomas J. Grady II"
] | 2020-06-04T19:38:05Z | 2020-06-04T19:38:05Z |
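A minimal sketch of the central algebraic fact: broadcast and sum-reduce form a linear operator / adjoint pair, verified by the inner-product test; the worker count and shapes are illustrative.

```python
# Broadcast and sum-reduce as a linear operator and its adjoint.
import numpy as np

P = 4                                   # number of workers

def broadcast(x):                       # B : R^d -> (R^d)^P
    return np.tile(x, (P, 1))

def sum_reduce(ys):                     # B* : (R^d)^P -> R^d  (the adjoint)
    return ys.sum(axis=0)

rng = np.random.default_rng(0)
x, ys = rng.normal(size=8), rng.normal(size=(P, 8))
lhs = np.vdot(broadcast(x), ys)         # <Bx, y>
rhs = np.vdot(x, sum_reduce(ys))        # <x, B*y>
assert np.isclose(lhs, rhs)             # so the backward of broadcast is sum-reduce
print(lhs, rhs)
```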
2006.03112 | Embedding Directed Graphs in Potential Fields Using FastMap-D | Embedding undirected graphs in a Euclidean space has many computational benefits. FastMap is an efficient embedding algorithm that facilitates a geometric interpretation of problems posed on undirected graphs. However, Euclidean distances are inherently symmetric and, thus, Euclidean embeddings cannot be used for directed graphs. In this paper, we present FastMap-D, an efficient generalization of FastMap to directed graphs. FastMap-D embeds vertices using a potential field to capture the asymmetry between the pairwise distances in directed graphs. FastMap-D learns a potential function to define the potential field using a machine learning module. In experiments on various kinds of directed graphs, we demonstrate the advantage of FastMap-D over other approaches. | http://arxiv.org/pdf/2006.03112v1 | [
"Sriram Gopalakrishnan",
"Liron Cohen",
"Sven Koenig",
"T. K. Satish Kumar"
] | 2020-06-04T19:50:04Z | 2020-06-04T19:50:04Z |
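A minimal sketch of the distance model, assuming asymmetric distances decompose into a symmetric term plus a potential difference; the potential is recovered here in closed form on synthetic data, whereas the paper learns it with a machine learning module.

```python
# Recover a potential phi such that d(u, v) ~ symmetric(u, v) + phi(v) - phi(u).
import numpy as np

rng = np.random.default_rng(0)
n = 6
phi_true = rng.normal(size=n)
S = np.abs(rng.normal(size=(n, n))); S = (S + S.T) / 2; np.fill_diagonal(S, 0)
D = S + phi_true[None, :] - phi_true[:, None]   # asymmetric "distances"
A = (D - D.T) / 2                               # antisymmetric part = phi(v)-phi(u)
phi_hat = A.mean(axis=0)                        # potential, up to a constant
print(np.round(phi_hat - phi_hat.mean(), 3))
print(np.round(phi_true - phi_true.mean(), 3))  # matches up to the shared shift
```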
2002.01587 | Deep Learning Tubes for Tube MPC | Learning-based control aims to construct models of a system to use for planning or trajectory optimization, e.g. in model-based reinforcement learning. In order to obtain guarantees of safety in this context, uncertainty must be accurately quantified. This uncertainty may come from errors in learning (due to a lack of data, for example), or may be inherent to the system. Propagating uncertainty forward in learned dynamics models is a difficult problem. In this work we use deep learning to obtain expressive and flexible models of how distributions of trajectories behave, which we then use for nonlinear Model Predictive Control (MPC). We introduce a deep quantile regression framework for control that enforces probabilistic quantile bounds and quantifies epistemic uncertainty. Using our method we explore three different approaches for learning tubes that contain the possible trajectories of the system, and demonstrate how to use each of them in a Tube MPC scheme. We prove these schemes are recursively feasible and satisfy constraints with a desired margin of probability. We present experiments in simulation on a nonlinear quadrotor system, demonstrating the practical efficacy of these ideas. | http://arxiv.org/pdf/2002.01587v2 | [
"David D. Fan",
"Ali-akbar Agha-mohammadi",
"Evangelos A. Theodorou"
] | 2020-06-04T20:12:35Z | 2020-02-05T00:32:18Z |
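A minimal sketch of learning tube boundaries with quantile (pinball) regression, one of the standard ingredients of the deep quantile framework described above; the network, quantile levels, and data are illustrative.

```python
# Pinball loss for learning lower/upper quantile bounds of a trajectory tube.
import torch
import torch.nn as nn

def pinball_loss(pred, target, tau):
    err = target - pred
    return torch.max(tau * err, (tau - 1) * err).mean()

net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))  # 5% and 95%
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
x = torch.randn(128, 4)                       # state/action features
y = x[:, :1] + 0.5 * torch.randn(128, 1)      # noisy next-state component
for _ in range(200):
    opt.zero_grad()
    q = net(x)
    loss = pinball_loss(q[:, :1], y, 0.05) + pinball_loss(q[:, 1:], y, 0.95)
    loss.backward(); opt.step()
print("avg tube width:", (net(x)[:, 1] - net(x)[:, 0]).mean().item())
```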
2005.05814 | A Report on the 2020 Sarcasm Detection Shared Task | Detecting sarcasm and verbal irony is critical for understanding people's actual sentiments and beliefs. Thus, the field of sarcasm analysis has become a popular research problem in natural language processing. As the community working on computational approaches for sarcasm detection is growing, it is imperative to conduct benchmarking studies to analyze the current state-of-the-art, facilitating progress in this area. We report on the shared task on sarcasm detection we conducted as a part of the 2nd Workshop on Figurative Language Processing (FigLang 2020) at ACL 2020. | http://arxiv.org/pdf/2005.05814v2 | [
"Debanjan Ghosh",
"Avijit Vajpayee",
"Smaranda Muresan"
] | 2020-06-04T20:31:11Z | 2020-05-12T14:27:19Z |
2006.03122 | SIDU: Similarity Difference and Uniqueness Method for Explainable AI | A new branch of technical artificial intelligence (Explainable AI) research has focused on trying to open up the 'black box' and provide some explainability. This paper presents a novel visual explanation method for deep learning networks in the form of a saliency map that can effectively localize entire object regions. In contrast to the current state-of-the-art methods, the proposed method shows quite promising visual explanations that can earn greater trust from human experts. Both quantitative and qualitative evaluations are carried out on both general and clinical data sets to confirm the effectiveness of the proposed method. | http://arxiv.org/pdf/2006.03122v1 | [
"Satya M. Muddamsetty",
"Mohammad N. S. Jahromi",
"Thomas B. Moeslund"
] | 2020-06-04T20:33:40Z | 2020-06-04T20:33:40Z |
1912.03015 | Learning to Correspond Dynamical Systems | Many dynamical systems exhibit similar structure, as often captured by hand-designed simplified models that can be used for analysis and control. We develop a method for learning to correspond pairs of dynamical systems via a learned latent dynamical system. Given trajectory data from two dynamical systems, we learn a shared latent state space and a shared latent dynamics model, along with an encoder-decoder pair for each of the original systems. With the learned correspondences in place, we can use a simulation of one system to produce an imagined motion of its counterpart. We can also simulate in the learned latent dynamics and synthesize the motions of both corresponding systems, as a form of bisimulation. We demonstrate the approach using pairs of controlled bipedal walkers, as well as by pairing a walker with a controlled pendulum. | http://arxiv.org/pdf/1912.03015v3 | [
"Nam Hee Kim",
"Zhaoming Xie",
"Michiel van de Panne"
] | 2020-06-04T20:39:08Z | 2019-12-06T08:21:49Z |
2008.02736 | Recommending Influenceable Targets based on Influence Propagation
through Activity Behaviors in Online Social Media | Online Social Media (OSM) are platforms through which users present themselves to the connected world by messaging, posting, reacting, tagging, and sharing content, along with other social activities. Nowadays, OSM has a vast impact on various aspects of industry, business, and society, as well as on users' lives. On an OSN platform, reaching the target users is one of the primary goals for most businesses and other organizations. Identifying and recommending influenceable targets helps to capture the appropriate audience efficiently and effectively. In this paper, an effective model for egocentric OSNs is presented, incorporating an efficient influence-measuring recommendation system that generates a list of the most influenceable target users among all connected network members for any specific social network user. First, the list of interacted network members is updated based on all activities. From this list, the interacted network members with the most similar activities are recommended, based on the specific influence category and sentiment type. After that, the most influenceable network members, up to the required number, are identified and ranked by analyzing the similarity and frequency of their activity contents with respect to the activity contents of the main user. Through these two stages, an effective list of the top influenceable targets of the main user is distinguished from the egocentric view of any social network. | http://arxiv.org/pdf/2008.02736v1 | [
"Dhrubasish Sarkar"
] | 2020-06-04T20:53:20Z | 2020-06-04T20:53:20Z |
2006.08336 | Affective Conditioning on Hierarchical Networks applied to Depression
Detection from Transcribed Clinical Interviews | In this work we propose a machine learning model for depression detection from transcribed clinical interviews. Depression is a mental disorder that impacts not only the subject's mood but also their use of language. To this end, we use a Hierarchical Attention Network to classify interviews of depressed subjects. We augment the attention layer of our model with a conditioning mechanism on linguistic features extracted from affective lexica. Our analysis shows that individuals diagnosed with depression use affective language to a greater extent than non-depressed individuals. Our experiments show that external affective information improves the performance of the proposed architecture on the General Psychotherapy Corpus and the DAIC-WoZ 2017 depression datasets, achieving state-of-the-art F1 scores of 71.6 and 68.6, respectively. | http://arxiv.org/pdf/2006.08336v1 | [
"D. Xezonaki",
"G. Paraskevopoulos",
"A. Potamianos",
"S. Narayanan"
] | 2020-06-04T20:55:22Z | 2020-06-04T20:55:22Z |