| input (string, 331–3.18k chars) | output (sequence) |
|---|---|
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Due to resource-constrained environments, network compression has become an important part of deep neural network research.', 'In this paper, we propose a new compression method, Inter-Layer Weight Prediction (ILWP) with quantization, which quantizes the predicted residuals between the weights in all convolution layers based on an inter-frame prediction method from conventional video coding schemes.', 'Furthermore, we found a phenomenon, the Smoothly Varying Weight Hypothesis (SVWH): the weights in adjacent convolution layers share strong similarity in shapes and values, i.e., the weights tend to vary smoothly along the layers.', 'Based on SVWH, we propose a second ILWP and quantization method which quantizes the predicted residuals between the weights in adjacent convolution layers.', 'Since the predicted weight residuals tend to follow Laplace distributions with very low variance, the weight quantization can be applied more effectively, thus producing more zero weights and enhancing the weight compression ratio.', 'In addition, we propose a new inter-layer loss for eliminating non-texture bits, which enables us to more effectively store only texture bits.', 'That is, the proposed loss regularizes the weights such that the collocated weights between the adjacent two layers have the same values.', 'Finally, we propose an ILWP with an inter-layer loss and quantization method.', 'Our comprehensive experiments show that the proposed method achieves a much higher weight compression rate at the same accuracy level compared with the previous quantization-based compression methods in deep neural networks.']
### SUMMARY:
| [
"We propose a new compression method, Inter-Layer Weight Prediction (ILWP) and quantization method which quantize the predicted residuals between the weights in convolution layers."
] |
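As a concrete illustration of the inter-layer prediction idea in the abstract above, here is a minimal NumPy sketch that stores the first layer plus uniformly quantized residuals between adjacent, equally shaped convolution layers. The step size, tensor shapes, and function name are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def ilwp_quantize(layer_weights, step=0.02):
    """Quantize inter-layer weight residuals (illustrative sketch).

    Under the Smoothly Varying Weight Hypothesis, residuals between
    collocated weights of adjacent layers concentrate near zero, so
    uniform quantization of the residuals yields many zero entries.
    """
    out = [np.round(layer_weights[0] / step) * step]  # first layer stored directly
    for prev, curr in zip(layer_weights[:-1], layer_weights[1:]):
        residual = curr - prev                        # inter-layer prediction residual
        out.append(np.round(residual / step) * step)  # uniform quantization
    return out  # [W0_q, R1_q, R2_q, ...]; reconstruct W_k by cumulative summation

# Toy example: three equally shaped conv kernels that vary smoothly.
rng = np.random.default_rng(0)
w0 = rng.normal(size=(3, 3, 16, 16))
layers = [w0,
          w0 + 0.005 * rng.normal(size=w0.shape),
          w0 + 0.005 * rng.normal(size=w0.shape)]
q = ilwp_quantize(layers)
print([float((r == 0).mean()) for r in q])  # residual tensors are mostly zeros
```

The printed zero fractions show why the scheme compresses well: the first tensor is dense, while the residual tensors quantize almost entirely to zero.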
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['There exists a plethora of techniques for inducing structured sparsity in parametric models during the optimization process, with the final goal of resource-efficient inference.', 'However, to the best of our knowledge, none target a specific number of floating-point operations (FLOPs) as part of a single end-to-end optimization objective, despite reporting FLOPs as part of the results.', 'Furthermore, a one-size-fits-all approach ignores realistic system constraints, which differ significantly between, say, a GPU and a mobile phone -- FLOPs on the former incur less latency than on the latter; thus, it is important for practitioners to be able to specify a target number of FLOPs during model compression.', 'In this work, we extend a state-of-the-art technique to directly incorporate FLOPs as part of the optimization objective and show that, given a desired FLOPs requirement, different neural networks can be successfully trained for image classification.']
### SUMMARY:
| [
"We extend a state-of-the-art technique to directly incorporate FLOPs as part of the optimization objective, and we show that, given a desired FLOPs requirement, different neural networks are successfully trained."
] |
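The abstract above folds a FLOPs budget directly into the training objective. Below is a minimal sketch of one plausible formulation, assuming stochastic gates with inclusion probabilities and a hinge-squared penalty; the penalty form, its weight, and the per-unit costs are our assumptions, not the paper's exact objective.

```python
import numpy as np

def expected_flops(gate_probs, flops_per_unit):
    # Expected FLOPs under independent stochastic gates: sum_i p_i * cost_i.
    return float(np.dot(gate_probs, flops_per_unit))

def flops_penalty(gate_probs, flops_per_unit, target_flops, weight=1e-12):
    # Hinge-squared penalty that activates only when the budget is exceeded,
    # so training can push expected FLOPs down to a user-specified target.
    excess = expected_flops(gate_probs, flops_per_unit) - target_flops
    return weight * max(excess, 0.0) ** 2

# Toy layer with 4 prunable channels, 1 MFLOP each, and a 2.5 MFLOP budget.
p = np.array([0.9, 0.8, 0.6, 0.3])
cost = np.full(4, 1e6)
print(expected_flops(p, cost))                     # 2.6e6 expected FLOPs
print(flops_penalty(p, cost, target_flops=2.5e6))  # small positive penalty
```

In a full pipeline this penalty would be added to the task loss, letting practitioners dial the target to match their hardware, a GPU budget versus a mobile one.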
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Unpaired image-to-image translation among category domains has achieved remarkable success in past decades.', 'Recent studies mainly focus on two challenges.', 'For one thing, such translation is inherently multimodal due to variations of domain-specific information (e.g., the domain of house cat has multiple fine-grained subcategories).', 'For another, existing multimodal approaches have limitations in handling more than two domains, i.e. they have to independently build one model for every pair of domains.', 'To address these problems, we propose the Hierarchical Image-to-image Translation (HIT) method which jointly formulates the multimodal and multi-domain problem in a semantic hierarchy structure, and can further control the uncertainty of multimodal.', 'Specifically, we regard the domain-specific variations as the result of the multi-granularity property of domains, and one can control the granularity of the multimodal translation by dividing a domain with large variations into multiple subdomains which capture local and fine-grained variations.', 'With the assumption of Gaussian prior, variations of domains are modeled in a common space such that translations can further be done among multiple domains within one model.', 'To learn such complicated space, we propose to leverage the inclusion relation among domains to constrain distributions of parent and children to be nested.', 'Experiments on several datasets validate the promising results and competitive performance against state-of-the-arts.']
### SUMMARY:
| [
"Granularity controled multi-domain and multimodal image to image translation method"
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Recurrent Neural Networks (RNNs) are very successful at solving challenging problems with sequential data.', 'However, this observed efficiency is not yet entirely explained by theory.', 'It is known that a certain class of multiplicative RNNs enjoys the property of depth efficiency --- a shallow network of exponentially large width is necessary to realize the same score function as computed by such an RNN.', 'Such networks, however, are not very often applied to real life tasks.', 'In this work, we attempt to reduce the gap between theory and practice by extending the theoretical analysis to RNNs which employ various nonlinearities, such as Rectified Linear Unit (ReLU), and show that they also benefit from properties of universality and depth efficiency.', 'Our theoretical results are verified by a series of extensive computational experiments.']
### SUMMARY:
| [
"Analysis of expressivity and generality of recurrent neural networks with ReLu nonlinearities using Tensor-Train decomposition."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['While deep neural networks have proven to be a powerful tool for many recognition and classification tasks, their stability properties are still not well understood.', 'In the past, image classifiers have been shown to be vulnerable to so-called adversarial attacks, which are created by additively perturbing the correctly classified image.', 'In this paper, we propose the ADef algorithm to construct a different kind of adversarial attack created by iteratively applying small deformations to the image, found through a gradient descent step.', 'We demonstrate our results on MNIST with convolutional neural networks and on ImageNet with Inception-v3 and ResNet-101.']
### SUMMARY:
| [
"We propose a new, efficient algorithm to construct adversarial examples by means of deformations, rather than additive perturbations."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Adversarial learning methods have been proposed for a wide range of applications, but the training of adversarial models can be notoriously unstable.', 'Effectively balancing the performance of the generator and discriminator is critical, since a discriminator that achieves very high accuracy will produce relatively uninformative gradients.', 'In this work, we propose a simple and general technique to constrain information flow in the discriminator by means of an information bottleneck.', "By enforcing a constraint on the mutual information between the observations and the discriminator's internal representation, we can effectively modulate the discriminator's accuracy and maintain useful and informative gradients.", 'We demonstrate that our proposed variational discriminator bottleneck (VDB) leads to significant improvements across three distinct application areas for adversarial learning algorithms.', 'Our primary evaluation studies the applicability of the VDB to imitation learning of dynamic continuous control skills, such as running.', 'We show that our method can learn such skills directly from raw video demonstrations, substantially outperforming prior adversarial imitation learning methods.', 'The VDB can also be combined with adversarial inverse reinforcement learning to learn parsimonious reward functions that can be transferred and re-optimized in new settings.', 'Finally, we demonstrate that VDB can train GANs more effectively for image generation, improving upon a number of prior stabilization methods.']
### SUMMARY:
| [
"Regularizing adversarial learning with an information bottleneck, applied to imitation learning, inverse reinforcement learning, and generative adversarial networks."
] |
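Schematically, the variational discriminator bottleneck described above constrains the mutual information between inputs and the discriminator's internal code. A paraphrased form of the constrained objective, in our notation (the exact loss terms are in the paper):

$$\min_{D,E}\;\max_{\beta \ge 0}\;\mathbb{E}_{x}\!\left[\,\mathbb{E}_{z \sim E(z \mid x)}\big[-\log D(z)\big]\right] \;+\; \beta \left( \mathbb{E}_{x}\!\left[\mathrm{KL}\big(E(z\mid x)\,\big\|\,r(z)\big)\right] - I_c \right)$$

Here $E$ is a stochastic encoder, $r(z)$ a fixed prior, $I_c$ the information budget, and $\beta$ a Lagrange multiplier updated by dual gradient ascent; keeping the KL term below $I_c$ limits the discriminator's accuracy and keeps its gradients informative.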
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Deploying machine learning systems in the real world requires both high accuracy on clean data and robustness to naturally occurring corruptions.', 'While architectural advances have led to improved accuracy, building robust models remains challenging, involving major changes in training procedure and datasets. ', 'Prior work has argued that there is an inherent trade-off between robustness and accuracy, as exemplified by standard data augmentation techniques such as Cutout, which improves clean accuracy but not robustness, and additive Gaussian noise, which improves robustness but hurts accuracy.', 'We introduce Patch Gaussian, a simple augmentation scheme that adds noise to randomly selected patches in an input image. ', 'Models trained with Patch Gaussian achieve state of the art on the CIFAR-10 and ImageNet Common Corruptions benchmarks while also maintaining accuracy on clean data.', 'We find that this augmentation leads to reduced sensitivity to high frequency noise (similar to Gaussian) while retaining the ability to take advantage of relevant high frequency information in the image (similar to Cutout).', 'We show it can be used in conjunction with other regularization methods and data augmentation policies such as AutoAugment. ', 'Finally, we find that the idea of restricting perturbations to patches can also be useful in the context of adversarial learning, yielding models without the loss in accuracy that is found with unconstrained adversarial training.']
### SUMMARY:
| [
"Simple augmentation method overcomes robustness/accuracy trade-off observed in literature and opens questions about the effect of training distribution on out-of-distribution generalization."
] |
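The augmentation in the abstract above is simple enough to state in a few lines. A minimal NumPy sketch, assuming float images in [0, 1]; the patch size, noise scale, and clipping range are illustrative hyperparameters:

```python
import numpy as np

def patch_gaussian(image, patch_size=16, sigma=0.3, rng=None):
    """Add Gaussian noise to one randomly located square patch (sketch).

    Interpolates between Cutout (patch-local corruption) and full-image
    additive Gaussian noise, which is the trade-off the paper targets.
    """
    rng = rng or np.random.default_rng()
    h, w = image.shape[:2]
    cy, cx = rng.integers(h), rng.integers(w)          # random patch center
    y0, y1 = max(cy - patch_size // 2, 0), min(cy + patch_size // 2, h)
    x0, x1 = max(cx - patch_size // 2, 0), min(cx + patch_size // 2, w)
    out = image.astype(np.float32).copy()
    out[y0:y1, x0:x1] += rng.normal(0.0, sigma, out[y0:y1, x0:x1].shape)
    return np.clip(out, 0.0, 1.0)

img = np.random.default_rng(0).random((32, 32, 3))
aug = patch_gaussian(img)  # apply per-sample during training
```

Because only a patch is corrupted, the model still sees clean high-frequency content elsewhere in the image, matching the abstract's explanation of why clean accuracy is preserved.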
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Offset regression is a standard method for spatial localization in many vision tasks, including human pose estimation, object detection, and instance segmentation.', 'However, if high localization accuracy is crucial for a task, convolutional neural networks with offset regression usually struggle to deliver.', 'This can be attributed to the locality of the convolution operation, exacerbated by variance in scale, clutter, and viewpoint.', 'An even more fundamental issue is the multi-modality of real-world images.', 'As a consequence, they cannot be approximated adequately using a single-mode model.', 'Instead, we propose to use mixture density networks (MDN) for offset regression, allowing the model to manage various modes efficiently and learn to predict the full conditional density of the outputs given the input.', 'On 2D human pose estimation in the wild, which requires accurate localisation of body keypoints, we show that this yields significant improvement in localization accuracy.', 'In particular, our experiments reveal viewpoint variation as the dominant multi-modal factor.', 'Further, by carefully initializing MDN parameters, we do not face any instabilities in training, which is known to be a big obstacle for widespread deployment of MDN.', 'The method can be readily applied to any task with a spatial regression component.', 'Our findings highlight the multi-modal nature of real-world vision, and the significance of explicitly accounting for viewpoint variation, at least when spatial localization is concerned.']
### SUMMARY:
| [
"We use mixture density networks to do full conditional density estimation for spatial offset regression and apply it to the human pose estimation task."
] |
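The core of the MDN approach above is replacing a single regressed offset with a full conditional density. A minimal sketch of the mixture negative log-likelihood for a 2D offset, assuming isotropic Gaussian components; the network code that produces `pi`, `mu`, and `sigma` is omitted:

```python
import numpy as np

def mdn_nll(pi, mu, sigma, target):
    """Negative log-likelihood of a 2D offset under a Gaussian mixture (sketch).

    pi: (K,) mixture weights summing to 1; mu: (K, 2) component means;
    sigma: (K,) isotropic std devs; target: (2,) ground-truth offset.
    """
    d2 = np.sum((mu - target) ** 2, axis=1)                 # squared distances to means
    log_comp = np.log(pi) - d2 / (2 * sigma**2) - np.log(2 * np.pi * sigma**2)
    return -np.logaddexp.reduce(log_comp)                   # -log sum_k exp(log_comp_k)

# Toy bimodal density, e.g. two plausible keypoint offsets under viewpoint ambiguity.
pi = np.array([0.5, 0.5])
mu = np.array([[1.0, 0.0], [-1.0, 0.0]])
sigma = np.array([0.3, 0.3])
print(mdn_nll(pi, mu, sigma, target=np.array([0.9, 0.1])))
```

Training minimizes this NLL over examples, so the network can place separate modes on, say, front-facing and back-facing interpretations rather than averaging them into a poor single estimate.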
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Like language, music can be represented as a sequence of discrete symbols that form a hierarchical syntax, with notes being roughly like characters and motifs of notes like words. ', 'Unlike text however, music relies heavily on repetition on multiple timescales to build structure and meaning.', 'The Music Transformer has shown compelling results in generating music with structure (Huang et al., 2018). ', 'In this paper, we introduce a tool for visualizing self-attention on polyphonic music with an interactive pianoroll. ', 'We use music transformer as both a descriptive tool and a generative model. ', 'For the former, we use it to analyze existing music to see if the resulting self-attention structure corroborates with the musical structure known from music theory. ', "For the latter, we inspect the model's self-attention during generation, in order to understand how past notes affect future ones.", 'We also compare and contrast the attention structure of regular attention to that of relative attention (Shaw et al., 2018, Huang et al., 2018), and examine its impact on the resulting generated music. ', 'For example, for the JSB Chorales dataset, a model trained with relative attention is more consistent in attending to all the voices in the preceding timestep and the chords before, and at cadences to the beginning of a phrase, allowing it to create an arc. ', 'We hope that our analyses will offer more evidence for relative self-attention as a powerful inductive bias for modeling music. ', 'We invite the reader to explore our video animations of music attention and to interact with the visualizations at https://storage.googleapis.com/nips-workshop-visualization/index.html.']
### SUMMARY:
| [
"Visualizing the differences between regular and relative attention for Music Transformer."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['We study the statistical properties of the endpoint of stochastic gradient descent (SGD).', 'We approximate SGD as a stochastic differential equation (SDE) and consider its Boltzmann-Gibbs equilibrium distribution under the assumption of isotropic variance in loss gradients.', 'Through this analysis, we find that three factors – learning rate, batch size and the variance of the loss gradients – control the trade-off between the depth and width of the minima found by SGD, with wider minima favoured by a higher ratio of learning rate to batch size. In the equilibrium distribution only the ratio of learning rate to batch size appears, implying that it is invariant under a simultaneous rescaling of each by the same amount. We experimentally show how learning rate and batch size affect SGD from two perspectives: the endpoint of SGD and the dynamics that lead up to it. For the endpoint, the experiments suggest the endpoint of SGD is similar under simultaneous rescaling of batch size and learning rate, and also that a higher ratio leads to flatter minima; both findings are consistent with our theoretical analysis. We note experimentally that the dynamics also seem to be similar under the same rescaling of learning rate and batch size, which we explore by showing that one can exchange batch size and learning rate in a cyclical learning rate schedule. Next, we illustrate how noise affects memorization, showing that high noise levels lead to better generalization. Finally, we find experimentally that the similarity under simultaneous rescaling of learning rate and batch size breaks down if the learning rate gets too large or the batch size gets too small.']
### SUMMARY:
| [
"Three factors (batch size, learning rate, gradient noise) change in predictable way the properties (e.g. sharpness) of minima found by SGD."
] |
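The learning-rate-to-batch-size ratio in the abstract above can be made explicit in the SDE view. Schematically, under the paper's isotropic gradient-noise assumption (variance $\sigma^2$), and up to constants:

$$d\theta = -\nabla L(\theta)\,dt + \sqrt{\tfrac{\eta}{B}}\,\sigma\, dW_t \qquad\Longrightarrow\qquad P_{\mathrm{eq}}(\theta) \propto \exp\!\left(-\frac{2B}{\eta\,\sigma^{2}}\, L(\theta)\right)$$

Only the ratio $\eta/B$ (learning rate to batch size) enters the equilibrium, which is why simultaneous rescaling leaves the endpoint statistics unchanged; a larger ratio raises the effective temperature $\eta\sigma^2/(2B)$, favouring wider minima.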
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Although word analogy problems have become a standard tool for evaluating word vectors, little is known about why word vectors are so good at solving these problems.', 'In this paper, I attempt to further our understanding of the subject, by developing a simple, but highly accurate generative approach to solve the word analogy problem for the case when all terms involved in the problem are nouns.', 'My results demonstrate the ambiguities associated with learning the relationship between a word pair, and the role of the training dataset in determining the relationship which gets most highlighted.', 'Furthermore, my results show that the ability of a model to accurately solve the word analogy problem may not be indicative of a model’s ability to learn the relationship between a word pair the way a human does.']
### SUMMARY:
| [
"Simple generative approach to solve the word analogy problem which yields insights into word relationships, and the problems with estimating them"
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Fine-tuning from pre-trained ImageNet models has become the de-facto standard for various computer vision tasks.', 'Current practices for fine-tuning typically involve selecting an ad-hoc choice of hyper-parameters and keeping them fixed to values normally used for training from scratch.', 'This paper re-examines several common practices of setting hyper-parameters for fine-tuning.', 'Our findings are based on extensive empirical evaluation for fine-tuning on various transfer learning benchmarks.', '(1) While prior works have thoroughly investigated learning rate and batch size, momentum for fine-tuning is a relatively unexplored parameter.', 'We find that picking the right value for momentum is critical for fine-tuning performance and connect it with previous theoretical findings.', '(2) Optimal hyper-parameters for fine-tuning in particular the effective learning rate are not only dataset dependent but also sensitive to the similarity between the source domain and target domain.', 'This is in contrast to hyper-parameters for training from scratch.', '(3) Reference-based regularization that keeps models close to the initial model does not necessarily apply for "dissimilar" datasets.', 'Our findings challenge common practices of fine- tuning and encourages deep learning practitioners to rethink the hyper-parameters for fine-tuning.']
### SUMMARY:
| [
"This paper re-examines several common practices of setting hyper-parameters for fine-tuning."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Transfer learning through fine-tuning a pre-trained neural network with an extremely large dataset, such as ImageNet, can significantly accelerate training while the accuracy is frequently bottlenecked by the limited dataset size of the new target task.', 'To solve the problem, some regularization methods, constraining the outer layer weights of the target network using the starting point as references (SPAR), have been studied.', 'In this paper, we propose a novel regularized transfer learning framework DELTA, namely DEep Learning Transfer using Feature Map with Attention.', 'Instead of constraining the weights of the neural network, DELTA aims to preserve the outer layer outputs of the target network.', 'Specifically, in addition to minimizing the empirical loss, DELTA intends to align the outer layer outputs of two networks, through constraining a subset of feature maps that are precisely selected by attention that has been learned in a supervised learning manner.', 'We evaluate DELTA with the state-of-the-art algorithms, including L2 and L2-SP.', 'The experiment results show that our proposed method outperforms these baselines with higher accuracy for new tasks.']
### SUMMARY:
| [
"improving deep transfer learning with regularization using attention based feature maps"
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Neural models achieved considerable improvement for many natural language processing tasks, but they offer little transparency, and interpretability comes at a cost.', 'In some domains, automated predictions without justifications have limited applicability.', 'Recently, progress has been made regarding single-aspect sentiment analysis for reviews, where the ambiguity of a justification is minimal.', 'In this context, a justification, or mask, consists of (long) word sequences from the input text, which suffice to make the prediction.', 'Existing models cannot handle more than one aspect in one training and induce binary masks that might be ambiguous.', 'In our work, we propose a neural model that predicts multi-aspect sentiments for reviews and generates a probabilistic multi-dimensional mask (one per aspect) simultaneously, in an unsupervised and multi-task learning manner.', 'Our evaluation shows that on three datasets, in the beer and hotel domain, our model outperforms strong baselines and generates masks that are: strong feature predictors, meaningful, and interpretable.']
### SUMMARY:
| [
"Neural model predicting multi-aspect sentiments and generating a probabilistic multi-dimensional mask simultaneously. Model outperforms strong baselines and generates masks that are: strong feature predictors, meaningful, and interpretable."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Neural sequence generation is commonly approached by using maximum- likelihood (ML) estimation or reinforcement learning (RL).', 'However, it is known that they have their own shortcomings; ML presents training/testing discrepancy, whereas RL suffers from sample inefficiency.', 'We point out that it is difficult to resolve all of the shortcomings simultaneously because of a tradeoff between ML and RL.', 'In order to counteract these problems, we propose an objective function for sequence generation using α-divergence, which leads to an ML-RL integrated method that exploits better parts of ML and RL.', 'We demonstrate that the proposed objective function generalizes ML and RL objective functions because it includes both as its special cases (ML corresponds to α → 0 and RL to α → 1).', 'We provide a proposition stating that the difference between the RL objective function and the proposed one monotonically decreases with increasing α.', 'Experimental results on machine translation tasks show that minimizing the proposed objective function achieves better sequence generation performance than ML-based methods.']
### SUMMARY:
| [
"Propose new objective function for neural sequence generation which integrates ML-based and RL-based objective functions."
] |
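For reference, the α-divergence family underlying the objective described above can be written as follows. Conventions for α differ across the literature; we state the generic form, with ML and RL recovered in the paper's α → 0 and α → 1 limits:

$$D_{\alpha}\!\left(p \,\|\, q_{\theta}\right) = \frac{1}{\alpha(1-\alpha)} \left( 1 - \sum_{y} p(y)^{\alpha}\, q_{\theta}(y)^{1-\alpha} \right)$$

As α traverses (0, 1) the objective interpolates between the two KL directions, which is the sense in which the proposed loss exploits the better parts of ML and RL.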
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Capsule Networks have shown encouraging results on \\textit{de facto} benchmark computer vision datasets such as MNIST, CIFAR and smallNORB.', 'However, they are yet to be tested on tasks where (1) the entities detected inherently have more complex internal representations, (2) there are very few instances per class to learn from, and (3) point-wise classification is not suitable.', 'Hence, this paper carries out experiments on face verification in both controlled and uncontrolled settings that together address these points.', 'In doing so we introduce \\textit{Siamese Capsule Networks}, a new variant that can be used for pairwise learning tasks.', 'We find that the model improves over baselines in the few-shot learning setting, suggesting that capsule networks are efficient at learning discriminative representations when given few samples.', 'We find that \\textit{Siamese Capsule Networks} perform well against strong baselines on both pairwise learning datasets when trained using a contrastive loss with $\\ell_2$-normalized capsule encoded pose features, yielding best results in the few-shot learning setting where image pairs in the test set contain unseen subjects.']
### SUMMARY:
| [
"A pairwise learned capsule network that performs well on face verification tasks given limited labeled data "
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['In this work we present a new agent architecture, called Reactor, which combines multiple algorithmic and architectural contributions to produce an agent with higher sample-efficiency than Prioritized Dueling DQN (Wang et al., 2016) and Categorical DQN (Bellemare et al., 2017), while giving better run-time performance than A3C (Mnih et al., 2016).', 'Our first contribution is a new policy evaluation algorithm called Distributional Retrace, which brings multi-step off-policy updates to the distributional reinforcement learning setting.', 'The same approach can be used to convert several classes of multi-step policy evaluation algorithms designed for expected value evaluation into distributional ones.', 'Next, we introduce the β-leaveone-out policy gradient algorithm which improves the trade-off between variance and bias by using action values as a baseline.', 'Our final algorithmic contribution is a new prioritized replay algorithm for sequences, which exploits the temporal locality of neighboring observations for more efficient replay prioritization.', 'Using the Atari 2600 benchmarks, we show that each of these innovations contribute to both the sample efficiency and final agent performance.', 'Finally, we demonstrate that Reactor reaches state-of-the-art performance after 200 million frames and less than a day of training.']
### SUMMARY:
| [
"Reactor combines multiple algorithmic and architectural contributions to produce an agent with higher sample-efficiency than Prioritized Dueling DQN while giving better run-time performance than A3C."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Hierarchical planning, in particular, Hierarchical Task Networks, was proposed as a method to describe plans by decomposition of tasks to sub-tasks until primitive tasks, actions, are obtained.', 'Plan verification assumes a complete plan as input, and the objective is finding a task that decomposes to this plan.', 'In plan recognition, a prefix of the plan is given and the objective is finding a task that decomposes to the (shortest) plan with the given prefix.', 'This paper describes how to verify and recognize plans using a common method known from formal grammars, by parsing.']
### SUMMARY:
| [
"The paper describes methods to verify and recognize HTN plans by parsing of attribute grammars."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Neural architecture search (NAS), the task of finding neural architectures automatically, has recently emerged as a promising approach for unveiling better models over human-designed ones.', 'However, most success stories are for vision tasks and have been quite limited for text, except for a small language modeling setup.', 'In this paper, we explore NAS for text sequences at scale, by first focusing on the task of language translation and later extending to reading comprehension.', 'From a standard sequence-to-sequence models for translation, we conduct extensive searches over the recurrent cells and attention similarity functions across two translation tasks, IWSLT English-Vietnamese and WMT German-English.', 'We report challenges in performing cell searches as well as demonstrate initial success on attention searches with translation improvements over strong baselines.', 'In addition, we show that results on attention searches are transferable to reading comprehension on the SQuAD dataset.']
### SUMMARY:
| [
"We explore neural architecture search for language tasks. Recurrent cell search is challenging for NMT, but attention mechanism search works. The result of attention search on translation is transferable to reading comprehension."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Autoencoders provide a powerful framework for learning compressed representations by encoding all of the information needed to reconstruct a data point in a latent code.', 'In some cases, autoencoders can "interpolate": By decoding the convex combination of the latent codes for two datapoints, the autoencoder can produce an output which semantically mixes characteristics from the datapoints.', 'In this paper, we propose a regularization procedure which encourages interpolated outputs to appear more realistic by fooling a critic network which has been trained to recover the mixing coefficient from interpolated data.', 'We then develop a simple benchmark task where we can quantitatively measure the extent to which various autoencoders can interpolate and show that our regularizer dramatically improves interpolation in this setting.', 'We also demonstrate empirically that our regularizer produces latent codes which are more effective on downstream tasks, suggesting a possible link between interpolation abilities and learning useful representations.']
### SUMMARY:
| [
"We propose a regularizer that improves interpolation and autoencoders and show that it also improves the learned representation for downstream tasks."
] |
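Schematically, the regularizer described above trains a critic $d_\omega$ to recover the mixing coefficient from decoded interpolants, while the autoencoder is trained to fool it. Our paraphrase of the two losses, where $g$ is the decoder, $z_1, z_2$ are latent codes, and $\alpha \in [0, \tfrac{1}{2}]$:

$$\hat{x}_{\alpha} = g\big(\alpha z_1 + (1-\alpha) z_2\big),\qquad \mathcal{L}_{\mathrm{critic}} = \big\| d_{\omega}(\hat{x}_{\alpha}) - \alpha \big\|^{2},\qquad \mathcal{L}_{\mathrm{AE}} \mathrel{+}= \lambda\, \big\| d_{\omega}(\hat{x}_{\alpha}) \big\|^{2}$$

Driving the critic's output toward 0, the value it assigns to non-interpolated data, makes interpolated decodings indistinguishable from real reconstructions, which is what "fooling the critic" means here.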
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['We consider the problem of generating plausible and diverse video sequences, when we are only given a start and an end frame.', 'This task is also known as inbetweening, and it belongs to the broader area of stochastic video generation, which is generally approached by means of recurrent neural networks (RNN).', 'In this paper, we propose instead a fully convolutional model to generate video sequences directly in the pixel domain.', 'We first obtain a latent video representation using a stochastic fusion mechanism that learns how to incorporate information from the start and end frames.', 'Our model learns to produce such latent representation by progressively increasing the temporal resolution, and then decode in the spatiotemporal domain using 3D convolutions.', 'The model is trained end-to-end by minimizing an adversarial loss.', 'Experiments on several widely-used benchmark datasets show that it is able to generate meaningful and diverse in-between video sequences, according to both quantitative and qualitative evaluations.']
### SUMMARY:
| [
"This paper presents method for stochastically generating in-between video frames from given key frames, using direct 3D convolutions."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Aligning knowledge graphs from different sources or languages, which aims to align both the entity and relation, is critical to a variety of applications such as knowledge graph construction and question answering.', 'Existing methods of knowledge graph alignment usually rely on a large number of aligned knowledge triplets to train effective models.', 'However, these aligned triplets may not be available or are expensive to obtain for many domains.', 'Therefore, in this paper we study how to design fully-unsupervised methods or weakly-supervised methods, i.e., to align knowledge graphs without or with only a few aligned triplets.', 'We propose an unsupervised framework based on adversarial training, which is able to map the entities and relations in a source knowledge graph to those in a target knowledge graph.', 'This framework can be further seamlessly integrated with existing supervised methods, where only a limited number of aligned triplets are utilized as guidance.', 'Experiments on real-world datasets prove the effectiveness of our proposed approach in both the weakly-supervised and unsupervised settings.']
### SUMMARY:
| [
"This paper studies weakly-supervised knowledge graph alignment with adversarial training frameworks."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Multi-view learning can provide self-supervision when different views are available of the same data.', 'Distributional hypothesis provides another form of useful self-supervision from adjacent sentences which are plentiful in large unlabelled corpora.', 'Motivated by the asymmetry in the two hemispheres of the human brain as well as the observation that different learning architectures tend to emphasise different aspects of sentence meaning, we present two multi-view frameworks for learning sentence representations in an unsupervised fashion.', 'One framework uses a generative objective and the other a discriminative one.', 'In both frameworks, the final representation is an ensemble of two views, in which, one view encodes the input sentence with a Recurrent Neural Network (RNN), and the other view encodes it with a simple linear model.', 'We show that, after learning, the vectors produced by our multi-view frameworks provide improved representations over their single-view learnt counterparts, and the combination of different views gives representational improvement over each view and demonstrates solid transferability on standard downstream tasks.']
### SUMMARY:
| [
"Multi-view learning improves unsupervised sentence representation learning"
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['There are myriad kinds of segmentation, and ultimately the "right" segmentation of a given scene is in the eye of the annotator.', 'Standard approaches require large amounts of labeled data to learn just one particular kind of segmentation.', 'As a first step towards relieving this annotation burden, we propose the problem of guided segmentation: given varying amounts of pixel-wise labels, segment unannotated pixels by propagating supervision locally (within an image) and non-locally (across images).', 'We propose guided networks, which extract a latent task representation---guidance---from variable amounts and classes (categories, instances, etc.) of pixel supervision and optimize our architecture end-to-end for fast, accurate, and data-efficient segmentation by meta-learning.', 'To span the few-shot and many-shot learning regimes, we examine guidance from as little as one pixel per concept to as much as 1000+ images, and compare to full gradient optimization at both extremes.', 'To explore generalization, we analyze guidance as a bridge between different levels of supervision to segment classes as the union of instances.', 'Our segmentor concentrates different amounts of supervision of different types of classes into an efficient latent representation, non-locally propagates this supervision across images, and can be updated quickly and cumulatively when given more supervision.']
### SUMMARY:
| [
"We propose a meta-learning approach for guiding visual segmentation tasks from varying amounts of supervision."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Training generative adversarial networks requires balancing of delicate adversarial dynamics.', 'Even with careful tuning, training may diverge or end up in a bad equilibrium with dropped modes.', 'In this work, we introduce a new form of latent optimisation inspired by the CS-GAN and show that it improves adversarial dynamics by enhancing interactions between the discriminator and the generator.', 'We develop supporting theoretical analysis from the perspectives of differentiable games and stochastic approximation.', 'Our experiments demonstrate that latent optimisation can significantly improve GAN training, obtaining state-of-the-art performance for the ImageNet (128 x 128) dataset.', 'Our model achieves an Inception Score (IS) of 148 and a Fréchet Inception Distance (FID) of 3.4, an improvement of 17% and 32% in IS and FID respectively, compared with the baseline BigGAN-deep model with the same architecture and number of parameters.']
### SUMMARY:
| [
"Latent optimisation improves adversarial training dynamics. We present both theoretical analysis and state-of-the-art image generation with ImageNet 128x128."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['In this paper, we study the problem of optimizing a two-layer artificial neural network that best fits a training dataset.', 'We look at this problem in the setting where the number of parameters is greater than the number of sampled points.', 'We show that for a wide class of differentiable activation functions (this class involves most nonlinear functions and excludes piecewise linear functions), arbitrary first-order optimal solutions satisfy global optimality provided the hidden layer is non-singular.', 'We essentially show that these non-singular hidden layer matrices satisfy a "good" property for this big class of activation functions.', 'Techniques involved in proving this result inspire us to look at a new algorithmic framework, where in between two gradient steps of the hidden layer, we add a stochastic gradient descent (SGD) step of the output layer.', 'In this new algorithmic framework, we extend our earlier result and show that for all finite iterations the hidden layer satisfies the "good" property mentioned earlier, therefore partially explaining the success of noisy gradient methods and addressing the issue of data independency of our earlier result.', 'Both of these results are easily extended to hidden layers given by a flat matrix rather than a square matrix.', 'Results are applicable even if the network has more than one hidden layer, provided all inner hidden layers are arbitrary, satisfy non-singularity, all activations are from the given class of differentiable functions, and optimization is only with respect to the outermost hidden layer.', 'Separately, we also study the smoothness properties of the objective function and show that it is actually Lipschitz smooth, i.e., its gradients do not change sharply.', 'We use smoothness properties to guarantee asymptotic convergence of $O(1/\\text{number of iterations})$ to a first-order optimal solution.']
### SUMMARY:
| [
"This paper talks about theoretical properties of first-order optimal point of two layer neural network in over-parametrized case"
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['We introduce the concept of channel aggregation in ConvNet architecture, a novel compact representation of CNN features useful for explicitly modeling the nonlinear channel encoding, especially when the new unit is embedded inside deep architectures for action recognition.', 'The channel aggregation is based on multiple-channel features of ConvNet and aims at finding the optimal convergence path at fast speed.', 'We name our proposed convolutional architecture “nonlinear channels aggregation networks (NCAN)” and its new layer “nonlinear channels aggregation layer (NCAL)”.', 'We theoretically motivate channel aggregation functions and empirically study their effect on convergence speed and classification accuracy.', 'Another contribution in this work is an efficient and effective implementation of the NCAL, speeding it up by orders of magnitude.', 'We evaluate its performance on the standard benchmarks UCF101 and HMDB51, and experimental results demonstrate that this formulation not only obtains fast convergence but also stronger generalization capability without sacrificing performance.']
### SUMMARY:
| [
"An architecture enables CNN trained on the video sequences converging rapidly "
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['We present a new method for black-box adversarial attack.', 'Unlike previous methods that combined transfer-based and score-based methods by using the gradient or initialization of a surrogate white-box model, this new method tries to learn a low-dimensional embedding using a pretrained model, and then performs efficient search within the embedding space to attack an unknown target network.', 'The method produces adversarial perturbations with high-level semantic patterns that are easily transferable.', 'We show that this approach can greatly improve the query efficiency of black-box adversarial attack across different target network architectures.', 'We evaluate our approach on MNIST, ImageNet and Google Cloud Vision API, resulting in a significant reduction in the number of queries.', 'We also attack adversarially defended networks on CIFAR10 and ImageNet, where our method not only reduces the number of queries, but also improves the attack success rate.']
### SUMMARY:
| [
"We present a new method that combines transfer-based and scored black-box adversarial attack, improving the success rate and query efficiency of black-box adversarial attack across different network architectures."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Deep neural networks (DNNs) are inspired by the human brain, and the interconnection between the two has been widely studied in the literature.', 'However, it is still an open question whether DNNs are able to make decisions like the brain.', "Previous work has demonstrated that DNNs, trained by matching the neural responses from inferior temporal (IT) cortex in the monkey's brain, are able to achieve human-level performance on image object recognition tasks.", 'This indicates that neural dynamics can provide informative knowledge to help DNNs accomplish specific tasks.', 'In this paper, we introduce the concept of a neuro-AI interface, which aims to use human neural responses as supervised information for helping AI systems solve a task that is difficult when using traditional machine learning strategies.', 'In order to deliver the idea of neuro-AI interfaces, we focus on deploying it to one of the fundamental problems in generative adversarial networks (GANs): designing a proper evaluation metric to evaluate the quality of images produced by GANs.']
### SUMMARY:
| [
"Describe a neuro-AI interface technique to evaluate generative adversarial networks"
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['While recent developments in autonomous vehicle (AV) technology highlight substantial progress, we lack tools for rigorous and scalable testing.', 'Real-world testing, the de facto evaluation environment, places the public in danger, and, due to the rare nature of accidents, will require billions of miles in order to statistically validate performance claims.', 'We implement a simulation framework that can test an entire modern autonomous driving system, including, in particular, systems that employ deep-learning perception and control algorithms.', 'Using adaptive sampling methods to accelerate rare-event probability evaluation, we estimate the probability of an accident under a base distribution governing standard traffic behavior.', 'We demonstrate our framework on a highway scenario.']
### SUMMARY:
| [
"Using adaptive sampling methods to accelerate rare-event probability evaluation, we estimate the probability of an accident under a base distribution governing standard traffic behavior. "
] |
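The estimation problem above is a rare-event probability under a base distribution. Here is a toy, self-contained sketch of the importance-sampling idea; the paper adapts the proposal automatically, whereas a fixed Gaussian tilt stands in below, and all distributions are illustrative rather than a driving simulator.

```python
import numpy as np

rng = np.random.default_rng(0)

def p_logpdf(x):
    # Base distribution over a scalar "danger" statistic: standard normal.
    return -0.5 * x**2 - 0.5 * np.log(2 * np.pi)

def q_logpdf(x, shift=3.0):
    # Proposal tilted toward the rare region: N(shift, 1).
    return -0.5 * (x - shift)**2 - 0.5 * np.log(2 * np.pi)

def rare_event_prob(threshold=3.0, n=100_000, shift=3.0):
    """Estimate P_p(X > threshold) by importance sampling (sketch)."""
    x = rng.normal(shift, 1.0, n)                    # sample from proposal q
    w = np.exp(p_logpdf(x) - q_logpdf(x, shift))     # likelihood ratios p/q
    return float(np.mean(w * (x > threshold)))       # reweighted indicator mean

print(rare_event_prob())  # ~1.35e-3; naive Monte Carlo is far noisier at this n
```

Because most proposal samples land in the rare region, the reweighted estimator reaches low variance with orders of magnitude fewer samples, which is exactly why simulation beats mile-counting on the road.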
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Many tasks in natural language understanding require learning relationships between two sequences for various tasks such as natural language inference, paraphrasing and entailment.', 'These aforementioned tasks are similar in nature, yet they are often modeled individually.', 'Knowledge transfer can be effective for closely related tasks, which is usually carried out using parameter transfer in neural networks.', 'However, transferring all parameters, some of which are irrelevant for a target task, can lead to sub-optimal results and can have a negative effect on performance, referred to as \\textit{negative} transfer.', 'Hence, this paper focuses on the transferability of both instances and parameters across natural language understanding tasks by proposing an ensemble-based transfer learning method in the context of few-shot learning.', 'Our main contribution is a method for mitigating negative transfer across tasks when using neural networks, which involves dynamically bagging small recurrent neural networks trained on different subsets of the source task/s.', 'We present a straightforward yet novel approach for incorporating these networks to a target task for few-shot learning by using a decaying parameter chosen according to the slope changes of a smoothed spline error curve at sub-intervals during training.', 'Our proposed method shows improvements over hard and soft parameter sharing transfer methods in the few-shot learning case and shows competitive performance against models that are trained given full supervision on the target task, from only few examples.']
### SUMMARY:
| [
"A dynamic bagging methods approach to avoiding negatve transfer in neural network few-shot transfer learning"
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Ability to quantify and predict progression of a disease is fundamental for selecting an appropriate treatment.', 'Many clinical metrics cannot be acquired frequently either because of their cost (e.g. MRI, gait analysis) or because they are inconvenient or harmful to a patient (e.g. biopsy, x-ray).', 'In such scenarios, in order to estimate individual trajectories of disease progression, it is advantageous to leverage similarities between patients, i.e. the covariance of trajectories, and find a latent representation of progression.', 'Most existing methods for estimating trajectories do not account for events in-between observations, which dramatically decreases their adequacy for clinical practice.', 'In this study, we develop a machine learning framework named Coordinatewise-Soft-Impute (CSI) for analyzing disease progression from sparse observations in the presence of confounding events.', 'CSI is guaranteed to converge to the global minimum of the corresponding optimization problem.', 'Experimental results also demonstrate the effectiveness of CSI using both simulated and real datasets.']
### SUMMARY:
| [
"A novel matrix completion based algorithm to model disease progression with events"
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Multilingual Neural Machine Translation (NMT) systems are capable of translating between multiple source and target languages within a single system.', 'An important indicator of generalization within these systems is the quality of zero-shot translation - translating between language pairs that the system has never seen during training.', 'However, until now, the zero-shot performance of multilingual models has lagged far behind the quality that can be achieved by using a two step translation process that pivots through an intermediate language (usually English).', 'In this work, we diagnose why multilingual models under-perform in zero shot settings.', 'We propose explicit language invariance losses that guide an NMT encoder towards learning language agnostic representations.', 'Our proposed strategies significantly improve zero-shot translation performance on WMT English-French-German and on the IWSLT 2017 shared task, and for the first time, match the performance of pivoting approaches while maintaining performance on supervised directions.']
### SUMMARY:
| [
"Simple similarity constraints on top of multilingual NMT enables high quality translation between unseen language pairs for the first time."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['We prove the precise scaling, at finite depth and width, for the mean and variance of the neural tangent kernel (NTK) in a randomly initialized ReLU network.', 'The standard deviation is exponential in the ratio of network depth to width.', 'Thus, even in the limit of infinite overparameterization, the NTK is not deterministic if depth and width simultaneously tend to infinity.', 'Moreover, we prove that for such deep and wide networks, the NTK has a non-trivial evolution during training by showing that the mean of its first SGD update is also exponential in the ratio of network depth to width.', 'This is in sharp contrast to the regime where depth is fixed and network width is very large.', 'Our results suggest that, unlike relatively shallow and wide networks, deep and wide ReLU networks are capable of learning data-dependent features even in the so-called lazy training regime.']
### SUMMARY:
| [
"The neural tangent kernel in a randomly initialized ReLU net is non-trivial fluctuations as long as the depth and width are comparable. "
] |
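Schematically, the abstract's claim says that the NTK's relative fluctuations are controlled by the depth-to-width ratio. Our paraphrase, up to constants:

$$\frac{\sqrt{\operatorname{Var}\!\left[K_{\mathrm{NTK}}\right]}}{\mathbb{E}\!\left[K_{\mathrm{NTK}}\right]} \;\asymp\; e^{\,c\, d / n}, \qquad d = \text{depth},\; n = \text{width}$$

So the kernel becomes deterministic only when $d/n \to 0$; if depth and width grow together with $d/n$ bounded below, the fluctuations, and likewise the mean of the kernel's first SGD update, stay non-trivial.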
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Most algorithms for representation learning and link prediction in relational data have been designed for static data.', 'However, the data they are applied to usually evolves with time, such as friend graphs in social networks or user interactions with items in recommender systems.', 'This is also the case for knowledge bases, which contain facts such as (US, has president, B. Obama, [2009-2017]) that are valid only at certain points in time.', 'For the problem of link prediction under temporal constraints, i.e., answering queries of the form (US, has president, ?, 2012), we propose a solution inspired by the canonical decomposition of tensors of order 4.', 'We introduce new regularization schemes and present an extension of ComplEx that achieves state-of-the-art performance.', 'Additionally, we propose a new dataset for knowledge base completion constructed from Wikidata, larger than previous benchmarks by an order of magnitude, as a new reference for evaluating temporal and non-temporal link prediction methods.']
### SUMMARY:
| [
"We propose new tensor decompositions and associated regularizers to obtain state of the art performances on temporal knowledge base completion."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['The conventional approach to solving the recommendation problem greedily ranks individual document candidates by prediction scores.', 'However, this method fails to optimize the slate as a whole, and hence, often struggles to capture biases caused by the page layout and document interdependencies.', 'The slate recommendation problem aims to directly find the optimally ordered subset of documents (i.e. slates) that best serve users’ interests.', 'Solving this problem is hard due to the combinatorial explosion of document candidates and their display positions on the page.', 'Therefore we propose a paradigm shift from the traditional viewpoint of solving a ranking problem to a direct slate generation framework.', 'In this paper, we introduce List Conditional Variational Auto-Encoders (ListCVAE), which learn the joint distribution of documents on the slate conditioned on user responses, and directly generate full slates.', 'Experiments on simulated and real-world data show that List-CVAE outperforms greedy ranking methods consistently on various scales of documents corpora.']
### SUMMARY:
| [
"We used a CVAE type model structure to learn to directly generate slates/whole pages for recommendation systems."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Neural networks for structured data like graphs have been studied extensively in recent years.', 'To date, the bulk of research activity has focused mainly on static graphs.', 'However, most real-world networks are dynamic since their topology tends to change over time.', 'Predicting the evolution of dynamic graphs is a task of high significance in the area of graph mining.', 'Despite its practical importance, the task has not been explored in depth so far, mainly due to its challenging nature.', 'In this paper, we propose a model that predicts the evolution of dynamic graphs.', 'Specifically, we use a graph neural network along with a recurrent architecture to capture the temporal evolution patterns of dynamic graphs.', 'Then, we employ a generative model which predicts the topology of the graph at the next time step and constructs a graph instance that corresponds to that topology.', 'We evaluate the proposed model on several artificial datasets following common network evolving dynamics, as well as on real-world datasets.', 'Results demonstrate the effectiveness of the proposed model.']
### SUMMARY:
| [
"Combining graph neural networks and the RNN graph generative model, we propose a novel architecture that is able to learn from a sequence of evolving graphs and predict the graph topology evolution for the future timesteps"
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['As for knowledge-based question answering, a fundamental problem is to relax the assumption of answerable questions from simple questions to compound questions.', 'Traditional approaches firstly detect topic entity mentioned in questions, then traverse the knowledge graph to find relations as a multi-hop path to answers, while we propose a novel approach to leverage simple-question answerers to answer compound questions.', 'Our model consists of two parts:', '(i) a novel learning-to-decompose agent that learns a policy to decompose a compound question into simple questions and', '(ii) three independent simple-question answerers that classify the corresponding relations for each simple question.', 'Experiments demonstrate that our model learns complex rules of compositionality as stochastic policy, which benefits simple neural networks to achieve state-of-the-art results on WebQuestions and MetaQA.', 'We analyze the interpretable decomposition process as well as generated partitions.']
### SUMMARY:
| [
"We propose a learning-to-decompose agent that helps simple-question answerers to answer compound question over knowledge graph."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Energy-based models output unnormalized log-probability values given data samples.', 'Such an estimation is essential in a variety of application problems such as sample generation, denoising, sample restoration, outlier detection, Bayesian reasoning, and many more.', 'However, standard maximum likelihood training is computationally expensive due to the requirement of sampling from the model distribution.', 'Score matching potentially alleviates this problem, and denoising score matching (Vincent, 2011) is a particularly convenient version.', 'However, previous attempts failed to produce models capable of high-quality sample synthesis.', 'We believe that it is because they only performed denoising score matching over a single noise scale.', 'To overcome this limitation, here we instead learn an energy function using all noise scales.', 'When sampled using annealed Langevin dynamics and a single-step denoising jump, our model produces high-quality samples comparable to state-of-the-art techniques such as GANs, in addition to assigning likelihoods to test data comparable to previous likelihood models.', 'Our model sets a new sample quality baseline in likelihood-based models.', 'We further demonstrate that our model learns the sample distribution and generalizes well on an image inpainting task.']
### SUMMARY:
| [
"Learned energy based model with score matching"
] |
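A minimal sketch of annealed Langevin sampling in the spirit of the record above, using a toy Gaussian energy so the stationary distribution is known; the step-size schedule and all constants are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_energy(x, sigma):
    """Toy energy E(x) = ||x||^2 / (2 sigma^2), so grad E = x / sigma^2."""
    return x / sigma**2

def annealed_langevin(x, sigmas, steps=100, eps=2e-3):
    """Langevin dynamics run at each noise scale, largest scale first."""
    for sigma in sigmas:
        step = eps * (sigma / sigmas[-1]) ** 2     # shrink step with noise level
        for _ in range(steps):
            noise = rng.normal(size=x.shape)
            x = x - 0.5 * step * grad_energy(x, sigma) + np.sqrt(step) * noise
    return x

sigmas = np.geomspace(10.0, 0.1, num=10)           # anneal from coarse to fine
samples = annealed_langevin(10 * rng.normal(size=(1000, 2)), sigmas)
print(samples.std(axis=0))                         # approaches the final sigma, 0.1
```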
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['A restricted Boltzmann machine (RBM) learns a probabilistic distribution over its input samples and has numerous uses like dimensionality reduction, classification and generative modeling.', 'Conventional RBMs accept vectorized data that dismisses potentially important structural information in the original tensor (multi-way) input.', 'Matrix-variate and tensor-variate RBMs, named MvRBM and TvRBM, have been proposed but are all restrictive by construction.', 'This work presents the matrix product operator RBM (MPORBM) that utilizes a tensor network generalization of Mv/TvRBM, preserves input formats in both the visible and hidden layers, and results in higher expressive power.', 'A novel training algorithm integrating contrastive divergence and an alternating optimization procedure is also developed.']
### SUMMARY:
| [
"Propose a general tensor-based RBM model which can compress the model greatly at the same keep a strong model expression capacity"
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Autonomous driving is still considered an “unsolved problem” given its inherently high variability, and many processes associated with its development, like vehicle control and scene recognition, remain open issues.', 'Although reinforcement learning algorithms have achieved notable results in games and some robotic manipulation tasks, this technique has not been widely scaled up to more challenging real-world applications like autonomous driving.', "In this work, we propose a deep reinforcement learning (RL) algorithm embedding an actor-critic architecture with multi-step returns to achieve better robustness of the agent's learning strategies when acting in complex and unstable environments.", 'The experiment is conducted with the Carla simulator, which offers customizable and realistic urban driving conditions.', 'The developed deep actor RL guided by a policy-evaluator critic distinctly surpasses the performance of a standard deep RL agent.']
### SUMMARY:
| [
"An actor-critic reinforcement learning approach with multi-step returns applied to autonomous driving with Carla simulator."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['A fundamental, and still largely unanswered, question in the context of Generative Adversarial Networks (GANs) is whether GANs are actually able to capture the key characteristics of the datasets they are trained on.', 'The current approaches to examining this issue require significant human supervision, such as visual inspection of sampled images, and often offer only fairly limited scalability.', 'In this paper, we propose new techniques that employ classification-based perspective to evaluate synthetic GAN distributions and their capability to accurately reflect the essential properties of the training data.', 'These techniques require only minimal human supervision and can easily be scaled and adapted to evaluate a variety of state-of-the-art GANs on large, popular datasets.', 'They also indicate that GANs have significant problems in reproducing the more distributional properties of the training dataset.', 'In particular, the diversity of such synthetic data is orders of magnitude smaller than that of the original data.']
### SUMMARY:
| [
"We propose new methods for evaluating and quantifying the quality of synthetic GAN distributions from the perspective of classification tasks"
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['The goal of survival clustering is to map subjects (e.g., users in a social network, patients in a medical study) to $K$ clusters ranging from low-risk to high-risk.', 'Existing survival methods assume the presence of clear \\textit{end-of-life} signals or introduce them artificially using a pre-defined timeout.', 'In this paper, we forego this assumption and introduce a loss function that differentiates between the empirical lifetime distributions of the clusters using a modified Kuiper statistic.', 'We learn a deep neural network by optimizing this loss, that performs a soft clustering of users into survival groups.', 'We apply our method to a social network dataset with over 1M subjects, and show significant improvement in C-index compared to alternatives.']
### SUMMARY:
| [
"The goal of survival clustering is to map subjects into clusters. Without end-of-life signals, this is a challenging task. To address this task we propose a new loss function by modifying the Kuiper statistics."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Bayesian optimization (BO) is a popular methodology to tune the hyperparameters of expensive black-box functions.', 'Despite its success, standard BO focuses on a single task at a time and is not designed to leverage information from related functions, such as tuning performance metrics of the same algorithm across multiple datasets.', 'In this work, we introduce a novel approach to achieve transfer learning across different datasets as well as different metrics.', 'The main idea is to regress the mapping from hyperparameter to metric quantiles with a semi-parametric Gaussian Copula distribution, which provides robustness against different scales or outliers that can occur in different tasks.', 'We introduce two methods to leverage this estimation: a Thompson sampling strategy as well as a Gaussian Copula process using such quantile estimate as a prior.', 'We show that these strategies can combine the estimation of multiple metrics such as runtime and accuracy, steering the optimization toward cheaper hyperparameters for the same level of accuracy.', 'Experiments on an extensive set of hyperparameter tuning tasks demonstrate significant improvements over state-of-the-art methods.']
### SUMMARY:
| [
"We show how using semi-parametric prior estimations can speed up HPO significantly across datasets and metrics."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['We propose Pure CapsNets (P-CapsNets) without routing procedures.', 'Specifically, we make three modifications to CapsNets. ', 'First, we remove routing procedures from CapsNets based on the observation that the coupling coefficients can be learned implicitly.', 'Second, we replace the convolutional layers in CapsNets to improve efficiency.', 'Third, we package the capsules into rank-3 tensors to further improve efficiency.', 'Experiments show that P-CapsNets achieve better performance than CapsNets with varied routing procedures while using significantly fewer parameters on MNIST & CIFAR10.', 'The high efficiency of P-CapsNets is even comparable to some deep compression models.', 'For example, we achieve more than 99% accuracy on MNIST by using only 3888 parameters. ', 'We visualize the capsules as well as the corresponding correlation matrix to show a possible way of initializing CapsNets in the future.', 'We also explore the adversarial robustness of P-CapsNets compared to CNNs.']
### SUMMARY:
| [
"Routing procedures are not necessary for CapsNets"
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['\tA recent line of work has studied the statistical properties of neural networks to great success from a {\\it mean field theory} perspective, making and verifying very precise predictions of neural network behavior and test time performance.\n\t', 'In this paper, we build upon these works to explore two methods for taming the behaviors of random residual networks (with only fully connected layers and no batchnorm).\n\t', 'The first method is {\\it width variation (WV)}, i.e. varying the widths of layers as a function of depth.\n\t', 'We show that width decay reduces gradient explosion without affecting the mean forward dynamics of the random network.\n\t', 'The second method is {\\it variance variation (VV)}, i.e. changing the initialization variances of weights and biases over depth.\n\t', 'We show VV, used appropriately, can reduce gradient explosion of tanh and ReLU resnets from $\\exp(\\Theta(\\sqrt L))$ and $\\exp(\\Theta(L))$ respectively to constant $\\Theta(1)$.\n\t', 'A complete phase-diagram is derived for how variance decay affects different dynamics, such as those of gradient and activation norms.\n\t', 'In particular, we show the existence of many phase transitions where these dynamics switch between exponential, polynomial, logarithmic, and even constant behaviors.\n\t', 'Using the obtained mean field theory, we are able to track surprisingly well how VV at initialization time affects training and test time performance on MNIST after a set number of epochs: the level sets of test/train set accuracies coincide with the level sets of the expectations of certain gradient norms or of metric expressivity (as defined in \\cite{yang_meanfield_2017}), a measure of expansion in a random neural network.\n\t', 'Based on insights from past works in deep mean field theory and information geometry, we also provide a new perspective on the gradient explosion/vanishing problems: they lead to ill-conditioning of the Fisher information matrix, causing optimization troubles.']
### SUMMARY:
| [
"By setting the width or the initialization variance of each layer differently, we can actually subdue gradient explosion problems in residual networks (with fully connected layers and no batchnorm). A mathematical theory is developed that not only tells you how to do it, but also surprisingly is able to predict, after you apply such tricks, how fast your network trains to achieve a certain test set performance. This is some black magic stuff, and it's called \"Deep Mean Field Theory.\""
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['We explore the collaborative multi-agent setting where a team of deep reinforcement learning agents attempt to solve a shared task in partially observable environments.', 'In this scenario, learning an effective communication protocol is key.', 'We propose a communication protocol that allows for targeted communication, where agents learn \\emph{what} messages to send and \\emph{who} to send them to.', 'Additionally, we introduce a multi-stage communication approach where the agents co-ordinate via several rounds of communication before taking an action in the environment.', 'We evaluate our approach on several cooperative multi-agent tasks, of varying difficulties with varying number of agents, in a variety of environments ranging from 2D grid layouts of shapes and simulated traffic junctions to complex 3D indoor environments.', 'We demonstrate the benefits of targeted as well as multi-stage communication.', 'Moreover, we show that the targeted communication strategies learned by the agents are quite interpretable and intuitive.']
### SUMMARY:
| [
"Targeted communication in multi-agent cooperative reinforcement learning"
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['It is difficult for beginners at etching latte art to make well-balanced patterns using two fluids with different viscosities, such as foamed milk and syrup.', 'Even when making etching latte art while watching instructional videos that show the procedure, it is difficult to keep the pattern balanced.', 'Thus, well-balanced etching latte art cannot be made easily. \n', 'In this paper, we propose a system which supports beginners in making well-balanced etching latte art by projecting the making procedure directly onto a cappuccino. \n', 'The experimental results show the progress made by using our system. ', 'We also discuss the similarity between the etching latte art and the design templates using background subtraction.']
### SUMMARY:
| [
"We have developed an etching latte art support system which projects the making procedure directly onto a cappuccino to help the beginners to make well-balanced etching latte art."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['We focus on temporal self-supervision for GAN-based video generation tasks.', 'While adversarial training successfully yields generative models for a variety of areas, temporal relationship in the generated data is much less explored.', 'This is crucial for sequential generation tasks, e.g. video super-resolution and unpaired video translation.', 'For the former, state-of-the-art methods often favor simpler norm losses such as L2 over adversarial training.', 'However, their averaging nature easily leads to temporally smooth results with an undesirable lack of spatial detail.', 'For unpaired video translation, existing approaches modify the generator networks to form spatio-temporal cycle consistencies.', 'In contrast, we focus on improving the learning objectives and propose a temporally self-supervised algorithm.', 'For both tasks, we show that temporal adversarial learning is key to achieving temporally coherent solutions without sacrificing spatial detail.', 'We also propose a novel Ping-Pong loss to improve the long-term temporal consistency.', 'It effectively prevents recurrent networks from accumulating artifacts temporally without depressing detailed features.', 'We also propose a first set of metrics to quantitatively evaluate the accuracy as well as the perceptual quality of the temporal evolution.', 'A series of user studies confirms the rankings computed with these metrics.']
### SUMMARY:
| [
"We propose temporal self-supervisions for learning stable temporal functions with GANs."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['The scarcity of labeled training data often prohibits the internationalization of NLP models to multiple languages. ', 'Cross-lingual understanding has made progress in this area using language-universal representations.', 'However, most current approaches focus on the problem as one of aligning language and do not address the natural domain drift across languages and cultures. ', 'In this paper, we address the domain gap in the setting of semi-supervised cross-lingual document classification, where labeled data is available in a source language and only unlabeled data is available in the target language. ', 'We combine a state-of-the-art unsupervised learning method, masked language modeling pre-training, with a recent method for semi-supervised learning, Unsupervised Data Augmentation (UDA), to simultaneously close the language and the domain gap. ', 'We show that addressing the domain gap in cross-lingual tasks is crucial. ', 'We improve over strong baselines and achieve a new state-of-the-art for cross-lingual document classification.']
### SUMMARY:
| [
"Semi-supervised Cross-lingual Document Classification"
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['A distinct commonality between HMMs and RNNs is that they both learn hidden representations for sequential data.', 'In addition, it has been noted that the backward computation of the Baum-Welch algorithm for HMMs is a special case of the back propagation algorithm used for neural networks (Eisner (2016)). ', 'Do these observations suggest that, despite their many apparent differences, HMMs are a special case of RNNs? ', 'In this paper, we investigate a series of architectural transformations between HMMs and RNNs, both through theoretical derivations and empirical hybridization, to answer this question.', 'In particular, we investigate three key design factors—independence assumptions between the hidden states and the observation, the placement of softmax, and the use of non-linearity—in order to pin down their empirical effects. ', 'We present a comprehensive empirical study to provide insights on the interplay between expressivity and interpretability with respect to language modeling and parts-of-speech induction.']
### SUMMARY:
| [
"Are HMMs a special case of RNNs? We investigate a series of architectural transformations between HMMs and RNNs, both through theoretical derivations and empirical hybridization and provide new insights."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['We present an information-theoretic framework for understanding trade-offs in unsupervised learning of deep latent-variables models using variational inference.', 'This framework emphasizes the need to consider latent-variable models along two dimensions: the ability to reconstruct inputs (distortion) and the communication cost (rate).', 'We derive the optimal frontier of generative models in the two-dimensional rate-distortion plane, and show how the standard evidence lower bound objective is insufficient to select between points along this frontier.', 'However, by performing targeted optimization to learn generative models with different rates, we are able to learn many models that can achieve similar generative performance but make vastly different trade-offs in terms of the usage of the latent variable.', 'Through experiments on MNIST and Omniglot with a variety of architectures, we show how our framework sheds light on many recent proposed extensions to the variational autoencoder family.']
### SUMMARY:
| [
"We provide an information theoretic and experimental analysis of state-of-the-art variational autoencoders."
] |
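One common way to write the rate and distortion terms the record above refers to, with a Lagrange multiplier $\beta$ sweeping out the frontier; the exact notation is an assumption and may differ in detail from the paper's.

```latex
\begin{align}
R &= \mathbb{E}_{p(x)}\,\mathrm{KL}\!\left(q(z \mid x)\,\|\,p(z)\right)
  & &\text{(rate: communication cost)} \\
D &= -\,\mathbb{E}_{p(x)}\,\mathbb{E}_{q(z \mid x)}\!\left[\log p(x \mid z)\right]
  & &\text{(distortion: reconstruction)} \\
-\mathrm{ELBO} &= D + R,
  \qquad \mathcal{L}_{\beta} = D + \beta R, \quad \beta \ge 0.
\end{align}
```

The standard ELBO corresponds to $\beta = 1$, which is why it cannot by itself distinguish points along the $D + R$ frontier.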
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Graph Neural Networks (GNNs) are an effective framework for representation learning of graphs.', 'GNNs follow a neighborhood aggregation scheme, where the representation vector of a node is computed by recursively aggregating and transforming representation vectors of its neighboring nodes.', 'Many GNN variants have been proposed and have achieved state-of-the-art results on both node and graph classification tasks.', 'However, despite GNNs revolutionizing graph representation learning, there is limited understanding of their representational properties and limitations.', 'Here, we present a theoretical framework for analyzing the expressive power of GNNs to capture different graph structures.', 'Our results characterize the discriminative power of popular GNN variants, such as Graph Convolutional Networks and GraphSAGE, and show that they cannot learn to distinguish certain simple graph structures.', 'We then develop a simple architecture that is provably the most expressive among the class of GNNs and is as powerful as the Weisfeiler-Lehman graph isomorphism test.', 'We empirically validate our theoretical findings on a number of graph classification benchmarks, and demonstrate that our model achieves state-of-the-art performance.']
### SUMMARY:
| [
"We develop theoretical foundations for the expressive power of GNNs and design a provably most powerful GNN."
] |
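The provably most expressive architecture the record above refers to relies on sum aggregation; a minimal numpy sketch of that style of layer (the two-layer MLP and all sizes are toy assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def gin_style_layer(A, H, W1, W2, eps=0.0):
    """h_v <- MLP((1 + eps) * h_v + sum of neighbour features).

    Sum aggregation (unlike mean or max) can distinguish multisets of
    neighbour features, which is what lets such layers match the power
    of the Weisfeiler-Lehman test.
    """
    agg = (1 + eps) * H + A @ H                    # self term plus neighbour sum
    return np.maximum(agg @ W1, 0.0) @ W2          # toy two-layer MLP

n, d = 6, 8
A = rng.integers(0, 2, size=(n, n))
A = np.triu(A, 1); A = A + A.T                     # symmetric, no self-loops
H = rng.normal(size=(n, d))
W1, W2 = rng.normal(size=(d, 16)), rng.normal(size=(16, d))

H = gin_style_layer(A, H, W1, W2)
graph_repr = H.sum(axis=0)                         # sum readout for graph tasks
print(graph_repr.shape)
```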
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['We introduce MTLAB, a new algorithm for learning multiple related tasks with strong theoretical guarantees.', 'Its key idea is to perform learning sequentially over the data of all tasks, without interruptions or restarts at task boundaries.', 'Predictors for individual tasks are derived from this process by an additional online-to-batch conversion step.\n\n', 'By learning across task boundaries, MTLAB achieves a sublinear regret of true risks in the number of tasks.', 'In the lifelong learning setting, this leads to an improved generalization bound that converges with the total number of samples across all observed tasks, instead of the number of examples per tasks or the number of tasks independently.', 'At the same time, it is widely applicable: it can handle finite sets of tasks, as common in multi-task learning, as well as stochastic task sequences, as studied in lifelong learning.']
### SUMMARY:
| [
"A new algorithm for online multi-task learning that learns without restarts at the task borders"
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Recent work has exhibited the surprising cross-lingual abilities of multilingual BERT (M-BERT) -- surprising since it is trained without any cross-lingual objective and with no aligned data.', 'In this work, we provide a comprehensive study of the contribution of different components in M-BERT to its cross-lingual ability.', 'We study the impact of linguistic properties of the languages, the architecture of the model, and of the learning objectives.', 'The experimental study is done in the context of three typologically different languages -- Spanish, Hindi, and Russian -- and using two conceptually different NLP tasks, textual entailment and named entity recognition.', 'Among our key conclusions is the fact that lexical overlap between languages plays a negligible role in the cross-lingual success, while the depth of the network is an important part of it']
### SUMMARY:
| [
"Cross-Lingual Ability of Multilingual BERT: An Empirical Study"
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['We analyze the dynamics of training deep ReLU networks and their implications on generalization capability.', 'Using a teacher-student setting, we discovered a novel relationship between the gradient received by hidden student nodes and the activations of teacher nodes for deep ReLU networks.', 'With this relationship and the assumption of small overlapping teacher node activations, we prove that (1) student nodes whose weights are initialized to be close to teacher nodes converge to them at a faster rate, and (2) in over-parameterized regimes and 2-layer case, while a small set of lucky nodes do converge to the teacher nodes, the fan-out weights of other nodes converge to zero.', 'This framework provides insight into multiple puzzling phenomena in deep learning like over-parameterization, implicit regularization, lottery tickets, etc.', 'We verify our assumption by showing that the majority of BatchNorm biases of pre-trained VGG11/16 models are negative.', 'Experiments on (1) random deep teacher networks with Gaussian inputs, (2) teacher network pre-trained on CIFAR-10 and (3) extensive ablation studies validate our multiple theoretical predictions.']
### SUMMARY:
| [
"A theoretical framework for deep ReLU network that can explains multiple puzzling phenomena like over-parameterization, implicit regularization, lottery tickets, etc. "
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['State-of-the-art methods for learning cross-lingual word embeddings have relied on bilingual dictionaries or parallel corpora.', 'Recent studies showed that the need for parallel data supervision can be alleviated with character-level information.', 'While these methods showed encouraging results, they are not on par with their supervised counterparts and are limited to pairs of languages sharing a common alphabet.', 'In this work, we show that we can build a bilingual dictionary between two languages without using any parallel corpora, by aligning monolingual word embedding spaces in an unsupervised way.', 'Without using any character information, our model even outperforms existing supervised methods on cross-lingual tasks for some language pairs.', 'Our experiments demonstrate that our method works very well also for distant language pairs, like English-Russian or English-Chinese.', 'We finally describe experiments on the English-Esperanto low-resource language pair, on which there only exists a limited amount of parallel data, to show the potential impact of our method in fully unsupervised machine translation.', 'Our code, embeddings and dictionaries are publicly available.']
### SUMMARY:
| [
"Aligning languages without the Rosetta Stone: with no parallel data, we construct bilingual dictionaries using adversarial training, cross-domain local scaling, and an accurate proxy criterion for cross-validation."
] |
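One closed-form ingredient commonly used in this line of unsupervised alignment work is the orthogonal Procrustes solution, applied once a seed dictionary has been induced; a numpy sketch on synthetic data (the dictionary is assumed given here, which is the part the adversarial step provides):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "monolingual" embeddings: Y is a hidden rotation of X plus noise.
n, d = 1000, 50
X = rng.normal(size=(n, d))
Q, _ = np.linalg.qr(rng.normal(size=(d, d)))       # hidden ground-truth rotation
Y = X @ Q + 0.01 * rng.normal(size=(n, d))

# Orthogonal Procrustes: W = argmin over orthogonal W of ||X W - Y||_F,
# solved in closed form from the SVD of X^T Y.
U, _, Vt = np.linalg.svd(X.T @ Y)
W = U @ Vt

print(np.abs(X @ W - Y).mean())                    # near zero: spaces aligned
```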
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Questions that require counting a variety of objects in images remain a major challenge in visual question answering (VQA).', 'The most common approaches to VQA involve either classifying answers based on fixed length representations of both the image and question or summing fractional counts estimated from each section of the image.', 'In contrast, we treat counting as a sequential decision process and force our model to make discrete choices of what to count.', 'Specifically, the model sequentially selects from detected objects and learns interactions between objects that influence subsequent selections.', 'A distinction of our approach is its intuitive and interpretable output, as discrete counts are automatically grounded in the image.', 'Furthermore, our method outperforms the state of the art architecture for VQA on multiple metrics that evaluate counting.']
### SUMMARY:
| [
"We perform counting for visual question answering; our model produces interpretable outputs by counting directly from detected objects."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Graphs are fundamental data structures required to model many important real-world data, from knowledge graphs, physical and social interactions to molecules and proteins.', 'In this paper, we study the problem of learning generative models of graphs from a dataset of graphs of interest.', 'After learning, these models can be used to generate samples with similar properties as the ones in the dataset. ', 'Such models can be useful in a lot of applications, e.g. drug discovery and knowledge graph construction.', 'The task of learning generative models of graphs, however, has its unique challenges.', 'In particular, how to handle symmetries in graphs and ordering of its elements during the generation process are important issues.', 'We propose a generic graph neural net based model that is capable of generating any arbitrary graph. ', 'We study its performance on a few graph generation tasks compared to baselines that exploit domain knowledge. ', 'We discuss potential issues and open problems for such generative models going forward.']
### SUMMARY:
| [
"We study the graph generation problem and propose a powerful deep generative model capable of generating arbitrary graphs."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['We introduce a neural network that represents sentences by composing their words according to induced binary parse trees.', 'We use Tree-LSTM as our composition function, applied along a tree structure found by a fully differentiable natural language chart parser.', 'Our model simultaneously optimises both the composition function and the parser, thus eliminating the need for externally-provided parse trees which are normally required for Tree-LSTM.', 'It can therefore be seen as a tree-based RNN that is unsupervised with respect to the parse trees.', 'As it is fully differentiable, our model is easily trained with an off-the-shelf gradient descent method and backpropagation.', 'We demonstrate that it achieves better performance compared to various supervised Tree-LSTM architectures on a textual entailment task and a reverse dictionary task.', 'Finally, we show how performance can be improved with an attention mechanism which fully exploits the parse chart, by attending over all possible subspans of the sentence.']
### SUMMARY:
| [
"Represent sentences by composing them with Tree-LSTMs according to automatically induced parse trees."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Pruning neural networks for wiring length efficiency is considered.', 'Three techniques are proposed and experimentally tested: distance-based regularization, nested-rank pruning, and layer-by-layer bipartite matching.', 'The first two algorithms are used in the training and pruning phases, respectively, and the third is used in the arranging neurons phase.', 'Experiments show that distance-based regularization with weight based pruning tends to perform the best, with or without layer-by-layer bipartite matching.', 'These results suggest that these techniques may be useful in creating neural networks for implementation in widely deployed specialized circuits.']
### SUMMARY:
| [
"Three new algorithms with ablation studies to prune neural network to optimize for wiring length, as opposed to number of remaining weights."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Given a video and a sentence, the goal of weakly-supervised video moment retrieval is to locate the video segment which is described by the sentence without having access to temporal annotations during training. ', 'Instead, a model must learn how to identify the correct segment (i.e. moment) when only being provided with video-sentence pairs. ', 'Thus, an inherent challenge is automatically inferring the latent correspondence between visual and language representations.', 'To facilitate this alignment, we propose our Weakly-supervised Moment Alignment Network (wMAN) which exploits a multi-level co-attention mechanism to learn richer multimodal representations.', 'The aforementioned mechanism is comprised of a Frame-By-Word interaction module as well as a novel Word-Conditioned Visual Graph (WCVG).', 'Our approach also incorporates a novel application of positional encodings, commonly used in Transformers, to learn visual-semantic representations that contain contextual information of their relative positions in the temporal sequence through iterative message-passing.', 'Comprehensive experiments on the DiDeMo and Charades-STA datasets demonstrate the effectiveness of our learned representations: our combined wMAN model not only outperforms the state-of-the-art weakly-supervised method by a significant margin but also does better than strongly-supervised state-of-the-art methods on some metrics.']
### SUMMARY:
| [
"Weakly-Supervised Text-Based Video Moment Retrieval"
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['In machine learning tasks, overfitting frequently crops up when the number of samples of the target domain is insufficient, as the generalization ability of the classifier is poor in this circumstance.', 'To solve this problem, transfer learning utilizes the knowledge of similar domains to improve the robustness of the learner.', 'The main idea of existing transfer learning algorithms is to reduce the difference between domains by sample selection or domain adaptation.', 'However, no matter what transfer learning algorithm we use, the difference always exists, and the hybrid training of source and target data reduces the fitting capability of the learner on the target domain.', 'Moreover, when the relatedness between domains is too low, negative transfer is more likely to occur.', 'To tackle this problem, we propose a two-phase transfer learning architecture based on ensemble learning, which uses existing transfer learning algorithms to train the weak learners in the first stage and uses the predictions of target data to train the final learner in the second stage.', 'Under this architecture, the fitting capability and generalization capability can be guaranteed at the same time.', 'We evaluated the proposed method on public datasets, which demonstrates the effectiveness and robustness of our proposed method.']
### SUMMARY:
| [
"How to use stacked generalization to improve the performance of existing transfer learning algorithms when limited labeled data is available."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Deep learning has achieved astonishing results on many tasks with large amounts of data and generalization within the proximity of training data.', 'For many important real-world applications, these requirements are unfeasible and additional prior knowledge on the task domain is required to overcome the resulting problems.', 'In particular, learning physics models for model-based control requires robust extrapolation from fewer samples – often collected online in real-time – and model errors may lead to drastic damage to the system.\n', 'Directly incorporating physical insight has enabled us to obtain a novel deep model learning approach that extrapolates well while requiring fewer samples.', 'As a first example, we propose Deep Lagrangian Networks (DeLaN) as a deep network structure upon which Lagrangian Mechanics have been imposed.', 'DeLaN can learn the equations of motion of a mechanical system (i.e., system dynamics) with a deep network efficiently while ensuring physical plausibility.\n', 'The resulting DeLaN network performs very well at robot tracking control.', 'The proposed method not only outperforms previous model learning approaches in learning speed but also exhibits substantially improved and more robust extrapolation to novel trajectories, and it learns online in real time.']
### SUMMARY:
| [
"This paper introduces a physics prior for Deep Learning and applies the resulting network topology for model-based control."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Many challenging prediction problems, from molecular optimization to program synthesis, involve creating complex structured objects as outputs.', 'However, available training data may not be sufficient for a generative model to learn all possible complex transformations.', 'By leveraging the idea that evaluation is easier than generation, we show how a simple, broadly applicable, iterative target augmentation scheme can be surprisingly effective in guiding the training and use of such models.', 'Our scheme views the generative model as a prior distribution, and employs a separately trained filter as the likelihood.', "In each augmentation step, we filter the model's outputs to obtain additional prediction targets for the next training epoch.", 'Our method is applicable in the supervised as well as semi-supervised settings.', 'We demonstrate that our approach yields significant gains over strong baselines both in molecular optimization and program synthesis.', 'In particular, our augmented model outperforms the previous state-of-the-art in molecular optimization by over 10% in absolute gain.']
### SUMMARY:
| [
"We improve generative models by proposing a meta-algorithm that filters new training data from the model's outputs."
] |
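A toy sketch of the filter-then-retrain control flow the record above describes; the string-mutation "model" and the simple acceptance "filter" are stand-ins for illustration, not the paper's components:

```python
import random

random.seed(0)

def propose(model, x, k=20):
    """Stand-in generative model: k independently corrupted copies of x."""
    return ["".join(c if random.random() > model["noise"] else "x" for c in x)
            for _ in range(k)]

def passes_filter(y):
    """Stand-in filter (the 'likelihood'): accept nearly clean outputs."""
    return y.count("x") <= 1

data = [("hello", "hello"), ("world", "world")]
model = {"noise": 0.3}

for epoch in range(3):
    augmented = list(data)
    for x, _ in data:
        # Filtered model outputs become extra targets for the next epoch.
        augmented += [(x, y) for y in propose(model, x) if passes_filter(y)]
    model["noise"] *= 0.8       # stand-in for a training step on `augmented`
    print(f"epoch {epoch}: {len(augmented)} training pairs")
```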
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['The Boltzmann distribution is a natural model for many systems, from brains to materials and biomolecules, but is often of limited utility for fitting data because Monte Carlo algorithms are unable to simulate it in available time.', 'This gap between the expressive capabilities and sampling practicalities of energy-based models is exemplified by the protein folding problem, since energy landscapes underlie contemporary knowledge of protein biophysics but computer simulations are challenged to fold all but the smallest proteins from first principles.', 'In this work we aim to bridge the gap between the expressive capacity of energy functions and the practical capabilities of their simulators by using an unrolled Monte Carlo simulation as a model for data.', 'We compose a neural energy function with a novel and efficient simulator based on Langevin dynamics to build an end-to-end-differentiable model of atomic protein structure given amino acid sequence information.', "We introduce techniques for stabilizing backpropagation under long roll-outs and demonstrate the model's capacity to make multimodal predictions and to, in some cases, generalize to unobserved protein fold types when trained on a large corpus of protein structures."]
### SUMMARY:
| [
"We use an unrolled simulator as an end-to-end differentiable model of protein structure and show it can (sometimes) hierarchically generalize to unseen fold topologies."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Progress in understanding how individual animals learn requires high-throughput standardized methods for behavioral training and ways of adapting training.', 'During the course of training with hundreds or thousands of trials, an animal may change its underlying strategy abruptly, and capturing these changes requires real-time inference of the animal’s latent decision-making strategy.', 'To address this challenge, we have developed an integrated platform for automated animal training, and an iterative decision-inference model that is able to infer the momentary decision-making policy, and predict the animal’s choice on each trial with an accuracy of ~80\\%, even when the animal is performing poorly.', 'We also combined decision predictions at single-trial resolution with automated pose estimation to assess movement trajectories.', 'Analysis of these features revealed categories of movement trajectories that associate with decision confidence.']
### SUMMARY:
| [
"Automated mice training for neuroscience with online iterative latent strategy inference for behavior prediction"
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Recurrent neural networks (RNNs) are a powerful tool for modeling sequential data.', 'Despite their widespread usage, understanding how RNNs solve complex problems remains elusive. ', 'Here, we characterize how popular RNN architectures perform document-level sentiment classification.', 'Despite their theoretical capacity to implement complex, high-dimensional computations, we find that trained networks converge to highly interpretable, low-dimensional representations. ', 'We identify a simple mechanism, integration along an approximate line attractor, and find this mechanism present across RNN architectures (including LSTMs, GRUs, and vanilla RNNs).', 'Overall, these results demonstrate that surprisingly universal and human interpretable computations can arise across a range of recurrent networks.']
### SUMMARY:
| [
"We analyze recurrent networks trained on sentiment classification, and find that they all exhibit approximate line attractor dynamics when solving this task."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Parallel developments in neuroscience and deep learning have led to mutually productive exchanges, pushing our understanding of real and artificial neural networks in sensory and cognitive systems.', 'However, this interaction between fields is less developed in the study of motor control.', 'In this work, we develop a virtual rodent as a platform for the grounded study of motor activity in artificial models of embodied control.', 'We then use this platform to study motor activity across contexts by training a model to solve four complex tasks.', "Using methods familiar to neuroscientists, we describe the behavioral representations and algorithms employed by different layers of the network using a neuroethological approach to characterize motor activity relative to the rodent's behavior and goals.", 'We find that the model uses two classes of representations which respectively encode the task-specific behavioral strategies and task-invariant behavioral kinematics.', 'These representations are reflected in the sequential activity and population dynamics of neural subpopulations.', 'Overall, the virtual rodent facilitates grounded collaborations between deep reinforcement learning and motor neuroscience.']
### SUMMARY:
| [
"We built a physical simulation of a rodent, trained it to solve a set of tasks, and analyzed the resulting networks."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['We present Optimal Transport GAN (OT-GAN), a variant of generative adversarial nets minimizing a new metric measuring the distance between the generator distribution and the data distribution.', 'This metric, which we call mini-batch energy distance, combines optimal transport in primal form with an energy distance defined in an adversarially learned feature space, resulting in a highly discriminative distance function with unbiased mini-batch gradients.', 'Experimentally we show OT-GAN to be highly stable when trained with large mini-batches, and we present state-of-the-art results on several popular benchmark problems for image generation.']
### SUMMARY:
| [
"An extension of GANs combining optimal transport in primal form with an energy distance defined in an adversarially learned feature space."
] |
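The energy distance at the heart of the record above has a simple minibatch estimator; a numpy sketch (the adversarially learned feature map is omitted, i.e. features here are the raw points, which is an assumption for brevity):

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_pdist(A, B):
    """Mean Euclidean distance between all rows of A and all rows of B."""
    diff = A[:, None, :] - B[None, :, :]
    return np.sqrt((diff ** 2).sum(-1)).mean()

def energy_distance_sq(X, Y):
    """D^2(p, q) = 2 E||x - y|| - E||x - x'|| - E||y - y'|| on minibatches."""
    return 2 * mean_pdist(X, Y) - mean_pdist(X, X) - mean_pdist(Y, Y)

real = rng.normal(loc=0.0, size=(128, 16))
fake_far = rng.normal(loc=2.0, size=(128, 16))
fake_near = rng.normal(loc=0.1, size=(128, 16))

print(energy_distance_sq(real, fake_far))    # clearly positive: mismatch
print(energy_distance_sq(real, fake_near))   # near zero: distributions close
```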
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['We build a virtual agent for learning language in a 2D maze-like world.', 'The agent sees images of the surrounding environment, listens to a virtual teacher, and takes actions to receive rewards.', 'It interactively learns the teacher’s language from scratch based on two language use cases: sentence-directed navigation and question answering.', 'It learns simultaneously the visual representations of the world, the language, and the action control.', 'By disentangling language grounding from other computational routines and sharing a concept detection function between language grounding and prediction, the agent reliably interpolates and extrapolates to interpret sentences that contain new word combinations or new words missing from training sentences.', 'The new words are transferred from the answers of language prediction.', 'Such a language ability is trained and evaluated on a population of over 1.6 million distinct sentences consisting of 119 object words, 8 color words, 9 spatial-relation words, and 50 grammatical words.', 'The proposed model significantly outperforms five comparison methods for interpreting zero-shot sentences.', 'In addition, we demonstrate human-interpretable intermediate outputs of the model in the appendix.']
### SUMMARY:
| [
"Training an agent in a 2D virtual world for grounded language acquisition and generalization."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Reinforcement learning algorithms, though successful, tend to over-fit to training environments, thereby hampering their application to the real world.', 'This paper proposes $\\text{W}\\text{R}^{2}\\text{L}$ -- a robust reinforcement learning algorithm with significant robust performance on low- and high-dimensional control tasks.', 'Our method formalises robust reinforcement learning as a novel min-max game with a Wasserstein constraint for a correct and convergent solver.', 'Apart from the formulation, we also propose an efficient and scalable solver following a novel zero-order optimisation method that we believe can be useful to numerical optimisation in general. \n', 'We empirically demonstrate significant gains compared to standard and robust state-of-the-art algorithms on high-dimensional MuJoCo environments.']
### SUMMARY:
| [
"An RL algorithm that learns to be robust to changes in dynamics"
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Partially observable Markov decision processes (POMDPs) are a natural model for scenarios where one has to deal with incomplete knowledge and random events.\n', 'Applications include, but are not limited to, robotics and motion planning.\n', 'However, many relevant properties of POMDPs are either undecidable or very expensive to compute in terms of both runtime and memory consumption.\n', 'In our work, we develop a game-based abstraction method that is able to deliver safe bounds and tight\n approximations for important sub-classes of such properties.\n', 'We discuss the theoretical implications and showcase the applicability of our results on a broad spectrum of benchmarks.\n']
### SUMMARY:
| [
"This paper provides a game-based abstraction scheme to compute provably sound policies for POMDPs."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['In this paper we approach two relevant deep learning topics:', 'i) tackling of graph structured input data and', 'ii) a better understanding and analysis of deep networks and related learning algorithms.', 'With this in mind we focus on the topological classification of reachability in a particular subset of planar graphs (Mazes).', 'Doing so, we are able to model the topology of data while staying in Euclidean space, thus allowing its processing with standard CNN architectures.', 'We suggest a suitable architecture for this problem and show that it can express a perfect solution to the classification task.', 'The shape of the cost function around this solution is also derived and, remarkably, does not depend on the size of the maze in the large maze limit.', 'Responsible for this behavior are rare events in the dataset which strongly regulate the shape of the cost function near this global minimum.', 'We further identify an obstacle to learning in the form of poorly performing local minima in which the network chooses to ignore some of the inputs.', 'We further support our claims with training experiments and numerical analysis of the cost function on networks with up to $128$ layers.']
### SUMMARY:
| [
"A toy dataset based on critical percolation in a planar graph provides an analytical window to the training dynamics of deep neural networks "
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['While neural networks can be trained to map from one specific dataset to another, they usually do not learn a generalized transformation that can extrapolate accurately outside the space of training.', 'For instance, a generative adversarial network (GAN) exclusively trained to transform images of cars from light to dark might not have the same effect on images of horses.', 'This is because neural networks are good at generation within the manifold of the data that they are trained on.', 'However, generating new samples outside of the manifold or extrapolating "out-of-sample" is a much harder problem that has been less well studied.', 'To address this, we introduce a technique called neuron editing that learns how neurons encode an edit for a particular transformation in a latent space.', 'We use an autoencoder to decompose the variation within the dataset into activations of different neurons and generate transformed data by defining an editing transformation on those neurons.', "By performing the transformation in a latent trained space, we encode fairly complex and non-linear transformations to the data with much simpler distribution shifts to the neuron's activations.", 'We showcase our technique on image domain/style transfer and two biological applications: removal of batch artifacts representing unwanted noise and modeling the effect of drug treatments to predict synergy between drugs.']
### SUMMARY:
| [
"We reframe the generation problem as one of editing existing points, and as a result extrapolate better than traditional GANs."
] |
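A deliberately simplified sketch of the editing idea above: compute a per-neuron shift between source and target activations in a latent space, then apply that shift to new points. The invertible linear "autoencoder" is a toy assumption so the effect is exact and easy to check:

```python
import numpy as np

rng = np.random.default_rng(0)

d = 10
E = rng.normal(size=(d, d))                    # toy "encoder" (invertible)
D = np.linalg.inv(E)                           # matching "decoder"

source = rng.normal(size=(200, d))             # e.g. untreated observations
target = source + 1.5                          # e.g. treated: a constant shift

# The edit: difference of mean activations, neuron by neuron.
edit = (target @ E).mean(0) - (source @ E).mean(0)

new_points = rng.normal(size=(5, d))           # out-of-sample inputs
edited = (new_points @ E + edit) @ D           # shift in latent space, decode
print((edited - new_points).mean())            # recovers the shift, ~1.5
```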
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['We present a representation for describing transition models in complex uncertain domains using relational rules. ', 'For any action, a rule selects a set of relevant objects and computes a distribution over properties of just those objects in the resulting state given their properties in the previous state. ', 'An iterative greedy algorithm is used to construct a set of deictic references that determine which objects are relevant in any given state. ', "Feed-forward neural networks are used to learn the transition distribution on the relevant objects' properties. ", 'This strategy is demonstrated to be both more versatile and more sample efficient than learning a monolithic transition model in a simulated domain in which a robot pushes stacks of objects on a cluttered table.']
### SUMMARY:
| [
"A new approach that learns a representation for describing transition models in complex uncertaindomains using relational rules. "
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Differentiable planning network architectures have been shown to be powerful in solving transfer planning tasks while possessing a simple end-to-end training feature.', 'Many planning architectures proposed later in the literature are inspired by this design principle, in which a recursive network architecture is applied to emulate the backup operations of a value iteration algorithm.', 'However, existing frameworks can only learn and plan effectively on domains with a lattice structure, i.e. regular graphs embedded in a certain Euclidean space.', 'In this paper, we propose a general planning network, called Graph-based Motion Planning Networks (GrMPN), that will be able to', 'i) learn and plan on general irregular graphs, hence', 'ii) render existing planning network architectures special cases.', 'The proposed GrMPN framework is invariant to task graph permutation, i.e. graph isomorphism.', 'As a result, GrMPN possesses generalization strength and data efficiency.', 'We demonstrate the performance of the proposed GrMPN method against other baselines on three domains: 2D mazes (regular graphs), path planning on irregular graphs, and motion planning (an irregular graph of robot configurations).']
### SUMMARY:
| [
"We propose an end-to-end differentiable planning network for graphs. This can be applicable to many motion planning problems"
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['We describe techniques for training high-quality image denoising models that require only single instances of corrupted images as training data.', 'Inspired by a recent technique that removes the need for supervision through image pairs by employing networks with a "blind spot" in the receptive field, we address two of its shortcomings: inefficient training and poor final denoising performance.', 'This is achieved through a novel blind-spot convolutional network architecture that allows efficient self-supervised training, as well as application of Bayesian distribution prediction on output colors.', 'Together, they bring the self-supervised model on par with fully supervised deep learning techniques in terms of both quality and training speed in the case of i.i.d. Gaussian noise.']
### SUMMARY:
| [
"We learn high-quality denoising using only single instances of corrupted images as training data."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Reinforcement learning (RL) agents improve through trial-and-error, but when reward is sparse and the agent cannot discover successful action sequences, learning stagnates.', 'This has been a notable problem in training deep RL agents to perform web-based tasks, such as booking flights or replying to emails, where a single mistake can ruin the entire sequence of actions.', 'A common remedy is to "warm-start" the agent by pre-training it to mimic expert demonstrations, but this is prone to overfitting.', 'Instead, we propose to constrain exploration using demonstrations.', 'From each demonstration, we induce high-level "workflows" which constrain the allowable actions at each time step to be similar to those in the demonstration (e.g., "Step 1: click on a textbox; Step 2: enter some text").', 'Our exploration policy then learns to identify successful workflows and samples actions that satisfy these workflows.', 'Workflows prune out bad exploration directions and accelerate the agent’s ability to discover rewards.', 'We use our approach to train a novel neural policy designed to handle the semi-structured nature of websites, and evaluate on a suite of web tasks, including the recent World of Bits benchmark.', 'We achieve new state-of-the-art results, and show that workflow-guided exploration improves sample efficiency over behavioral cloning by more than 100x.']
### SUMMARY:
| [
"We solve the sparse rewards problem on web UI tasks using exploration guided by demonstrations"
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Nowadays, deep learning is one of the main topics in almost every field.', 'It has helped achieve amazing results in a great number of tasks.', 'The main problem is that this kind of learning, and consequently the neural networks that can be called deep, is resource intensive.', 'They need specialized hardware to perform computation in a reasonable time.', 'Unfortunately, that alone is not sufficient to make deep learning "usable" in real life.', 'Many tasks need to run as close to real-time as possible.', 'So many components, such as code, algorithms, numeric accuracy and hardware, need to be optimized to make deep learning "efficient and usable".', 'All these optimizations can help us produce incredibly accurate and fast learning models.']
### SUMMARY:
| [
"Embedded architecture for deep learning on optimized devices for face detection and emotion recognition "
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Word embedding is a useful approach to capture co-occurrence structures in a large corpus of text.', 'In addition to the text data itself, we often have additional covariates associated with individual documents in the corpus---e.g. the demographic of the author, time and venue of publication, etc.---and we would like the embedding to naturally capture the information of the covariates.', 'In this paper, we propose a new tensor decomposition model for word embeddings with covariates.', 'Our model jointly learns a \\emph{base} embedding for all the words as well as a weighted diagonal transformation to model how each covariate modifies the base embedding.', 'To obtain the specific embedding for a particular author or venue, for example, we can then simply multiply the base embedding by the transformation matrix associated with that author or venue.', 'The main advantages of our approach are data efficiency and interpretability of the covariate transformation matrix.', 'Our experiments demonstrate that our joint model learns substantially better embeddings conditioned on each covariate compared to the standard approach of learning a separate embedding for each covariate using only the relevant subset of data.', "Furthermore, our model encourages the embeddings to be ``topic-aligned'' in the sense that the dimensions have specific independent meanings.", 'This allows our covariate-specific embeddings to be compared by topic, enabling downstream differential analysis.', 'We empirically evaluate the benefits of our algorithm on several datasets, and demonstrate how it can be used to address many natural questions about the effects of covariates.']
### SUMMARY:
| [
"Using the same embedding across covariates doesn't make sense, we show that a tensor decomposition algorithm learns sparse covariate-specific embeddings and naturally separable topics jointly and data-efficiently."
] |
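The covariate transformation described above is just a diagonal scaling of a shared base embedding; a numpy sketch with random placeholder values:

```python
import numpy as np

rng = np.random.default_rng(0)

vocab, dim, n_cov = 1000, 50, 3
base = rng.normal(size=(vocab, dim))             # base embedding, shared by all
scales = np.abs(rng.normal(size=(n_cov, dim)))   # one learned diagonal per covariate

def covariate_embedding(c):
    """Embedding under covariate c: base times a diagonal transform."""
    return base * scales[c]                      # == base @ np.diag(scales[c])

# Same word under two covariates: inspect the 'topic-aligned' dimensions.
w = 42
delta = covariate_embedding(0)[w] - covariate_embedding(1)[w]
print(np.argsort(-np.abs(delta))[:5])            # dimensions that differ most
```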
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Deep Learning has received significant attention due to its impressive performance in many state-of-the-art learning tasks.', 'Unfortunately, while very powerful, Deep Learning is not well understood theoretically and in particular only recently results for the complexity of training deep neural networks have been obtained.', 'In this work we show that large classes of deep neural networks with various architectures (e.g., DNNs, CNNs, Binary Neural Networks, and ResNets), activation functions (e.g., ReLUs and leaky ReLUs), and loss functions (e.g., Hinge loss, Euclidean loss, etc) can be trained to near optimality with desired target accuracy using linear programming in time that is exponential in the input data and parameter space dimension and polynomial in the size of the data set; improvements of the dependence in the input dimension are known to be unlikely assuming $P\\neq NP$, and improving the dependence on the parameter space dimension remains open.', 'In particular, we obtain polynomial time algorithms for training for a given fixed network architecture.', 'Our work applies more broadly to empirical risk minimization problems which allows us to generalize various previous results and obtain new complexity results for previously unstudied architectures in the proper learning setting.']
### SUMMARY:
| [
"Using linear programming we show that the computational complexity of approximate Deep Neural Network training depends polynomially on the data size for several architectures"
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['The extended Kalman filter (EKF) is a classical signal processing algorithm which performs efficient approximate Bayesian inference in non-conjugate models by linearising the local measurement function, avoiding the need to compute intractable integrals when calculating the posterior.', 'In some cases the EKF outperforms methods which rely on cubature to solve such integrals, especially in time-critical real-world problems.', 'The drawback of the EKF is its local nature, whereas state-of-the-art methods such as variational inference or expectation propagation (EP) are considered global approximations.', 'We formulate power EP as a nonlinear Kalman filter, before showing that linearisation results in a globally iterated algorithm that exactly matches the EKF on the first pass through the data, and iteratively improves the linearisation on subsequent passes.', 'An additional benefit is the ability to calculate the limit as the EP power tends to zero, which removes the instability of the EP-like algorithm.', 'The resulting inference scheme solves non-conjugate temporal Gaussian process models in linear time, $\\mathcal{O}(n)$, and in closed form.']
### SUMMARY:
| [
"We unify the extended Kalman filter (EKF) and the state space approach to power expectation propagation (PEP) by solving the intractable moment matching integrals in PEP via linearisation. This leads to a globally iterated extension of the EKF."
] |
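The linearisation this entry builds on is the standard EKF measurement update. Below is a minimal sketch, with a toy measurement model and made-up numbers; the paper's contribution is to iterate such linearisations globally, which a single update like this does not show:

```python
# A minimal sketch of the EKF measurement update: the nonlinear measurement
# h(x) is linearised at the predicted mean, then a standard Kalman update is
# applied. The model and numbers here are illustrative.
import numpy as np

def ekf_update(m, P, y, h, H_jac, R):
    """One EKF measurement update.
    m, P : predicted state mean and covariance
    y    : observation
    h    : measurement function, H_jac its Jacobian
    R    : measurement noise covariance
    """
    H = H_jac(m)                      # linearisation point = predicted mean
    v = y - h(m)                      # innovation
    S = H @ P @ H.T + R               # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    return m + K @ v, P - K @ S @ K.T

# Toy example: observe the square of the first state component.
m, P = np.array([1.0, 0.5]), np.eye(2)
h = lambda x: np.array([x[0] ** 2])
H_jac = lambda x: np.array([[2 * x[0], 0.0]])
m_new, P_new = ekf_update(m, P, y=np.array([1.2]), h=h, H_jac=H_jac, R=np.eye(1) * 0.1)
print(m_new)
```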
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['This paper explores the simplicity of learned neural networks under various settings: learned on real vs random data, varying size/architecture and using large minibatch size vs small minibatch size.', 'The notion of simplicity used here is that of learnability i.e., how accurately can the prediction function of a neural network be learned from labeled samples from it.', 'While learnability is different from (in fact often higher than) test accuracy, the results herein suggest that there is a strong correlation between small generalization errors and high learnability.\n', 'This work also shows that there exist significant qualitative differences in shallow networks as compared to popular deep networks.', 'More broadly, this paper extends, in a new direction, previous work on understanding the properties of learned neural networks.', 'Our hope is that such an empirical study of learned neural networks might shed light on the right assumptions that can be made for a theoretical study of deep learning.']
### SUMMARY:
| [
"Exploring the Learnability of Learned Neural Networks"
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
[' With the proliferation of models for natural language processing (NLP) tasks, it is even harder to understand the differences between models and their relative merits.', 'Simply looking at differences between holistic metrics such as accuracy, BLEU, or F1 does not tell us \\emph{why} or \\emph{how} a particular method is better and how dataset biases influence the choices of model design.\n ', 'In this paper, we present a general methodology for {\\emph{interpretable}} evaluation of NLP systems and choose the task of named entity recognition (NER) as a case study, which is a core task of identifying people, places, or organizations in text.', 'The proposed evaluation method enables us to interpret the \\textit{model biases}, \\textit{dataset biases}, and how the \\emph{differences in the datasets} affect the design of the models, identifying the strengths and weaknesses of current approaches.', 'By making our {analysis} tool available, we make it easy for future researchers to run similar analyses and drive the progress in this area.']
### SUMMARY:
| [
"We propose a generalized evaluation methodology to interpret model biases, dataset biases, and their correlation."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['The task of visually grounded dialog involves learning goal-oriented cooperative dialog between autonomous agents who exchange information about a scene through several rounds of questions and answers.', 'We posit that requiring agents to adhere to rules of human language while also maximizing information exchange is an ill-posed problem, and observe that humans do not stray from a common language: they are social creatures who must communicate with many people every day, and it is far easier to stick to a common language even at the cost of some efficiency loss.', 'Using this as inspiration, we propose and evaluate a multi-agent dialog framework where each agent interacts with, and learns from, multiple agents, and show that this results in more relevant and coherent dialog (as judged by human evaluators) without sacrificing task performance (as judged by quantitative metrics).']
### SUMMARY:
| [
"Social agents learn to talk to each other in natural language towards a goal"
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Posterior collapse in Variational Autoencoders (VAEs) arises when the variational distribution closely matches the uninformative prior for a subset of latent variables.', 'This paper presents a simple and intuitive explanation for posterior collapse through the analysis of linear VAEs and their direct correspondence with Probabilistic PCA (pPCA).', 'We identify how local maxima can emerge from the marginal log-likelihood of pPCA, which yields similar local maxima for the evidence lower bound (ELBO).', 'We show that training a linear VAE with variational inference recovers a uniquely identifiable global maximum corresponding to the principal component directions.', 'We provide empirical evidence that the presence of local maxima causes posterior collapse in deep non-linear VAEs.', 'Our findings help to explain a wide range of heuristic approaches in the literature that attempt to diminish the effect of the KL term in the ELBO to reduce posterior collapse.']
### SUMMARY:
| [
"We show that posterior collapse in linear VAEs is caused entirely by marginal log-likelihood (not ELBO). Experiments on deep VAEs suggest a similar phenomenon is at play."
] |
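For reference, a linear VAE of the kind analysed above can be written in a few lines. This is a sketch under assumed architecture choices (single linear encoder and decoder, a diagonal posterior covariance shared across datapoints), not the authors' code:

```python
# A minimal sketch of a linear VAE: with linear encoder/decoder this model is a
# reparameterised form of probabilistic PCA, which is what makes the analysis
# of its ELBO landscape tractable.
import torch
import torch.nn as nn

class LinearVAE(nn.Module):
    def __init__(self, d_in=784, d_z=16):
        super().__init__()
        self.enc_mu = nn.Linear(d_in, d_z)              # linear encoder mean
        self.log_var = nn.Parameter(torch.zeros(d_z))   # shared diagonal posterior variance
        self.dec = nn.Linear(d_z, d_in)                 # linear decoder: W z + mu

    def forward(self, x):
        mu = self.enc_mu(x)
        z = mu + torch.exp(0.5 * self.log_var) * torch.randn_like(mu)  # reparameterisation
        recon = self.dec(z)
        # KL(q(z|x) || N(0, I)) in closed form for a diagonal Gaussian posterior
        kl = 0.5 * (mu.pow(2) + self.log_var.exp() - 1 - self.log_var).sum(dim=1)
        return recon, kl

model = LinearVAE()
x = torch.randn(8, 784)
recon, kl = model(x)
print(recon.shape, kl.mean().item())
```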
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Transformers have achieved state-of-the-art results on a variety of natural language processing tasks. \n', 'Despite good performance, Transformers are still weak in long sentence modeling where the global attention map is too dispersed to capture valuable information.\n', 'In such cases, the local/token features that are also significant to sequence modeling are omitted to some extent.\n', 'To address this problem, we propose a Multi-scale attention model (MUSE) by concatenating attention networks with convolutional networks and position-wise feed-forward networks to explicitly capture local and token features.', 'Considering the parameter size and computation efficiency, we re-use the feed-forward layer in the original Transformer and adopt a lightweight dynamic convolution as implementation. \n', 'Experimental results show that the proposed model achieves substantial performance improvements over Transformer, especially on long sentences, and pushes the state-of-the-art from 35.6 to 36.2 on the IWSLT 2014 German to English translation task, and from 30.6 to 31.3 on the IWSLT 2015 English to Vietnamese translation task.', 'We also reach state-of-the-art performance on the WMT 2014 English to French translation dataset, with a BLEU score of 43.2.']
### SUMMARY:
| [
"This paper propose a new model which combines multi scale information for sequence to sequence learning."
] |
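A rough sketch of the multi-scale idea, assuming a simplified fusion rule (the outputs of parallel attention, convolution, and feed-forward branches are summed with a residual) and a standard convolution in place of MUSE's dynamic convolution:

```python
# A minimal sketch (simplified fusion, ordinary Conv1d instead of dynamic
# convolution) of combining a global attention branch, a local convolution
# branch, and a position-wise feed-forward branch in one block.
import torch
import torch.nn as nn

class MultiScaleBlock(nn.Module):
    """Parallel global (attention), local (conv) and token (FFN) branches, summed."""
    def __init__(self, d=64, heads=4, kernel=3):
        super().__init__()
        self.attn = nn.MultiheadAttention(d, heads, batch_first=True)
        self.conv = nn.Conv1d(d, d, kernel, padding=kernel // 2)
        self.ffn = nn.Sequential(nn.Linear(d, 4 * d), nn.ReLU(), nn.Linear(4 * d, d))

    def forward(self, x):                                    # x: (batch, seq, d)
        global_feat, _ = self.attn(x, x, x)                  # dispersed global view
        local_feat = self.conv(x.transpose(1, 2)).transpose(1, 2)  # local n-gram view
        token_feat = self.ffn(x)                             # per-token view
        return x + global_feat + local_feat + token_feat     # multi-scale fusion

x = torch.randn(2, 10, 64)
print(MultiScaleBlock()(x).shape)
```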
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Training neural networks with verifiable robustness guarantees is challenging.', 'Several existing approaches utilize linear relaxation based neural network output bounds under perturbation, but they can slow down training by a factor of hundreds depending on the underlying network architectures.', 'Meanwhile, interval bound propagation (IBP) based training is efficient and significantly outperforms linear relaxation based methods on many tasks, yet it may suffer from stability issues since the bounds are much looser especially at the beginning of training.', 'In this paper, we propose a new certified adversarial training method, CROWN-IBP, by combining the fast IBP bounds in a forward bounding pass and a tight linear relaxation based bound, CROWN, in a backward bounding pass.', 'CROWN-IBP is computationally efficient and consistently outperforms IBP baselines on training verifiably robust neural networks.', 'We conduct large scale experiments on MNIST and CIFAR datasets, and outperform all previous linear relaxation and bound propagation based certified defenses in L_inf robustness.\n', 'Notably, we achieve 7.02% verified test error on MNIST at epsilon=0.3, and 66.94% on CIFAR-10 with epsilon=8/255.']
### SUMMARY:
| [
"We propose a new certified adversarial training method, CROWN-IBP, that achieves state-of-the-art robustness for L_inf norm adversarial perturbations."
] |
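The forward IBP bounding pass mentioned above is easy to state concretely. A minimal sketch, with a toy affine layer and an L_inf input ball, propagating intervals in centre/radius form (the CROWN backward pass is not shown):

```python
# A minimal sketch of interval bound propagation (IBP): intervals are pushed
# through an affine layer in centre/radius form, then through a monotone ReLU.
import numpy as np

def ibp_affine(l, u, W, b):
    """Element-wise bounds on W @ x + b for l <= x <= u."""
    c, r = (u + l) / 2, (u - l) / 2
    c_out = W @ c + b
    r_out = np.abs(W) @ r          # the radius grows by |W|
    return c_out - r_out, c_out + r_out

def ibp_relu(l, u):
    return np.maximum(l, 0), np.maximum(u, 0)   # ReLU is monotone, so bounds pass through

# Toy layer: input x in an L_inf ball of radius eps around x0.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(5, 3)), np.zeros(5)
x0, eps = np.array([0.2, -0.1, 0.4]), 0.1
l, u = ibp_affine(x0 - eps, x0 + eps, W1, b1)
l, u = ibp_relu(l, u)
print(l, u)   # certified bounds on the hidden activations
```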
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Network pruning is widely used for reducing the heavy inference cost of deep models in low-resource settings.', 'A typical pruning algorithm is a three-stage pipeline, i.e., training (a large model), pruning and fine-tuning.', 'During pruning, according to a certain criterion, redundant weights are pruned and important weights are kept to best preserve the accuracy.', 'In this work, we make several surprising observations which contradict common beliefs.', 'For all state-of-the-art structured pruning algorithms we examined, fine-tuning a pruned model gives performance only comparable to, or worse than, training that model with randomly initialized weights.', 'For pruning algorithms which assume a predefined target network architecture, one can get rid of the full pipeline and directly train the target network from scratch.', "Our observations are consistent for multiple network architectures, datasets, and tasks, which imply that: 1) training a large, over-parameterized model is often not necessary to obtain an efficient final model, 2) learned ``important'' weights of the large model are typically not useful for the small pruned model, 3) the pruned architecture itself, rather than a set of inherited ``important'' weights, is more crucial to the efficiency of the final model, which suggests that in some cases pruning can be useful as an architecture search paradigm.", 'Our results suggest the need for more careful baseline evaluations in future research on structured pruning methods.', 'We also compare with the "Lottery Ticket Hypothesis" (Frankle & Carbin 2019), and find that, with an optimal learning rate, the "winning ticket" initialization used in Frankle & Carbin (2019) does not bring improvement over random initialization.']
### SUMMARY:
| [
"In structured network pruning, fine-tuning a pruned model only gives comparable performance with training it from scratch."
] |
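As a concrete instance of the "pruning" stage in the three-stage pipeline, here is a sketch of one common structured criterion, L1-norm filter ranking; the specific criterion is illustrative, not one fixed by this entry:

```python
# A minimal sketch of structured filter pruning: rank conv filters by L1 norm
# and keep the strongest fraction, yielding a smaller target architecture.
import torch

def l1_filter_mask(conv_weight: torch.Tensor, keep_ratio: float) -> torch.Tensor:
    """conv_weight: (out_channels, in_channels, k, k). Returns a bool mask of filters to keep."""
    norms = conv_weight.abs().sum(dim=(1, 2, 3))       # L1 norm per output filter
    k = max(1, int(keep_ratio * norms.numel()))
    keep = torch.zeros_like(norms, dtype=torch.bool)
    keep[norms.topk(k).indices] = True
    return keep

w = torch.randn(64, 32, 3, 3)
mask = l1_filter_mask(w, keep_ratio=0.5)
print(mask.sum().item(), "of", w.shape[0], "filters kept")
# The entry's observation: training the resulting slimmer architecture from
# random initialisation often matches fine-tuning the inherited weights.
```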
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
["Brushing techniques have a long history with the first interactive selection tools appearing in the 1990's.", 'Since then, many additional techniques have been developed to address selection accuracy, scalability and flexibility issues.', 'Selection is especially difficult in large datasets where many visual items tangle and create overlapping.', 'This paper investigates a novel brushing technique which not only relies on the actual brushing location but also on the shape of the brushed area.', 'Firstly, the user brushes the region where trajectories of interest are visible.', 'Secondly, the shape of the brushed area is used to select similar items.', 'Thirdly, the user can adjust the degree of similarity to filter out the requested trajectories.', 'This technique encompasses two types of comparison metrics, the piece-wise Pearson correlation and the similarity measurement based on information geometry.', 'We apply it to concrete scenarios with datasets from air traffic control, eye-tracking data and GPS trajectories.\n']
### SUMMARY:
| [
"Interactive technique to improve brushing in dense trajectory datasets by taking into account the shape of the brush."
] |
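A minimal sketch of the shape-matching step, assuming both the brushed stroke and each trajectory are reduced to 1-D signals and resampled to a common length before a piece-wise Pearson score is computed (window size and threshold are hypothetical):

```python
# A minimal sketch (assumed details) of shape-based brushing: the stroke and a
# candidate trajectory are resampled to equal length, then scored by the mean
# Pearson correlation over non-overlapping windows.
import numpy as np

def resample(y, n):
    x = np.linspace(0, 1, len(y))
    return np.interp(np.linspace(0, 1, n), x, y)

def piecewise_pearson(a, b, window=16):
    """Mean Pearson correlation over non-overlapping windows of equal-length signals."""
    scores = []
    for i in range(0, len(a) - window + 1, window):
        wa, wb = a[i:i + window], b[i:i + window]
        if wa.std() > 0 and wb.std() > 0:        # skip flat windows (undefined correlation)
            scores.append(np.corrcoef(wa, wb)[0, 1])
    return float(np.mean(scores)) if scores else 0.0

brush = np.sin(np.linspace(0, 3, 50))                 # stroke drawn by the user
trajectory = np.sin(np.linspace(0, 3, 200)) + 0.05    # candidate item
score = piecewise_pearson(resample(brush, 128), resample(trajectory, 128))
print(f"similarity = {score:.3f}")   # a user-tunable threshold filters the selection
```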
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Generative Adversarial Networks (GANs) can produce images of surprising complexity and realism, but are generally structured to sample from a single latent source, ignoring the explicit spatial interaction between multiple entities that could be present in a scene.', 'Capturing such complex interactions between different objects in the world, including their relative scaling, spatial layout, occlusion, or viewpoint transformation, is a challenging problem.', 'In this work, we propose to model object composition in a GAN framework as a self-consistent composition-decomposition network.', 'Our model is conditioned on the object images from their marginal distributions and can generate a realistic image from their joint distribution.', 'We evaluate our model through qualitative experiments and user evaluations in scenarios when either paired or unpaired examples for the individual object images and the joint scenes are given during training.', 'Our results reveal that the learned model captures potential interactions between the two object domains given as input and outputs new instances of the composed scene at test time in a reasonable fashion.']
### SUMMARY:
| [
"We develop a novel approach to model object compositionality in images in a GAN framework."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Recent studies have highlighted adversarial examples as a ubiquitous threat to different neural network models and many downstream applications.', 'Nonetheless, as unique data properties have inspired distinct and powerful learning principles, this paper aims to explore their potential towards mitigating adversarial inputs.', 'In particular, our results reveal the importance of using the temporal dependency in audio data to gain discriminative power against adversarial examples.', 'Tested on automatic speech recognition (ASR) tasks and three recent audio adversarial attacks, we find that', '(i) input transformation developed from image adversarial defense provides limited robustness improvement and is susceptible to advanced attacks;', '(ii) temporal dependency can be exploited to gain discriminative power against audio adversarial examples and is resistant to the adaptive attacks considered in our experiments.', 'Our results not only show promising means of improving the robustness of ASR systems, but also offer novel insights into exploiting domain-specific data properties to mitigate the negative effects of adversarial examples.']
### SUMMARY:
| [
"Adversarial audio discrimination using temporal dependency"
] |
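One way to read the temporal-dependency defense is as a consistency check between the transcription of an audio prefix and the prefix of the full transcription. The sketch below assumes that reading and uses a toy stand-in for the recogniser; a real system would use an actual ASR model and a WER-style distance:

```python
# A minimal sketch (assumed formulation) of a temporal-dependency consistency
# check: transcribe a k-prefix of the audio and compare it with the matching
# prefix of the full transcription. `asr` is a placeholder, not a real recogniser.
from difflib import SequenceMatcher

def td_consistency(audio, asr, k=0.5):
    """Similarity between ASR(prefix of audio) and prefix of ASR(audio)."""
    full = asr(audio)
    partial = asr(audio[: int(len(audio) * k)])
    prefix = full[: len(partial)]
    return SequenceMatcher(None, partial, prefix).ratio()

# Toy stand-in: a fake recogniser whose output length tracks the input length.
clean_asr = lambda a: "the quick brown fox"[: len(a)]
score = td_consistency("x" * 19, clean_asr, k=0.5)
print(f"consistency = {score:.2f}")  # low scores would flag suspected adversarial inputs
```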
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['In order to alleviate the notorious mode collapse phenomenon in generative adversarial networks (GANs), we propose a novel training method for GANs in which certain fake samples can be reconsidered as real ones during the training process.', 'This strategy can reduce the gradient value that the generator receives in regions where gradient exploding happens.', 'We show that the theoretical equilibrium between the generator and discriminator can seldom be realized in practice.', 'This results in an unbalanced generated distribution that deviates from the target one when fake datapoints overfit to real ones, which explains the instability of GANs.', 'We also prove that, by penalizing the difference between discriminator outputs and considering certain fake datapoints as real for adjacent real and fake sample pairs, gradient exploding can be alleviated.', 'Accordingly, a modified GAN training method is proposed with a more stable training process and better generalization.', 'Experiments on different datasets verify our theoretical analysis.']
### SUMMARY:
| [
" We propose a novel GAN training method by considering certain fake samples as real to alleviate mode collapse and stabilize training process."
] |
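A minimal sketch of the fake-as-real idea in a discriminator loss; the selection rule (relabel the top-scoring fraction of fakes) is an assumption for illustration, not the paper's exact criterion:

```python
# A minimal sketch (assumed selection rule): fake samples whose discriminator
# score is already high are treated as real in the discriminator loss, which
# softens the gradient in regions where exploding would occur.
import torch
import torch.nn.functional as F

def d_loss_with_fake_as_real(d_real, d_fake, fake_as_real_frac=0.1):
    """d_real, d_fake: discriminator logits for real and generated batches."""
    n_swap = int(fake_as_real_frac * d_fake.numel())
    swap_idx = d_fake.topk(n_swap).indices          # the most "real-looking" fakes
    fake_targets = torch.zeros_like(d_fake)
    fake_targets[swap_idx] = 1.0                    # relabel them as real
    loss_real = F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
    loss_fake = F.binary_cross_entropy_with_logits(d_fake, fake_targets)
    return loss_real + loss_fake

loss = d_loss_with_fake_as_real(torch.randn(64), torch.randn(64))
print(loss.item())
```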
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['We present a tool for Interactive Visual Exploration of Latent Space (IVELS) for model selection. ', 'Evaluating generative models of discrete sequences from a continuous latent space is a challenging problem, since their optimization involves multiple competing objective terms. ', 'We introduce a model-selection pipeline to compare and filter models throughout consecutive stages of more complex and expensive metrics.', 'We present the pipeline in an interactive visual tool to enable the exploration of the metrics, analysis of the learned latent space, and selection of the best model for a given task. ', 'We focus specifically on the variational auto-encoder family in a case study of modeling peptide sequences, which are short sequences of amino acids.', 'This task is especially interesting due to the presence of multiple attributes we want to model.', 'We demonstrate how an interactive visual comparison can assist in evaluating how well an unsupervised auto-encoder meaningfully captures the attributes of interest in its latent space.']
### SUMMARY:
| [
"We present a visual tool to interactively explore the latent space of an auto-encoder for peptide sequences and their attributes."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Neural networks trained through stochastic gradient descent (SGD) have been around for more than 30 years, but they still escape our understanding.', 'This paper takes an experimental approach, with a divide-and-conquer strategy in mind: we start by studying what happens in single neurons.', 'While being the core building block of deep neural networks, the way they encode information about the inputs and how such encodings emerge is still unknown.', 'We report experiments providing strong evidence that hidden neurons behave like binary classifiers during training and testing.', 'During training, analysis of the gradients reveals that a neuron separates two categories of inputs, which are impressively constant across training.', 'During testing, we show that the fuzzy, binary partition described above embeds the core information used by the network for its prediction.', 'These observations bring to light some of the core internal mechanics of deep neural networks, and have the potential to guide the next theoretical and practical developments.']
### SUMMARY:
| [
"We report experiments providing strong evidence that a neuron behaves like a binary classifier during training and testing"
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Automatic classification of objects is one of the most important tasks in engineering and data mining applications.', 'Although using more complex and advanced classifiers can help to improve the accuracy of classification systems, accuracy can also be improved by analyzing the data sets and their features for a particular problem.', 'Feature combination is one approach that can improve the quality of the features.', 'In this paper, a structure similar to a Feed-Forward Neural Network (FFNN) is used to generate an optimized linear or non-linear combination of features for classification.', 'A Genetic Algorithm (GA) is applied to update the weights and biases.', 'Since the nature of data sets and their features affects the effectiveness of the combination and classification system, linear and non-linear activation (transfer) functions are used to achieve a more reliable system.', 'Experiments on several UCI data sets, using a minimum distance classifier as a simple classifier, indicate that the proposed linear and non-linear intelligent FFNN-based feature combination can produce more reliable and promising results.', 'With such a feature combination method, there is no longer any need to use a more powerful and complex classifier.']
### SUMMARY:
| [
"A method for enriching and combining features to improve classification accuracy"
] |
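A compact sketch of the overall recipe, assuming synthetic data, a plain linear combination, and a simple GA (elitist selection plus Gaussian mutation); all hyperparameters are illustrative:

```python
# A minimal sketch (illustrative data and fitness) of the idea above: a genetic
# algorithm searches the weights of a linear feature combination, scored by how
# well a minimum-distance (nearest-centroid) classifier separates the classes.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)   # synthetic two-class labels

def fitness(w):
    z = X @ w                                    # combined 1-D feature
    c0, c1 = z[y == 0].mean(), z[y == 1].mean()  # class centroids
    pred = (np.abs(z - c1) < np.abs(z - c0)).astype(int)
    return (pred == y).mean()                    # accuracy of the min-distance rule

pop = rng.normal(size=(30, X.shape[1]))          # population of weight vectors
for gen in range(40):
    scores = np.array([fitness(w) for w in pop])
    parents = pop[scores.argsort()[-10:]]        # selection: keep the best 10
    children = parents[rng.integers(0, 10, 20)] + 0.1 * rng.normal(size=(20, X.shape[1]))
    pop = np.vstack([parents, children])         # elitism + mutated offspring
print("best accuracy:", max(fitness(w) for w in pop))
```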
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Recent improvements to Generative Adversarial Networks (GANs) have made it possible to generate realistic images in high resolution based on natural language descriptions such as image captions.', 'Furthermore, conditional GANs allow us to control the image generation process through labels or even natural language descriptions.', 'However, fine-grained control of the image layout, i.e. where in the image specific objects should be located, is still difficult to achieve.', 'This is especially true for images that should contain multiple distinct objects at different spatial locations.', 'We introduce a new approach which allows us to control the location of arbitrarily many objects within an image by adding an object pathway to both the generator and the discriminator.', 'Our approach does not need a detailed semantic layout; only bounding boxes and the respective labels of the desired objects are required.', 'The object pathway focuses solely on the individual objects and is iteratively applied at the locations specified by the bounding boxes.', 'The global pathway focuses on the image background and the general image layout.', 'We perform experiments on the Multi-MNIST, CLEVR, and the more complex MS-COCO data sets.', 'Our experiments show that through the use of the object pathway we can control object locations within images and can model complex scenes with multiple objects at various locations.', 'We further show that the object pathway focuses on the individual objects and learns features relevant to these, while the global pathway focuses on global image characteristics and the image background.']
### SUMMARY:
| [
"Extend GAN architecture to obtain control over locations and identities of multiple objects within generated images."
] |
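A minimal sketch of the object pathway's placement step, assuming object feature maps are written into a global canvas at grid cells derived from the bounding boxes; the shapes and the additive composition are illustrative:

```python
# A minimal sketch (assumed tensor shapes) of placing per-object features onto
# a global feature canvas at bounding-box locations, before the global pathway
# refines the result.
import torch
import torch.nn.functional as F

def place_objects(canvas, obj_features, boxes):
    """canvas: (C, H, W); obj_features: list of (C, h0, w0); boxes: list of (x, y, h, w) cells."""
    for feat, (x, y, h, w) in zip(obj_features, boxes):
        resized = F.interpolate(
            feat.unsqueeze(0), size=(h, w), mode="bilinear", align_corners=False
        ).squeeze(0)
        canvas[:, y:y + h, x:x + w] += resized   # iteratively applied, one object at a time
    return canvas

canvas = torch.zeros(16, 32, 32)
objs = [torch.randn(16, 8, 8), torch.randn(16, 8, 8)]
boxes = [(2, 2, 10, 10), (18, 14, 12, 8)]
print(place_objects(canvas, objs, boxes).shape)
```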
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['The demand for abstractive dialog summarization is growing in real-world applications.', 'For example, customer service centers or hospitals would like to summarize customer service interactions and doctor-patient interactions.', 'However, few researchers have explored abstractive summarization on dialogs due to the lack of suitable datasets.', 'We propose an abstractive dialog summarization dataset based on MultiWOZ.', 'If we directly apply previous state-of-the-art document summarization methods to dialogs, there are two significant drawbacks: informative entities such as restaurant names are difficult to preserve, and the contents from different dialog domains are sometimes mismatched.', 'To address these two drawbacks, we propose the Scaffold Pointer Network (SPNet) to utilize the existing annotations of speaker role, semantic slot and dialog domain.', 'SPNet incorporates these semantic scaffolds for dialog summarization.', 'Since ROUGE cannot capture the two drawbacks mentioned, we also propose a new evaluation metric that considers critical informative entities in the text.', 'On MultiWOZ, our proposed SPNet outperforms state-of-the-art abstractive summarization methods on all the automatic and human evaluation metrics.']
### SUMMARY:
| [
"We propose a novel end-to-end model (SPNet) to incorporate semantic scaffolds for improving abstractive dialog summarization."
] |
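SPNet's entity preservation builds on pointer-style copying. Below is a sketch of standard pointer-generator arithmetic, not SPNet itself; the vocabulary, attention weights, and gate value are made up:

```python
# A minimal sketch of the copy mechanism that helps preserve informative
# entities such as restaurant names: the output distribution mixes generating
# from the vocabulary with copying source tokens weighted by attention.
import numpy as np

vocab = ["<unk>", "book", "a", "table", "at", "golden_house"]
src_ids = [1, 2, 3, 4, 5]                      # "book a table at golden_house"
attn = np.array([0.05, 0.05, 0.1, 0.1, 0.7])   # attention over source positions
p_vocab = np.full(len(vocab), 1 / len(vocab))  # generator distribution (toy, uniform)
p_gen = 0.3                                    # learned gate in practice

p_final = p_gen * p_vocab
for pos, tok in enumerate(src_ids):
    p_final[tok] += (1 - p_gen) * attn[pos]    # add copy probability mass

print(vocab[int(np.argmax(p_final))])          # the entity "golden_house" is preserved
```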
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Knowledge bases (KB), both automatically and manually constructed, are often incomplete --- many valid facts can be inferred from the KB by synthesizing existing information.', 'A popular approach to KB completion is to infer new relations by combinatory reasoning over the information found along other paths connecting a pair of entities.', 'Given the enormous size of KBs and the exponential number of paths, previous path-based models have considered only the problem of predicting a missing relation given two entities, or evaluating the truth of a proposed triple.', 'Additionally, these methods have traditionally used random paths between fixed entity pairs or, more recently, learned to pick paths between them.', 'We propose a new algorithm, MINERVA, which addresses the much more difficult and practical task of answering queries where the relation is known but only one of the two entities is given.', 'Since random walks are impractical in a setting with unknown destination and combinatorially many paths from a start node, we present a neural reinforcement learning approach which learns how to navigate the graph conditioned on the input query to find predictive paths.', 'In a comprehensive evaluation on seven knowledge base datasets, we found MINERVA to be competitive with many current state-of-the-art methods.']
### SUMMARY:
| [
"We present a RL agent MINERVA which learns to walk on a knowledge graph and answer queries"
] |
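A toy sketch of the query-conditioned walk that MINERVA learns: starting from the known entity, repeatedly choose an outgoing (relation, entity) edge. Here the policy is uniform random over a hand-made graph; in MINERVA it is a trained neural policy:

```python
# A minimal sketch (toy graph, random policy) of walking a knowledge graph to
# answer a query (start_entity, relation, ?). MINERVA replaces the uniform
# edge choice below with a learned, query-conditioned policy network.
import random

graph = {  # entity -> list of (relation, next_entity)
    "Paris": [("capital_of", "France"), ("located_in", "Europe")],
    "France": [("has_capital", "Paris"), ("member_of", "EU")],
    "EU": [("contains", "France")],
    "Europe": [("contains", "France")],
}

def walk(start, steps=3, seed=0):
    random.seed(seed)
    node, path = start, []
    for _ in range(steps):
        relation, node = random.choice(graph[node])   # policy decision at each hop
        path.append((relation, node))
    return path

print(walk("Paris"))   # e.g. exploring paths to answer ("Paris", "located_in", ?)
```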