categories (string) | doi (string) | id (string) | year (float64) | venue (string) | link (string) | updated (string) | published (string) | title (string) | abstract (string) | authors (sequence)
---|---|---|---|---|---|---|---|---|---|---|
cs.LG | null | 0904.4608 | null | null | http://arxiv.org/pdf/0904.4608v2 | 2009-04-30T17:02:30Z | 2009-04-29T13:17:31Z | Temporal data mining for root-cause analysis of machine faults in
automotive assembly lines | Engine assembly is a complex and heavily automated distributed-control
process, with large amounts of fault data logged every day. We describe an
application of temporal data mining for analyzing fault logs in an engine
assembly plant. The frequent episode discovery framework is a model-free method
that can be used to deduce (temporal) correlations among events from the logs
in an efficient manner. In addition to being theoretically elegant and
computationally efficient, frequent episodes are also easy to interpret in the
form of actionable recommendations. Incorporation of domain-specific information
is critical to successful application of the method for analyzing fault logs in
the manufacturing domain. We show how domain-specific knowledge can be
incorporated using heuristic rules that act as pre-filters and post-filters to
frequent episode discovery. The system described here is currently being used
in one of the engine assembly plants of General Motors and is planned for
adaptation in other plants. To the best of our knowledge, this paper presents
the first real, large-scale application of temporal data mining in the
manufacturing domain. We believe that the ideas presented in this paper can
help practitioners engineer tools for analysis in other similar or related
application domains as well.
| [
"['Srivatsan Laxman' 'Basel Shadid' 'P. S. Sastry' 'K. P. Unnikrishnan']",
"Srivatsan Laxman, Basel Shadid, P. S. Sastry and K. P. Unnikrishnan"
] |
cs.LG cs.AI cs.GT nlin.AO | 10.1007/s10458-011-9181-6 | 0904.4717 | null | null | http://arxiv.org/abs/0904.4717v2 | 2011-09-22T20:32:36Z | 2009-04-29T23:00:03Z | Continuous Strategy Replicator Dynamics for Multi--Agent Learning | The problem of multi-agent learning and adaptation has attracted a great deal
of attention in recent years. It has been suggested that the dynamics of
multi-agent learning can be studied using replicator equations from population
biology. Most existing studies so far have been limited to discrete strategy
spaces with a small number of available actions. In many cases, however, the
choices available to agents are better characterized by continuous spectra.
This paper suggests a generalization of the replicator framework that allows
one to study the adaptive dynamics of Q-learning agents with continuous strategy
spaces. Instead of probability vectors, agents' strategies are now characterized
by probability measures over continuous variables. As a result, the ordinary
differential equations for the discrete case are replaced by a system of
coupled integro-differential replicator equations that describe the mutual
evolution of individual agent strategies. We derive a set of functional
equations describing the steady state of the replicator dynamics, examine their
solutions for several two-player games, and confirm our analytical results
using simulations.
| [
"Aram Galstyan",
"['Aram Galstyan']"
] |
cs.IT cs.LG math.IT | null | 0904.4774 | null | null | http://arxiv.org/pdf/0904.4774v2 | 2010-03-01T11:28:31Z | 2009-04-30T10:04:09Z | Dictionary Identification - Sparse Matrix-Factorisation via
$\ell_1$-Minimisation | This article treats the problem of learning a dictionary providing sparse
representations for a given signal class, via $\ell_1$-minimisation. The
problem can also be seen as factorising a $d \times N$ matrix $Y=(y_1 \dots
y_N)$, $y_n \in \mathbb{R}^d$, of training signals into a $d \times K$
dictionary matrix $D$ and a $K \times N$ coefficient matrix $X=(x_1 \dots
x_N)$, $x_n \in \mathbb{R}^K$, which is sparse. The exact question studied here
is when a dictionary-coefficient pair $(D,X)$ can be recovered as a local
minimum of a (nonconvex) $\ell_1$-criterion with input $Y=DX$. First, for
general dictionaries and coefficient matrices, algebraic conditions ensuring
local identifiability are derived, which are then specialised to the case when
the dictionary is a basis. Finally, assuming a random Bernoulli-Gaussian sparse
model on the coefficient matrix, it is shown that sufficiently incoherent bases
are locally identifiable with high probability. The perhaps surprising result
is that the typically sufficient number of training samples $N$ grows, up to a
logarithmic factor, only linearly with the signal dimension, i.e. $N \approx C
K \log K$, in contrast to previous approaches requiring combinatorially many
samples.
| [
"Remi Gribonval, Karin Schnass",
"['Remi Gribonval' 'Karin Schnass']"
] |
cs.IT cs.LG math.IT | null | 0905.1546 | null | null | http://arxiv.org/pdf/0905.1546v2 | 2009-05-15T03:38:47Z | 2009-05-11T06:13:49Z | Fast and Near-Optimal Matrix Completion via Randomized Basis Pursuit | Motivated by the philosophy and phenomenal success of compressed sensing, the
problem of reconstructing a matrix from a sampling of its entries has attracted
much attention recently. Such a problem can be viewed as an
information-theoretic variant of the well-studied matrix completion problem,
and the main objective is to design an efficient algorithm that can reconstruct
a matrix by inspecting only a small number of its entries. Although this is an
impossible task in general, Cand\`es and co-authors have recently shown that
under a so-called incoherence assumption, a rank $r$ $n\times n$ matrix can be
reconstructed using semidefinite programming (SDP) after one inspects
$O(nr\log^6n)$ of its entries. In this paper we propose an alternative approach
that is much more efficient and can reconstruct a larger class of matrices by
inspecting a significantly smaller number of the entries. Specifically, we
first introduce a class of so-called stable matrices and show that it includes
all those that satisfy the incoherence assumption. Then, we propose a
randomized basis pursuit (RBP) algorithm and show that it can reconstruct a
stable rank $r$ $n\times n$ matrix after inspecting $O(nr\log n)$ of its
entries. Our sampling bound is only a logarithmic factor away from the
information-theoretic limit and is essentially optimal. Moreover, the runtime
of the RBP algorithm is bounded by $O(nr^2\log n+n^2r)$, which compares very
favorably with the $\Omega(n^4r^2\log^{12}n)$ runtime of the SDP-based
algorithm. Perhaps more importantly, our algorithm will provide an exact
reconstruction of the input matrix in polynomial time. By contrast, the
SDP-based algorithm can only provide an approximate one in polynomial time.
| [
"Zhisu Zhu, Anthony Man-Cho So, Yinyu Ye",
"['Zhisu Zhu' 'Anthony Man-Cho So' 'Yinyu Ye']"
] |
q-bio.NC cs.LG nlin.AO | 10.3389/neuro.10.015.2009 | 0905.2125 | null | null | http://arxiv.org/abs/0905.2125v3 | 2010-03-22T15:38:57Z | 2009-05-13T14:23:36Z | Experience-driven formation of parts-based representations in a model of
layered visual memory | Growing neuropsychological and neurophysiological evidence suggests that the
visual cortex uses parts-based representations to encode, store and retrieve
relevant objects. In such a scheme, objects are represented as a set of
spatially distributed local features, or parts, arranged in stereotypical
fashion. To encode the local appearance and to represent the relations between
the constituent parts, there has to be an appropriate memory structure formed
by previous experience with visual objects. Here, we propose a model of how a
hierarchical memory structure supporting efficient storage and rapid recall of
parts-based representations can be established by an experience-driven process
of self-organization. The process is based on the collaboration of slow
bidirectional synaptic plasticity and homeostatic unit activity regulation,
both running on top of fast activity dynamics with winner-take-all
character modulated by an oscillatory rhythm. These neural mechanisms lay down
the basis for cooperation and competition between the distributed units and
their synaptic connections. Choosing human face recognition as a test task, we
show that, under the condition of open-ended, unsupervised incremental
learning, the system is able to form memory traces for individual faces in a
parts-based fashion. On a lower memory layer the synaptic structure is
developed to represent local facial features and their interrelations, while
the identities of different persons are captured explicitly on a higher layer.
An additional property of the resulting representations is the sparseness of
both the activity during the recall and the synaptic patterns comprising the
memory traces.
| [
"['Jenia Jitsev' 'Christoph von der Malsburg']",
"Jenia Jitsev, Christoph von der Malsburg"
] |
cs.LG | null | 0905.2347 | null | null | http://arxiv.org/pdf/0905.2347v1 | 2009-05-14T14:59:15Z | 2009-05-14T14:59:15Z | Combining Supervised and Unsupervised Learning for GIS Classification | This paper presents a new hybrid learning algorithm for unsupervised
classification tasks. We combined the Fuzzy c-means learning algorithm and a
supervised version of Minimerror to develop a hybrid incremental strategy
allowing unsupervised classification. We applied this new approach to a
real-world database in order to determine whether the information contained in
the unlabeled features of a Geographic Information System (GIS) allows it to be
classified well.
Finally, we compared our results to a classical supervised classification
obtained by a multilayer perceptron.
| [
"['Juan-Manuel Torres-Moreno' 'Laurent Bougrain' 'Frdéric Alexandre']",
"Juan-Manuel Torres-Moreno and Laurent Bougrain and Fr\\'d\\'eric\n Alexandre"
] |
cs.IT cs.LG math.IT math.ST stat.TH | null | 0905.2639 | null | null | http://arxiv.org/pdf/0905.2639v1 | 2009-05-16T00:41:30Z | 2009-05-16T00:41:30Z | Information-theoretic limits of selecting binary graphical models in
high dimensions | The problem of graphical model selection is to correctly estimate the graph
structure of a Markov random field given samples from the underlying
distribution. We analyze the information-theoretic limitations of the problem
of graph selection for binary Markov random fields under high-dimensional
scaling, in which the graph size $p$ and the number of edges $k$, and/or the
maximal node degree $d$ are allowed to increase to infinity as a function of
the sample size $n$. For pairwise binary Markov random fields, we derive both
necessary and sufficient conditions for correct graph selection over the class
$\mathcal{G}_{p,k}$ of graphs on $p$ vertices with at most $k$ edges, and over
the class $\mathcal{G}_{p,d}$ of graphs on $p$ vertices with maximum degree at
most $d$. For the class $\mathcal{G}_{p, k}$, we establish the existence of
constants $c$ and $c'$ such that if $n < c k \log p$, any method has
error probability at least 1/2 uniformly over the family, and we demonstrate a
graph decoder that succeeds with high probability uniformly over the family for
sample sizes $n > c' k^2 \log p$. Similarly, for the class
$\mathcal{G}_{p,d}$, we exhibit constants $c$ and $c'$ such that for $n < c d^2
\log p$, any method fails with probability at least 1/2, and we demonstrate a
graph decoder that succeeds with high probability for $n > c' d^3 \log p$.
| [
"Narayana Santhanam and Martin J. Wainwright",
"['Narayana Santhanam' 'Martin J. Wainwright']"
] |
cs.LG | null | 0905.2997 | null | null | http://arxiv.org/pdf/0905.2997v1 | 2009-05-18T23:21:35Z | 2009-05-18T23:21:35Z | Average-Case Active Learning with Costs | We analyze the expected cost of a greedy active learning algorithm. Our
analysis extends previous work to a more general setting in which different
queries have different costs. Moreover, queries may have more than two possible
responses and the distribution over hypotheses may be non-uniform. Specific
applications include active learning with label costs, active learning for
multiclass and partial label queries, and batch mode active learning. We also
discuss an approximate version that is of interest when there are very many queries.
| [
"['Andrew Guillory' 'Jeff Bilmes']",
"Andrew Guillory, Jeff Bilmes"
] |
cs.CV cs.LG | null | 0905.3347 | null | null | http://arxiv.org/pdf/0905.3347v1 | 2009-05-20T16:37:16Z | 2009-05-20T16:37:16Z | Information Distance in Multiples | Information distance is a parameter-free similarity measure based on
compression, used in pattern recognition, data mining, phylogeny, clustering,
and classification. The notion of information distance is extended from pairs
to multiples (finite lists). We study maximal overlap, metricity, universality,
minimal overlap, additivity, and normalized information distance in multiples.
We use the theoretical notion of Kolmogorov complexity which for practical
purposes is approximated by the length of the compressed version of the file
involved, using a real-world compression program.
Index Terms: Information distance, multiples, pattern recognition,
data mining, similarity, Kolmogorov complexity
| [
"Paul M.B. Vitanyi",
"['Paul M. B. Vitanyi']"
] |
cs.AI cs.LG | null | 0905.3369 | null | null | http://arxiv.org/pdf/0905.3369v2 | 2009-06-03T20:29:16Z | 2009-05-20T18:08:18Z | Learning Nonlinear Dynamic Models | We present a novel approach for learning nonlinear dynamic models, which
leads to a new set of tools capable of solving problems that are otherwise
difficult. We provide theory showing this new approach is consistent for models
with long range structure, and apply the approach to motion capture and
high-dimensional video data, yielding results superior to standard
alternatives.
| [
"['John Langford' 'Ruslan Salakhutdinov' 'Tong Zhang']",
"John Langford, Ruslan Salakhutdinov, and Tong Zhang"
] |
cs.LG astro-ph.IM physics.data-an | 10.1007/s10994-008-5093-3 | 0905.3428 | null | null | http://arxiv.org/abs/0905.3428v1 | 2009-05-21T03:05:38Z | 2009-05-21T03:05:38Z | Finding Anomalous Periodic Time Series: An Application to Catalogs of
Periodic Variable Stars | Catalogs of periodic variable stars contain large numbers of periodic
light-curves (photometric time series data from the astrophysics domain).
Separating anomalous objects from well-known classes is an important step
towards the discovery of new classes of astronomical objects. Most anomaly
detection methods for time series data assume either a single continuous time
series or a set of time series whose periods are aligned. Light-curve data
precludes the use of these methods as the periods of any given pair of
light-curves may be out of sync. One may use an existing anomaly detection
method if, prior to similarity calculation, one performs the costly act of
aligning two light-curves, an operation that scales poorly to massive data
sets. This paper presents PCAD, an unsupervised anomaly detection method for
large sets of unsynchronized periodic time-series data, which outputs a ranked
list of both global and local anomalies. It calculates its anomaly score for
each light-curve in relation to a set of centroids produced by a modified
k-means clustering algorithm. Our method is able to scale to large data sets
through the use of sampling. We validate our method on both light-curve data
and other time series data sets. We demonstrate its effectiveness at finding
known anomalies, and discuss the effect of sample size and number of centroids
on our results. We compare our method to naive solutions and existing time
series anomaly detection methods for unphased data, and show that PCAD's
reported anomalies are comparable to or better than all other methods. Finally,
astrophysicists on our team have verified that PCAD finds true anomalies that
might be indicative of novel astrophysical phenomena.
| [
"Umaa Rebbapragada, Pavlos Protopapas, Carla E. Brodley, Charles Alcock",
"['Umaa Rebbapragada' 'Pavlos Protopapas' 'Carla E. Brodley'\n 'Charles Alcock']"
] |
cond-mat.dis-nn cond-mat.stat-mech cs.LG quant-ph | null | 0905.3527 | null | null | http://arxiv.org/pdf/0905.3527v2 | 2009-05-28T15:36:23Z | 2009-05-21T17:01:12Z | Quantum Annealing for Clustering | This paper studies quantum annealing (QA) for clustering, which can be seen
as an extension of simulated annealing (SA). We derive a QA algorithm for
clustering and propose an annealing schedule, which is crucial in practice.
Experiments show the proposed QA algorithm finds better clustering assignments
than SA. Furthermore, QA is as easy as SA to implement.
| [
"Kenichi Kurihara, Shu Tanaka, and Seiji Miyashita",
"['Kenichi Kurihara' 'Shu Tanaka' 'Seiji Miyashita']"
] |
cond-mat.dis-nn cond-mat.stat-mech cs.LG quant-ph | null | 0905.3528 | null | null | http://arxiv.org/pdf/0905.3528v3 | 2009-05-28T15:44:30Z | 2009-05-21T17:01:28Z | Quantum Annealing for Variational Bayes Inference | This paper presents studies on a deterministic annealing algorithm based on
quantum annealing for variational Bayes (QAVB) inference, which can be seen as
an extension of the simulated annealing for variational Bayes (SAVB) inference.
QAVB is as easy as SAVB to implement. Experiments revealed that QAVB finds a better
local optimum than SAVB in terms of the variational free energy in latent
Dirichlet allocation (LDA).
| [
"Issei Sato, Kenichi Kurihara, Shu Tanaka, Hiroshi Nakagawa, and Seiji\n Miyashita",
"['Issei Sato' 'Kenichi Kurihara' 'Shu Tanaka' 'Hiroshi Nakagawa'\n 'Seiji Miyashita']"
] |
cs.GT cs.LG | 10.1155/2010/573107 | 0905.3640 | null | null | http://arxiv.org/abs/0905.3640v1 | 2009-05-22T19:07:21Z | 2009-05-22T19:07:21Z | Coevolutionary Genetic Algorithms for Establishing Nash Equilibrium in
Symmetric Cournot Games | We use co-evolutionary genetic algorithms to model the players' learning
process in several Cournot models, and evaluate them in terms of their
convergence to the Nash Equilibrium. The "social-learning" versions of the two
co-evolutionary algorithms we introduce, establish Nash Equilibrium in those
models, in contrast to the "individual learning" versions which, as we see
here, do not imply the convergence of the players' strategies to the Nash
outcome. When players use "canonical co-evolutionary genetic algorithms" as
learning algorithms, the process of the game is an ergodic Markov Chain, and
therefore we analyze the simulation results using both the relevant methodology
and more general statistical tests. We find that, in the "social" case, states
leading to NE play are highly frequent at the stationary distribution of the
chain, in contrast to the "individual learning" case, where NE is not reached at
all in our simulations; that the expected Hamming distance between the states at
the limiting distribution and the "NE state" is significantly smaller in the
"social" than in the "individual learning" case; and we estimate the expected
time that the "social" algorithms need to reach the "NE state", verify their
robustness, and finally show that a large fraction of the games played are
indeed at the Nash Equilibrium.
| [
"['Mattheos K. Protopapas' 'Elias B. Kosmatopoulos' 'Francesco Battaglia']",
"Mattheos K. Protopapas, Elias B. Kosmatopoulos, Francesco Battaglia"
] |
cs.LG | null | 0905.4022 | null | null | http://arxiv.org/pdf/0905.4022v1 | 2009-05-25T14:29:59Z | 2009-05-25T14:29:59Z | Transfer Learning Using Feature Selection | We present three related ways of using Transfer Learning to improve feature
selection. The three methods address different problems, and hence share
different kinds of information between tasks or feature classes, but all three
are based on the information theoretic Minimum Description Length (MDL)
principle and share the same underlying Bayesian interpretation. The first
method, MIC, applies when predictive models are to be built simultaneously for
multiple tasks (``simultaneous transfer'') that share the same set of features.
MIC allows each feature to be added to none, some, or all of the task models
and is most beneficial for selecting a small set of predictive features from a
large pool of features, as is common in genomic and biological datasets. Our
second method, TPC (Three Part Coding), uses a similar methodology for the case
when the features can be divided into feature classes. Our third method,
Transfer-TPC, addresses the ``sequential transfer'' problem in which the task
to which we want to transfer knowledge may not be known in advance and may have
different amounts of data than the other tasks. Transfer-TPC is most beneficial
when we want to transfer knowledge between tasks which have unequal amounts of
labeled data, for example the data for disambiguating the senses of different
verbs. We demonstrate the effectiveness of these approaches with experimental
results on real world data pertaining to genomics and to Word Sense
Disambiguation (WSD).
| [
"['Paramveer S. Dhillon' 'Dean Foster' 'Lyle Ungar']",
"Paramveer S. Dhillon, Dean Foster and Lyle Ungar"
] |
cs.LG cs.AI | null | 0906.0052 | null | null | http://arxiv.org/pdf/0906.0052v1 | 2009-05-30T03:41:37Z | 2009-05-30T03:41:37Z | A Minimum Description Length Approach to Multitask Feature Selection | Many regression problems involve not one but several response variables
(y's). Often the responses are suspected to share a common underlying
structure, in which case it may be advantageous to share information across
them; this is known as multitask learning. As a special case, we can use
multiple responses to better identify shared predictive features -- a project
we might call multitask feature selection.
This thesis is organized as follows. Section 1 introduces feature selection
for regression, focusing on $\ell_0$ regularization methods and their
interpretation within a Minimum Description Length (MDL) framework. Section 2
proposes a novel extension of MDL feature selection to the multitask setting.
The approach, called the "Multiple Inclusion Criterion" (MIC), is designed to
borrow information across regression tasks by more easily selecting features
that are associated with multiple responses. We show in experiments on
synthetic and real biological data sets that MIC can reduce prediction error in
settings where features are at least partially shared across responses. Section
3 surveys hypothesis testing by regression with a single response, focusing on
the parallel between the standard Bonferroni correction and an MDL approach.
Mirroring the ideas in Section 2, Section 4 proposes a novel MIC approach to
hypothesis testing with multiple responses and shows that on synthetic data
with significant sharing of features across responses, MIC sometimes
outperforms standard FDR-controlling methods in terms of finding true positives
for a given level of false positives. Section 5 concludes.
| [
"Brian Tomasik",
"['Brian Tomasik']"
] |
cs.LG | 10.1587/transfun.E93.A.617 | 0906.0211 | null | null | http://arxiv.org/abs/0906.0211v2 | 2009-06-03T03:25:56Z | 2009-06-01T04:47:15Z | Equations of States in Statistical Learning for a Nonparametrizable and
Regular Case | Many learning machines that have hierarchical structure or hidden variables
are now being used in information science, artificial intelligence, and
bioinformatics. However, several learning machines used in such fields are not
regular but singular statistical models, hence their generalization performance
is still unknown. To overcome these problems, in previous papers we
proved new equations in statistical learning, by which we can estimate the
Bayes generalization loss from the Bayes training loss and the functional
variance, on the condition that the true distribution is a singularity
contained in a learning machine. In this paper, we prove that the same
equations hold even if a true distribution is not contained in a parametric
model. We also prove that the proposed equations in a regular case are
asymptotically equivalent to the Takeuchi information criterion. Therefore, the
proposed equations are always applicable without any condition on the unknown
true distribution.
| [
"['Sumio Watanabe']",
"Sumio Watanabe"
] |
cs.LG | null | 0906.0470 | null | null | http://arxiv.org/pdf/0906.0470v1 | 2009-06-02T11:52:36Z | 2009-06-02T11:52:36Z | An optimal linear separator for the Sonar Signals Classification task | The problem of classifying sonar signals from rocks and mines first studied
by Gorman and Sejnowski has become a benchmark against which many learning
algorithms have been tested. We show that both the training set and the test
set of this benchmark are linearly separable, although with different
hyperplanes. Moreover, the complete set of learning and test patterns together,
is also linearly separable. We give the weights that separate these sets, which
may be used to compare results found by other algorithms.
| [
"['Juan-Manuel Torres-Moreno' 'Mirta B. Gordon']",
"Juan-Manuel Torres-Moreno and Mirta B. Gordon"
] |
cs.LG cs.NE | null | 0906.0861 | null | null | http://arxiv.org/pdf/0906.0861v1 | 2009-06-04T09:41:47Z | 2009-06-04T09:41:47Z | Using Genetic Algorithms for Texts Classification Problems | The avalanche quantity of the information developed by mankind has led to
the concept of automated knowledge extraction - Data Mining [1]. This field
covers a wide spectrum of problems, from the recognition of fuzzy sets to the
creation of search engines. An important component of Data Mining is the
processing of textual information. Such problems rest on the concepts of
classification and clustering [2]. Classification consists in determining the
membership of some element (text) in one of a set of previously created
classes. Clustering means splitting a set of elements (texts) into clusters
whose number is determined by the localization of the elements of the given set
in the vicinity of certain natural cluster centers. Any realization of a
classification task should initially rest on given postulates, the most basic
of which are the a priori information about the primary set of texts and a
measure of affinity between elements and classes.
| [
"A. A. Shumeyko, S. L. Sotnik",
"['A. A. Shumeyko' 'S. L. Sotnik']"
] |
cs.LG cs.NE | null | 0906.0872 | null | null | http://arxiv.org/pdf/0906.0872v1 | 2009-06-04T10:25:08Z | 2009-06-04T10:25:08Z | Fast Weak Learner Based on Genetic Algorithm | An approach to the acceleration of parametric weak classifier boosting is
proposed. A weak classifier is called parametric if it has a fixed number of
parameters and so can be represented as a point in a multidimensional space. A
genetic algorithm is used instead of exhaustive search to learn the parameters
of such a classifier. The proposed approach also takes into account cases when
an effective algorithm for learning some of the classifier parameters exists. Experiments
confirm that such an approach can dramatically decrease classifier training
time while keeping both training and test errors small.
| [
"['Boris Yangel']",
"Boris Yangel"
] |
cs.LG cs.AI cs.IT math.IT | null | 0906.1713 | null | null | http://arxiv.org/pdf/0906.1713v1 | 2009-06-09T12:50:29Z | 2009-06-09T12:50:29Z | Feature Reinforcement Learning: Part I: Unstructured MDPs | General-purpose, intelligent, learning agents cycle through sequences of
observations, actions, and rewards that are complex, uncertain, unknown, and
non-Markovian. On the other hand, reinforcement learning is well-developed for
small finite state Markov decision processes (MDPs). Up to now, extracting the
right state representations out of bare observations, that is, reducing the
general agent setup to the MDP framework, is an art that involves significant
effort by designers. The primary goal of this work is to automate the reduction
process and thereby significantly expand the scope of many existing
reinforcement learning algorithms and the agents that employ them. Before we
can think of mechanizing this search for suitable MDPs, we need a formal
objective criterion. The main contribution of this article is to develop such a
criterion. I also integrate the various parts into one learning algorithm.
Extensions to more realistic dynamic Bayesian networks are developed in Part
II. The role of POMDPs is also considered there.
| [
"Marcus Hutter",
"['Marcus Hutter']"
] |
cs.LG cs.AI | null | 0906.1814 | null | null | http://arxiv.org/pdf/0906.1814v1 | 2009-06-09T20:06:45Z | 2009-06-09T20:06:45Z | Large-Margin kNN Classification Using a Deep Encoder Network | KNN is one of the most popular classification methods, but it often fails to
work well with an inappropriate choice of distance metric or in the presence
of numerous class-irrelevant features. Linear feature transformation methods
have been widely applied to extract class-relevant information to improve kNN
classification, but such linear transformations are very limited in many applications. Kernels have been
used to learn powerful non-linear feature transformations, but these methods
fail to scale to large datasets. In this paper, we present a scalable
non-linear feature mapping method based on a deep neural network pretrained
with restricted Boltzmann machines for improving kNN classification in a
large-margin framework, which we call DNet-kNN. DNet-kNN can be used for both
classification and for supervised dimensionality reduction. The experimental
results on two benchmark handwritten digit datasets show that DNet-kNN has much
better performance than large-margin kNN using a linear mapping and kNN based
on a deep autoencoder pretrained with restricted Boltzmann machines.
| [
"['Martin Renqiang Min' 'David A. Stanley' 'Zineng Yuan' 'Anthony Bonner'\n 'Zhaolei Zhang']",
"Martin Renqiang Min, David A. Stanley, Zineng Yuan, Anthony Bonner,\n and Zhaolei Zhang"
] |
cs.LG stat.ML | null | 0906.2027 | null | null | http://arxiv.org/pdf/0906.2027v2 | 2012-04-09T17:37:45Z | 2009-06-11T00:22:58Z | Matrix Completion from Noisy Entries | Given a matrix M of low rank, we consider the problem of reconstructing it
from noisy observations of a small, random subset of its entries. The problem
arises in a variety of applications, from collaborative filtering (the `Netflix
problem') to structure-from-motion and positioning. We study a low complexity
algorithm introduced by Keshavan et al. (2009), based on a combination of
spectral techniques and manifold optimization, that we call here OptSpace. We
prove performance guarantees that are order-optimal in a number of
circumstances.
| [
"['Raghunandan H. Keshavan' 'Andrea Montanari' 'Sewoong Oh']",
"Raghunandan H. Keshavan, Andrea Montanari and Sewoong Oh"
] |
cs.LG | 10.1007/978-3-642-04744-2_13 | 0906.2635 | null | null | http://arxiv.org/abs/0906.2635v1 | 2009-06-15T08:43:51Z | 2009-06-15T08:43:51Z | Bayesian History Reconstruction of Complex Human Gene Clusters on a
Phylogeny | Clusters of genes that have evolved by repeated segmental duplication present
difficult challenges throughout genomic analysis, from sequence assembly to
functional analysis. Improved understanding of these clusters is of utmost
importance, since they have been shown to be the source of evolutionary
innovation, and have been linked to multiple diseases, including HIV and a
variety of cancers. Previously, Zhang et al. (2008) developed an algorithm for
reconstructing parsimonious evolutionary histories of such gene clusters, using
only human genomic sequence data. In this paper, we propose a probabilistic
model for the evolution of gene clusters on a phylogeny, and an MCMC algorithm
for reconstruction of duplication histories from genomic sequences in multiple
species. Several projects are underway to obtain high quality BAC-based
assemblies of duplicated clusters in multiple species, and we anticipate that
our method will be useful in analyzing these valuable new data sets.
| [
"Tom\\'a\\v{s} Vina\\v{r}, Bro\\v{n}a Brejov\\'a, Giltae Song, Adam Siepel",
"['Tomáš Vinař' 'Broňa Brejová' 'Giltae Song' 'Adam Siepel']"
] |
cs.LG cs.IT math.IT | 10.1109/TIT.2010.2090235 | 0906.2895 | null | null | http://arxiv.org/abs/0906.2895v6 | 2010-11-08T01:37:37Z | 2009-06-16T10:58:50Z | Entropy Message Passing | The paper proposes a new message passing algorithm for cycle-free factor
graphs. The proposed "entropy message passing" (EMP) algorithm may be viewed as
sum-product message passing over the entropy semiring, which has previously
appeared in automata theory. The primary use of EMP is to compute the entropy
of a model. However, EMP can also be used to compute expressions that appear in
expectation maximization and in gradient descent algorithms.
| [
"Velimir M. Ilic, Miomir S. Stankovic, Branimir T. Todorovic",
"['Velimir M. Ilic' 'Miomir S. Stankovic' 'Branimir T. Todorovic']"
] |
cs.NI cs.LG | null | 0906.3923 | null | null | http://arxiv.org/pdf/0906.3923v4 | 2009-12-03T02:15:11Z | 2009-06-22T07:48:12Z | Bayesian Forecasting of WWW Traffic on the Time Varying Poisson Model | Traffic forecasting from past observed traffic data with small calculation
complexity is one of the important problems for the planning of servers and
networks. Focusing on World Wide Web (WWW) traffic as a fundamental
investigation, this paper deals with Bayesian forecasting of network traffic on
the time-varying Poisson model from the viewpoint of statistical decision
theory. Under this model, we show that the estimated forecast value is obtained
by a simple arithmetic calculation and describes real WWW traffic well from both
theoretical and empirical points of view.
| [
"Daiki Koizumi, Toshiyasu Matsushima, and Shigeichi Hirasawa",
"['Daiki Koizumi' 'Toshiyasu Matsushima' 'Shigeichi Hirasawa']"
] |
cs.LG | null | 0906.4032 | null | null | http://arxiv.org/pdf/0906.4032v1 | 2009-06-22T15:25:23Z | 2009-06-22T15:25:23Z | Bayesian two-sample tests | In this paper, we present two classes of Bayesian approaches to the
two-sample problem. Our first class of methods extends the Bayesian t-test to
include all parametric models in the exponential family and their conjugate
priors. Our second class of methods uses Dirichlet process mixtures (DPM) of
such conjugate-exponential distributions as flexible nonparametric priors over
the unknown distributions.
| [
"Karsten M. Borgwardt, Zoubin Ghahramani",
"['Karsten M. Borgwardt' 'Zoubin Ghahramani']"
] |
cs.DB cs.LG | null | 0906.4172 | null | null | http://arxiv.org/pdf/0906.4172v1 | 2009-06-23T06:24:57Z | 2009-06-23T06:24:57Z | Rough Set Model for Discovering Hybrid Association Rules | In this paper, the mining of hybrid association rules with rough set approach
is investigated via the algorithm RSHAR. The RSHAR algorithm consists of two
main steps. First, the participant tables are joined into a general table to
generate rules expressing the relationship between two or more domains that
belong to several different tables in a database. Then we apply a mapping code
to the selected dimension, which can be added directly into the information
system as an ordinary attribute. To find the association rules, frequent
itemsets are generated in the second step, where candidate itemsets are
generated through equivalence classes while also transforming the mapping code
into real dimensions. The search method for candidate itemsets is similar to
the Apriori algorithm. An analysis of the performance of the algorithm has been
carried out.
| [
"['Anjana Pandey' 'K. R. Pardasani']",
"Anjana Pandey, K.R.Pardasani"
] |
cs.LG cs.DS | null | 0906.4539 | null | null | http://arxiv.org/pdf/0906.4539v2 | 2010-05-10T17:19:30Z | 2009-06-24T18:38:31Z | Learning with Spectral Kernels and Heavy-Tailed Data | Two ubiquitous aspects of large-scale data analysis are that the data often
have heavy-tailed properties and that diffusion-based or spectral-based methods
are often used to identify and extract structure of interest. Perhaps
surprisingly, popular distribution-independent methods such as those based on
the VC dimension fail to provide nontrivial results for even simple learning
problems such as binary classification in these two settings. In this paper, we
develop distribution-dependent learning methods that can be used to provide
dimension-independent sample complexity bounds for the binary classification
problem in these two popular settings. In particular, we provide bounds on the
sample complexity of maximum margin classifiers when the magnitude of the
entries in the feature vector decays according to a power law and also when
learning is performed with the so-called Diffusion Maps kernel. Both of these
results rely on bounding the annealed entropy of gap-tolerant classifiers in a
Hilbert space. We provide such a bound, and we demonstrate that our proof
technique generalizes to the case when the margin is measured with respect to
more general Banach space norms. The latter result is of potential interest in
cases where modeling the relationship between data elements as a dot product in
a Hilbert space is too restrictive.
| [
"['Michael W. Mahoney' 'Hariharan Narayanan']",
"Michael W. Mahoney and Hariharan Narayanan"
] |
stat.ML cs.CV cs.LG | 10.1098/rsta.2009.0161 | 0906.4582 | null | null | http://arxiv.org/abs/0906.4582v1 | 2009-06-24T23:40:22Z | 2009-06-24T23:40:22Z | On landmark selection and sampling in high-dimensional data analysis | In recent years, the spectral analysis of appropriately defined kernel
matrices has emerged as a principled way to extract the low-dimensional
structure often prevalent in high-dimensional data. Here we provide an
introduction to spectral methods for linear and nonlinear dimension reduction,
emphasizing ways to overcome the computational limitations currently faced by
practitioners with massive datasets. In particular, a data subsampling or
landmark selection process is often employed to construct a kernel based on
partial information, followed by an approximate spectral analysis termed the
Nystrom extension. We provide a quantitative framework to analyse this
procedure, and use it to demonstrate algorithmic performance bounds on a range
of practical approaches designed to optimize the landmark selection process. We
compare the practical implications of these bounds by way of real-world
examples drawn from the field of computer vision, whereby low-dimensional
manifold structure is shown to emerge from high-dimensional video data streams.
| [
"['Mohamed-Ali Belabbas' 'Patrick J. Wolfe']",
"Mohamed-Ali Belabbas and Patrick J. Wolfe"
] |
cs.LG | null | 0906.4663 | null | null | http://arxiv.org/pdf/0906.4663v1 | 2009-06-25T11:09:39Z | 2009-06-25T11:09:39Z | Acquiring Knowledge for Evaluation of Teachers Performance in Higher
Education using a Questionnaire | In this paper, we present the step-by-step knowledge acquisition process by
choosing a structured method that uses a questionnaire as a knowledge
acquisition tool. The problem domain here is how to evaluate teachers'
performance in higher education through the use of expert system technology.
The problem is how to acquire the specific knowledge for a selected problem
efficiently and effectively from human experts and encode it in a suitable
computer format. Acquiring knowledge from human experts is one of the most
commonly cited problems in expert system development. The questionnaire was
sent to 87 domain experts at public and private universities in Pakistan.
Among them, 25 domain experts sent their
valuable opinions. Most of the domain experts were highly qualified, well
experienced and highly responsible persons. The whole questionnaire was divided
into 15 main groups of factors, which were further divided into 99 individual
questions. These responses were analyzed further to give a final shape to the
questionnaire. This knowledge acquisition technique may be used as a learning
tool for further research work.
| [
"Hafeez Ullah Amin, Abdur Rashid Khan",
"['Hafeez Ullah Amin' 'Abdur Rashid Khan']"
] |
cs.LG physics.data-an stat.ML | null | 0906.4779 | null | null | http://arxiv.org/pdf/0906.4779v4 | 2011-09-25T01:33:51Z | 2009-06-25T19:15:44Z | Minimum Probability Flow Learning | Fitting probabilistic models to data is often difficult, due to the general
intractability of the partition function and its derivatives. Here we propose a
new parameter estimation technique that does not require computing an
intractable normalization factor or sampling from the equilibrium distribution
of the model. This is achieved by establishing dynamics that would transform
the observed data distribution into the model distribution, and then setting as
the objective the minimization of the KL divergence between the data
distribution and the distribution produced by running the dynamics for an
infinitesimal time. Score matching, minimum velocity learning, and certain
forms of contrastive divergence are shown to be special cases of this learning
technique. We demonstrate parameter estimation in Ising models, deep belief
networks and an independent component analysis model of natural scenes. In the
Ising model case, current state-of-the-art techniques are outperformed by at
least an order of magnitude in learning time, with lower error in recovered
coupling parameters.
| [
"['Jascha Sohl-Dickstein' 'Peter Battaglino' 'Michael R. DeWeese']",
"Jascha Sohl-Dickstein, Peter Battaglino and Michael R. DeWeese"
] |
cs.CR cs.LG | null | 0906.5110 | null | null | http://arxiv.org/pdf/0906.5110v1 | 2009-06-27T23:24:14Z | 2009-06-27T23:24:14Z | Statistical Analysis of Privacy and Anonymity Guarantees in Randomized
Security Protocol Implementations | Security protocols often use randomization to achieve probabilistic
non-determinism. This non-determinism, in turn, is used in obfuscating the
dependence of observable values on secret data. Since the correctness of
security protocols is very important, formal analysis of security protocols has
been widely studied in literature. Randomized security protocols have also been
analyzed using formal techniques such as process-calculi and probabilistic
model checking. In this paper, we consider the problem of validating
implementations of randomized protocols. Unlike previous approaches which treat
the protocol as a white-box, our approach tries to verify an implementation
provided as a black box. Our goal is to infer the secrecy guarantees provided
by a security protocol through statistical techniques. We learn the
probabilistic dependency of the observable outputs on secret inputs using a
Bayesian network. This is then used to approximate the leakage of the secret. In
order to evaluate the accuracy of our statistical approach, we compare our
technique with the probabilistic model checking technique on two examples:
the Crowds protocol and the dining cryptographers protocol.
| [
"['Susmit Jha']",
"Susmit Jha"
] |
cs.LG | null | 0906.5151 | null | null | http://arxiv.org/pdf/0906.5151v1 | 2009-06-28T17:47:22Z | 2009-06-28T17:47:22Z | Unsupervised Search-based Structured Prediction | We describe an adaptation and application of a search-based structured
prediction algorithm "Searn" to unsupervised learning problems. We show that it
is possible to reduce unsupervised learning to supervised learning and
demonstrate a high-quality unsupervised shift-reduce parsing model. We
additionally show a close connection between unsupervised Searn and expectation
maximization. Finally, we demonstrate the efficacy of a semi-supervised
extension. The key idea that enables this is an application of the predict-self
idea for unsupervised learning.
| [
"Hal Daum\\'e III",
"['Hal Daumé III']"
] |
cs.LG cs.MM | 10.1109/TIP.2009.2035228 | 0906.5325 | null | null | http://arxiv.org/abs/0906.5325v1 | 2009-06-29T17:48:40Z | 2009-06-29T17:48:40Z | Online Reinforcement Learning for Dynamic Multimedia Systems | In our previous work, we proposed a systematic cross-layer framework for
dynamic multimedia systems, which allows each layer to make autonomous and
foresighted decisions that maximize the system's long-term performance, while
meeting the application's real-time delay constraints. The proposed solution
solved the cross-layer optimization offline, under the assumption that the
multimedia system's probabilistic dynamics were known a priori. In practice,
however, these dynamics are unknown a priori and therefore must be learned
online. In this paper, we address this problem by allowing the multimedia
system layers to learn, through repeated interactions with each other, to
autonomously optimize the system's long-term performance at run-time. We
propose two reinforcement learning algorithms for optimizing the system under
different design constraints: the first algorithm solves the cross-layer
optimization in a centralized manner, and the second solves it in a
decentralized manner. We analyze both algorithms in terms of their required
computation, memory, and inter-layer communication overheads. After noting that
the proposed reinforcement learning algorithms learn too slowly, we introduce a
complementary accelerated learning algorithm that exploits partial knowledge
about the system's dynamics in order to dramatically improve the system's
performance. In our experiments, we demonstrate that decentralized learning can
perform as well as centralized learning, while enabling the layers to act
autonomously. Additionally, we show that existing application-independent
reinforcement learning algorithms, and existing myopic learning algorithms
deployed in multimedia systems, perform significantly worse than our proposed
application-aware and foresighted learning methods.
| [
"['Nicholas Mastronarde' 'Mihaela van der Schaar']",
"Nicholas Mastronarde and Mihaela van der Schaar"
] |
cs.NE cs.LG | null | 0907.0229 | null | null | http://arxiv.org/pdf/0907.0229v1 | 2009-07-01T19:54:39Z | 2009-07-01T19:54:39Z | A new model of artificial neuron: cyberneuron and its use | This article describes a new type of artificial neuron, called by the authors
a "cyberneuron". Unlike classical models of artificial neurons, this type of
neuron uses table substitution instead of multiplying input values by weights.
This not only significantly increases the information capacity of a single
neuron but also greatly simplifies the learning process. An example of the use
of the "cyberneuron" for the task of detecting computer viruses is considered.
| [
"S. V. Polikarpov, V. S. Dergachev, K. E. Rumyantsev, D. M. Golubchikov",
"['S. V. Polikarpov' 'V. S. Dergachev' 'K. E. Rumyantsev'\n 'D. M. Golubchikov']"
] |
cs.LG | null | 0907.0453 | null | null | http://arxiv.org/pdf/0907.0453v2 | 2009-07-20T08:59:45Z | 2009-07-02T17:54:45Z | Random DFAs are Efficiently PAC Learnable | This paper has been withdrawn due to an error found by Dana Angluin and Lev
Reyzin.
| [
"Leonid Aryeh Kontorovich",
"['Leonid Aryeh Kontorovich']"
] |
cs.AI cs.IT cs.LG math.IT | null | 0907.0746 | null | null | http://arxiv.org/pdf/0907.0746v1 | 2009-07-04T08:45:22Z | 2009-07-04T08:45:22Z | Open Problems in Universal Induction & Intelligence | Specialized intelligent systems can be found everywhere: finger print,
handwriting, speech, and face recognition, spam filtering, chess and other game
programs, robots, etc. This decade, the first presumably complete mathematical
theory of artificial intelligence based on universal
induction-prediction-decision-action has been proposed. This
information-theoretic approach solidifies the foundations of inductive
inference and artificial intelligence. Getting the foundations right usually
marks a significant progress and maturing of a field. The theory provides a
gold standard and guidance for researchers working on intelligent algorithms.
The roots of universal induction have been laid exactly half-a-century ago and
the roots of universal intelligence exactly one decade ago. So it is timely to
take stock of what has been achieved and what remains to be done. Since there
are already good recent surveys, I describe the state-of-the-art only in
passing and refer the reader to the literature. This article concentrates on
the open problems in universal induction and its extension to universal
intelligence.
| [
"Marcus Hutter",
"['Marcus Hutter']"
] |
cs.LG | null | 0907.0783 | null | null | http://arxiv.org/pdf/0907.0783v1 | 2009-07-04T18:35:52Z | 2009-07-04T18:35:52Z | Bayesian Multitask Learning with Latent Hierarchies | We learn multiple hypotheses for related tasks under a latent hierarchical
relationship between tasks. We exploit the intuition that for domain
adaptation, we wish to share classifier structure, but for multitask learning,
we wish to share covariance structure. Our hierarchical model is seen to
subsume several previously proposed multitask learning models and performs well
on three distinct real-world data sets.
| [
"Hal Daum\\'e III",
"['Hal Daumé III']"
] |
cs.LG cs.CL | null | 0907.0784 | null | null | http://arxiv.org/pdf/0907.0784v1 | 2009-07-04T18:42:01Z | 2009-07-04T18:42:01Z | Cross-Task Knowledge-Constrained Self Training | We present an algorithmic framework for learning multiple related tasks. Our
framework exploits a form of prior knowledge that relates the output spaces of
these tasks. We present PAC learning results that analyze the conditions under
which such learning is possible. We present results on learning a shallow
parser and named-entity recognition system that exploits our framework, showing
consistent improvements over baseline methods.
| [
"Hal Daum\\'e III",
"['Hal Daumé III']"
] |
cs.LG cs.CL | null | 0907.0786 | null | null | http://arxiv.org/pdf/0907.0786v1 | 2009-07-04T18:48:34Z | 2009-07-04T18:48:34Z | Search-based Structured Prediction | We present Searn, an algorithm for integrating search and learning to solve
complex structured prediction problems such as those that occur in natural
language, speech, computational biology, and vision. Searn is a meta-algorithm
that transforms these complex problems into simple classification problems to
which any binary classifier may be applied. Unlike current algorithms for
structured learning that require decomposition of both the loss function and
the feature functions over the predicted structure, Searn is able to learn
prediction functions for any loss function and any class of features. Moreover,
Searn comes with a strong, natural theoretical guarantee: good performance on
the derived classification problems implies good performance on the structured
prediction problem.
| [
"['Hal Daumé III' 'John Langford' 'Daniel Marcu']",
"Hal Daum\\'e III and John Langford and Daniel Marcu"
] |
cs.LG | null | 0907.0808 | null | null | http://arxiv.org/pdf/0907.0808v1 | 2009-07-04T22:32:58Z | 2009-07-04T22:32:58Z | A Bayesian Model for Supervised Clustering with the Dirichlet Process
Prior | We develop a Bayesian framework for tackling the supervised clustering
problem, the generic problem encountered in tasks such as reference matching,
coreference resolution, identity uncertainty and record linkage. Our clustering
model is based on the Dirichlet process prior, which enables us to define
distributions over the countably infinite sets that naturally arise in this
problem. We add supervision to our model by positing the existence of a set of
unobserved random variables (we call these "reference types") that are generic
across all clusters. Inference in our framework, which requires integrating
over infinitely many parameters, is solved using Markov chain Monte Carlo
techniques. We present algorithms for both conjugate and non-conjugate priors.
We present a simple--but general--parameterization of our model based on a
Gaussian assumption. We evaluate this model on one artificial task and three
real-world tasks, comparing it against both unsupervised and state-of-the-art
supervised algorithms. Our results show that our model is able to outperform
other models across a variety of tasks and performance metrics.
| [
"['Hal Daumé III' 'Daniel Marcu']",
"Hal Daum\\'e III and Daniel Marcu"
] |
cs.LG cs.CL | null | 0907.0809 | null | null | http://arxiv.org/pdf/0907.0809v1 | 2009-07-04T22:34:25Z | 2009-07-04T22:34:25Z | Learning as Search Optimization: Approximate Large Margin Methods for
Structured Prediction | Mappings to structured output spaces (strings, trees, partitions, etc.) are
typically learned using extensions of classification algorithms to simple
graphical structures (eg., linear chains) in which search and parameter
estimation can be performed exactly. Unfortunately, in many complex problems,
it is rare that exact search or parameter estimation is tractable. Instead of
learning exact models and searching via heuristic means, we embrace this
difficulty and treat the structured output problem in terms of approximate
search. We present a framework for learning as search optimization, and two
parameter updates with convergence theorems and bounds. Empirical evidence
shows that our integrated approach to learning and decoding can outperform
exact models at smaller computational cost.
| [
"['Hal Daumé III' 'Daniel Marcu']",
"Hal Daum\\'e III and Daniel Marcu"
] |
cs.LG cs.DS | null | 0907.1054 | null | null | http://arxiv.org/pdf/0907.1054v2 | 2010-05-13T19:20:36Z | 2009-07-06T17:41:57Z | Learning Gaussian Mixtures with Arbitrary Separation | In this paper we present a method for learning the parameters of a mixture of
$k$ identical spherical Gaussians in $n$-dimensional space with an arbitrarily
small separation between the components. Our algorithm is polynomial in all
parameters other than $k$. The algorithm is based on an appropriate grid search
over the space of parameters. The theoretical analysis of the algorithm hinges
on a reduction of the problem to 1 dimension and showing that two 1-dimensional
mixtures whose densities are close in the $L^2$ norm must have similar means
and mixing coefficients. To produce such a lower bound for the $L^2$ norm in
terms of the distances between the corresponding means, we analyze the behavior
of the Fourier transform of a mixture of Gaussians in 1 dimension around the
origin, which turns out to be closely related to the properties of the
Vandermonde matrix obtained from the component means. Analysis of this matrix
together with basic function approximation results allows us to provide a lower
bound for the norm of the mixture in the Fourier domain.
In recent years much research has been aimed at understanding the
computational aspects of learning the parameters of Gaussian mixture distributions
in high dimension. To the best of our knowledge all existing work on learning
parameters of Gaussian mixtures assumes minimum separation between components
of the mixture which is an increasing function of either the dimension of the
space $n$ or the number of components $k$. In our paper we prove the first
result showing that the parameters of an $n$-dimensional Gaussian mixture model with
arbitrarily small component separation can be learned in time polynomial in
$n$.
| [
"Mikhail Belkin and Kaushik Sinha",
"['Mikhail Belkin' 'Kaushik Sinha']"
] |
null | null | 0907.1413 | null | null | http://arxiv.org/pdf/0907.1413v3 | 2011-06-21T17:05:53Z | 2009-07-09T06:51:54Z | Privacy constraints in regularized convex optimization | This paper is withdrawn due to some errors, which are corrected in arXiv:0912.0071v4 [cs.LG]. | [
"['Kamalika Chaudhuri' 'Anand D. Sarwate']"
] |
cs.LG | null | 0907.1812 | null | null | http://arxiv.org/pdf/0907.1812v1 | 2009-07-10T13:23:37Z | 2009-07-10T13:23:37Z | Fast search for Dirichlet process mixture models | Dirichlet process (DP) mixture models provide a flexible Bayesian framework
for density estimation. Unfortunately, their flexibility comes at a cost:
inference in DP mixture models is computationally expensive, even when
conjugate distributions are used. In the common case when one seeks only a
maximum a posteriori assignment of data points to clusters, we show that search
algorithms provide a practical alternative to expensive MCMC and variational
techniques. When a true posterior sample is desired, the solution found by
search can serve as a good initializer for MCMC. Experimental results show that
using these techniques it is possible to apply DP mixture models to very large
data sets.
| [
"Hal Daum\\'e III",
"['Hal Daumé III']"
] |
cs.CL cs.IR cs.LG | null | 0907.1814 | null | null | http://arxiv.org/pdf/0907.1814v1 | 2009-07-10T13:24:55Z | 2009-07-10T13:24:55Z | Bayesian Query-Focused Summarization | We present BayeSum (for ``Bayesian summarization''), a model for sentence
extraction in query-focused summarization. BayeSum leverages the common case in
which multiple documents are relevant to a single query. Using these documents
as reinforcement for query terms, BayeSum is not afflicted by the paucity of
information in short queries. We show that approximate inference in BayeSum is
possible on large data sets and results in a state-of-the-art summarization
system. Furthermore, we show how BayeSum can be understood as a justified query
expansion technique in the language modeling for IR framework.
| [
"Hal Daum\\'e III",
"['Hal Daumé III']"
] |
cs.LG cs.CL | null | 0907.1815 | null | null | http://arxiv.org/pdf/0907.1815v1 | 2009-07-10T13:25:48Z | 2009-07-10T13:25:48Z | Frustratingly Easy Domain Adaptation | We describe an approach to domain adaptation that is appropriate exactly in
the case when one has enough ``target'' data to do slightly better than just
using only ``source'' data. Our approach is incredibly simple, easy to
implement as a preprocessing step (10 lines of Perl!) and outperforms
state-of-the-art approaches on a range of datasets. Moreover, it is trivially
extended to a multi-domain adaptation problem, where one has data from a
variety of different domains.
| [
"Hal Daum\\'e III",
"['Hal Daumé III']"
] |
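The abstract does not spell the preprocessing step out, but the construction it alludes to is commonly described as feature augmentation: every feature gets a shared copy plus a domain-specific copy, so the learner can decide per feature whether it transfers across domains. A sketch of that construction as we understand it (the function name is ours):

```python
import numpy as np

def augment(X, domain):
    """Map each row x to <x, x, 0> for source data and <x, 0, x> for
    target data: one shared block plus one block per domain. Extends
    to multi-domain adaptation by adding one copy/zero block per domain."""
    zeros = np.zeros_like(X)
    if domain == "source":
        return np.hstack([X, X, zeros])
    return np.hstack([X, zeros, X])

# Train any off-the-shelf classifier on the vertical concatenation of
# augment(X_source, "source") and augment(X_target, "target").
```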
cs.GT cs.LG | null | 0907.1916 | null | null | http://arxiv.org/pdf/0907.1916v1 | 2009-07-10T21:26:54Z | 2009-07-10T21:26:54Z | Learning Equilibria in Games by Stochastic Distributed Algorithms | We consider a class of fully stochastic and fully distributed algorithms,
that we prove to learn equilibria in games.
Indeed, we consider a family of stochastic distributed dynamics that we prove
to converge weakly (in the sense of weak convergence for probabilistic
processes) towards their mean-field limit, i.e. an ordinary differential
equation (ODE) in the general case. We focus then on a class of stochastic
dynamics where this ODE turns out to be related to multipopulation replicator
dynamics.
Using facts known about convergence of this ODE, we discuss the convergence
of the initial stochastic dynamics: For general games, there might be
non-convergence, but when convergence of the ODE holds, considered stochastic
algorithms converge towards Nash equilibria. For games admitting Lyapunov
functions, that we call Lyapunov games, the stochastic dynamics converge. We
prove that any ordinal potential game, and hence any potential game, is a
Lyapunov game, with a multiaffine Lyapunov function. For Lyapunov games with a
multiaffine Lyapunov function, we prove that this Lyapunov function is a
super-martingale over the stochastic dynamics. This leads to a way to provide
bounds on their time of convergence by martingale arguments. This applies in
particular to many classes of games that have been considered in the literature,
including several load balancing game scenarios and congestion games.
| [
"['Olivier Bournez' 'Johanne Cohen']",
"Olivier Bournez and Johanne Cohen"
] |
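A minimal simulation of the mean-field ODE the abstract refers to: two-population replicator dynamics on a 2x2 coordination game, which is a potential game and hence a Lyapunov game in the paper's terminology. The payoff matrix, step size, and horizon are illustrative.

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 1.0]])  # coordination game: both players prefer to match

def replicator_step(x, y, dt=0.01):
    """One Euler step of multipopulation replicator dynamics."""
    fx = A @ y                      # payoff of each pure strategy of player 1
    fy = A.T @ x                    # payoff of each pure strategy of player 2
    x = x + dt * x * (fx - x @ fx)  # grow strategies that beat the average
    y = y + dt * y * (fy - y @ fy)
    return x, y

x, y = np.array([0.6, 0.4]), np.array([0.3, 0.7])
for _ in range(5000):
    x, y = replicator_step(x, y)
print(x, y)  # approaches a pure Nash equilibrium of the coordination game
```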
math.OC cs.LG math.ST stat.AP stat.CO stat.ME stat.ML stat.TH | null | 0907.2079 | null | null | http://arxiv.org/pdf/0907.2079v1 | 2009-07-13T00:45:51Z | 2009-07-13T00:45:51Z | An Augmented Lagrangian Approach for Sparse Principal Component Analysis | Principal component analysis (PCA) is a widely used technique for data
analysis and dimension reduction with numerous applications in science and
engineering. However, the standard PCA suffers from the fact that the principal
components (PCs) are usually linear combinations of all the original variables,
and it is thus often difficult to interpret the PCs. To alleviate this
drawback, various sparse PCA approaches have been proposed in the literature [15, 6, 17,
28, 8, 25, 18, 7, 16]. Despite success in achieving sparsity, some important
properties enjoyed by standard PCA, such as uncorrelatedness of PCs and
orthogonality of loading vectors, are lost in these methods. Also, the total
explained variance that they attempt to maximize can be too optimistic. In this
paper we propose a new formulation for sparse PCA, aiming at finding sparse and
nearly uncorrelated PCs with orthogonal loading vectors while explaining as
much of the total variance as possible. We also develop a novel augmented
Lagrangian method for solving a class of nonsmooth constrained optimization
problems, which is well suited for our formulation of sparse PCA. We show that
it converges to a feasible point, and moreover under some regularity
assumptions, it converges to a stationary point. Additionally, we propose two
nonmonotone gradient methods for solving the augmented Lagrangian subproblems,
and establish their global and local convergence. Finally, we compare our
sparse PCA approach with several existing methods on synthetic, random, and
real data, respectively. The computational results demonstrate that the sparse
PCs produced by our approach substantially outperform those by other methods in
terms of total explained variance, correlation of PCs, and orthogonality of
loading vectors.
| [
"['Zhaosong Lu' 'Yong Zhang']",
"Zhaosong Lu and Yong Zhang"
] |
cs.NI cs.LG | 10.1109/COMSWA.2008.4554505 | 0907.2222 | null | null | http://arxiv.org/abs/0907.2222v1 | 2009-07-13T18:18:28Z | 2009-07-13T18:18:28Z | Network-aware Adaptation with Real-Time Channel Statistics for Wireless
LAN Multimedia Transmissions in the Digital Home | This paper suggests the use of intelligent network-aware processing agents in
wireless local area network drivers to generate metrics for bandwidth
estimation based on real-time channel statistics, enabling wireless multimedia
application adaptation. Various configurations in the wireless digital home are
studied and the experimental results with performance variations are presented.
| [
"['Dilip Krishnaswamy' 'Shanyu Zhao']",
"Dilip Krishnaswamy, Shanyu Zhao"
] |
cs.LG cs.NE | 10.1007/s10846-005-3806-y | 0907.3342 | null | null | http://arxiv.org/abs/0907.3342v1 | 2009-07-20T05:58:24Z | 2009-07-20T05:58:24Z | Neural Modeling and Control of Diesel Engine with Pollution Constraints | The paper describes a neural approach for modelling and control of a
turbocharged Diesel engine. A neural model, whose structure is mainly based on
some physical equations describing the engine behaviour, is built for the
rotation speed and the exhaust gas opacity. The model is composed of three
interconnected neural submodels, each of them constituting a nonlinear
multi-input single-output error model. The structural identification and the
parameter estimation from data gathered on a real engine are described. The
neural direct model is then used to determine a neural controller of the
engine, in a specialized training scheme minimising a multivariable criterion.
Simulations show the effect of the pollution constraint weighting on a
trajectory tracking of the engine speed. Neural networks, which are flexible
and parsimonious nonlinear black-box models, with universal approximation
capabilities, can accurately describe or control complex nonlinear systems,
with little a priori theoretical knowledge. The presented work extends optimal
neuro-control to the multivariable case and shows the flexibility of neural
optimisers. Considering the preliminary results, it appears that neural
networks can be used as embedded models for engine control, to satisfy the more
and more restricting pollutant emission legislation. Particularly, they are
able to model nonlinear dynamics and outperform during transients the control
schemes based on static mappings.
| [
"Mustapha Ouladsine (LSIS), G\\'erard Bloch (CRAN), Xavier Dovifaaz\n (CRAN)",
"['Mustapha Ouladsine' 'Gérard Bloch' 'Xavier Dovifaaz']"
] |
cs.DS cs.LG | null | 0907.3986 | null | null | http://arxiv.org/pdf/0907.3986v5 | 2014-05-20T03:52:46Z | 2009-07-23T06:41:33Z | Contextual Bandits with Similarity Information | In a multi-armed bandit (MAB) problem, an online algorithm makes a sequence
of choices. In each round it chooses from a time-invariant set of alternatives
and receives the payoff associated with this alternative. While the case of
small strategy sets is by now well-understood, a lot of recent work has focused
on MAB problems with exponentially or infinitely large strategy sets, where one
needs to assume extra structure in order to make the problem tractable. In
particular, recent literature considered information on similarity between
arms.
We consider similarity information in the setting of "contextual bandits", a
natural extension of the basic MAB problem where before each round an algorithm
is given the "context" -- a hint about the payoffs in this round. Contextual
bandits are directly motivated by placing advertisements on webpages, one of
the crucial problems in sponsored search. A particularly simple way to
represent similarity information in the contextual bandit setting is via a
"similarity distance" between the context-arm pairs which gives an upper bound
on the difference between the respective expected payoffs.
Prior work on contextual bandits with similarity uses "uniform" partitions of
the similarity space, which is potentially wasteful. We design more efficient
algorithms that are based on adaptive partitions adjusted to "popular" context
and "high-payoff" arms.
| [
"Aleksandrs Slivkins",
"['Aleksandrs Slivkins']"
] |
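For intuition, here is the "uniform partition" baseline the paper improves upon: UCB1 run independently in each cell of a fixed grid over a one-dimensional context space and a one-dimensional arm space, both taken to be [0, 1]. The bin count and confidence radius are standard UCB1 choices, not the paper's adaptive partitions.

```python
import numpy as np

class UniformContextualUCB:
    """UCB1 per cell of a uniform (context x arm) grid over [0,1]^2."""

    def __init__(self, n_bins=10):
        self.n = np.zeros((n_bins, n_bins))   # pulls per (context, arm) cell
        self.mu = np.zeros((n_bins, n_bins))  # mean payoff per cell
        self.n_bins = n_bins
        self.t = 0

    def _bin(self, v):
        return min(int(v * self.n_bins), self.n_bins - 1)

    def choose(self, context):
        self.t += 1
        i = self._bin(context)
        ucb = self.mu[i] + np.sqrt(2 * np.log(self.t) / np.maximum(self.n[i], 1))
        ucb[self.n[i] == 0] = np.inf          # try each cell at least once
        return (np.argmax(ucb) + 0.5) / self.n_bins  # play the cell's center

    def update(self, context, arm, reward):
        i, j = self._bin(context), self._bin(arm)
        self.n[i, j] += 1
        self.mu[i, j] += (reward - self.mu[i, j]) / self.n[i, j]
```

The adaptive algorithms in the paper effectively refine such a grid only around popular contexts and high-payoff arms instead of uniformly.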
stat.ML cs.LG math.OC | null | 0908.0050 | null | null | http://arxiv.org/pdf/0908.0050v2 | 2010-02-11T07:33:02Z | 2009-08-01T06:09:18Z | Online Learning for Matrix Factorization and Sparse Coding | Sparse coding--that is, modelling data vectors as sparse linear combinations
of basis elements--is widely used in machine learning, neuroscience, signal
processing, and statistics. This paper focuses on the large-scale matrix
factorization problem that consists of learning the basis set, adapting it to
specific data. Variations of this problem include dictionary learning in signal
processing, non-negative matrix factorization and sparse principal component
analysis. In this paper, we propose to address these tasks with a new online
optimization algorithm, based on stochastic approximations, which scales up
gracefully to large datasets with millions of training samples, and extends
naturally to various matrix factorization formulations, making it suitable for
a wide range of learning problems. A proof of convergence is presented, along
with experiments with natural images and genomic data demonstrating that it
leads to state-of-the-art performance in terms of speed and optimization for
both small and large datasets.
| [
"['Julien Mairal' 'Francis Bach' 'Jean Ponce' 'Guillermo Sapiro']",
"Julien Mairal (INRIA Rocquencourt), Francis Bach (INRIA Rocquencourt),\n Jean Ponce (INRIA Rocquencourt, LIENS), Guillermo Sapiro"
] |
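A condensed sketch in the spirit of the online algorithm described above: each sample is sparse-coded (here with plain ISTA), sufficient statistics of past codes are accumulated, and the dictionary is refreshed by block coordinate descent with unit-norm atoms. The regularization strength, atom count, and iteration budgets are arbitrary.

```python
import numpy as np

def ista(D, x, lam=0.1, n_iter=50):
    """Sparse coding: min_a 0.5*||x - D a||^2 + lam*||a||_1 via ISTA."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = a - D.T @ (D @ a - x) / L      # gradient step
        a = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return a

def online_dictionary_learning(X, k=8, lam=0.1, seed=0):
    rng = np.random.default_rng(seed)
    D = rng.normal(size=(X.shape[1], k))
    D /= np.linalg.norm(D, axis=0)
    A, B = np.zeros((k, k)), np.zeros((X.shape[1], k))
    for x in X:                            # one pass over the data stream
        a = ista(D, x, lam)
        A += np.outer(a, a)                # sufficient statistics of codes
        B += np.outer(x, a)
        for j in range(k):                 # block coordinate dictionary update
            if A[j, j] > 1e-12:
                u = (B[:, j] - D @ A[:, j]) / A[j, j] + D[:, j]
                D[:, j] = u / max(1.0, np.linalg.norm(u))
    return D
```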
stat.ML cs.AI cs.LG cs.NI | null | 0908.0319 | null | null | http://arxiv.org/pdf/0908.0319v1 | 2009-08-03T19:25:58Z | 2009-08-03T19:25:58Z | Regret Bounds for Opportunistic Channel Access | We consider the task of opportunistic channel access in a primary system
composed of independent Gilbert-Elliot channels where the secondary (or
opportunistic) user does not have a priori information regarding the
statistical characteristics of the system. It is shown that this problem may be
cast into the framework of model-based learning in a specific class of
Partially Observed Markov Decision Processes (POMDPs) for which we introduce an
algorithm aimed at striking an optimal tradeoff between the exploration (or
estimation) and exploitation requirements. We provide finite horizon regret
bounds for this algorithm as well as a numerical evaluation of its performance
in the single channel model as well as in the case of stochastically identical
channels.
| [
"Sarah Filippi (LTCI), Olivier Capp\\'e (LTCI), Aur\\'elien Garivier\n (LTCI)",
"['Sarah Filippi' 'Olivier Cappé' 'Aurélien Garivier']"
] |
cs.LG stat.ML | null | 0908.0570 | null | null | http://arxiv.org/pdf/0908.0570v1 | 2009-08-05T01:10:09Z | 2009-08-05T01:10:09Z | The Infinite Hierarchical Factor Regression Model | We propose a nonparametric Bayesian factor regression model that accounts for
uncertainty in the number of factors, and the relationship between factors. To
accomplish this, we propose a sparse variant of the Indian Buffet Process and
couple this with a hierarchical model over factors, based on Kingman's
coalescent. We apply this model to two problems (factor analysis and factor
regression) in gene-expression data analysis.
| [
"['Piyush Rai' 'Hal Daumé III']",
"Piyush Rai and Hal Daum\\'e III"
] |
cs.LG stat.ML | null | 0908.0572 | null | null | http://arxiv.org/pdf/0908.0572v1 | 2009-08-05T00:40:23Z | 2009-08-05T00:40:23Z | Streamed Learning: One-Pass SVMs | We present a streaming model for large-scale classification (in the context
of $\ell_2$-SVM) by leveraging connections between learning and computational
geometry. The streaming model imposes the constraint that only a single pass
over the data is allowed. The $\ell_2$-SVM is known to have an equivalent
formulation in terms of the minimum enclosing ball (MEB) problem, and an
efficient algorithm based on the idea of \emph{core sets} exists (Core Vector
Machine, CVM). CVM learns a $(1+\varepsilon)$-approximate MEB for a set of
points and yields an approximate solution to the corresponding SVM instance.
However, CVM works in batch mode, requiring multiple passes over the data. This
paper presents a single-pass SVM which is based on the minimum enclosing ball
of streaming data. We show that the MEB updates for the streaming case can be
easily adapted to learn the SVM weight vector in a way similar to using online
stochastic gradient updates. Our algorithm performs polylogarithmic computation
at each example, and requires very small and constant storage. Experimental
results show that, even in such restrictive settings, we can learn efficiently
in just one pass and get accuracies comparable to other state-of-the-art SVM
solvers (batch and online). We also give an analysis of the algorithm, and
discuss some open issues and possible extensions.
| [
"['Piyush Rai' 'Hal Daumé III' 'Suresh Venkatasubramanian']",
"Piyush Rai, Hal Daum\\'e III, Suresh Venkatasubramanian"
] |
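The single-pass constraint is easiest to see on the MEB subproblem itself. Below is the classic streaming update (grow the ball just enough to cover each new point), a textbook 3/2-approximation; it is a stand-in for, not a copy of, the paper's full SVM algorithm.

```python
import numpy as np

def streaming_meb(points):
    """One-pass minimum enclosing ball: if a point falls outside the
    current ball, move the center toward it and enlarge the radius so
    both the old ball and the new point are covered."""
    it = iter(points)
    c = np.asarray(next(it), dtype=float)
    r = 0.0
    for p in it:
        d = np.linalg.norm(p - c)
        if d > r:
            r_new = (d + r) / 2.0
            c = c + (p - c) * ((d - r) / (2.0 * d))
            r = r_new
    return c, r
```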
cs.LG cs.DS | null | 0908.0772 | null | null | http://arxiv.org/pdf/0908.0772v1 | 2009-08-05T23:56:22Z | 2009-08-05T23:56:22Z | Online Learning of Assignments that Maximize Submodular Functions | Which ads should we display in sponsored search in order to maximize our
revenue? How should we dynamically rank information sources to maximize value
of information? These applications exhibit strong diminishing returns:
Selection of redundant ads and information sources decreases their marginal
utility. We show that these and other problems can be formalized as repeatedly
selecting an assignment of items to positions to maximize a sequence of
monotone submodular functions that arrive one by one. We present an efficient
algorithm for this general problem and analyze it in the no-regret model. Our
algorithm possesses strong theoretical guarantees, such as a performance ratio
that converges to the optimal constant of 1-1/e. We empirically evaluate our
algorithm on two real-world online optimization problems on the web: ad
allocation with submodular utilities, and dynamically ranking blogs to detect
information cascades.
| [
"['Daniel Golovin' 'Andreas Krause' 'Matthew Streeter']",
"Daniel Golovin, Andreas Krause, and Matthew Streeter"
] |
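For context, the offline "locally greedy" baseline for assigning items to positions under a monotone submodular objective is a few lines, shown here with a toy ad-audience coverage objective (the audiences are invented). Greedy guarantees a 1/2 ratio for such assignment problems; the paper's online algorithm attains the optimal 1-1/e.

```python
def greedy_assignment(positions, items, gain):
    """Fill positions in order, choosing the item with the largest
    marginal gain given the partial assignment. With a submodular
    objective, re-using an item simply contributes nothing extra."""
    assignment = {}
    for pos in positions:
        assignment[pos] = max(items, key=lambda it: gain(assignment, it))
    return assignment

audiences = {"a": {1, 2, 3}, "b": {3, 4}, "c": {5}}  # toy ad audiences

def coverage_gain(assignment, item):
    covered = (set().union(*(audiences[i] for i in assignment.values()))
               if assignment else set())
    return len(covered | audiences[item]) - len(covered)

print(greedy_assignment(["slot1", "slot2"], list(audiences), coverage_gain))
# {'slot1': 'a', 'slot2': 'b'}
```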
cs.LG | null | 0908.0939 | null | null | http://arxiv.org/pdf/0908.0939v1 | 2009-08-06T19:48:20Z | 2009-08-06T19:48:20Z | Clustering for Improved Learning in Maze Traversal Problem | The maze traversal problem (finding the shortest distance to the goal from
any position in a maze) has been an interesting challenge in computational
intelligence. Recent work has shown that the cellular simultaneous recurrent
neural network (CSRN) can solve this problem for simple mazes. This thesis
focuses on exploiting relevant information about the maze to improve learning
and decrease the training time for the CSRN to solve mazes. Appropriate
variables are identified to create useful clusters using relevant information.
The CSRN was next modified to allow for an additional external input. With this
additional input, several methods were tested and results show that clustering
the mazes improves the overall learning of the traversal problem for the CSRN.
| [
"['Eddie White']",
"Eddie White"
] |
cs.DB cs.LG | null | 0908.0984 | null | null | http://arxiv.org/pdf/0908.0984v1 | 2009-08-07T05:36:51Z | 2009-08-07T05:36:51Z | An Application of Bayesian classification to Interval Encoded Temporal
mining with prioritized items | In real life, media information carries time attributes, either implicit or
explicit; such data is known as temporal data. This paper investigates the usefulness of
applying Bayesian classification to an interval encoded temporal database with
prioritized items. The proposed method performs temporal mining by encoding the
database with weighted items which prioritizes the items according to their
importance from the user perspective. Naive Bayesian classification helps in
making the resulting temporal rules more effective. The proposed priority-based
temporal mining (PBTM) method, combined with classification, aids in solving
problems in a well-informed and systematic manner. The experimental results are
obtained from the complaints database of the telecommunications system, which
shows the feasibility of this method of classification based temporal mining.
| [
"C. Balasubramanian, K. Duraiswamy",
"['C. Balasubramanian' 'K. Duraiswamy']"
] |
cs.LG cs.IT math.IT | null | 0908.1769 | null | null | http://arxiv.org/pdf/0908.1769v1 | 2009-08-12T18:27:54Z | 2009-08-12T18:27:54Z | Approximating the Permanent with Belief Propagation | This work describes a method of approximating matrix permanents efficiently
using belief propagation. We formulate a probability distribution whose
partition function is exactly the permanent, then use Bethe free energy to
approximate this partition function. After deriving some speedups to standard
belief propagation, the resulting algorithm requires $O(n^2)$ time per
iteration. Finally, we demonstrate the advantages of using this approximation.
| [
"['Bert Huang' 'Tony Jebara']",
"Bert Huang and Tony Jebara"
] |
cs.GT cs.LG cs.NI | null | 0908.3265 | null | null | http://arxiv.org/pdf/0908.3265v1 | 2009-08-22T16:45:32Z | 2009-08-22T16:45:32Z | Rate Constrained Random Access over a Fading Channel | In this paper, we consider uplink transmissions involving multiple users
communicating with a base station over a fading channel. We assume that the
base station does not coordinate the transmissions of the users and hence the
users employ random access communication. The situation is modeled as a
non-cooperative repeated game with incomplete information. Each user attempts
to minimize its long term power consumption subject to a minimum rate
requirement. We propose a two timescale stochastic gradient algorithm (TTSGA)
for tuning the users' transmission probabilities. The algorithm includes a
'waterfilling threshold update mechanism' that ensures that the rate
constraints are satisfied. We prove that under the algorithm, the users'
transmission probabilities converge to a Nash equilibrium. Moreover, we also
prove that the rate constraints are satisfied; this is also demonstrated using
simulation studies.
| [
"['Nitin Salodkar' 'Abhay Karandikar']",
"Nitin Salodkar and Abhay Karandikar"
] |
astro-ph.CO astro-ph.IM cs.LG cs.NE | null | 0908.3706 | null | null | http://arxiv.org/pdf/0908.3706v1 | 2009-08-25T23:21:39Z | 2009-08-25T23:21:39Z | Uncovering delayed patterns in noisy and irregularly sampled time
series: an astronomy application | We study the problem of estimating the time delay between two signals
representing delayed, irregularly sampled and noisy versions of the same
underlying pattern. We propose and demonstrate an evolutionary algorithm for
the (hyper)parameter estimation of a kernel-based technique in the context of
an astronomical problem, namely estimating the time delay between two
gravitationally lensed signals from a distant quasar. Mixed types (integer and
real) are used to represent variables within the evolutionary algorithm. We
test the algorithm on several artificial data sets, and also on real
astronomical observations of quasar Q0957+561. By carrying out a statistical
analysis of the results we present a detailed comparison of our method with the
most popular methods for time delay estimation in astrophysics. Our method
yields more accurate and more stable time delay estimates: for Q0957+561, we
obtain 419.6 days for the time delay between images A and B. Our methodology
can be readily applied to current state-of-the-art optical monitoring data in
astronomy, but can also be applied in other disciplines involving similar time
series data.
| [
"Juan C. Cuevas-Tello, Peter Tino, Somak Raychaudhury, Xin Yao, Markus\n Harva",
"['Juan C. Cuevas-Tello' 'Peter Tino' 'Somak Raychaudhury' 'Xin Yao'\n 'Markus Harva']"
] |
cs.LG cs.AI | null | 0908.4144 | null | null | http://arxiv.org/pdf/0908.4144v1 | 2009-08-28T07:09:19Z | 2009-08-28T07:09:19Z | ABC-LogitBoost for Multi-class Classification | We develop abc-logitboost, based on the prior work on abc-boost and robust
logitboost. Our extensive experiments on a variety of datasets demonstrate the
considerable improvement of abc-logitboost over logitboost and abc-mart.
| [
"Ping Li",
"['Ping Li']"
] |
q-bio.GN cs.IT cs.LG math.IT q-bio.QM stat.AP stat.ML | null | 0909.0400 | null | null | http://arxiv.org/pdf/0909.0400v1 | 2009-09-02T13:25:48Z | 2009-09-02T13:25:48Z | Rare-Allele Detection Using Compressed Se(que)nsing | Detection of rare variants by resequencing is important for the
identification of individuals carrying disease variants. Rapid sequencing by
new technologies enables low-cost resequencing of target regions, although it
is still prohibitive to test more than a few individuals. In order to improve
cost trade-offs, it has recently been suggested to apply pooling designs which
enable the detection of carriers of rare alleles in groups of individuals.
However, this was shown to hold only for a relatively low number of individuals
in a pool, and requires the design of pooling schemes for particular cases.
We propose a novel pooling design, based on a compressed sensing approach,
which is both general, simple and efficient. We model the experimental
procedure and show via computer simulations that it enables the recovery of
rare allele carriers out of larger groups than were possible before, especially
in situations where high coverage is obtained for each individual.
Our approach can also be combined with barcoding techniques to enhance
performance and provide a feasible solution based on current resequencing
costs. For example, when targeting a small enough genomic region (~100
base-pairs) and using only ~10 sequencing lanes and ~10 distinct barcodes, one
can recover the identity of 4 rare allele carriers out of a population of over
4000 individuals.
| [
"Noam Shental, Amnon Amir and Or Zuk",
"['Noam Shental' 'Amnon Amir' 'Or Zuk']"
] |
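A toy version of the pooling idea: carriers form a sparse 0/1 vector, each pool measures a random subset of individuals, and an l1 solver recovers the carriers from far fewer pools than individuals. Here scikit-learn's Lasso with a positivity constraint stands in for the paper's decoder; the sizes, noise level, and detection threshold are arbitrary.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, m, k = 400, 60, 3                     # individuals, pools, carriers
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = 1.0          # rare-allele carriers
Phi = rng.binomial(1, 0.5, (m, n)).astype(float)       # random pooling design
y = Phi @ x_true + rng.normal(0.0, 0.01, m)            # noisy pool measurements

lasso = Lasso(alpha=0.05, positive=True, fit_intercept=False).fit(Phi, y)
print("recovered:", np.nonzero(lasso.coef_ > 0.5)[0])
print("true:     ", np.sort(np.nonzero(x_true)[0]))
```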
cs.LG cs.IT math.IT | 10.1007/978-3-642-01805-3_4 | 0909.0635 | null | null | http://arxiv.org/abs/0909.0635v1 | 2009-09-03T12:04:57Z | 2009-09-03T12:04:57Z | Advances in Feature Selection with Mutual Information | The selection of features that are relevant for a prediction or
classification problem is an important task in many domains involving
high-dimensional data. Selecting features helps in fighting the curse of
dimensionality, in improving the performance of prediction or classification
methods, and in interpreting the application. In a nonlinear context, the mutual
information is widely used as relevance criterion for features and sets of
features. Nevertheless, it suffers from at least three major limitations:
mutual information estimators depend on smoothing parameters, there is no
theoretically justified stopping criterion in the feature selection greedy
procedure, and the estimation itself suffers from the curse of dimensionality.
This chapter shows how to deal with these problems. The first two are
addressed by using resampling techniques that provide a statistical basis to
select the estimator parameters and to stop the search procedure. The third one
is addressed by modifying the mutual information criterion into a measure of
how features are complementary (and not only informative) for the problem at
hand.
| [
"Michel Verleysen (DICE - MLG), Fabrice Rossi (LTCI), Damien\n Fran\\c{c}ois (CESAME)",
"['Michel Verleysen' 'Fabrice Rossi' 'Damien François']"
] |
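A plain greedy forward-selection loop with a mutual-information criterion, in the spirit of the procedure this chapter refines. The sketch uses an mRMR-style score (relevance minus mean redundancy) with scikit-learn's nearest-neighbor MI estimator; the chapter's resampling-based choice of estimator parameters and its stopping criterion are not implemented here.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

def greedy_mi_selection(X, y, n_select=5, random_state=0):
    """Forward selection: maximize MI(x_j; y) minus the mean MI of x_j
    with the features already selected (a redundancy penalty)."""
    relevance = mutual_info_regression(X, y, random_state=random_state)
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < n_select and remaining:
        def score(j):
            if not selected:
                return relevance[j]
            redundancy = mutual_info_regression(
                X[:, selected], X[:, j], random_state=random_state).mean()
            return relevance[j] - redundancy
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected
```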
cs.LG q-bio.QM | 10.1007/978-3-642-01805-3_6 | 0909.0638 | null | null | http://arxiv.org/abs/0909.0638v1 | 2009-09-03T12:09:08Z | 2009-09-03T12:09:08Z | Median topographic maps for biomedical data sets | Median clustering extends popular neural data analysis methods such as the
self-organizing map or neural gas to general data structures given by a
dissimilarity matrix only. This offers flexible and robust global data
inspection methods which are particularly suited for a variety of data as
occurs in biomedical domains. In this chapter, we give an overview about median
clustering and its properties and extensions, with a particular focus on
efficient implementations adapted to large scale data analysis.
| [
"['Barbara Hammer' 'Alexander Hasenfuß' 'Fabrice Rossi']",
"Barbara Hammer, Alexander Hasenfu{\\ss}, Fabrice Rossi (LTCI)"
] |
q-bio.QM cs.LG q-bio.GN | null | 0909.0737 | null | null | http://arxiv.org/pdf/0909.0737v2 | 2012-10-16T21:57:24Z | 2009-09-03T19:29:56Z | Efficient algorithms for training the parameters of hidden Markov models
using stochastic expectation maximization (EM) training and Viterbi training | Background: Hidden Markov models are widely employed by numerous
bioinformatics programs used today. Applications range widely from comparative
gene prediction to time-series analyses of micro-array data. The parameters of
the underlying models need to be adjusted for specific data sets, for example
the genome of a particular species, in order to maximize the prediction
accuracy. Computationally efficient algorithms for parameter training are thus
key to maximizing the usability of a wide range of bioinformatics applications.
Results: We introduce two computationally efficient training algorithms, one
for Viterbi training and one for stochastic expectation maximization (EM)
training, which render the memory requirements independent of the sequence
length. Unlike the existing algorithms for Viterbi and stochastic EM training
which require a two-step procedure, our two new algorithms require only one
step and scan the input sequence in only one direction. We also implement these
two new algorithms and the already published linear-memory algorithm for EM
training into the hidden Markov model compiler HMM-Converter and examine their
respective practical merits for three small example models.
Conclusions: Bioinformatics applications employing hidden Markov models can
use the two algorithms in order to make Viterbi training and stochastic EM
training more computationally efficient. Using these algorithms, parameter
training can thus be attempted for more complex models and longer training
sequences. The two new algorithms have the added advantage of being easier to
implement than the corresponding default algorithms for Viterbi training and
stochastic EM training.
| [
"['Tin Yin Lam' 'Irmtraud M. Meyer']",
"Tin Yin Lam and Irmtraud M. Meyer"
] |
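For contrast with the paper's one-step, linear-memory algorithms, here is the baseline Viterbi-training loop for a discrete-emission HMM; it stores the full backtracking table, which is exactly the sequence-length memory dependence the paper removes. Smoothing constants and the random initialization are arbitrary.

```python
import numpy as np

def viterbi(obs, log_pi, log_A, log_B):
    """Most likely state path; log_A[i, j] = log P(j | i)."""
    T, S = len(obs), len(log_pi)
    delta = np.zeros((T, S))
    psi = np.zeros((T, S), dtype=int)     # backtracking table: O(T) memory
    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        trans = delta[t - 1][:, None] + log_A
        psi[t] = trans.argmax(axis=0)
        delta[t] = trans.max(axis=0) + log_B[:, obs[t]]
    path = np.empty(T, dtype=int)
    path[-1] = delta[-1].argmax()
    for t in range(T - 2, -1, -1):
        path[t] = psi[t + 1][path[t + 1]]
    return path

def viterbi_training(obs, S, V, n_iter=20, seed=0):
    """Alternate decoding and count-based re-estimation of A and B."""
    rng = np.random.default_rng(seed)
    A = rng.dirichlet(np.ones(S), S)
    B = rng.dirichlet(np.ones(V), S)
    pi = np.full(S, 1.0 / S)              # kept fixed for simplicity
    for _ in range(n_iter):
        path = viterbi(obs, np.log(pi), np.log(A), np.log(B))
        A_c = np.full((S, S), 1e-3)       # small smoothing pseudo-counts
        B_c = np.full((S, V), 1e-3)
        for t in range(len(obs) - 1):
            A_c[path[t], path[t + 1]] += 1
        for t, o in enumerate(obs):
            B_c[path[t], o] += 1
        A = A_c / A_c.sum(axis=1, keepdims=True)
        B = B_c / B_c.sum(axis=1, keepdims=True)
    return A, B
```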
cs.AI cs.IT cs.LG math.IT | null | 0909.0801 | null | null | http://arxiv.org/pdf/0909.0801v2 | 2010-12-26T11:01:10Z | 2009-09-04T03:13:58Z | A Monte Carlo AIXI Approximation | This paper introduces a principled approach for the design of a scalable
general reinforcement learning agent. Our approach is based on a direct
approximation of AIXI, a Bayesian optimality notion for general reinforcement
learning agents. Previously, it has been unclear whether the theory of AIXI
could motivate the design of practical algorithms. We answer this hitherto open
question in the affirmative, by providing the first computationally feasible
approximation to the AIXI agent. To develop our approximation, we introduce a
new Monte-Carlo Tree Search algorithm along with an agent-specific extension to
the Context Tree Weighting algorithm. Empirically, we present a set of
encouraging results on a variety of stochastic and partially observable
domains. We conclude by proposing a number of directions for future research.
| [
"Joel Veness and Kee Siong Ng and Marcus Hutter and William Uther and\n David Silver",
"['Joel Veness' 'Kee Siong Ng' 'Marcus Hutter' 'William Uther'\n 'David Silver']"
] |
cs.LG math.ST stat.TH | null | 0909.0844 | null | null | http://arxiv.org/pdf/0909.0844v1 | 2009-09-04T09:43:38Z | 2009-09-04T09:43:38Z | High-Dimensional Non-Linear Variable Selection through Hierarchical
Kernel Learning | We consider the problem of high-dimensional non-linear variable selection for
supervised learning. Our approach is based on performing linear selection among
exponentially many appropriately defined positive definite kernels that
characterize non-linear interactions between the original variables. To select
efficiently from these many kernels, we use the natural hierarchical structure
of the problem to extend the multiple kernel learning framework to kernels that
can be embedded in a directed acyclic graph; we show that it is then possible
to perform kernel selection through a graph-adapted sparsity-inducing norm, in
polynomial time in the number of selected kernels. Moreover, we study the
consistency of variable selection in high-dimensional settings, showing that
under certain assumptions, our regularization framework allows a number of
irrelevant variables which is exponential in the number of observations. Our
simulations on synthetic datasets and datasets from the UCI repository show
state-of-the-art predictive performance for non-linear regression problems.
| [
"['Francis Bach']",
"Francis Bach (INRIA Rocquencourt)"
] |
cs.CG cs.DS cs.LG | null | 0909.1062 | null | null | http://arxiv.org/pdf/0909.1062v4 | 2010-09-15T00:54:09Z | 2009-09-05T23:24:32Z | New Approximation Algorithms for Minimum Enclosing Convex Shapes | Given $n$ points in a $d$ dimensional Euclidean space, the Minimum Enclosing
Ball (MEB) problem is to find the ball with the smallest radius which contains
all $n$ points. We give a $O(nd\mathcal{Q}/\sqrt{\epsilon})$ approximation algorithm
for producing an enclosing ball whose radius is at most $\epsilon$ away from
the optimum (where $\mathcal{Q}$ is an upper bound on the norm of the points). This
improves existing results using \emph{coresets}, which yield a $O(nd/\epsilon)$
greedy algorithm. Finding the Minimum Enclosing Convex Polytope (MECP) is a
related problem wherein a convex polytope of a fixed shape is given and the aim
is to find the smallest magnification of the polytope which encloses the given
points. For this problem we present a $O(mnd\mathcal{Q}/\epsilon)$ approximation
algorithm, where $m$ is the number of faces of the polytope. Our algorithms
borrow heavily from convex duality and recently developed techniques in
non-smooth optimization, and are in contrast with existing methods which rely
on geometric arguments. In particular, we specialize the excessive gap
framework of \citet{Nesterov05a} to obtain our results.
| [
"['Ankan Saha' 'S. V. N. Vishwanathan' 'Xinhua Zhang']",
"Ankan Saha (1), S.V.N. Vishwanathan (2), Xinhua Zhang (3) ((1)\n University of Chicago, (2) Purdue University, (3) University of Alberta)"
] |
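For reference, the coreset-style greedy iteration this paper contrasts with (Badoiu-Clarkson) takes a few lines: repeatedly nudge the center toward the current farthest point. The O(1/eps^2) iteration count used below is the textbook variant; sharper variants exist.

```python
import numpy as np

def meb_badoiu_clarkson(X, eps=0.01):
    """(1+eps)-approximate minimum enclosing ball by moving the center
    a 1/(i+1) step toward the farthest point at iteration i."""
    c = X[0].astype(float)
    for i in range(1, int(np.ceil(1.0 / eps ** 2)) + 1):
        far = X[np.argmax(np.linalg.norm(X - c, axis=1))]
        c = c + (far - c) / (i + 1)
    r = np.linalg.norm(X - c, axis=1).max()
    return c, r
```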
cs.LG cs.CL | 10.1109/JSTSP.2010.2076150 | 0909.1308 | null | null | http://arxiv.org/abs/0909.1308v2 | 2010-01-03T16:48:14Z | 2009-09-07T18:48:42Z | Efficient Learning of Sparse Conditional Random Fields for Supervised
Sequence Labelling | Conditional Random Fields (CRFs) constitute a popular and efficient approach
for supervised sequence labelling. CRFs can cope with large description spaces
and can integrate some form of structural dependency between labels. In this
contribution, we address the issue of efficient feature selection for CRFs
based on imposing sparsity through an L1 penalty. We first show how sparsity of
the parameter set can be exploited to significantly speed up training and
labelling. We then introduce coordinate descent parameter update schemes for
CRFs with L1 regularization. We finally provide some empirical comparisons of
the proposed approach with state-of-the-art CRF training strategies. In
particular, it is shown that the proposed approach is able to take advantage of
the sparsity to speed up processing and hence potentially handle larger
dimensional models.
| [
"Nataliya Sokolovska (LTCI), Thomas Lavergne (LIMSI), Olivier Capp\\'e\n (LTCI), Fran\\c{c}ois Yvon (LIMSI)",
"['Nataliya Sokolovska' 'Thomas Lavergne' 'Olivier Cappé' 'François Yvon']"
] |
cs.LG cs.AI cs.DS | null | 0909.1334 | null | null | http://arxiv.org/pdf/0909.1334v2 | 2009-09-08T22:00:12Z | 2009-09-07T20:58:47Z | Lower Bounds for BMRM and Faster Rates for Training SVMs | Regularized risk minimization with the binary hinge loss and its variants
lies at the heart of many machine learning problems. Bundle methods for
regularized risk minimization (BMRM) and the closely related SVMStruct are
considered the best general purpose solvers to tackle this problem. It was
recently shown that BMRM requires $O(1/\epsilon)$ iterations to converge to an
$\epsilon$ accurate solution. In the first part of the paper we use the
Hadamard matrix to construct a regularized risk minimization problem and show
that these rates cannot be improved. We then show how one can exploit the
structure of the objective function to devise an algorithm for the binary hinge
loss which converges to an $\epsilon$ accurate solution in
$O(1/\sqrt{\epsilon})$ iterations.
| [
"Ankan Saha (1), Xinhua Zhang (2), S.V.N. Vishwanathan (3) ((1)\n University of Chicago, (2) Australian National University, NICTA, (3) Purdue\n University)",
"['Ankan Saha' 'Xinhua Zhang' 'S. V. N. Vishwanathan']"
] |
cs.DB cs.LG | null | 0909.1776 | null | null | http://arxiv.org/pdf/0909.1776v1 | 2009-09-09T18:10:07Z | 2009-09-09T18:10:07Z | Sailing the Information Ocean with Awareness of Currents: Discovery and
Application of Source Dependence | The Web has enabled the availability of a huge amount of useful information,
but has also eased the ability to spread false information and rumors across
multiple sources, making it hard to distinguish between what is true and what
is not. Recent examples include the premature Steve Jobs obituary, the second
bankruptcy of United airlines, the creation of Black Holes by the operation of
the Large Hadron Collider, etc. Since it is important to permit the expression
of dissenting and conflicting opinions, it would be a fallacy to try to ensure
that the Web provides only consistent information. However, to help in
separating the wheat from the chaff, it is essential to be able to determine
dependence between sources. Given the huge number of data sources and the vast
volume of conflicting data available on the Web, doing so in a scalable manner
is extremely challenging and has not been addressed by existing work yet.
In this paper, we present a set of research problems and propose some
preliminary solutions on the issues involved in discovering dependence between
sources. We also discuss how this knowledge can benefit a variety of
technologies, such as data integration and Web 2.0, that help users manage and
access the totality of the available information from various sources.
| [
"['Laure Berti-Equille' 'Anish Das Sarma' 'Xin' 'Dong' 'Amelie Marian'\n 'Divesh Srivastava']",
"Laure Berti-Equille (Universite de Rennes 1), Anish Das Sarma\n (Stanford University), Xin (Luna) Dong (AT&T Labs-Research), Amelie Marian\n (Rutgus University), Divesh Srivastava (ATT Labs-Research)"
] |
cs.LG math.ST stat.ML stat.TH | null | 0909.1933 | null | null | http://arxiv.org/pdf/0909.1933v2 | 2010-06-04T08:43:38Z | 2009-09-10T11:51:10Z | Chromatic PAC-Bayes Bounds for Non-IID Data: Applications to Ranking and
Stationary $\beta$-Mixing Processes | Pac-Bayes bounds are among the most accurate generalization bounds for
classifiers learned from independently and identically distributed (IID) data,
and it is particularly so for margin classifiers: there have been recent
contributions showing how practical these bounds can be either to perform model
selection (Ambroladze et al., 2007) or even to directly guide the learning of
linear classifiers (Germain et al., 2009). However, there are many practical
situations where the training data show some dependencies and where the
traditional IID assumption does not hold. Stating generalization bounds for
such frameworks is therefore of the utmost interest, both from theoretical and
practical standpoints. In this work, we propose the first - to the best of our
knowledge - Pac-Bayes generalization bounds for classifiers trained on data
exhibiting interdependencies. The approach undertaken to establish our results
is based on the decomposition of a so-called dependency graph that encodes the
dependencies within the data, in sets of independent data, thanks to graph
fractional covers. Our bounds are very general, since being able to find an
upper bound on the fractional chromatic number of the dependency graph is
sufficient to get new Pac-Bayes bounds for specific settings. We show how our
results can be used to derive bounds for ranking statistics (such as Auc) and
classifiers trained on data distributed according to a stationary $\beta$-mixing
process. Along the way, we show how our approach seamlessly allows us to deal with
U-processes. As a side note, we also provide a Pac-Bayes generalization bound
for classifiers learned on data from stationary $\varphi$-mixing distributions.
| [
"Liva Ralaivola (LIF), Marie Szafranski (IBISC), Guillaume Stempfel\n (LIF)",
"['Liva Ralaivola' 'Marie Szafranski' 'Guillaume Stempfel']"
] |
cs.DS cs.DB cs.LG | null | 0909.2194 | null | null | http://arxiv.org/pdf/0909.2194v1 | 2009-09-11T15:32:03Z | 2009-09-11T15:32:03Z | Approximate Nearest Neighbor Search through Comparisons | This paper addresses the problem of finding the nearest neighbor (or one of
the R-nearest neighbors) of a query object q in a database of n objects. In
contrast with most existing approaches, we can only access the ``hidden'' space
in which the objects live through a similarity oracle. The oracle, given two
reference objects and a query object, returns the reference object closest to
the query object. The oracle attempts to model the behavior of human users,
capable of making statements about similarity, but not of assigning meaningful
numerical values to distances between objects.
| [
"['Dominique Tschopp' 'Suhas Diggavi']",
"Dominique Tschopp, Suhas Diggavi"
] |
cs.IT cs.LG math.IT math.ST stat.TH | 10.1109/TIT.2011.2104670 | 0909.2234 | null | null | http://arxiv.org/abs/0909.2234v3 | 2010-09-09T06:56:44Z | 2009-09-11T18:35:52Z | Universal and Composite Hypothesis Testing via Mismatched Divergence | For the universal hypothesis testing problem, where the goal is to decide
between the known null hypothesis distribution and some other unknown
distribution, Hoeffding proposed a universal test in the nineteen sixties.
Hoeffding's universal test statistic can be written in terms of
Kullback-Leibler (K-L) divergence between the empirical distribution of the
observations and the null hypothesis distribution. In this paper a modification
of Hoeffding's test is considered based on a relaxation of the K-L divergence
test statistic, referred to as the mismatched divergence. The resulting
mismatched test is shown to be a generalized likelihood-ratio test (GLRT) for
the case where the alternate distribution lies in a parametric family of the
distributions characterized by a finite dimensional parameter, i.e., it is a
solution to the corresponding composite hypothesis testing problem. For certain
choices of the alternate distribution, it is shown that both the Hoeffding test
and the mismatched test have the same asymptotic performance in terms of error
exponents. A consequence of this result is that the GLRT is optimal in
differentiating a particular distribution from others in an exponential family.
It is also shown that the mismatched test has a significant advantage over the
Hoeffding test in terms of finite sample size performance. This advantage is
due to the difference in the asymptotic variances of the two test statistics
under the null hypothesis. In particular, the variance of the K-L divergence
grows linearly with the alphabet size, making the test impractical for
applications involving large alphabet distributions. The variance of the
mismatched divergence on the other hand grows linearly with the dimension of
the parameter space, and can hence be controlled through a prudent choice of
the function class defining the mismatched divergence.
| [
"Jayakrishnan Unnikrishnan, Dayu Huang, Sean Meyn, Amit Surana and\n Venugopal Veeravalli",
"['Jayakrishnan Unnikrishnan' 'Dayu Huang' 'Sean Meyn' 'Amit Surana'\n 'Venugopal Veeravalli']"
] |
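Hoeffding's statistic is one line once the empirical type is in hand: n times the K-L divergence from the empirical distribution to the null. A small sketch follows (the alternative used to generate data and the sample size are invented); the paper's mismatched divergence replaces the K-L term with a relaxation over a finite-dimensional function class.

```python
import numpy as np

def hoeffding_statistic(counts, pi0):
    """n * KL(empirical || pi0), the universal test statistic."""
    n = counts.sum()
    p_hat = counts / n
    mask = p_hat > 0
    return n * np.sum(p_hat[mask] * np.log(p_hat[mask] / pi0[mask]))

rng = np.random.default_rng(1)
pi0 = np.array([0.5, 0.3, 0.2])                        # null hypothesis
sample = rng.choice(3, size=1000, p=[0.4, 0.35, 0.25]) # data from elsewhere
counts = np.bincount(sample, minlength=3)
print(hoeffding_statistic(counts, pi0))  # compare against a chosen threshold
```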
cs.LG cs.CC | null | 0909.2927 | null | null | http://arxiv.org/pdf/0909.2927v1 | 2009-09-16T06:19:12Z | 2009-09-16T06:19:12Z | Distribution-Specific Agnostic Boosting | We consider the problem of boosting the accuracy of weak learning algorithms
in the agnostic learning framework of Haussler (1992) and Kearns et al. (1992).
Known algorithms for this problem (Ben-David et al., 2001; Gavinsky, 2002;
Kalai et al., 2008) follow the same strategy as boosting algorithms in the PAC
model: the weak learner is executed on the same target function but over
different distributions on the domain. We demonstrate boosting algorithms for
the agnostic learning framework that only modify the distribution on the labels
of the points (or, equivalently, modify the target function). This allows
boosting a distribution-specific weak agnostic learner to a strong agnostic
learner with respect to the same distribution.
When applied to the weak agnostic parity learning algorithm of Goldreich and
Levin (1989) our algorithm yields a simple PAC learning algorithm for DNF and
an agnostic learning algorithm for decision trees over the uniform distribution
using membership queries. These results substantially simplify Jackson's famous
DNF learning algorithm (1994) and the recent result of Gopalan et al. (2008).
We also strengthen the connection to hard-core set constructions discovered
by Klivans and Servedio (1999) by demonstrating that hard-core set
constructions that achieve the optimal hard-core set size (given by Holenstein
(2005) and Barak et al. (2009)) imply distribution-specific agnostic boosting
algorithms. Conversely, our boosting algorithm gives a simple hard-core set
construction with an (almost) optimal hard-core set size.
| [
"['Vitaly Feldman']",
"Vitaly Feldman"
] |
cs.LG cs.AI | null | 0909.2934 | null | null | http://arxiv.org/pdf/0909.2934v1 | 2009-09-16T07:08:44Z | 2009-09-16T07:08:44Z | A Convergent Online Single Time Scale Actor Critic Algorithm | Actor-Critic based approaches were among the first to address reinforcement
learning in a general setting. Recently, these algorithms have gained renewed
interest due to their generality, good convergence properties, and possible
biological relevance. In this paper, we introduce an online temporal difference
based actor-critic algorithm which is proved to converge to a neighborhood of a
local maximum of the average reward. Linear function approximation is used by
the critic in order to estimate the value function, and the temporal difference
signal, which is passed from the critic to the actor. The main distinguishing
feature of the present convergence proof is that both the actor and the critic
operate on a similar time scale, while in most current convergence proofs they
are required to have very different time scales in order to converge. Moreover,
the same temporal difference signal is used to update the parameters of both
the actor and the critic. A limitation of the proposed approach, compared to
results available for two time scale convergence, is that convergence is
guaranteed only to a neighborhood of an optimal value, rather than to an optimal
value itself. The single time scale and identical temporal difference signal
used by the actor and the critic, may provide a step towards constructing more
biologically realistic models of reinforcement learning in the brain.
| [
"D. Di Castro and R. Meir",
"['D. Di Castro' 'R. Meir']"
] |
cs.CV cs.LG | 10.1109/ICCVW.2009.5457695 | 0909.3123 | null | null | http://arxiv.org/abs/0909.3123v1 | 2009-09-16T23:09:16Z | 2009-09-16T23:09:16Z | Median K-flats for hybrid linear modeling with many outliers | We describe the Median K-Flats (MKF) algorithm, a simple online method for
hybrid linear modeling, i.e., for approximating data by a mixture of flats.
This algorithm simultaneously partitions the data into clusters while finding
their corresponding best approximating l1 d-flats, so that the cumulative l1
error is minimized. The current implementation restricts d-flats to be
d-dimensional linear subspaces. It requires a negligible amount of storage, and
its complexity, when modeling data consisting of N points in D-dimensional
Euclidean space with K d-dimensional linear subspaces, is of order O(n K d D+n
d^2 D), where n is the number of iterations required for convergence
(empirically on the order of 10^4). Since it is an online algorithm, data can
be supplied to it incrementally and it can incrementally produce the
corresponding output. The performance of the algorithm is carefully evaluated
using synthetic and real data.
| [
"Teng Zhang, Arthur Szlam and Gilad Lerman",
"['Teng Zhang' 'Arthur Szlam' 'Gilad Lerman']"
] |
cs.LG cs.AI | null | 0909.3593 | null | null | http://arxiv.org/pdf/0909.3593v2 | 2010-09-25T02:27:46Z | 2009-09-19T16:10:19Z | Exploiting Unlabeled Data to Enhance Ensemble Diversity | Ensemble learning aims to improve generalization ability by using multiple
base learners. It is well-known that to construct a good ensemble, the base
learners should be accurate as well as diverse. In this paper, unlabeled data
is exploited to facilitate ensemble learning by helping augment the diversity
among the base learners. Specifically, a semi-supervised ensemble method named
UDEED is proposed. Unlike existing semi-supervised ensemble methods where
error-prone pseudo-labels are estimated for unlabeled data to enlarge the
labeled data to improve accuracy, UDEED works by maximizing accuracies of base
learners on labeled data while maximizing diversity among them on unlabeled
data. Experiments show that UDEED can effectively utilize unlabeled data for
ensemble learning and is highly competitive with well-established semi-supervised
ensemble methods.
| [
"Min-Ling Zhang and Zhi-Hua Zhou",
"['Min-Ling Zhang' 'Zhi-Hua Zhou']"
] |
cs.LG cs.CV | null | 0909.3606 | null | null | http://arxiv.org/pdf/0909.3606v1 | 2009-09-19T23:46:21Z | 2009-09-19T23:46:21Z | Extension of Path Probability Method to Approximate Inference over Time | There has been a tremendous growth in publicly available digital video
footage over the past decade. This has necessitated the development of new
techniques in computer vision geared towards efficient analysis, storage and
retrieval of such data. Many mid-level computer vision tasks such as
segmentation, object detection, tracking, etc. involve an inference problem
based on the video data available. Video data has a high degree of spatial and
temporal coherence. The property must be intelligently leveraged in order to
obtain better results.
Graphical models, such as Markov Random Fields, have emerged as a powerful
tool for such inference problems. They are naturally suited for expressing the
spatial dependencies present in video data. It is, however, not clear how to
extend the existing techniques to the problem of inference over time. This
thesis explores the Path Probability Method, a variational technique in
statistical mechanics, in the context of graphical models and approximate
inference problems. It extends the method to a general framework for problems
involving inference in time, resulting in an algorithm, \emph{DynBP}. We
explore the relation of the algorithm with existing techniques, and find the
algorithm competitive with existing approaches.
The main contributions of this thesis are the extended GBP algorithm, the
extension of Path Probability Methods to the DynBP algorithm and the
relationship between them. We have also explored some applications in computer
vision involving temporal evolution with promising results.
| [
"Vinay Jethava",
"['Vinay Jethava']"
] |
cs.LG | null | 0909.3609 | null | null | http://arxiv.org/pdf/0909.3609v1 | 2009-09-19T23:40:10Z | 2009-09-19T23:40:10Z | Randomized Algorithms for Large scale SVMs | We propose a randomized algorithm for training Support vector machines(SVMs)
on large datasets. By using ideas from random projections we show that the
combinatorial dimension of SVMs is $O(\log n)$ with high probability. This
estimate of combinatorial dimension is used to derive an iterative algorithm,
called RandSVM, which at each step calls an existing solver to train SVMs on a
randomly chosen subset of size $O(\log n)$. The algorithm has probabilistic
guarantees and is capable of training SVMs with Kernels for both classification
and regression problems. Experiments done on synthetic and real life data sets
demonstrate that the algorithm scales up existing SVM learners, without loss of
accuracy.
| [
"['Vinay Jethava' 'Krishnan Suresh' 'Chiranjib Bhattacharyya'\n 'Ramesh Hariharan']",
"Vinay Jethava, Krishnan Suresh, Chiranjib Bhattacharyya, Ramesh\n Hariharan"
] |
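A hedged sketch of the subset-resampling idea: train on a small random subset, then repeatedly fold in a sample of points the current model misclassifies and retrain. RandSVM's specific $O(\log n)$ subset sizes and probabilistic guarantees are not reproduced by this simplified loop.

```python
import numpy as np
from sklearn.svm import SVC

def randsvm_sketch(X, y, subset_size=200, n_rounds=10, seed=0):
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), min(subset_size, len(X)), replace=False)
    clf = SVC(kernel="rbf").fit(X[idx], y[idx])
    for _ in range(n_rounds):
        wrong = np.nonzero(clf.predict(X) != y)[0]   # current violators
        if wrong.size == 0:
            break
        extra = rng.choice(wrong, min(subset_size, wrong.size), replace=False)
        idx = np.unique(np.concatenate([idx, extra]))
        clf = SVC(kernel="rbf").fit(X[idx], y[idx])  # retrain on enlarged subset
    return clf
```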
math.PR cs.IT cs.LG math.IT math.ST stat.ML stat.TH | null | 0909.4588 | null | null | http://arxiv.org/pdf/0909.4588v1 | 2009-09-25T02:57:17Z | 2009-09-25T02:57:17Z | Discrete MDL Predicts in Total Variation | The Minimum Description Length (MDL) principle selects the model that has the
shortest code for data plus model. We show that for a countable class of
models, MDL predictions are close to the true distribution in a strong sense.
The result is completely general. No independence, ergodicity, stationarity,
identifiability, or other assumption on the model class needs to be made. More
formally, we show that for any countable class of models, the distributions
selected by MDL (or MAP) asymptotically predict (merge with) the true measure
in the class in total variation distance. Implications for non-i.i.d. domains
like time-series forecasting, discriminative learning, and reinforcement
learning are discussed.
| [
"Marcus Hutter",
"['Marcus Hutter']"
] |
cs.LG | null | 0909.4603 | null | null | http://arxiv.org/pdf/0909.4603v1 | 2009-09-25T05:23:33Z | 2009-09-25T05:23:33Z | Scalable Inference for Latent Dirichlet Allocation | We investigate the problem of learning a topic model - the well-known Latent
Dirichlet Allocation - in a distributed manner, using a cluster of C processors
and dividing the corpus to be learned equally among them. We propose a simple
approximated method that can be tuned, trading speed for accuracy according to
the task at hand. Our approach is asynchronous, and therefore suitable for
clusters of heterogeneous machines.
| [
"['James Petterson' 'Tiberio Caetano']",
"James Petterson, Tiberio Caetano"
] |
cs.LG cs.NA cs.NE | null | 0909.5000 | null | null | http://arxiv.org/pdf/0909.5000v1 | 2009-09-28T04:25:03Z | 2009-09-28T04:25:03Z | Eignets for function approximation on manifolds | Let $\mathbb{X}$ be a compact, smooth, connected, Riemannian manifold without
boundary, $G:\mathbb{X}\times\mathbb{X}\to \mathbb{R}$ be a kernel. Analogous to a radial basis
function network, an eignet is an expression of the form $\sum_{j=1}^M
a_jG(\circ,y_j)$, where $a_j\in\mathbb{R}$, $y_j\in\mathbb{X}$, $1\le j\le M$. We describe a
deterministic, universal algorithm for constructing an eignet for approximating
functions in $L^p(\mu;\mathbb{X})$ for a general class of measures $\mu$ and kernels
$G$. Our algorithm yields linear operators. Using the minimal separation
amongst the centers $y_j$ as the cost of approximation, we give modulus of
smoothness estimates for the degree of approximation by our eignets, and show
by means of a converse theorem that these are the best possible for every
\emph{individual function}. We also give estimates on the coefficients $a_j$ in
terms of the norm of the eignet. Finally, we demonstrate that if any sequence
of eignets satisfies the optimal estimates for the degree of approximation of a
smooth function, measured in terms of the minimal separation, then the
derivatives of the eignets also approximate the corresponding derivatives of
the target function in an optimal manner.
| [
"H. N. Mhaskar",
"['H. N. Mhaskar']"
] |
cs.CC cs.LG | 10.4086/toc.2014.v010a001 | 0909.5175 | null | null | http://arxiv.org/abs/0909.5175v4 | 2009-11-09T23:23:19Z | 2009-09-28T19:55:46Z | Bounding the Sensitivity of Polynomial Threshold Functions | We give the first non-trivial upper bounds on the average sensitivity and
noise sensitivity of polynomial threshold functions. More specifically, for a
Boolean function f on n variables equal to the sign of a real, multivariate
polynomial of total degree d we prove
1) The average sensitivity of f is at most O(n^{1-1/(4d+6)}) (we also give a
combinatorial proof of the bound O(n^{1-1/2^d})).
2) The noise sensitivity of f with noise rate \delta is at most
O(\delta^{1/(4d+6)}).
Previously, only bounds for the linear case were known. Along the way we show
new structural theorems about random restrictions of polynomial threshold
functions obtained via hypercontractivity. These structural results may be of
independent interest as they provide a generic template for transforming
problems related to polynomial threshold functions defined on the Boolean
hypercube to polynomial threshold functions defined in Gaussian space.
| [
"['Prahladh Harsha' 'Adam Klivans' 'Raghu Meka']",
"Prahladh Harsha, Adam Klivans, Raghu Meka"
] |
cs.LG cs.IT math.IT | null | 0909.5457 | null | null | http://arxiv.org/pdf/0909.5457v3 | 2009-10-19T21:21:57Z | 2009-09-30T14:44:54Z | Guaranteed Rank Minimization via Singular Value Projection | Minimizing the rank of a matrix subject to affine constraints is a
fundamental problem with many important applications in machine learning and
statistics. In this paper we propose a simple and fast algorithm SVP (Singular
Value Projection) for rank minimization with affine constraints (ARMP) and show
that SVP recovers the minimum rank solution for affine constraints that satisfy
the "restricted isometry property" and show robustness of our method to noise.
Our results improve upon a recent breakthrough by Recht, Fazel and Parrilo
(RFP07) and Lee and Bresler (LB09) in three significant ways:
1) our method (SVP) is significantly simpler to analyze and easier to
implement,
2) we give recovery guarantees under strictly weaker isometry assumptions, and
3) we give geometric convergence guarantees for SVP even in presence of noise
and, as demonstrated empirically, SVP is significantly faster on real-world and
synthetic problems.
In addition, we address the practically important problem of low-rank matrix
completion (MCP), which can be seen as a special case of ARMP. We empirically
demonstrate that our algorithm recovers low-rank incoherent matrices from an
almost optimal number of uniformly sampled entries. We make partial progress
towards proving exact recovery and provide some intuition for the strong
performance of SVP applied to matrix completion by showing a more restricted
isometry property. Our algorithm outperforms existing methods, such as those of
\cite{RFP07,CR08,CT09,CCS08,KOM09,LB09}, for ARMP and the matrix-completion
problem by an order of magnitude and is also significantly more robust to
noise.
| [
"Raghu Meka, Prateek Jain, Inderjit S. Dhillon",
"['Raghu Meka' 'Prateek Jain' 'Inderjit S. Dhillon']"
] |
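The SVP iteration for the matrix-completion special case is short enough to sketch in full: a gradient step on the observed entries followed by projection onto rank-k matrices via truncated SVD. The step size and iteration count are arbitrary, and a dense SVD is used where a truncated solver would be preferred at scale.

```python
import numpy as np

def svp_complete(M_obs, mask, rank, step=1.0, n_iter=200):
    """Singular Value Projection for matrix completion: minimize
    0.5*||P_Omega(X - M)||_F^2 over rank-<=k matrices by projected
    gradient descent; P_Omega keeps only the observed entries."""
    X = np.zeros_like(M_obs)
    for _ in range(n_iter):
        Y = X - step * mask * (X - M_obs)            # gradient step
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = (U[:, :rank] * s[:rank]) @ Vt[:rank]     # rank-k projection
    return X

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 3)) @ rng.normal(size=(3, 40))  # rank-3 ground truth
mask = (rng.random(A.shape) < 0.4).astype(float)         # observe ~40% entries
X = svp_complete(mask * A, mask, rank=3)
err = np.linalg.norm((1 - mask) * (X - A)) / np.linalg.norm((1 - mask) * A)
print(f"relative error on unobserved entries: {err:.3f}")
```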
cs.DS cs.DB cs.LG | null | 0910.0112 | null | null | http://arxiv.org/pdf/0910.0112v2 | 2010-02-17T09:32:14Z | 2009-10-01T09:02:54Z | Finding Associations and Computing Similarity via Biased Pair Sampling | This version is ***superseded*** by a full version that can be found at
http://www.itu.dk/people/pagh/papers/mining-jour.pdf, which contains stronger
theoretical results and fixes a mistake in the reporting of experiments.
Abstract: Sampling-based methods have previously been proposed for the
problem of finding interesting associations in data, even for low-support
items. While these methods do not guarantee precise results, they can be vastly
more efficient than approaches that rely on exact counting. However, for many
similarity measures no such methods have been known. In this paper we show how
a wide variety of measures can be supported by a simple biased sampling method.
The method also extends to find high-confidence association rules. We
demonstrate theoretically that our method is superior to exact methods when the
threshold for "interesting similarity/confidence" is above the average pairwise
similarity/confidence, and the average support is not too low. Our method is
particularly good when transactions contain many items. We confirm in
experiments on standard association mining benchmarks that this gives a
significant speedup on real data sets (sometimes much larger than the
theoretical guarantees). Reductions in computation time of over an order of
magnitude, and significant savings in space, are observed.
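The abstract does not spell out the sampling distribution, but the general flavor of biased pair sampling can be sketched as follows: draw item pairs uniformly from within transactions, so that a pair's sampling probability is proportional to its co-occurrence count, and convert sample counts into similarity estimates. The cosine measure and all names below are illustrative choices, not the paper's exact scheme.

```python
import random
from collections import Counter
from math import sqrt

def sample_pair_similarities(transactions, n_samples=100_000):
    """Estimate pairwise item similarities by biased pair sampling (sketch).

    Transactions are weighted by the number of pairs they contain, so a pair's
    sampling probability is proportional to its co-occurrence count: frequent
    ("interesting") pairs surface without counting all O(n^2) pairs exactly.
    """
    weights = [len(t) * (len(t) - 1) // 2 for t in transactions]
    total_pairs = sum(weights)
    support = Counter(item for t in transactions for item in t)

    hits = Counter()
    for _ in range(n_samples):
        (t,) = random.choices(transactions, weights=weights)
        a, b = random.sample(sorted(t), 2)
        hits[frozenset((a, b))] += 1

    # Estimated co-occurrence count, then cosine similarity, per sampled pair
    sims = {}
    for pair, c in hits.items():
        a, b = tuple(pair)
        est_cooc = c / n_samples * total_pairs
        sims[(a, b)] = est_cooc / sqrt(support[a] * support[b])
    return sims
```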
| [
"['Andrea Campagna' 'Rasmus Pagh']",
"Andrea Campagna and Rasmus Pagh"
] |
cs.IT cs.LG math.IT | null | 0910.0239 | null | null | http://arxiv.org/pdf/0910.0239v2 | 2010-01-25T15:40:48Z | 2009-10-01T19:49:36Z | Compressed Blind De-convolution | Suppose the signal x is realized by driving a k-sparse signal u through an
arbitrary unknown stable discrete-linear time invariant system H. These types
of processes arise naturally in Reflection Seismology. In this paper we are
interested in several problems: (a) Blind-Deconvolution: Can we recover both
the filter $H$ and the sparse signal $u$ from noisy measurements? (b)
Compressive Sensing: Is x compressible in the conventional sense of compressed
sensing? Namely, can x, u, and H be reconstructed from a sparse set of
measurements? We develop novel L1 minimization methods to solve both cases and
establish sufficient conditions for exact recovery for the case when the
unknown system H is auto-regressive (i.e. all pole) of a known order. In the
compressed sensing/sampling setting it turns out that both H and x can be
reconstructed from O(k log(n)) measurements under certain technical conditions
on the support structure of u. Our main idea is to pass x through a linear time
invariant system G and collect O(k log(n)) sequential measurements. The filter
G is chosen suitably, namely, its associated Toeplitz matrix satisfies the RIP
property. We develop a novel LP optimization algorithm and show that both the
unknown filter H and the sparse input u can be reliably estimated.
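The full blind-deconvolution LP is more involved than can be reproduced from the abstract, but its l1-minimization building block — basis pursuit recovery of a sparse signal from RIP-style linear measurements — can be sketched with cvxpy. Here `Phi` stands in for the measurement map induced by the chosen filter G; dimensions and sparsity level are illustrative.

```python
import cvxpy as cp
import numpy as np

# Basis pursuit: recover a k-sparse vector u from y = Phi @ u.
n, m, k = 200, 60, 5
rng = np.random.default_rng(0)
Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # RIP-style random map
u_true = np.zeros(n)
u_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = Phi @ u_true

u = cp.Variable(n)
prob = cp.Problem(cp.Minimize(cp.norm1(u)), [Phi @ u == y])
prob.solve()
print("recovery error:", np.linalg.norm(u.value - u_true))
```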
| [
"V. Saligrama, M. Zhao",
"['V. Saligrama' 'M. Zhao']"
] |
cs.LG | 10.1109/ICDMW.2008.87 | 0910.0349 | null | null | http://arxiv.org/abs/0910.0349v1 | 2009-10-02T08:40:01Z | 2009-10-02T08:40:01Z | Post-Processing of Discovered Association Rules Using Ontologies | In Data Mining, the usefulness of association rules is strongly limited by
the huge amount of delivered rules. In this paper we propose a new approach to
prune and filter discovered rules. Using Domain Ontologies, we strengthen the
integration of user knowledge in the post-processing task. Furthermore, an
interactive and iterative framework is designed to assist the user throughout
the analysis task. On the one hand, we represent user domain knowledge using a
Domain Ontology over the database. On the other hand, a novel technique is
suggested to prune and filter discovered rules. The proposed framework was
applied successfully to the client database provided by Nantes Habitat.
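A toy, purely hypothetical sketch of ontology-based pruning in this spirit: a rule is kept only if at least one of its items falls outside every concept the user marked as uninteresting. The housing-themed ontology and all names below are invented for illustration; the paper's interactive framework is considerably richer.

```python
# Toy ontology: item/concept -> parent concept
ancestors = {
    "apartment_2br": "apartment", "apartment_3br": "apartment",
    "apartment": "housing", "parking_spot": "housing",
}

def concepts_of(item):
    """All ancestor concepts of an item, walking up the toy ontology."""
    out = set()
    while item in ancestors:
        item = ancestors[item]
        out.add(item)
    return out

def prune(rules, uninteresting):
    """Keep a rule only if some item lies outside every pruned concept."""
    kept = []
    for antecedent, consequent in rules:
        items = set(antecedent) | set(consequent)
        if any(not (concepts_of(i) & uninteresting) for i in items):
            kept.append((antecedent, consequent))
    return kept

rules = [({"apartment_2br"}, {"parking_spot"}),
         ({"apartment_3br"}, {"apartment_2br"})]
print(prune(rules, uninteresting={"apartment"}))  # keeps only the first rule
```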
| [
"Claudia Marinica (LINA), Fabrice Guillet (LINA), Henri Briand (LINA)",
"['Claudia Marinica' 'Fabrice Guillet' 'Henri Briand']"
] |
stat.ML cs.LG stat.AP | null | 0910.0483 | null | null | http://arxiv.org/pdf/0910.0483v1 | 2009-10-05T19:43:40Z | 2009-10-05T19:43:40Z | Statistical Decision Making for Authentication and Intrusion Detection | User authentication and intrusion detection differ from standard
classification problems in that while we have data generated from legitimate
users, impostor or intrusion data is scarce or non-existent. We review existing
techniques for dealing with this problem and propose a novel alternative based
on a principled statistical decision-making viewpoint. We examine the
technique on a toy problem and validate it on complex real-world data from an
RFID based access control system. The results indicate that it can
significantly outperform the classical world model approach. The method could
be more generally useful in other decision-making scenarios where there is a
lack of adversary data.
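A minimal sketch (not the paper's method) of the decision-theoretic flavor described: fit a density to legitimate-user data only, then accept or reject by comparing its likelihood against an assumed reference "impostor" density with a cost-derived threshold. The Gaussian user model, uniform impostor density, and cost ratio below are all assumptions.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(1)
user_data = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(500, 2))

# Density model fit to legitimate-user data only
mu = user_data.mean(axis=0)
cov = np.cov(user_data, rowvar=False)
user_model = multivariate_normal(mean=mu, cov=cov)

impostor_density = 1.0 / 16.0      # uniform over a [-2, 2]^2 region (assumed)
cost_ratio = 1.0                   # cost(false accept) / cost(false reject)

def accept(x):
    # Accept iff the likelihood ratio exceeds the cost-derived threshold
    return user_model.pdf(x) / impostor_density > cost_ratio

print(accept([0.1, -0.2]), accept([1.9, 1.9]))   # True, False (typically)
```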
| [
"Christos Dimitrakakis, Aikaterini Mitrokotsa",
"['Christos Dimitrakakis' 'Aikaterini Mitrokotsa']"
] |
cs.LG stat.ML | null | 0910.0610 | null | null | http://arxiv.org/pdf/0910.0610v2 | 2010-10-17T21:34:13Z | 2009-10-04T14:48:46Z | Regularization Techniques for Learning with Matrices | There is a growing body of learning problems for which it is natural to
organize the parameters into a matrix, so as to appropriately regularize the
parameters under some matrix norm (in order to impose some more sophisticated
prior knowledge). This work describes and analyzes a systematic method for
constructing such matrix-based, regularization methods. In particular, we focus
on how the underlying statistical properties of a given problem can help us
decide which regularization function is appropriate.
Our methodology is based on a known duality fact: a function is
strongly convex with respect to some norm if and only if its conjugate function
is strongly smooth with respect to the dual norm. This result has already been
found to be a key component in deriving and analyzing several learning
algorithms. We demonstrate the potential of this framework by deriving novel
generalization and regret bounds for multi-task learning, multi-class learning,
and kernel learning.
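The duality fact the abstract relies on can be stated concisely (written here for differentiable f for simplicity; the general statement uses subgradients):

```latex
% f is \beta-strongly convex w.r.t. \|\cdot\|  <=>
% f^* is (1/\beta)-strongly smooth w.r.t. the dual norm \|\cdot\|_*
\forall x, y:\;
  f(y) \ge f(x) + \langle \nabla f(x),\, y - x \rangle
        + \tfrac{\beta}{2}\,\|y - x\|^{2}
\quad\Longleftrightarrow\quad
\forall u, v:\;
  f^{*}(v) \le f^{*}(u) + \langle \nabla f^{*}(u),\, v - u \rangle
            + \tfrac{1}{2\beta}\,\|v - u\|_{*}^{2}
```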
| [
"Sham M. Kakade, Shai Shalev-Shwartz, Ambuj Tewari",
"['Sham M. Kakade' 'Shai Shalev-Shwartz' 'Ambuj Tewari']"
] |
cs.LG | null | 0910.0668 | null | null | http://arxiv.org/pdf/0910.0668v2 | 2009-10-07T21:52:48Z | 2009-10-05T03:30:13Z | Variable sigma Gaussian processes: An expectation propagation
perspective | Gaussian processes (GPs) provide a probabilistic nonparametric representation
of functions in regression, classification, and other problems. Unfortunately,
exact learning with GPs is intractable for large datasets. A variety of
approximate GP methods have been proposed that essentially map the large
dataset into a small set of basis points. The most advanced of these, the
variable-sigma GP (VSGP) (Walder et al., 2008), allows each basis point to have
its own length scale. However, VSGP was only derived for regression. We
describe how VSGP can be applied to classification and other problems, by
deriving it as an expectation propagation algorithm. In this view, sparse GP
approximations correspond to a KL-projection of the true posterior onto a
compact exponential family of GPs. VSGP constitutes one such family, and we
show how to enlarge this family to get additional accuracy. In particular, we
show that endowing each basis point with its own full covariance matrix
provides a significant increase in approximation power.
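VSGP itself is derived as expectation propagation and covers classification; as background, the basic subset-of-regressors (inducing-point) predictive mean that such sparse GP approximations build on can be sketched for regression. The RBF kernel, fixed basis points, and hyperparameters below are assumptions; this is the baseline that VSGP refines, not VSGP itself.

```python
import numpy as np

def rbf(A, B, ell=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

def sor_predict(X, y, Z, Xs, noise=0.1, ell=1.0):
    """Subset-of-regressors predictive mean at test points Xs, using basis
    (inducing) points Z -- the baseline sparse GP approximation."""
    Kmm = rbf(Z, Z, ell)
    Kmn = rbf(Z, X, ell)
    Ksm = rbf(Xs, Z, ell)
    # Sigma = (Kmm + noise^{-2} Kmn Knm)^{-1}
    Sigma = np.linalg.inv(Kmm + (Kmn @ Kmn.T) / noise**2)
    return Ksm @ Sigma @ Kmn @ y / noise**2

X = np.linspace(0, 5, 100)[:, None]
y = np.sin(X[:, 0]) + 0.1 * np.random.default_rng(0).standard_normal(100)
Z = np.linspace(0, 5, 10)[:, None]      # 10 basis points
print(sor_predict(X, y, Z, np.array([[2.5]])))   # ~ sin(2.5)
```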
| [
"Yuan Qi, Ahmed H. Abdel-Gawad and Thomas P. Minka",
"['Yuan Qi' 'Ahmed H. Abdel-Gawad' 'Thomas P. Minka']"
] |
cs.LG q-bio.QM | null | 0910.0820 | null | null | http://arxiv.org/pdf/0910.0820v2 | 2009-10-08T11:05:26Z | 2009-10-05T18:36:11Z | Prediction of Zoonosis Incidence in Human using Seasonal Auto Regressive
Integrated Moving Average (SARIMA) | Zoonosis refers to the transmission of
infectious diseases from animals to humans. The increasing number of zoonosis
incidences causes great losses of human and animal life, as well as
significant socioeconomic impact. This motivates the development of a system
that can predict the future number of zoonosis occurrences in humans. This
paper analyses and presents the use of the Seasonal Autoregressive Integrated
Moving Average (SARIMA) method for developing a forecasting model able to
predict the number of human zoonosis incidences. The dataset for model
development was a time series of human tuberculosis occurrences in the United
States, comprising fourteen years of monthly data obtained from a study
published by the Centers for Disease Control and Prevention (CDC). Several
trial SARIMA models were compared to obtain the most appropriate one, and
diagnostic tests were used to determine model validity. The results showed
that SARIMA(9,0,14)(12,1,24)12 was the best-fitting model. In terms of
accuracy, the selected model achieved a Theil's U value of 0.062, implying
that the model is highly accurate and a close fit, and indicating the
capability of the final model to closely represent and make predictions based
on the historical tuberculosis dataset.
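A minimal sketch of fitting a seasonal ARIMA model to monthly incidence counts with statsmodels, in the spirit of the paper. The CSV path, column name, and the (much smaller) model orders are placeholders, not the paper's SARIMA(9,0,14)(12,1,24)12 configuration.

```python
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Hypothetical file of monthly case counts, indexed by date
series = pd.read_csv("monthly_tb_cases.csv",
                     index_col=0, parse_dates=True)["cases"]

model = SARIMAX(series,
                order=(1, 0, 1),                 # (p, d, q)
                seasonal_order=(1, 1, 1, 12))    # (P, D, Q, s = 12 months)
fit = model.fit(disp=False)
print(fit.summary())                             # diagnostics for model choice
forecast = fit.forecast(steps=12)                # predict the next 12 months
```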
| [
"['Adhistya Erna Permanasari' 'Dayang Rohaya Awang Rambli'\n 'Dhanapal Durai Dominic']",
"Adhistya Erna Permanasari, Dayang Rohaya Awang Rambli, Dhanapal Durai\n Dominic"
] |
cs.LG cs.AI | null | 0910.0902 | null | null | http://arxiv.org/pdf/0910.0902v3 | 2009-12-22T23:31:57Z | 2009-10-06T06:00:47Z | Reduced-Rank Hidden Markov Models | We introduce the Reduced-Rank Hidden Markov Model (RR-HMM), a generalization
of HMMs that can model smooth state evolution as in Linear Dynamical Systems
(LDSs) as well as non-log-concave predictive distributions as in
continuous-observation HMMs. RR-HMMs assume an m-dimensional latent state and n
discrete observations, with a transition matrix of rank k <= m. This implies
the dynamics evolve in a k-dimensional subspace, while the shape of the set of
predictive distributions is determined by m. Latent state belief is represented
with a k-dimensional state vector and inference is carried out entirely in R^k,
making RR-HMMs as computationally efficient as k-state HMMs yet more
expressive. To learn RR-HMMs, we relax the assumptions of a recently proposed
spectral learning algorithm for HMMs (Hsu, Kakade and Zhang 2009) and apply it
to learn k-dimensional observable representations of rank-k RR-HMMs. The
algorithm is consistent and free of local optima, and we extend its performance
guarantees to cover the RR-HMM case. We show how this algorithm can be used in
conjunction with a kernel density estimator to efficiently model
high-dimensional multivariate continuous data. We also relax the assumption
that single observations are sufficient to disambiguate state, and extend the
algorithm accordingly. Experiments on synthetic data and a toy video, as well
as on a difficult robot vision modeling problem, yield accurate models that
compare favorably with standard alternatives in simulation quality and
prediction capability.
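As a sketch of the spectral-learning step being relaxed, the observable-operator estimation of Hsu, Kakade and Zhang (2009) from empirical statistics of observation triples can be written directly in numpy. This is the precursor algorithm, shown as an illustration, not the paper's RR-HMM extension (which adds kernel density estimation and multi-step observations).

```python
import numpy as np

def spectral_hmm(obs_triples, n_obs, k):
    """Estimate observable operators from (x1, x2, x3) triples (sketch).
    Triples are assumed drawn from the stationary distribution."""
    P1 = np.zeros(n_obs)
    P21 = np.zeros((n_obs, n_obs))
    P3x1 = np.zeros((n_obs, n_obs, n_obs))   # indexed [x2][x3, x1]
    for x1, x2, x3 in obs_triples:
        P1[x1] += 1
        P21[x2, x1] += 1
        P3x1[x2][x3, x1] += 1
    n = len(obs_triples)
    P1, P21, P3x1 = P1 / n, P21 / n, P3x1 / n

    U = np.linalg.svd(P21)[0][:, :k]          # top-k left singular vectors
    pinv = np.linalg.pinv(U.T @ P21)
    b1 = U.T @ P1
    binf = np.linalg.pinv(P21.T @ U) @ P1
    Bx = [U.T @ P3x1[x] @ pinv for x in range(n_obs)]
    return b1, binf, Bx

def seq_prob(seq, b1, binf, Bx):
    """Pr[x_1 .. x_t] = binf^T B_{x_t} ... B_{x_1} b1."""
    b = b1
    for x in seq:
        b = Bx[x] @ b
    return float(binf @ b)
```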
| [
"Sajid M. Siddiqi, Byron Boots, Geoffrey J. Gordon",
"['Sajid M. Siddiqi' 'Byron Boots' 'Geoffrey J. Gordon']"
] |
cs.LG cs.NA | null | 0910.0921 | null | null | http://arxiv.org/pdf/0910.0921v2 | 2009-11-03T23:56:31Z | 2009-10-06T04:41:05Z | Low-rank Matrix Completion with Noisy Observations: a Quantitative
Comparison | We consider a problem of significant practical importance, namely, the
reconstruction of a low-rank data matrix from a small subset of its entries.
This problem appears in many areas such as collaborative filtering, computer
vision and wireless sensor networks. In this paper, we focus on the matrix
completion problem in the case when the observed samples are corrupted by
noise. We compare the performance of three state-of-the-art matrix completion
algorithms (OptSpace, ADMiRA and FPCA) on a single simulation platform and
present numerical results. We show that in practice these efficient algorithms
can be used to reconstruct real data matrices, as well as randomly generated
matrices, accurately.
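A minimal sketch of the kind of single simulation platform such comparisons use: generate a random rank-r matrix, reveal a noisy random fraction of entries, and score any completion routine by RMSE on the hidden entries. Dimensions, rank, noise level, and the trivial SVD baseline below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, frac, sigma = 200, 5, 0.2, 0.05

M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # rank-r truth
mask = rng.random((n, n)) < frac                               # observed set
Y = np.where(mask, M + sigma * rng.standard_normal((n, n)), 0.0)

def rmse_on_hidden(M_hat):
    hidden = ~mask
    return np.sqrt(np.mean((M_hat[hidden] - M[hidden]) ** 2))

# Any completion routine (OptSpace, ADMiRA, FPCA, ...) plugs in here; as a
# strawman baseline, take the rank-r SVD of the rescaled zero-filled matrix
# (dividing by frac unbiases the expectation of the sampled matrix).
U, s, Vt = np.linalg.svd(Y / frac, full_matrices=False)
M_hat = (U[:, :r] * s[:r]) @ Vt[:r]
print("baseline RMSE on hidden entries:", rmse_on_hidden(M_hat))
```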
| [
"['Raghunandan H. Keshavan' 'Andrea Montanari' 'Sewoong Oh']",
"Raghunandan H. Keshavan, Andrea Montanari, and Sewoong Oh"
] |
cs.CV cs.LG | null | 0910.1273 | null | null | http://arxiv.org/pdf/0910.1273v1 | 2009-10-07T14:26:01Z | 2009-10-07T14:26:01Z | Adaboost with "Keypoint Presence Features" for Real-Time Vehicle Visual
Detection | We present promising results for real-time vehicle visual detection,
obtained with adaBoost using new original "keypoint presence features". These
weak classifiers produce a boolean response based on the presence or absence in
the tested image of a "keypoint" (~ a SURF interest point) with a descriptor
sufficiently similar (i.e. within a given distance) to a reference descriptor
characterizing the feature. A first experiment was conducted on a public image
dataset containing lateral-viewed cars, yielding 95% recall with 95% precision
on the test set. Moreover, analysis of the positions of adaBoost-selected
keypoints shows that they correspond to a specific part of the object category
(such as "wheel" or "side skirt") and thus have a "semantic" meaning.
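The weak classifier the abstract describes reduces to a simple boolean test, sketched below; descriptor extraction (e.g., SURF) and the boosting loop itself are omitted, and the Euclidean metric is an assumption consistent with "within a given distance".

```python
import numpy as np

def keypoint_presence_feature(image_descriptors, ref_descriptor, threshold):
    """Boolean weak-classifier response: does the image contain a keypoint
    whose descriptor lies within `threshold` of the reference descriptor?
    `image_descriptors` is an (n, d) array of descriptors extracted from
    the tested image."""
    if len(image_descriptors) == 0:
        return False
    dists = np.linalg.norm(image_descriptors - ref_descriptor, axis=1)
    return bool(dists.min() < threshold)

# Inside an adaBoost round, each candidate weak classifier is one
# (ref_descriptor, threshold) pair; boosting then selects and weights
# the most discriminative keypoints.
```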
| [
"['Taoufik Bdiri' 'Fabien Moutarde' 'Nicolas Bourdis' 'Bruno Steux']",
"Taoufik Bdiri (CAOR), Fabien Moutarde (CAOR), Nicolas Bourdis (CAOR),\n Bruno Steux (CAOR)"
] |
cs.CV cs.LG | null | 0910.1293 | null | null | http://arxiv.org/pdf/0910.1293v1 | 2009-10-07T15:42:03Z | 2009-10-07T15:42:03Z | Introducing New AdaBoost Features for Real-Time Vehicle Detection | This paper shows how to improve real-time object detection in complex
robotics applications by exploring new visual features as AdaBoost weak
classifiers. These new features are symmetric Haar filters (enforcing global
horizontal and vertical symmetry) and N-connexity control points. Experimental
evaluation on a car database shows that the latter appear to provide the best
results for the vehicle-detection problem.
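One plausible reading of a horizontally symmetric Haar filter — this is an interpretation, not the paper's definition — sums the integral-image response of a rectangle with that of its mirror image about the detection window's vertical axis, so the feature responds strongly only to left-right symmetric patterns:

```python
import numpy as np

def integral_image(img):
    return img.cumsum(0).cumsum(1)

def rect_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] from the integral image (exclusive ends)."""
    total = ii[r1 - 1, c1 - 1]
    if r0 > 0: total -= ii[r0 - 1, c1 - 1]
    if c0 > 0: total -= ii[r1 - 1, c0 - 1]
    if r0 > 0 and c0 > 0: total += ii[r0 - 1, c0 - 1]
    return total

def symmetric_haar(ii, r0, c0, r1, c1, width):
    """Response of a rectangle plus its mirror about the vertical axis of a
    window of full width `width` (column c maps to width - 1 - c)."""
    mirrored_c0, mirrored_c1 = width - c1, width - c0
    return (rect_sum(ii, r0, c0, r1, c1)
            + rect_sum(ii, r0, mirrored_c0, r1, mirrored_c1))
```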
| [
"Bogdan Stanciulescu (CAOR), Amaury Breheret (CAOR), Fabien Moutarde\n (CAOR)",
"['Bogdan Stanciulescu' 'Amaury Breheret' 'Fabien Moutarde']"
] |
cs.CV cs.LG | null | 0910.1294 | null | null | http://arxiv.org/pdf/0910.1294v1 | 2009-10-07T15:42:30Z | 2009-10-07T15:42:30Z | Visual object categorization with new keypoint-based adaBoost features | We present promising results for visual object categorization, obtained with
adaBoost using new original "keypoint-based features". These weak classifiers
produce a boolean response based on the presence or absence in the tested image
of a "keypoint" (a kind of SURF interest point) with a descriptor sufficiently
similar (i.e. within a given distance) to a reference descriptor characterizing
the feature. A first experiment was conducted on a public image dataset
containing lateral-viewed cars, yielding 95% recall with 95% precision on the
test set. Preliminary tests on a small subset of a pedestrian database also
give a promising 97% recall with 92% precision, which shows the generality of
our new family of features. Moreover, analysis of the positions of
adaBoost-selected keypoints shows that they correspond to a specific part of
the object category (such as "wheel" or "side skirt" in the case of lateral
cars) and thus have a "semantic" meaning. We also made a first test on video
for detecting vehicles from adaBoost-selected keypoints filtered in real time
from all detected keypoints.
| [
"Taoufik Bdiri (CAOR), Fabien Moutarde (CAOR), Bruno Steux (CAOR)",
"['Taoufik Bdiri' 'Fabien Moutarde' 'Bruno Steux']"
] |