categories (string) | doi (string) | id (string) | year (float64) | venue (string) | link (string) | updated (string) | published (string) | title (string) | abstract (string) | authors (sequence)
---|---|---|---|---|---|---|---|---|---|---
cs.CC cs.LG | null | 1002.3183 | null | null | http://arxiv.org/pdf/1002.3183v3 | 2013-11-25T04:57:18Z | 2010-02-16T22:35:39Z | A Complete Characterization of Statistical Query Learning with
Applications to Evolvability | The statistical query (SQ) learning model of Kearns (1993) is a natural
restriction of the PAC learning model in which a learning algorithm is allowed
to obtain estimates of statistical properties of the examples but cannot see
the examples themselves. We describe a new and simple characterization of the
query complexity of learning in the SQ learning model. Unlike the previously
known bounds on SQ learning our characterization preserves the accuracy and the
efficiency of learning. The preservation of accuracy implies that our
characterization gives the first characterization of SQ learning in the
agnostic learning framework. The preservation of efficiency is achieved using a
new boosting technique and allows us to derive a new approach to the design of
evolutionary algorithms in Valiant's (2006) model of evolvability. We use this
approach to demonstrate the existence of a large class of monotone evolutionary
learning algorithms based on square loss performance estimation. These results
differ significantly from the few known evolutionary algorithms and give
evidence that evolvability in Valiant's model is a more versatile phenomenon
than there had been previous reason to suspect.
| [
"['Vitaly Feldman']",
"Vitaly Feldman"
] |
cs.LG | null | 1002.3345 | null | null | http://arxiv.org/pdf/1002.3345v2 | 2010-05-20T23:39:23Z | 2010-02-17T18:43:59Z | Interactive Submodular Set Cover | We introduce a natural generalization of submodular set cover and exact
active learning with a finite hypothesis class (query learning). We call this
new problem interactive submodular set cover. Applications include advertising
in social networks with hidden information. We give an approximation guarantee
for a novel greedy algorithm and give a hardness of approximation result which
matches up to constant factors. We also discuss negative results for simpler
approaches and present encouraging early experimental results.
| [
"['Andrew Guillory' 'Jeff Bilmes']",
"Andrew Guillory, Jeff Bilmes"
] |
cs.LG | null | 1002.4007 | null | null | http://arxiv.org/pdf/1002.4007v1 | 2010-02-21T19:48:16Z | 2010-02-21T19:48:16Z | Word level Script Identification from Bangla and Devanagri Handwritten
Texts mixed with Roman Script | India is a multi-lingual country where Roman script is often used alongside
different Indic scripts in a text document. To develop a script specific
handwritten Optical Character Recognition (OCR) system, it is therefore
necessary to identify the scripts of handwritten text correctly. In this paper,
we present a system, which automatically separates the scripts of handwritten
words from a document, written in Bangla or Devanagri mixed with Roman scripts.
In this script separation technique, we first extract the text lines and words
from document pages using a script-independent Neighboring Component Analysis
technique. Then we design a Multi Layer Perceptron (MLP) based
classifier for script separation, trained with 8 different word-level holistic
features. Two equal sized datasets, one with Bangla and Roman scripts and the
other with Devanagri and Roman scripts, are prepared for the system evaluation.
On respective independent text samples, word-level script identification
accuracies of 99.29% and 98.43% are achieved.
| [
"Ram Sarkar, Nibaran Das, Subhadip Basu, Mahantapas Kundu, Mita\n Nasipuri, Dipak Kumar Basu",
"['Ram Sarkar' 'Nibaran Das' 'Subhadip Basu' 'Mahantapas Kundu'\n 'Mita Nasipuri' 'Dipak Kumar Basu']"
] |
cs.CV cs.LG | null | 1002.4040 | null | null | http://arxiv.org/pdf/1002.4040v2 | 2010-02-23T06:44:32Z | 2010-02-22T02:58:49Z | Handwritten Bangla Basic and Compound character recognition using MLP
and SVM classifier | A novel approach for recognition of handwritten compound Bangla characters,
along with the Basic characters of the Bangla alphabet, is presented here.
Compared to Roman scripts such as English, one of the major stumbling blocks in
Optical Character Recognition (OCR) of handwritten Bangla script is the large
number of complex-shaped character classes in the Bangla alphabet. In addition
to 50 basic character classes, there are nearly 160 complex-shaped compound
character classes. Dealing with such a large variety of handwritten characters
with a suitably designed feature set is a challenging problem. Uncertainty and
imprecision are inherent in handwritten script. Moreover, such a large variety
of complex-shaped characters, some of which closely resemble one another, makes
OCR of handwritten Bangla characters even more difficult. Considering the
complexity of the problem, the present approach attempts to identify compound
character classes from the most frequently to the less frequently occurring
ones, i.e., in order of importance. The aim is to develop a framework for
incrementally increasing the number of learned classes of compound characters,
from more frequently occurring ones to less frequently occurring ones, along
with the Basic characters. On experimentation, the technique is observed to
produce an average recognition rate of 79.25% after three-fold cross-validation
of the data, with future scope for improvement and extension.
| [
"['Nibaran Das' 'Bindaban Das' 'Ram Sarkar' 'Subhadip Basu'\n 'Mahantapas Kundu' 'Mita Nasipuri']",
"Nibaran Das, Bindaban Das, Ram Sarkar, Subhadip Basu, Mahantapas\n Kundu, Mita Nasipuri"
] |
cs.LG cs.CV | null | 1002.4046 | null | null | http://arxiv.org/pdf/1002.4046v1 | 2010-02-22T03:12:14Z | 2010-02-22T03:12:14Z | Supervised Classification Performance of Multispectral Images | Nowadays government and private agencies use remote sensing imagery for a
wide range of applications, from military uses to farm development. The
images may be panchromatic, multispectral, hyperspectral, or even
ultraspectral, and can amount to terabytes. Remote sensing image
classification is one of the most significant applications of remote sensing.
A number of image classification algorithms have demonstrated good precision
in classifying remote sensing data. But, of late, due to the increasing
spatiotemporal dimensions of remote sensing data, traditional classification
algorithms have exposed weaknesses, necessitating further research in the
field of remote sensing image classification. An efficient classifier is
therefore needed to classify remote sensing images and extract information.
We experiment with both supervised and unsupervised classification. Here we
compare the different classification methods and their performances. It is
found that the Mahalanobis classifier performed the best in our
classification.
| [
"K. Perumal, R. Bhaskaran",
"['K. Perumal' 'R. Bhaskaran']"
] |
cs.LG | null | 1002.4058 | null | null | http://arxiv.org/pdf/1002.4058v3 | 2011-10-27T19:28:49Z | 2010-02-22T07:11:39Z | Contextual Bandit Algorithms with Supervised Learning Guarantees | We address the problem of learning in an online, bandit setting where the
learner must repeatedly select among $K$ actions, but only receives partial
feedback based on its choices. We establish two new facts: First, using a new
algorithm called Exp4.P, we show that it is possible to compete with the best
in a set of $N$ experts with probability $1-\delta$ while incurring regret at
most $O(\sqrt{KT\ln(N/\delta)})$ over $T$ time steps. The new algorithm is
tested empirically in a large-scale, real-world dataset. Second, we give a new
algorithm called VE that competes with a possibly infinite set of policies of
VC-dimension $d$ while incurring regret at most $O(\sqrt{T(d\ln(T) + \ln
(1/\delta))})$ with probability $1-\delta$. These guarantees improve on those
of all previous algorithms, whether in a stochastic or adversarial environment,
and bring us closer to providing supervised learning type guarantees for the
contextual bandit setting.
| [
"['Alina Beygelzimer' 'John Langford' 'Lihong Li' 'Lev Reyzin'\n 'Robert E. Schapire']",
"Alina Beygelzimer, John Langford, Lihong Li, Lev Reyzin, and Robert E.\n Schapire"
] |
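
A minimal sketch may make the setting concrete. The code below implements the basic EXP4 update (importance-weighted reward estimates fed back into exponential expert weights); Exp4.P adds confidence-interval terms to obtain the high-probability guarantee, which this sketch omits. Names and parameters are illustrative, and rewards are assumed to lie in [0, 1].

```python
import numpy as np

def exp4(expert_advice, rewards, gamma=0.1):
    """Basic EXP4-style contextual bandit (a sketch; Exp4.P adds
    confidence-bound terms for the high-probability regret guarantee).

    expert_advice: array (T, N, K), each expert's probability
                   distribution over K actions at each round.
    rewards:       array (T, K), hypothetical full reward table, of
                   which only the chosen entry is revealed each round.
    """
    T, N, K = expert_advice.shape
    w = np.ones(N)                       # expert weights
    total_reward = 0.0
    for t in range(T):
        # Mix the experts' advice into one action distribution.
        p_experts = w / w.sum()
        p = (1 - gamma) * p_experts @ expert_advice[t] + gamma / K
        a = np.random.choice(K, p=p)     # play an action
        r = rewards[t, a]                # bandit feedback only
        total_reward += r
        # Importance-weighted estimate of the full reward vector.
        xhat = np.zeros(K)
        xhat[a] = r / p[a]
        # Credit each expert with its expected estimated reward.
        yhat = expert_advice[t] @ xhat
        w *= np.exp(gamma * yhat / K)
    return total_reward
```
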
stat.ML cs.LG stat.ME | null | 1002.4658 | null | null | http://arxiv.org/pdf/1002.4658v2 | 2010-05-13T03:06:22Z | 2010-02-24T23:24:17Z | Principal Component Analysis with Contaminated Data: The High
Dimensional Case | We consider the dimensionality-reduction problem (finding a subspace
approximation of observed data) for contaminated data in the high dimensional
regime, where the number of observations is of the same magnitude as the number
of variables of each observation, and the data set contains some (arbitrarily)
corrupted observations. We propose a High-dimensional Robust Principal
Component Analysis (HR-PCA) algorithm that is tractable, robust to contaminated
points, and easily kernelizable. The resulting subspace has a bounded deviation
from the desired one, achieves maximal robustness -- a breakdown point of 50%,
while all existing algorithms have a breakdown point of zero -- and, unlike
ordinary PCA algorithms, achieves optimality in the limit case where the
proportion of corrupted points goes to zero.
| [
"Huan Xu, Constantine Caramanis, Shie Mannor",
"['Huan Xu' 'Constantine Caramanis' 'Shie Mannor']"
] |
cs.LG stat.ML | null | 1002.4802 | null | null | http://arxiv.org/pdf/1002.4802v2 | 2010-03-12T10:41:26Z | 2010-02-25T15:10:06Z | Gaussian Process Structural Equation Models with Latent Variables | In a variety of disciplines such as social sciences, psychology, medicine and
economics, the recorded data are considered to be noisy measurements of latent
variables connected by some causal structure. This corresponds to a family of
graphical models known as the structural equation model with latent variables.
While linear non-Gaussian variants have been well-studied, inference in
nonparametric structural equation models is still underdeveloped. We introduce
a sparse Gaussian process parameterization that defines a non-linear structure
connecting latent variables, unlike common formulations of Gaussian process
latent variable models. The sparse parameterization is given a full Bayesian
treatment without compromising Markov chain Monte Carlo efficiency. We compare
the stability of the sampling procedure and the predictive ability of the model
against the current practice.
| [
"Ricardo Silva and Robert B. Gramacy",
"['Ricardo Silva' 'Robert B. Gramacy']"
] |
cs.LG cs.AI | null | 1002.4862 | null | null | http://arxiv.org/pdf/1002.4862v1 | 2010-02-25T20:31:05Z | 2010-02-25T20:31:05Z | Less Regret via Online Conditioning | We analyze and evaluate an online gradient descent algorithm with adaptive
per-coordinate adjustment of learning rates. Our algorithm can be thought of as
an online version of batch gradient descent with a diagonal preconditioner.
This approach leads to regret bounds that are stronger than those of standard
online gradient descent for general online convex optimization problems.
Experimentally, we show that our algorithm is competitive with state-of-the-art
algorithms for large scale machine learning problems.
| [
"['Matthew Streeter' 'H. Brendan McMahan']",
"Matthew Streeter and H. Brendan McMahan"
] |
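
As one plausible reading of "adaptive per-coordinate adjustment of learning rates", the sketch below scales each coordinate's step by the inverse square root of its accumulated squared gradients, i.e. a diagonal preconditioner; the paper's exact update may differ, and all names are illustrative.

```python
import numpy as np

def adaptive_ogd(gradients, dim, eta=1.0, eps=1e-8):
    """Online gradient descent with a diagonal preconditioner (a sketch).
    Coordinate i's step size is eta / sqrt(sum of g_i^2 so far), so
    frequently-updated coordinates take progressively smaller steps."""
    x = np.zeros(dim)
    s = np.zeros(dim)                # accumulated squared gradients
    iterates = []
    for g in gradients:              # one (sub)gradient per round
        s += g * g
        x -= eta * g / (np.sqrt(s) + eps)
        iterates.append(x.copy())
    return iterates
```
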
cs.LG | null | 1002.4908 | null | null | http://arxiv.org/pdf/1002.4908v2 | 2010-07-07T19:07:16Z | 2010-02-26T01:36:34Z | Adaptive Bound Optimization for Online Convex Optimization | We introduce a new online convex optimization algorithm that adaptively
chooses its regularization function based on the loss functions observed so
far. This is in contrast to previous algorithms that use a fixed regularization
function such as L2-squared, and modify it only via a single time-dependent
parameter. Our algorithm's regret bounds are worst-case optimal, and for
certain realistic classes of loss functions they are much better than existing
bounds. These bounds are problem-dependent, which means they can exploit the
structure of the actual problem instance. Critically, however, our algorithm
does not need to know this structure in advance. Rather, we prove competitive
guarantees that show the algorithm provides a bound within a constant factor of
the best possible bound (of a certain functional form) in hindsight.
| [
"H. Brendan McMahan, Matthew Streeter",
"['H. Brendan McMahan' 'Matthew Streeter']"
] |
cs.LG | null | 1003.0024 | null | null | http://arxiv.org/pdf/1003.0024v1 | 2010-02-26T21:59:02Z | 2010-02-26T21:59:02Z | Asymptotic Analysis of Generative Semi-Supervised Learning | Semisupervised learning has emerged as a popular framework for improving
modeling accuracy while controlling labeling cost. Based on an extension of
stochastic composite likelihood we quantify the asymptotic accuracy of
generative semi-supervised learning. In doing so, we complement
distribution-free analysis by providing an alternative framework to measure the
value associated with different labeling policies and resolve the fundamental
question of how much data to label and in what manner. We demonstrate our
approach with both simulation studies and real world experiments using naive
Bayes for text classification and MRFs and CRFs for structured prediction in
NLP.
| [
"['Joshua V Dillon' 'Krishnakumar Balasubramanian' 'Guy Lebanon']",
"Joshua V Dillon, Krishnakumar Balasubramanian and Guy Lebanon"
] |
cs.AI cs.LG | null | 1003.0034 | null | null | http://arxiv.org/pdf/1003.0034v1 | 2010-02-26T23:27:22Z | 2010-02-26T23:27:22Z | A New Understanding of Prediction Markets Via No-Regret Learning | We explore the striking mathematical connections that exist between market
scoring rules, cost function based prediction markets, and no-regret learning.
We show that any cost function based prediction market can be interpreted as an
algorithm for the commonly studied problem of learning from expert advice by
equating trades made in the market with losses observed by the learning
algorithm. If the loss of the market organizer is bounded, this bound can be
used to derive an O(sqrt(T)) regret bound for the corresponding learning
algorithm. We then show that the class of markets with convex cost functions
exactly corresponds to the class of Follow the Regularized Leader learning
algorithms, with the choice of a cost function in the market corresponding to
the choice of a regularizer in the learning problem. Finally, we show an
equivalence between market scoring rules and prediction markets with convex
cost functions. This implies that market scoring rules can also be interpreted
naturally as Follow the Regularized Leader algorithms, and may be of
independent interest. These connections provide new insight into how it is that
commonly studied markets, such as the Logarithmic Market Scoring Rule, can
aggregate opinions into accurate estimates of the likelihood of future events.
| [
"['Yiling Chen' 'Jennifer Wortman Vaughan']",
"Yiling Chen and Jennifer Wortman Vaughan"
] |
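
The Logarithmic Market Scoring Rule mentioned above is easy to state concretely: its convex cost function is C(q) = b log Σ_i exp(q_i / b), and its instantaneous prices are the softmax of outstanding shares, the same exponential weighting an entropically regularized Follow the Regularized Leader learner places on its experts. A minimal sketch, with the liquidity parameter b and quantities purely illustrative:

```python
import numpy as np

def lmsr_cost(q, b=100.0):
    """LMSR convex cost function: C(q) = b * log(sum_i exp(q_i / b)),
    where q_i is the number of outstanding shares on outcome i."""
    return b * np.log(np.sum(np.exp(np.asarray(q) / b)))

def lmsr_prices(q, b=100.0):
    """Instantaneous prices = gradient of the cost function, a softmax.
    This matches the weights of an entropically regularized FTRL learner."""
    z = np.exp(np.asarray(q) / b)
    return z / z.sum()

# A trade's cost is the difference of the cost function before and after.
q = np.zeros(3)
trade = np.array([10.0, 0.0, 0.0])   # buy 10 shares of outcome 0
print(lmsr_cost(q + trade) - lmsr_cost(q), lmsr_prices(q + trade))
```
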
cs.LG stat.ML | null | 1003.0079 | null | null | http://arxiv.org/pdf/1003.0079v3 | 2010-10-26T20:21:35Z | 2010-02-27T08:54:29Z | Non-Sparse Regularization for Multiple Kernel Learning | Learning linear combinations of multiple kernels is an appealing strategy
when the right choice of features is unknown. Previous approaches to multiple
kernel learning (MKL) promote sparse kernel combinations to support
interpretability and scalability. Unfortunately, this 1-norm MKL is rarely
observed to outperform trivial baselines in practical applications. To allow
for robust kernel mixtures, we generalize MKL to arbitrary norms. We devise new
insights on the connection between several existing MKL formulations and
develop two efficient interleaved optimization strategies for arbitrary norms,
like p-norms with p>1. Empirically, we demonstrate that the interleaved
optimization strategies are much faster compared to the commonly used wrapper
approaches. A theoretical analysis and an experiment on controlled artificial
data shed light on the appropriateness of sparse, non-sparse and
$\ell_\infty$-norm MKL in various scenarios. Empirical applications of p-norm
MKL to three real-world problems from computational biology show that
non-sparse MKL achieves accuracies that go beyond the state-of-the-art.
| [
"Marius Kloft, Ulf Brefeld, Soeren Sonnenburg, Alexander Zien",
"['Marius Kloft' 'Ulf Brefeld' 'Soeren Sonnenburg' 'Alexander Zien']"
] |
cs.LG cs.AI | null | 1003.0120 | null | null | http://arxiv.org/pdf/1003.0120v2 | 2010-06-14T16:06:16Z | 2010-02-27T17:53:46Z | Learning from Logged Implicit Exploration Data | We provide a sound and consistent foundation for the use of \emph{nonrandom}
exploration data in "contextual bandit" or "partially labeled" settings where
only the value of a chosen action is learned.
The primary challenge in a variety of settings is that the exploration
policy, under which the "offline" data was logged, is not explicitly known. Prior
solutions here require either control of the actions during the learning
process, recorded random exploration, or actions chosen obliviously in a
repeated manner. The techniques reported here lift these restrictions, allowing
the learning of a policy for choosing actions given features from historical
data where no randomization occurred or was logged.
We empirically verify our solution on two reasonably sized sets of real-world
data obtained from Yahoo!.
| [
"Alex Strehl, John Langford, Sham Kakade, Lihong Li",
"['Alex Strehl' 'John Langford' 'Sham Kakade' 'Lihong Li']"
] |
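
For orientation, the standard way to reuse logged bandit data is an inverse-propensity estimate. The sketch below assumes a learned propensity model for the logging process's action probabilities, which is exactly the regime the abstract describes (no randomization was recorded); the function names and the clipping constant are illustrative, not the paper's exact estimator.

```python
def ips_policy_value(logs, policy, propensity):
    """Inverse-propensity estimate of a new policy's value from logged
    (context, action, reward) triples (a sketch). propensity(x, a)
    estimates the probability the *logging* process chose a given x;
    here it must be estimated because no randomization was logged."""
    total = 0.0
    for x, a, r in logs:
        if policy(x) == a:                            # new policy agrees
            total += r / max(propensity(x, a), 1e-3)  # clip for stability
    return total / len(logs)
```
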
cs.LG cs.AI cs.IR | 10.1145/1772690.1772758 | 1003.0146 | null | null | http://arxiv.org/abs/1003.0146v2 | 2012-03-01T23:49:42Z | 2010-02-28T02:18:59Z | A Contextual-Bandit Approach to Personalized News Article Recommendation | Personalized web services strive to adapt their services (advertisements,
news articles, etc) to individual users by making use of both content and user
information. Despite a few recent advances, this problem remains challenging
for at least two reasons. First, web services feature dynamically
changing pools of content, rendering traditional collaborative filtering
methods inapplicable. Second, the scale of most web services of practical
interest calls for solutions that are fast in both learning and computation.
In this work, we model personalized recommendation of news articles as a
contextual bandit problem, a principled approach in which a learning algorithm
sequentially selects articles to serve users based on contextual information
about the users and articles, while simultaneously adapting its
article-selection strategy based on user-click feedback to maximize total user
clicks.
The contributions of this work are three-fold. First, we propose a new,
general contextual bandit algorithm that is computationally efficient and well
motivated from learning theory. Second, we argue that any bandit algorithm can
be reliably evaluated offline using previously recorded random traffic.
Finally, using this offline evaluation method, we successfully applied our new
algorithm to a Yahoo! Front Page Today Module dataset containing over 33
million events. Results showed a 12.5% click lift compared to a standard
context-free bandit algorithm, and the advantage becomes even greater when data
gets more scarce.
| [
"['Lihong Li' 'Wei Chu' 'John Langford' 'Robert E. Schapire']",
"Lihong Li, Wei Chu, John Langford, Robert E. Schapire"
] |
cs.IT cs.LG math.IT math.ST stat.TH | null | 1003.0205 | null | null | http://arxiv.org/pdf/1003.0205v1 | 2010-02-28T18:23:11Z | 2010-02-28T18:23:11Z | Detecting Weak but Hierarchically-Structured Patterns in Networks | The ability to detect weak distributed activation patterns in networks is
critical to several applications, such as identifying the onset of anomalous
activity or incipient congestion in the Internet, or faint traces of a
biochemical spread by a sensor network. This is a challenging problem since
weak distributed patterns can be invisible in per-node statistics as well as
in a global network-wide aggregate. Most prior work considers situations in which
the activation/non-activation of each node is statistically independent, but
this is unrealistic in many problems. In this paper, we consider structured
patterns arising from statistical dependencies in the activation process. Our
contributions are three-fold. First, we propose a sparsifying transform that
succinctly represents structured activation patterns that conform to a
hierarchical dependency graph. Second, we establish that the proposed transform
facilitates detection of very weak activation patterns that cannot be detected
with existing methods. Third, we show that the structure of the hierarchical
dependency graph governing the activation process, and hence the network
transform, can be learnt from very few (logarithmic in network size)
independent snapshots of network activity.
| [
"Aarti Singh, Robert D. Nowak and Robert Calderbank",
"['Aarti Singh' 'Robert D. Nowak' 'Robert Calderbank']"
] |
cs.LG | null | 1003.0470 | null | null | http://arxiv.org/pdf/1003.0470v2 | 2010-07-21T21:19:35Z | 2010-03-01T22:32:18Z | Unsupervised Supervised Learning II: Training Margin Based Classifiers
without Labels | Many popular linear classifiers, such as logistic regression, boosting, or
SVM, are trained by optimizing a margin-based risk function. Traditionally,
these risk functions are computed based on a labeled dataset. We develop a
novel technique for estimating such risks using only unlabeled data and the
marginal label distribution. We prove that the proposed risk estimator is
consistent on high-dimensional datasets and demonstrate it on synthetic and
real-world data. In particular, we show how the estimate is used for evaluating
classifiers in transfer learning, and for training classifiers with no labeled
data whatsoever.
| [
"['Krishnakumar Balasubramanian' 'Pinar Donmez' 'Guy Lebanon']",
"Krishnakumar Balasubramanian, Pinar Donmez, Guy Lebanon"
] |
cs.LG | null | 1003.0516 | null | null | http://arxiv.org/pdf/1003.0516v1 | 2010-03-02T08:21:07Z | 2010-03-02T08:21:07Z | Model Selection with the Loss Rank Principle | A key issue in statistics and machine learning is to automatically select the
"right" model complexity, e.g., the number of neighbors to be averaged over in
k nearest neighbor (kNN) regression or the polynomial degree in regression with
polynomials. We suggest a novel principle - the Loss Rank Principle (LoRP) -
for model selection in regression and classification. It is based on the loss
rank, which counts how many other (fictitious) data would be fitted better.
LoRP selects the model that has minimal loss rank. Unlike most penalized
maximum likelihood variants (AIC, BIC, MDL), LoRP depends only on the
regression functions and the loss function. It works without a stochastic noise
model, and is directly applicable to any non-parametric regressor, like kNN.
| [
"['Marcus Hutter' 'Minh-Ngoc Tran']",
"Marcus Hutter and Minh-Ngoc Tran"
] |
cs.LG cs.CG cs.CV | null | 1003.0529 | null | null | http://arxiv.org/pdf/1003.0529v2 | 2010-03-30T17:21:53Z | 2010-03-02T09:11:44Z | A Unified Algorithmic Framework for Multi-Dimensional Scaling | In this paper, we propose a unified algorithmic framework for solving many
known variants of multi-dimensional scaling (MDS). Our algorithm is a simple iterative scheme with
guaranteed convergence, and is \emph{modular}; by changing the internals of a
single subroutine in the algorithm, we can switch cost functions and target
spaces easily. In addition to the formal guarantees of convergence, our
algorithms are accurate; in most cases, they converge to better quality
solutions than existing methods, in comparable time. We expect that this
framework will be useful for a number of MDS variants that have not yet been
studied.
Our framework extends to embedding high-dimensional points lying on a sphere
to points on a lower dimensional sphere, preserving geodesic distances. As a
complement to this result, we also extend the Johnson-Lindenstrauss Lemma to
this spherical setting, where projecting to a random $O((1/\epsilon^2) \log
n)$-dimensional sphere causes $\epsilon$-distortion.
| [
"Arvind Agarwal, Jeff M. Phillips, Suresh Venkatasubramanian",
"['Arvind Agarwal' 'Jeff M. Phillips' 'Suresh Venkatasubramanian']"
] |
cs.LG | null | 1003.0691 | null | null | http://arxiv.org/pdf/1003.0691v1 | 2010-03-02T21:54:16Z | 2010-03-02T21:54:16Z | Statistical and Computational Tradeoffs in Stochastic Composite
Likelihood | Maximum likelihood estimators are often of limited practical use due to the
intensive computation they require. We propose a family of alternative
estimators that maximize a stochastic variation of the composite likelihood
function. Each of the estimators resolves the computation-accuracy tradeoff
differently, and taken together they span a continuous spectrum of
computation-accuracy tradeoff resolutions. We prove the consistency of the
estimators, provide formulas for their asymptotic variance, statistical
robustness, and computational complexity. We discuss experimental results in
the context of Boltzmann machines and conditional random fields. The
theoretical and experimental studies demonstrate the effectiveness of the
estimators when the computational resources are insufficient. They also
demonstrate that in some cases reduced computational complexity is associated
with robustness thereby increasing statistical accuracy.
| [
"['Joshua V Dillon' 'Guy Lebanon']",
"Joshua V Dillon and Guy Lebanon"
] |
cs.LG | null | 1003.0696 | null | null | http://arxiv.org/pdf/1003.0696v1 | 2010-03-02T22:27:31Z | 2010-03-02T22:27:31Z | Exponential Family Hybrid Semi-Supervised Learning | We present an approach to semi-supervised learning based on an exponential
family characterization. Our approach generalizes previous work on coupled
priors for hybrid generative/discriminative models. Our model is more flexible
and natural than previous approaches. Experimental results on several data sets
show that our approach also performs better in practice.
| [
"Arvind Agarwal, Hal Daume III",
"['Arvind Agarwal' 'Hal Daume III']"
] |
cond-mat.dis-nn cond-mat.stat-mech cs.LG q-bio.NC | 10.1088/1742-5468/2010/08/P08014 | 1003.1020 | null | null | http://arxiv.org/abs/1003.1020v2 | 2010-05-30T03:44:54Z | 2010-03-04T11:38:33Z | Learning by random walks in the weight space of the Ising perceptron | Several variants of a stochastic local search process for constructing the
synaptic weights of an Ising perceptron are studied. In this process, binary
patterns are sequentially presented to the Ising perceptron and are then
learned as the synaptic weight configuration is modified through a chain of
single- or double-weight flips within the compatible weight configuration space
of the earlier learned patterns. This process is able to reach a storage
capacity of $\alpha \approx 0.63$ for pattern length N = 101 and $\alpha
\approx 0.41$ for N = 1001. If in addition a relearning process is exploited,
the learning performance is further improved to a storage capacity of $\alpha
\approx 0.80$ for N = 101 and $\alpha \approx 0.42$ for N=1001. We found that,
for a given learning task, the solutions constructed by the random walk
learning process are separated by a typical Hamming distance, which decreases
with the constraint density $\alpha$ of the learning task; at a fixed value of
$\alpha$, the width of the Hamming distance distributions decreases with $N$.
| [
"Haiping Huang and Haijun Zhou",
"['Haiping Huang' 'Haijun Zhou']"
] |
cs.CL cs.IR cs.LG | 10.1613/jair.2934 | 1003.1141 | null | null | http://arxiv.org/abs/1003.1141v1 | 2010-03-04T21:07:18Z | 2010-03-04T21:07:18Z | From Frequency to Meaning: Vector Space Models of Semantics | Computers understand very little of the meaning of human language. This
profoundly limits our ability to give instructions to computers, the ability of
computers to explain their actions to us, and the ability of computers to
analyse and process text. Vector space models (VSMs) of semantics are beginning
to address these limits. This paper surveys the use of VSMs for semantic
processing of text. We organize the literature on VSMs according to the
structure of the matrix in a VSM. There are currently three broad classes of
VSMs, based on term-document, word-context, and pair-pattern matrices, yielding
three classes of applications. We survey a broad range of applications in these
three categories and we take a detailed look at a specific open source project
in each category. Our goal in this survey is to show the breadth of
applications of VSMs for semantics, to provide a new perspective on VSMs for
those who are already familiar with the area, and to provide pointers into the
literature for those who are less familiar with the field.
| [
"Peter D. Turney and Patrick Pantel",
"['Peter D. Turney' 'Patrick Pantel']"
] |
cs.DS cs.LG math.PR | null | 1003.1266 | null | null | http://arxiv.org/pdf/1003.1266v2 | 2011-05-26T08:07:41Z | 2010-03-05T13:54:11Z | Hitting and commute times in large graphs are often misleading | Next to the shortest path distance, the second most popular distance function
between vertices in a graph is the commute distance (resistance distance). For
two vertices u and v, the hitting time H_{uv} is the expected time it takes a
random walk to travel from u to v. The commute time is its symmetrized version
C_{uv} = H_{uv} + H_{vu}. In our paper we study the behavior of hitting times
and commute distances when the number n of vertices in the graph is very large.
We prove that as n converges to infinity, hitting times and commute distances
converge to expressions that do not take into account the global structure of
the graph at all. Namely, the hitting time H_{uv} converges to 1/d_v and the
commute time to 1/d_u + 1/d_v where d_u and d_v denote the degrees of vertices
u and v. In these cases, the hitting and commute times are misleading in the
sense that they do not provide information about the structure of the graph. We
focus on two major classes of random graphs: random geometric graphs (k-nearest
neighbor graphs, epsilon-graphs, Gaussian similarity graphs) and random graphs
with given expected degrees (in particular, Erdos-Renyi graphs with and without
planted partitions).
| [
"Ulrike von Luxburg, Agnes Radl, Matthias Hein",
"['Ulrike von Luxburg' 'Agnes Radl' 'Matthias Hein']"
] |
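
The limit expressions are simple enough to check numerically: up to the natural scaling by the graph volume vol(G) = Σ_u d_u, the commute time C_{uv} approaches vol(G)(1/d_u + 1/d_v). A minimal sketch (graph size and density are illustrative, and the check presumes the sampled graph is connected):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 300, 0.1
A = (rng.random((n, n)) < p).astype(float)
A = np.triu(A, 1); A = A + A.T            # undirected Erdos-Renyi graph
d = A.sum(axis=1)
P = A / d[:, None]                        # random-walk transition matrix

def hitting_times_to(v):
    """First-step analysis: H[u] = 1 + sum_w P[u,w] H[w], with H[v] = 0."""
    idx = [u for u in range(n) if u != v]
    M = np.eye(n - 1) - P[np.ix_(idx, idx)]
    h = np.linalg.solve(M, np.ones(n - 1))
    H = np.zeros(n); H[idx] = h
    return H

u, v = 0, 1
C_uv = hitting_times_to(v)[u] + hitting_times_to(u)[v]
approx = d.sum() * (1 / d[u] + 1 / d[v])  # vol(G) * (1/d_u + 1/d_v)
print(C_uv, approx)                       # close for large dense graphs
```
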
cs.LG cs.CC | null | 1003.1354 | null | null | http://arxiv.org/pdf/1003.1354v1 | 2010-03-06T05:49:19Z | 2010-03-06T05:49:19Z | Faster Rates for training Max-Margin Markov Networks | Structured output prediction is an important machine learning problem both in
theory and practice, and the max-margin Markov network (M3N) is an effective
approach. All state-of-the-art algorithms for optimizing M3N objectives take
at least $O(1/\epsilon)$ iterations to find an $\epsilon$-accurate
solution. Recent results in structured optimization suggest that faster rates
are possible by exploiting the structure of the objective function. Towards
this end \citet{Nesterov05} proposed an excessive gap reduction technique based
on Euclidean projections which converges in $O(1/\sqrt{\epsilon})$ iterations
on strongly convex functions. Unfortunately, when applied to M3Ns, this
approach does not admit graphical model factorization which, as in many
existing algorithms, is crucial for keeping the cost per iteration tractable.
In this paper, we present a new excessive gap reduction technique based on
Bregman projections which admits graphical model factorization naturally, and
converges in $O(1/\sqrt{\epsilon})$ iterations. Compared with existing
algorithms, the convergence rate of our method has better dependence on
$\epsilon$ and other parameters of the problem, and can be easily kernelized.
| [
"['Xinhua Zhang' 'Ankan Saha' 'S. V. N. Vishwanathan']",
"Xinhua Zhang (1), Ankan Saha (2), S.V.N. Vishwanathan (1)((1) Purdue\n University, (2) University of Chicago)"
] |
cs.GR cs.CL cs.LG | null | 1003.1410 | null | null | http://arxiv.org/pdf/1003.1410v2 | 2013-08-08T18:05:00Z | 2010-03-06T18:08:12Z | Local Space-Time Smoothing for Version Controlled Documents | Unlike static documents, version controlled documents are continuously edited
by one or more authors. Such a collaborative revision process makes traditional
modeling and visualization techniques inappropriate. In this paper we propose a
new representation based on local space-time smoothing that captures important
revision patterns. We demonstrate the applicability of our framework using
experiments on synthetic and real-world data.
| [
"Seungyeon Kim, Guy Lebanon",
"['Seungyeon Kim' 'Guy Lebanon']"
] |
cs.LG | null | 1003.1450 | null | null | http://arxiv.org/pdf/1003.1450v1 | 2010-03-07T11:08:33Z | 2010-03-07T11:08:33Z | A New Clustering Approach based on Page's Path Similarity for Navigation
Patterns Mining | In recent years, predicting the user's next request in web navigation has
received much attention. One information source for dealing with this problem
is the information left behind by previous web users, stored in the web access
logs on web servers. Systems proposed for this problem are based on the idea
that if a large number of web users request specific pages of a website in a
given session, these pages are likely satisfying similar information needs and
are therefore conceptually related. In this study, a new clustering approach
is introduced that employs the logical storage paths of a website's pages as
an additional similarity parameter capturing the conceptual relation between
web pages. Simulation results show that the proposed approach is more precise
than others in determining the clusters.
| [
"Heidar Mamosian, Amir Masoud Rahmani, Mashalla Abbasi Dezfouli",
"['Heidar Mamosian' 'Amir Masoud Rahmani' 'Mashalla Abbasi Dezfouli']"
] |
null | null | 1003.1499 | null | null | http://arxiv.org/pdf/1003.1499v1 | 2010-03-07T17:36:26Z | 2010-03-07T17:36:26Z | Evaluation of E-Learners Behaviour using Different Fuzzy Clustering
Models: A Comparative Study | This paper introduces evaluation methodologies
for e-learners' behaviour that provide feedback to the decision makers in an
e-learning system. The learner's profile plays a crucial role in the
evaluation process of improving the performance of the e-learning process. The
work focuses on clustering the e-learners, based on their behaviour, into
specific categories that represent the learners' profiles. The learners'
classes are named regular, workers, casual, bad, and absent. The work may
answer the question of how to return bad students to being regular ones. The
work presents the use of different fuzzy clustering techniques, such as fuzzy
c-means and kernelized fuzzy c-means, to find the learners' categories and
predict their profiles. The paper presents the main phases of data
description, preparation, feature selection, and experiment design using the
different fuzzy clustering models. Analysis of the obtained results and
comparison with the real-world behaviour of those learners proved that there
is a match with a percentage of 78%. Fuzzy clustering reflects the learners'
behaviour more faithfully than crisp clustering, and a comparison between FCM
and KFCM proved that KFCM is much better than FCM at predicting the learners'
behaviour. | [
"['Mofreh A. Hogo']"
] |
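
Since the abstract leans on fuzzy c-means, a minimal sketch of the algorithm may be useful: alternate soft-membership updates with weighted center updates. The fuzzifier m and cluster count are illustrative; kernelized FCM would replace the squared Euclidean distances with kernel-induced ones.

```python
import numpy as np

def fuzzy_c_means(X, c=5, m=2.0, iters=100, seed=0):
    """Minimal fuzzy c-means (a sketch). Each point gets a soft
    membership U[i, k] in every cluster k; m > 1 controls fuzziness."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))        # soft memberships
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]  # weighted means
        # Squared distance of every point to every center.
        D = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1) + 1e-12
        # Membership update: U[i,k] proportional to D[i,k]^(-1/(m-1)).
        U = D ** (-1.0 / (m - 1))
        U /= U.sum(axis=1, keepdims=True)
    return U, centers
```
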
cs.LG | null | 1003.1510 | null | null | http://arxiv.org/pdf/1003.1510v1 | 2010-03-07T18:32:47Z | 2010-03-07T18:32:47Z | Hierarchical Web Page Classification Based on a Topic Model and
Neighboring Pages Integration | Most Web page classification models typically apply the bag of words (BOW)
model to represent the feature space. The original BOW representation, however,
is unable to recognize semantic relationships between terms. One possible
solution is to apply the topic model approach based on the Latent Dirichlet
Allocation algorithm to cluster the term features into a set of latent topics.
Terms assigned into the same topic are semantically related. In this paper, we
propose a novel hierarchical classification method based on a topic model and
by integrating additional term features from neighboring pages. Our
hierarchical classification method consists of two phases: (1) feature
representation by using a topic model and integrating neighboring pages, and
(2) hierarchical Support Vector Machines (SVM) classification model constructed
from a confusion matrix. From the experimental results, the approach of using
the proposed hierarchical SVM model by integrating current page with
neighboring pages via the topic model yielded the best performance with the
accuracy equal to 90.33% and the F1 measure of 90.14%; an improvement of 5.12%
and 5.13% over the original SVM model, respectively.
| [
"Wongkot Sriurai, Phayung Meesad, Choochart Haruechaiyasak",
"['Wongkot Sriurai' 'Phayung Meesad' 'Choochart Haruechaiyasak']"
] |
cs.LG cs.IR | null | 1003.1795 | null | null | http://arxiv.org/pdf/1003.1795v1 | 2010-03-09T06:41:49Z | 2010-03-09T06:41:49Z | A Survey of Na\"ive Bayes Machine Learning approach in Text Document
Classification | Text Document classification aims in associating one or more predefined
categories based on the likelihood suggested by the training set of labeled
documents. Many machine learning algorithms play a vital role in training the
system with predefined categories among which Na\"ive Bayes has some intriguing
facts that it is simple, easy to implement and draws better accuracy in large
datasets in spite of the na\"ive dependence. The importance of Na\"ive Bayes
Machine learning approach has felt hence the study has been taken up for text
document classification and the statistical event models available. This survey
the various feature selection methods has been discussed and compared along
with the metrics related to text document classification.
| [
"['Vidhya. K. A' 'G. Aghila']",
"Vidhya. K. A, G. Aghila"
] |
cs.LG | null | 1003.2218 | null | null | http://arxiv.org/pdf/1003.2218v1 | 2010-03-10T21:53:56Z | 2010-03-10T21:53:56Z | Supermartingales in Prediction with Expert Advice | We apply the method of defensive forecasting, based on the use of
game-theoretic supermartingales, to prediction with expert advice. In the
traditional setting of a countable number of experts and a finite number of
outcomes, the Defensive Forecasting Algorithm is very close to the well-known
Aggregating Algorithm. Not only the performance guarantees but also the
predictions are the same for these two methods of fundamentally different
nature. We discuss also a new setting where the experts can give advice
conditional on the learner's future decision. Both the algorithms can be
adapted to the new setting and give the same performance guarantees as in the
traditional setting. Finally, we outline an application of defensive
forecasting to a setting with several loss functions.
| [
"Alexey Chernov, Yuri Kalnishkan, Fedor Zhdanov, Vladimir Vovk",
"['Alexey Chernov' 'Yuri Kalnishkan' 'Fedor Zhdanov' 'Vladimir Vovk']"
] |
cs.LG cs.IT cs.MM math.IT | null | 1003.2471 | null | null | http://arxiv.org/pdf/1003.2471v1 | 2010-03-12T04:07:41Z | 2010-03-12T04:07:41Z | Structure-Aware Stochastic Control for Transmission Scheduling | In this paper, we consider the problem of real-time transmission scheduling
over time-varying channels. We first formulate the transmission scheduling
problem as a Markov decision process (MDP) and systematically unravel the
structural properties (e.g. concavity in the state-value function and
monotonicity in the optimal scheduling policy) exhibited by the optimal
solutions. We then propose an online learning algorithm which preserves these
structural properties and achieves $\epsilon$-optimal solutions for an
arbitrarily small $\epsilon$. The advantages of the proposed online method are that: (i) it does not
require a priori knowledge of the traffic arrival and channel statistics and
(ii) it adaptively approximates the state-value functions using piece-wise
linear functions and has low storage and computation complexity. We also extend
the proposed low-complexity online learning solution to the prioritized data
transmission. The simulation results demonstrate that the proposed method
achieves significantly better utility (or delay)-energy trade-offs compared to
existing state-of-the-art online optimization methods.
| [
"Fangwen Fu and Mihaela van der Schaar",
"['Fangwen Fu' 'Mihaela van der Schaar']"
] |
cs.LO cs.AI cs.DB cs.LG | null | 1003.2586 | null | null | http://arxiv.org/pdf/1003.2586v1 | 2010-03-12T17:40:43Z | 2010-03-12T17:40:43Z | Inductive Logic Programming in Databases: from Datalog to DL+log | In this paper we address an issue that has been brought to the attention of
the database community with the advent of the Semantic Web, i.e. the issue of
how ontologies (and semantics conveyed by them) can help solving typical
database problems, through a better understanding of KR aspects related to
databases. In particular, we investigate this issue from the ILP perspective by
considering two database problems, (i) the definition of views and (ii) the
definition of constraints, for a database whose schema is represented also by
means of an ontology. Both can be reformulated as ILP problems and can benefit
from the expressive and deductive power of the KR framework DL+log. We
illustrate the application scenarios by means of examples. Keywords: Inductive
Logic Programming, Relational Databases, Ontologies, Description Logics, Hybrid
Knowledge Representation and Reasoning Systems. Note: To appear in Theory and
Practice of Logic Programming (TPLP).
| [
"['Francesca A. Lisi']",
"Francesca A. Lisi"
] |
cs.LG cs.CR | null | 1003.2751 | null | null | http://arxiv.org/pdf/1003.2751v1 | 2010-03-14T01:25:06Z | 2010-03-14T01:25:06Z | Near-Optimal Evasion of Convex-Inducing Classifiers | Classifiers are often used to detect miscreant activities. We study how an
adversary can efficiently query a classifier to elicit information that allows
the adversary to evade detection at near-minimal cost. We generalize results of
Lowd and Meek (2005) to convex-inducing classifiers. We present algorithms that
construct undetected instances of near-minimal cost using only polynomially
many queries in the dimension of the space and without reverse engineering the
decision boundary.
| [
"Blaine Nelson and Benjamin I. P. Rubinstein and Ling Huang and Anthony\n D. Joseph and Shing-hon Lau and Steven J. Lee and Satish Rao and Anthony Tran\n and J. D. Tygar",
"['Blaine Nelson' 'Benjamin I. P. Rubinstein' 'Ling Huang'\n 'Anthony D. Joseph' 'Shing-hon Lau' 'Steven J. Lee' 'Satish Rao'\n 'Anthony Tran' 'J. D. Tygar']"
] |
cs.LG cs.DM math.CO | null | 1003.3279 | null | null | http://arxiv.org/pdf/1003.3279v1 | 2010-03-17T01:48:56Z | 2010-03-17T01:48:56Z | A New Heuristic for Feature Selection by Consistent Biclustering | Given a set of data, biclustering aims at finding simultaneous partitions in
biclusters of its samples and of the features which are used for representing
the samples. Consistent biclusterings allow one to obtain correct classifications
of the samples from the known classification of the features, and vice versa,
and they are very useful for performing supervised classifications. The problem
of finding consistent biclusterings can be seen as a feature selection problem,
where the features that are not relevant for classification purposes are
removed from the set of data, while the total number of features is maximized
in order to preserve information. This feature selection problem can be
formulated as a linear fractional 0-1 optimization problem. We propose a
reformulation of this problem as a bilevel optimization problem, and we present
a heuristic algorithm for an efficient solution of the reformulated problem.
Computational experiments show that the presented algorithm is able to find
better solutions than those obtained by previously presented heuristic
algorithms.
| [
"Antonio Mucherino, Sonia Cafieri",
"['Antonio Mucherino' 'Sonia Cafieri']"
] |
cs.AI cs.DS cs.LG q-bio.NC | null | 1003.3821 | null | null | http://arxiv.org/pdf/1003.3821v1 | 2010-03-19T15:56:37Z | 2010-03-19T15:56:37Z | A Formal Approach to Modeling the Memory of a Living Organism | We consider a living organism as an observer of the evolution of its
environment recording sensory information about the state space X of the
environment in real time. Sensory information is sampled and then processed on
two levels. On the biological level, the organism serves as an evaluation
mechanism of the subjective relevance of the incoming data to the observer: the
observer assigns excitation values to events in X it could recognize using its
sensory equipment. On the algorithmic level, sensory input is used for updating
a database, the memory of the observer whose purpose is to serve as a
geometric/combinatorial model of X, whose nodes are weighted by the excitation
values produced by the evaluation mechanism. These values serve as a guidance
system for deciding how the database should transform as observation data
mounts. We define a searching problem for the proposed model and discuss the
model's flexibility and its computational efficiency, as well as the
possibility of implementing it as a dynamic network of neuron-like units. We
show how various easily observable properties of the human memory and thought
process can be explained within the framework of this model. These include:
reasoning (with efficiency bounds), errors, temporary and permanent loss of
information. We are also able to define general learning problems in terms of
the new model, such as the language acquisition problem.
| [
"Dan Guralnik",
"['Dan Guralnik']"
] |
cs.LG cs.AI cs.DS | null | 1003.3967 | null | null | http://arxiv.org/pdf/1003.3967v5 | 2017-12-06T08:21:07Z | 2010-03-21T04:06:22Z | Adaptive Submodularity: Theory and Applications in Active Learning and
Stochastic Optimization | Solving stochastic optimization problems under partial observability, where
one needs to adaptively make decisions with uncertain outcomes, is a
fundamental but notoriously difficult challenge. In this paper, we introduce
the concept of adaptive submodularity, generalizing submodular set functions to
adaptive policies. We prove that if a problem satisfies this property, a simple
adaptive greedy algorithm is guaranteed to be competitive with the optimal
policy. In addition to providing performance guarantees for both stochastic
maximization and coverage, adaptive submodularity can be exploited to
drastically speed up the greedy algorithm by using lazy evaluations. We
illustrate the usefulness of the concept by giving several examples of adaptive
submodular objectives arising in diverse applications including sensor
placement, viral marketing and active learning. Proving adaptive submodularity
for these problems allows us to recover existing results in these applications
as special cases, improve approximation guarantees and handle natural
generalizations.
| [
"['Daniel Golovin' 'Andreas Krause']",
"Daniel Golovin and Andreas Krause"
] |
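
The "lazy evaluations" speed-up mentioned above relies on marginal gains only shrinking as the solution grows, so stale cached gains remain valid upper bounds. A minimal sketch for plain submodular maximization (the gain function is assumed given; the adaptive-submodular setting would replace it with expected gains over observations):

```python
import heapq

def lazy_greedy(ground_set, gain, k):
    """Greedy submodular maximization with lazy evaluations (a sketch).
    gain(e, S) is the marginal value of adding e to S; by submodularity
    it can only decrease as S grows, so a cached gain is a valid upper
    bound and often needs no recomputation. Elements are assumed
    comparable for heap tie-breaking."""
    S = []
    # Max-heap of (negated cached gain, element, round it was computed in).
    heap = [(-gain(e, S), e, 0) for e in ground_set]
    heapq.heapify(heap)
    for t in range(1, k + 1):
        while True:
            neg_g, e, stamp = heapq.heappop(heap)
            if stamp == t:        # gain is fresh this round: e is the max
                S.append(e)
                break
            heapq.heappush(heap, (-gain(e, S), e, t))  # refresh lazily
    return S
```
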
cs.CE cs.LG q-bio.GN | null | 1003.4079 | null | null | http://arxiv.org/pdf/1003.4079v1 | 2010-03-22T06:31:36Z | 2010-03-22T06:31:36Z | Gene Expression Data Knowledge Discovery using Global and Local
Clustering | To understand complex biological systems, the research community has produced
huge corpus of gene expression data. A large number of clustering approaches
have been proposed for the analysis of gene expression data. However,
extracting important biological knowledge remains difficult. To address this
task, clustering techniques are used. In this paper, a hybrid hierarchical
k-means algorithm is used for clustering and biclustering gene expression
data. Biclustering and clustering algorithms are utilized to discover both
local and global clustering structure. A validation technique, the Figure of
Merit, is used to determine the quality of the clustering results. Appropriate
knowledge is mined from the clusters by embedding a BLAST similarity search
program into the clustering and biclustering process.
| [
"Swathi. H",
"['Swathi. H']"
] |
cs.GT cs.LG | 10.1016/j.geb.2012.05.002 | 1003.4274 | null | null | http://arxiv.org/abs/1003.4274v1 | 2010-03-22T20:40:28Z | 2010-03-22T20:40:28Z | Unbeatable Imitation | We show that for many classes of symmetric two-player games, the simple
decision rule "imitate-the-best" can hardly be beaten by any other decision
rule. We provide necessary and sufficient conditions for imitation to be
unbeatable and show that it can only be beaten by much in games that are of the
rock-scissors-paper variety. Thus, in many interesting examples, like 2x2
games, Cournot duopoly, price competition, rent seeking, public goods games,
common pool resource games, minimum effort coordination games, arms race,
search, bargaining, etc., imitation cannot be beaten by much even by a very
clever opponent.
| [
"Peter Duersch, Joerg Oechssler, Burkhard C. Schipper",
"['Peter Duersch' 'Joerg Oechssler' 'Burkhard C. Schipper']"
] |
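
A tiny simulation makes the rule concrete: in a Cournot duopoly, the imitator copies the quantity of whichever player earned more last period, even against a best-responding opponent. All payoff parameters below are illustrative.

```python
def cournot_profit(q_own, q_other, a=100.0, c=10.0):
    """Profit in a linear Cournot duopoly: price = a - (q_own + q_other)."""
    price = max(a - q_own - q_other, 0.0)
    return (price - c) * q_own

def best_response(q_other, a=100.0, c=10.0):
    """Myopic best response of a hypothetical clever opponent."""
    return max((a - c - q_other) / 2.0, 0.0)

def simulate(q_imit=10.0, q_opp=30.0, periods=50):
    """'Imitate-the-best': next period the imitator plays the quantity
    of whichever player (itself included) earned more this period."""
    for _ in range(periods):
        pi_imit = cournot_profit(q_imit, q_opp)
        pi_opp = cournot_profit(q_opp, q_imit)
        q_imit_next = q_opp if pi_opp > pi_imit else q_imit
        q_opp = best_response(q_imit)
        q_imit = q_imit_next
    return q_imit, q_opp
```
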
cs.LG cs.AI cs.CV | null | 1003.4781 | null | null | http://arxiv.org/pdf/1003.4781v1 | 2010-03-25T02:21:11Z | 2010-03-25T02:21:11Z | Large Margin Boltzmann Machines and Large Margin Sigmoid Belief Networks | Current statistical models for structured prediction make simplifying
assumptions about the underlying output graph structure, such as assuming a
low-order Markov chain, because exact inference becomes intractable as the
tree-width of the underlying graph increases. Approximate inference algorithms,
on the other hand, force one to trade off representational power with
computational efficiency. In this paper, we propose two new types of
probabilistic graphical models, large margin Boltzmann machines (LMBMs) and
large margin sigmoid belief networks (LMSBNs), for structured prediction.
LMSBNs in particular allow a very fast inference algorithm for arbitrary graph
structures that runs in polynomial time with a high probability. This
probability is data-distribution dependent and is maximized in learning. The
new approach overcomes the representation-efficiency trade-off in previous
models and allows fast structured prediction with complicated graph structures.
We present results from applying a fully connected model to multi-label scene
classification and demonstrate that the proposed approach can yield significant
performance gains over current state-of-the-art methods.
| [
"Xu Miao, Rajesh P.N. Rao",
"['Xu Miao' 'Rajesh P. N. Rao']"
] |
stat.ML cs.LG | null | 1003.4944 | null | null | http://arxiv.org/pdf/1003.4944v1 | 2010-03-25T16:12:48Z | 2010-03-25T16:12:48Z | Incorporating Side Information in Probabilistic Matrix Factorization
with Gaussian Processes | Probabilistic matrix factorization (PMF) is a powerful method for modeling
data associated with pairwise relationships, finding use in collaborative
filtering, computational biology, and document analysis, among other areas. In
many domains, there is additional information that can assist in prediction.
For example, when modeling movie ratings, we might know when the rating
occurred, where the user lives, or what actors appear in the movie. It is
difficult, however, to incorporate this side information into the PMF model. We
propose a framework for incorporating side information by coupling together
multiple PMF problems via Gaussian process priors. We replace scalar latent
features with functions that vary over the space of side information. The GP
priors on these functions require them to vary smoothly and share information.
We successfully use this new method to predict the scores of professional
basketball games, where side information about the venue and date of the game
are relevant for the outcome.
| [
"['Ryan Prescott Adams' 'George E. Dahl' 'Iain Murray']",
"Ryan Prescott Adams, George E. Dahl, Iain Murray"
] |
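
For context, the base PMF model that the paper couples via Gaussian process priors can be fit with a few lines of SGD. The sketch below shows only that base model (hyperparameters are illustrative), with the L2 penalty playing the role of the Gaussian priors on the latent factors.

```python
import numpy as np

def pmf_sgd(ratings, n_users, n_items, rank=10, lr=0.01, reg=0.05,
            epochs=30, seed=0):
    """Plain probabilistic matrix factorization fit by SGD (a sketch of
    the base model the paper extends). ratings is a list of
    (user, item, value) triples; reg corresponds to Gaussian priors."""
    rng = np.random.default_rng(seed)
    U = 0.1 * rng.standard_normal((n_users, rank))
    V = 0.1 * rng.standard_normal((n_items, rank))
    for _ in range(epochs):
        for u, i, r in ratings:
            err = r - U[u] @ V[i]               # prediction residual
            U[u] += lr * (err * V[i] - reg * U[u])
            V[i] += lr * (err * U[u] - reg * V[i])
    return U, V
```
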
cs.SD cs.LG | null | 1003.5623 | null | null | http://arxiv.org/pdf/1003.5623v1 | 2010-03-29T17:48:22Z | 2010-03-29T17:48:22Z | Spoken Language Identification Using Hybrid Feature Extraction Methods | This paper introduces and motivates the use of hybrid robust feature
extraction techniques for a spoken language identification (LID) system.
Speech recognizers use a parametric form of the signal to capture the most
important distinguishing features of the speech signal for the recognition
task. In this paper,
Mel-frequency cepstral coefficients (MFCC), Perceptual linear prediction
coefficients (PLP) along with two hybrid features are used for language
Identification. Two hybrid features, Bark Frequency Cepstral Coefficients
(BFCC) and Revised Perceptual Linear Prediction Coefficients (RPLP) were
obtained from combination of MFCC and PLP. Two different classifiers, Vector
Quantization (VQ) with Dynamic Time Warping (DTW) and Gaussian Mixture Model
(GMM) were used for classification. The experiment shows better identification
rate using hybrid feature extraction techniques compared to conventional
feature extraction methods. BFCC has shown better performance than MFCC with
both classifiers. RPLP along with GMM has shown best identification performance
among all feature extraction techniques.
| [
"Pawan Kumar, Astik Biswas, A .N. Mishra and Mahesh Chandra",
"['Pawan Kumar' 'Astik Biswas' 'A . N. Mishra' 'Mahesh Chandra']"
] |
cs.SD cs.LG | null | 1003.5627 | null | null | http://arxiv.org/pdf/1003.5627v1 | 2010-03-29T17:54:55Z | 2010-03-29T17:54:55Z | Wavelet-Based Mel-Frequency Cepstral Coefficients for Speaker
Identification using Hidden Markov Models | To improve the performance of speaker identification systems, an effective
and robust method is proposed to extract speech features, capable of operating
in noisy environment. Based on the time-frequency multi-resolution property of
wavelet transform, the input speech signal is decomposed into various frequency
channels. For capturing the characteristic of the signal, the Mel-Frequency
Cepstral Coefficients (MFCCs) of the wavelet channels are calculated. Hidden
Markov Models (HMMs) were used for the recognition stage as they give better
recognition for the speaker's features than Dynamic Time Warping (DTW).
Comparison of the proposed approach with the MFCCs conventional feature
extraction method shows that the proposed method not only effectively reduces
the influence of noise, but also improves recognition. A recognition rate of
99.3% was obtained using the proposed feature extraction technique compared to
98.7% using the MFCCs. When the test patterns were corrupted by additive white
Gaussian noise with 20 dB S/N ratio, the recognition rate was 97.3% using the
proposed method compared to 93.3% using the MFCCs.
| [
"Mahmoud I. Abdalla and Hanaa S. Ali",
"['Mahmoud I. Abdalla' 'Hanaa S. Ali']"
] |
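
One simple reading of the described pipeline is: decompose the signal with a discrete wavelet transform, then compute MFCCs per sub-band and stack them. The sketch below assumes the PyWavelets and librosa packages are available; the wavelet family, decomposition level, and the use of raw sub-band coefficients (rather than reconstructed channel signals) are simplifying assumptions.

```python
import numpy as np
import pywt          # PyWavelets, assumed available
import librosa       # assumed available

def wavelet_mfcc(signal, sr, wavelet="db4", level=3, n_mfcc=13):
    """Sketch of wavelet-based MFCC features: decompose the speech
    signal into wavelet sub-band channels, then take the MFCCs of each
    channel and stack their frame averages (choices are illustrative)."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    feats = []
    for band in coeffs:
        mfcc = librosa.feature.mfcc(y=band.astype(np.float32), sr=sr,
                                    n_mfcc=n_mfcc)
        feats.append(mfcc.mean(axis=1))   # average over frames per band
    return np.concatenate(feats)
```
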
cs.LG cs.CL | null | 1003.5749 | null | null | http://arxiv.org/pdf/1003.5749v1 | 2010-03-30T07:04:46Z | 2010-03-30T07:04:46Z | Etiqueter un corpus oral par apprentissage automatique \`a l'aide de
connaissances linguistiques | Thanks to the Eslo1 ("Enqu\^ete sociolinguistique d'Orl\'eans", i.e.
"Sociolinguistic Inquiery of Orl\'eans") campain, a large oral corpus has been
gathered and transcribed in a textual format. The purpose of the work presented
here is to associate a morpho-syntactic label to each unit of this corpus. To
this aim, we have first studied the specificities of the necessary labels, and
their various possible levels of description. This study has led to a new
original hierarchical structuration of labels. Then, considering that our new
set of labels was different from the one used in any available software, and
that such software is usually not suited to oral data, we built a new
labeling tool by a Machine Learning approach, from data labeled by Cordial and
corrected by hand. We have applied linear CRF (Conditional Random Fields)
trying to take the best possible advantage of the linguistic knowledge that was
used to define the set of labels. We obtain an accuracy between 85 and 90%,
depending on the parameters used.
| [
"Iris Eshkol (CORAL), Isabelle Tellier (LIFO), Taalab Samer (LIFO),\n Sylvie Billot (LIFO)",
"['Iris Eshkol' 'Isabelle Tellier' 'Taalab Samer' 'Sylvie Billot']"
] |
cs.CV cs.LG | null | 1003.5865 | null | null | http://arxiv.org/pdf/1003.5865v1 | 2010-03-30T16:36:36Z | 2010-03-30T16:36:36Z | Offline Signature Identification by Fusion of Multiple Classifiers using
Statistical Learning Theory | This paper uses Support Vector Machines (SVM) to fuse multiple classifiers
for an offline signature system. From the signature images, global and local
features are extracted and the signatures are verified with the help of
Gaussian empirical rule, Euclidean and Mahalanobis distance based classifiers.
SVM is used to fuse the matching scores of these matchers. Finally, query
signatures are recognized by comparing them with all the signatures in the
database. The proposed system is tested on a signature database containing 5400
offline signatures of 600 individuals, and the results are found to be
promising.
| [
"Dakshina Ranjan Kisku, Phalguni Gupta, Jamuna Kanta Sing",
"['Dakshina Ranjan Kisku' 'Phalguni Gupta' 'Jamuna Kanta Sing']"
] |
cs.LG cs.AI cs.RO stat.ML | 10.1145/1935826.1935878 | 1003.5956 | null | null | http://arxiv.org/abs/1003.5956v2 | 2012-03-01T23:33:07Z | 2010-03-31T01:20:07Z | Unbiased Offline Evaluation of Contextual-bandit-based News Article
Recommendation Algorithms | Contextual bandit algorithms have become popular for online recommendation
systems such as Digg, Yahoo! Buzz, and news recommendation in general.
\emph{Offline} evaluation of the effectiveness of new algorithms in these
applications is critical for protecting online user experiences but very
challenging due to their "partial-label" nature. Common practice is to create a
simulator which simulates the online environment for the problem at hand and
then run an algorithm against this simulator. However, creating the simulator
itself is often difficult, and modeling bias is usually unavoidably introduced.
In this paper, we introduce a \emph{replay} methodology for contextual bandit
algorithm evaluation. Different from simulator-based approaches, our method is
completely data-driven and very easy to adapt to different applications. More
importantly, our method can provide provably unbiased evaluations. Our
empirical results on a large-scale news article recommendation dataset
collected from Yahoo! Front Page conform well with our theoretical results.
Furthermore, comparisons between our offline replay and online bucket
evaluation of several contextual bandit algorithms show the accuracy and
effectiveness of our offline evaluation method.
| [
"['Lihong Li' 'Wei Chu' 'John Langford' 'Xuanhui Wang']",
"Lihong Li and Wei Chu and John Langford and Xuanhui Wang"
] |
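The replay idea above is simple enough to sketch. The following is a minimal illustration, assuming the logging policy chose arms uniformly at random (the condition under which the estimate is unbiased); `RandomPolicy` and the toy log are hypothetical stand-ins, not the paper's code.

```python
import random

def replay_evaluate(policy, logged_events):
    """Replay evaluation of a (possibly learning) bandit policy on data
    logged by a uniformly random policy.  Each event is a
    (context, logged_arm, reward) triple; an event counts only when the
    policy picks the same arm that was logged, which keeps the estimate
    unbiased under uniform logging."""
    history, total_reward, n_matched = [], 0.0, 0
    for context, logged_arm, reward in logged_events:
        if policy.choose(context, history) == logged_arm:
            history.append((context, logged_arm, reward))
            total_reward += reward
            n_matched += 1
    return total_reward / max(n_matched, 1)   # average per-trial reward

class RandomPolicy:
    """Baseline: picks an arm uniformly at random, ignoring history."""
    def __init__(self, n_arms):
        self.n_arms = n_arms
    def choose(self, context, history):
        return random.randrange(self.n_arms)

# Toy log: 3 arms with Bernoulli rewards; the logging policy was uniform.
random.seed(0)
log = [(None, a, int(random.random() < [0.2, 0.4, 0.6][a]))
       for a in (random.randrange(3) for _ in range(3000))]
print(replay_evaluate(RandomPolicy(3), log))
```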
cs.CV cs.LG | null | 1004.0378 | null | null | http://arxiv.org/pdf/1004.0378v7 | 2012-07-20T01:21:59Z | 2010-04-02T19:26:47Z | Facial Expression Representation and Recognition Using 2DHLDA, Gabor
Wavelets, and Ensemble Learning | In this paper, a novel method for representation and recognition of the
facial expressions in two-dimensional image sequences is presented. We apply a
variation of two-dimensional heteroscedastic linear discriminant analysis
(2DHLDA) algorithm, as an efficient dimensionality reduction technique, to
Gabor representation of the input sequence. 2DHLDA is an extension of the
two-dimensional linear discriminant analysis (2DLDA) approach that removes
2DLDA's assumption of equal within-class covariance. By applying 2DHLDA in two
directions, we
eliminate the correlations between both image columns and image rows. Then, we
perform a one-dimensional LDA on the new features. This combined method can
alleviate the small sample size problem and instability encountered by HLDA.
Also, employing both geometric and appearance features and using an ensemble
learning scheme based on data fusion, we create a classifier which can
efficiently classify the facial expressions. The proposed method is robust to
illumination changes and it can properly represent temporal information as well
as subtle changes in facial muscles. Experiments on the Cohn-Kanade
database show the superiority of the proposed method. KEYWORDS:
two-dimensional heteroscedastic linear discriminant analysis (2DHLDA), subspace
learning, facial expression analysis, Gabor wavelets, ensemble learning.
| [
"['Mahmoud Khademi' 'Mohammad H. Kiapour' 'Mehran Safayani'\n 'Mohammad T. Manzuri' 'M. Shojaei']",
"Mahmoud Khademi, Mohammad H. Kiapour, Mehran Safayani, Mohammad T.\n Manzuri, and M. Shojaei"
] |
stat.ML cs.LG | 10.1016/j.neucom.2009.11.022 | 1004.0456 | null | null | http://arxiv.org/abs/1004.0456v1 | 2010-04-03T16:28:47Z | 2010-04-03T16:28:47Z | Exploratory Analysis of Functional Data via Clustering and Optimal
Segmentation | We propose in this paper an exploratory analysis algorithm for functional
data. The method partitions a set of functions into $K$ clusters and represents
each cluster by a simple prototype (e.g., piecewise constant). The total number
of segments in the prototypes, $P$, is chosen by the user and optimally
distributed among the clusters via two dynamic programming algorithms. The
practical relevance of the method is shown on two real world datasets.
| [
"Georges H\\'ebrail and Bernard Hugueney and Yves Lechevallier and\n Fabrice Rossi",
"['Georges Hébrail' 'Bernard Hugueney' 'Yves Lechevallier' 'Fabrice Rossi']"
] |
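The segment-allocation step relies on dynamic programming; a minimal sketch of the inner problem — optimally placing a fixed number of piecewise-constant segments on one sampled function under squared error — is shown below. The paper's algorithms additionally distribute the $P$ segments across the $K$ cluster prototypes; that outer loop is omitted here.

```python
import numpy as np

def optimal_segmentation(y, n_segments):
    """Optimal piecewise-constant approximation of a sampled function `y`
    with `n_segments` pieces, by dynamic programming on the squared error.
    Returns segment boundaries as end indices (exclusive)."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    csum = np.concatenate(([0.0], np.cumsum(y)))
    csum2 = np.concatenate(([0.0], np.cumsum(y ** 2)))
    def cost(i, j):
        # squared error of fitting one constant (the mean) to y[i:j]
        s, s2, m = csum[j] - csum[i], csum2[j] - csum2[i], j - i
        return s2 - s * s / m
    INF = float('inf')
    dp = np.full((n_segments + 1, n + 1), INF)   # dp[p][j]: best error for y[:j], p pieces
    cut = np.zeros((n_segments + 1, n + 1), dtype=int)
    dp[0][0] = 0.0
    for p in range(1, n_segments + 1):
        for j in range(p, n + 1):
            for i in range(p - 1, j):
                c = dp[p - 1][i] + cost(i, j)
                if c < dp[p][j]:
                    dp[p][j], cut[p][j] = c, i
    bounds, j = [], n                            # backtrack the optimal cuts
    for p in range(n_segments, 0, -1):
        bounds.append(j)
        j = cut[p][j]
    return sorted(bounds)

y = [0, 0, 0, 5, 5, 5, 5, 2, 2, 2]
print(optimal_segmentation(y, 3))   # -> [3, 7, 10]
```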
cs.CV cs.LG | null | 1004.0515 | null | null | http://arxiv.org/pdf/1004.0515v1 | 2010-04-04T16:23:53Z | 2010-04-04T16:23:53Z | Recognizing Combinations of Facial Action Units with Different Intensity
Using a Mixture of Hidden Markov Models and Neural Network | Facial Action Coding System consists of 44 action units (AUs) and more than
7000 combinations. Hidden Markov models (HMMs) classifier has been used
successfully to recognize facial action units (AUs) and expressions due to its
ability to deal with AU dynamics. However, a separate HMM is necessary for each
single AU and each AU combination. Since combinations of AU numbering in
thousands, a more efficient method will be needed. In this paper an accurate
real-time sequence-based system for representation and recognition of facial
AUs is presented. Our system has the following characteristics: 1) employing a
mixture of HMMs and neural network, we develop a novel accurate classifier,
which can deal with AU dynamics, recognize subtle changes, and it is also
robust to intensity variations, 2) although we use an HMM for each single AU
only, by employing a neural network we can recognize each single and
combination AU, and 3) using both geometric and appearance-based features, and
applying efficient dimension reduction techniques, our system is robust to
illumination changes and it can represent the temporal information involved in
formation of the facial expressions. Extensive experiments on the Cohn-Kanade
database show the superiority of the proposed method in comparison with other
classifiers. Keywords: classifier design and evaluation, data fusion, facial
action units (AUs), hidden Markov models (HMMs), neural network (NN).
| [
"Mahmoud Khademi, Mohammad T. Manzuri-Shalmani, Mohammad H. Kiapour,\n and Ali A. Kiaei",
"['Mahmoud Khademi' 'Mohammad T. Manzuri-Shalmani' 'Mohammad H. Kiapour'\n 'Ali A. Kiaei']"
] |
cs.CV cs.LG | null | 1004.0517 | null | null | http://arxiv.org/pdf/1004.0517v1 | 2010-04-04T16:40:39Z | 2010-04-04T16:40:39Z | Multilinear Biased Discriminant Analysis: A Novel Method for Facial
Action Unit Representation | In this paper a novel efficient method for representation of facial action
units by encoding an image sequence as a fourth-order tensor is presented. The
multilinear tensor-based extension of the biased discriminant analysis (BDA)
algorithm, called multilinear biased discriminant analysis (MBDA), is first
proposed. Then, we apply the MBDA and two-dimensional BDA (2DBDA) algorithms,
as the dimensionality reduction techniques, to Gabor representations and the
geometric features of the input image sequence respectively. The proposed
scheme can deal with the asymmetry between positive and negative samples as
well as the curse of dimensionality. Extensive experiments on the Cohn-Kanade
database show the superiority of the proposed method for representing the
subtle changes and the temporal information involved in the formation of facial
expressions. As an accurate tool, this representation can be applied to many
areas such as recognition of spontaneous and deliberate facial expressions,
multimodal/multimedia human-computer interaction, and lie detection efforts.
| [
"['Mahmoud Khademi' 'Mehran Safayani' 'Mohammad T. Manzuri-Shalmani']",
"Mahmoud Khademi, Mehran Safayani, and Mohammad T. Manzuri-Shalmani"
] |
cs.LG cs.CR cs.NI | null | 1004.0567 | null | null | http://arxiv.org/pdf/1004.0567v1 | 2010-04-05T06:12:47Z | 2010-04-05T06:12:47Z | Using Rough Set and Support Vector Machine for Network Intrusion
Detection | The main function of an IDS (Intrusion Detection System) is to protect the
system by analyzing and predicting the behaviors of users, which are then
classified as either attacks or normal behavior. Though IDSs have been
developed for many years, the large number of returned alert messages makes it
inefficient for managers to maintain the system. In this paper, we use RST
(Rough Set Theory) and SVM (Support Vector Machine) to detect intrusions.
First, RST is used to preprocess the data and reduce the dimensionality. Next,
the features selected by RST are sent to the SVM model for learning and
testing, respectively. The method effectively decreases the space density of
the data. The experiments compare the results of different methods and show
that the RST and SVM scheme improves the false positive rate and accuracy.
| [
"['Rung-Ching Chen' 'Kai-Fan Cheng' 'Chia-Fen Hsieh']",
"Rung-Ching Chen, Kai-Fan Cheng and Chia-Fen Hsieh (Chaoyang University\n of Technology, Taiwan)"
] |
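A rough sketch of the two-stage pipeline follows. One caveat: a mutual-information filter stands in here for the rough-set reduct, since RST itself is not implemented; the synthetic dataset and all parameters are illustrative only.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for connection records (e.g. KDD-style features).
X, y = make_classification(n_samples=2000, n_features=40, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Step 1: dimensionality reduction before the SVM.  A mutual-information
# filter is used here as a simple stand-in for the rough-set (RST) reduct.
selector = SelectKBest(mutual_info_classif, k=10).fit(X_tr, y_tr)

# Step 2: train and test an SVM on the reduced feature set.
svm = SVC(kernel='rbf', gamma='scale').fit(selector.transform(X_tr), y_tr)
print("accuracy on reduced features:",
      svm.score(selector.transform(X_te), y_te))
```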
cs.CV cs.LG | null | 1004.0755 | null | null | http://arxiv.org/pdf/1004.0755v1 | 2010-04-06T02:27:58Z | 2010-04-06T02:27:58Z | Extended Two-Dimensional PCA for Efficient Face Representation and
Recognition | In this paper a novel method called Extended Two-Dimensional PCA (E2DPCA) is
proposed which is an extension to the original 2DPCA. We state that the
covariance matrix of 2DPCA is equivalent to the average of the main diagonal of
the covariance matrix of PCA. This implies that 2DPCA eliminates some
covariance information that can be useful for recognition. E2DPCA instead of
just using the main diagonal considers a radius of r diagonals around it and
expands the averaging so as to include the covariance information within those
diagonals. The parameter r unifies PCA and 2DPCA. r = 1 produces the covariance
of 2DPCA, r = n that of PCA. Hence, by controlling r it is possible to control
the trade-offs between recognition accuracy and energy compression (fewer
coefficients), and between training and recognition complexity. Experiments on
the ORL face database show improvement in both recognition accuracy and
recognition time over the original 2DPCA.
| [
"['Mehran Safayani' 'Mohammad T. Manzuri-Shalmani' 'Mahmoud Khademi']",
"Mehran Safayani, Mohammad T. Manzuri-Shalmani, Mahmoud Khademi"
] |
cs.IT cs.LG math.IT | null | 1004.1003 | null | null | http://arxiv.org/pdf/1004.1003v1 | 2010-04-07T05:25:48Z | 2010-04-07T05:25:48Z | Message-Passing Inference on a Factor Graph for Collaborative Filtering | This paper introduces a novel message-passing (MP) framework for the
collaborative filtering (CF) problem associated with recommender systems. We
model the movie-rating prediction problem popularized by the Netflix Prize,
using a probabilistic factor graph model and study the model by deriving
generalization error bounds in terms of the training error. Based on the model,
we develop a new MP algorithm, termed IMP, for learning the model. To show the
superiority of the IMP algorithm, we compare it with the closely related
expectation-maximization (EM) based algorithm and a number of other matrix
completion algorithms. Our simulation results on Netflix data show that, while
the methods perform similarly with large amounts of data, the IMP algorithm is
superior for small amounts of data. This improves the cold-start problem of the
CF systems in practice. Another advantage of the IMP algorithm is that it can
be analyzed using the technique of density evolution (DE) that was originally
developed for MP decoding of error-correcting codes.
| [
"['Byung-Hak Kim' 'Arvind Yedla' 'Henry D. Pfister']",
"Byung-Hak Kim, Arvind Yedla, and Henry D. Pfister"
] |
cs.LG cond-mat.stat-mech cs.AI cs.IT math.IT | null | 1004.1061 | null | null | http://arxiv.org/pdf/1004.1061v1 | 2010-04-07T11:52:25Z | 2010-04-07T11:52:25Z | On Tsallis Entropy Bias and Generalized Maximum Entropy Models | In density estimation task, maximum entropy model (Maxent) can effectively
use reliable prior information via certain constraints, i.e., linear
constraints without empirical parameters. However, reliable prior information
is often insufficient, and the selection of uncertain constraints becomes
necessary but poses considerable implementation complexity. Improper setting of
uncertain constraints can result in overfitting or underfitting. To solve this
problem, a generalization of Maxent, under Tsallis entropy framework, is
proposed. The proposed method introduces a convex quadratic constraint for the
correction of (expected) Tsallis entropy bias (TEB). Specifically, we
demonstrate that the expected Tsallis entropy of sampling distributions is
smaller than the Tsallis entropy of the underlying real distribution. This
expected entropy reduction is exactly the (expected) TEB, which can be
expressed by a closed-form formula and act as a consistent and unbiased
correction. TEB indicates that the entropy of a specific sampling distribution
should be increased accordingly. This entails a quantitative re-interpretation
of the Maxent principle. By compensating TEB and meanwhile forcing the
resulting distribution to be close to the sampling distribution, our
generalized TEBC Maxent can be expected to alleviate the overfitting and
underfitting. We also present a connection between TEB and Lidstone estimator.
As a result, TEB-Lidstone estimator is developed by analytically identifying
the rate of probability correction in Lidstone. Extensive empirical evaluation
shows promising performance of both TEBC Maxent and TEB-Lidstone in comparison
with various state-of-the-art density estimation methods.
| [
"Yuexian Hou, Tingxu Yan, Peng Zhang, Dawei Song, Wenjie Li",
"['Yuexian Hou' 'Tingxu Yan' 'Peng Zhang' 'Dawei Song' 'Wenjie Li']"
] |
cs.LG cs.AI | null | 1004.1230 | null | null | http://arxiv.org/pdf/1004.1230v1 | 2010-04-08T03:06:24Z | 2010-04-08T03:06:24Z | Ontology-supported processing of clinical text using medical knowledge
integration for multi-label classification of diagnosis coding | This paper discusses the integration of clinical information
extracted from a distributed medical ontology in order to improve a machine
learning-based multi-label coding assignment system. The proposed approach is
implemented using a decision tree based cascading hierarchical technique on
university hospital data for patients with Coronary Heart Disease (CHD). The
preliminary results obtained are satisfactory.
| [
"Phanu Waraporn, Phayung Meesad, Gareth Clayton",
"['Phanu Waraporn' 'Phayung Meesad' 'Gareth Clayton']"
] |
cs.LG cs.IR | null | 1004.1743 | null | null | http://arxiv.org/pdf/1004.1743v1 | 2010-04-10T21:58:16Z | 2010-04-10T21:58:16Z | An Analytical Study on Behavior of Clusters Using K Means, EM and K*
Means Algorithm | Clustering is an unsupervised learning method that constitutes a cornerstone
of an intelligent data analysis process. It is used for the exploration of
inter-relationships among a collection of patterns, by organizing them into
homogeneous clusters. Clustering has been applied to a variety of tasks in the
field of Information Retrieval (IR) and has become one of the most active areas
of research and development. Clustering attempts to discover the set of
meaningful groups in which the items within each group are more closely
related to one another than to those assigned to different groups. The
resultant clusters can provide a structure for organizing large bodies of text
for efficient browsing and searching. A wide variety of clustering algorithms
has been intensively studied for the clustering problem. Among the most common
and effective, the iterative optimization clustering algorithms have
demonstrated reasonable performance, e.g. the Expectation Maximization (EM)
algorithm and its variants, and the well-known k-means algorithm. This paper
presents an analysis of how the partitional clustering techniques EM, K-means
and K*-means work on the heartspect dataset with respect to the following
measures: purity, entropy, CPU time, cluster-wise analysis, mean value
analysis, and inter-cluster distance. The paper finally provides experimental
results for five clusters, which strengthen the conclusion that the quality of
the clusters produced by the EM algorithm is far better than that of the
k-means and K*-means algorithms.
| [
"['G. Nathiya' 'S. C. Punitha' 'M. Punithavalli']",
"G. Nathiya, S. C. Punitha, M. Punithavalli"
] |
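Two of the evaluation measures named above, purity and entropy, are easy to state in code. A minimal sketch using the standard definitions (not necessarily the exact variants used in the paper):

```python
import numpy as np
from collections import Counter

def purity_and_entropy(labels, clusters):
    """Cluster purity and entropy. `labels` are true classes,
    `clusters` are assigned cluster ids."""
    n, purity, entropy = len(labels), 0.0, 0.0
    for c in set(clusters):
        members = [l for l, k in zip(labels, clusters) if k == c]
        counts = np.array(list(Counter(members).values()), dtype=float)
        p = counts / counts.sum()
        purity += counts.max() / n                         # majority-class share
        entropy += (counts.sum() / n) * -(p * np.log2(p)).sum()  # size-weighted
    return purity, entropy

labels   = [0, 0, 0, 1, 1, 1, 2, 2, 2]
clusters = [0, 0, 1, 1, 1, 1, 2, 2, 2]
print(purity_and_entropy(labels, clusters))  # higher purity / lower entropy = better
```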
cs.LG | null | 1004.1982 | null | null | http://arxiv.org/pdf/1004.1982v1 | 2010-04-09T09:36:28Z | 2010-04-09T09:36:28Z | State-Space Dynamics Distance for Clustering Sequential Data | This paper proposes a novel similarity measure for clustering sequential
data. We first construct a common state-space by training a single
probabilistic model with all the sequences in order to get a unified
representation for the dataset. Then, distances are obtained attending to the
transition matrices induced by each sequence in that state-space. This approach
solves some of the usual overfitting and scalability issues of the existing
semi-parametric techniques, that rely on training a model for each sequence.
Empirical studies on both synthetic and real-world datasets illustrate the
advantages of the proposed similarity measure for clustering sequences.
| [
"['Darío García-García' 'Emilio Parrado-Hernández' 'Fernando Díaz-de-María']",
"Dar\\'io Garc\\'ia-Garc\\'ia and Emilio Parrado-Hern\\'andez and Fernando\n D\\'iaz-de-Mar\\'ia"
] |
cs.NE cs.DC cs.LG | null | 1004.1997 | null | null | http://arxiv.org/pdf/1004.1997v1 | 2010-04-12T16:12:41Z | 2010-04-12T16:12:41Z | An optimized recursive learning algorithm for three-layer feedforward
neural networks for mimo nonlinear system identifications | Back-propagation with gradient method is the most popular learning algorithm
for feed-forward neural networks. However, it is critical to determine a proper
fixed learning rate for the algorithm. In this paper, an optimized recursive
algorithm for online learning is derived analytically, based on matrix
operations and optimization methods, which avoids the trouble of selecting a
proper learning rate for the gradient method. A proof of weak convergence of
the proposed algorithm is also given. Although this approach is proposed for
three-layer, feed-forward neural networks, it could be extended to multiple
layer feed-forward neural networks. The effectiveness of the proposed
algorithms applied to the identification of behavior of a two-input and
two-output non-linear dynamic system is demonstrated by simulation experiments.
| [
"['Daohang Sha' 'Vladimir B. Bajic']",
"Daohang Sha, Vladimir B. Bajic"
] |
cs.LG cs.AI cs.SY math.OC stat.ML | null | 1004.2027 | null | null | http://arxiv.org/pdf/1004.2027v2 | 2011-09-06T20:23:59Z | 2010-04-12T19:09:43Z | Dynamic Policy Programming | In this paper, we propose a novel policy iteration method, called dynamic
policy programming (DPP), to estimate the optimal policy in
infinite-horizon Markov decision processes. We prove finite-iteration and
asymptotic $\ell_\infty$-norm performance-loss bounds for DPP in the presence of
approximation/estimation error. The bounds are expressed in terms of the
$\ell_\infty$-norm of the average accumulated error, as opposed to the
$\ell_\infty$-norm of the error in the case of standard approximate value
iteration (AVI) and approximate policy iteration (API). This suggests that DPP
can achieve a better performance than AVI and API, since it averages out the
simulation noise caused by Monte-Carlo sampling throughout the learning
process. We examine these theoretical results numerically by comparing the
approximate variants of DPP with existing reinforcement learning (RL) methods
on different problem domains. Our results show that, in all cases, DPP-based
algorithms outperform other RL methods by a wide margin.
| [
"Mohammad Gheshlaghi Azar, Vicenc Gomez and Hilbert J. Kappen",
"['Mohammad Gheshlaghi Azar' 'Vicenc Gomez' 'Hilbert J. Kappen']"
] |
cs.LG | null | 1004.2316 | null | null | http://arxiv.org/pdf/1004.2316v2 | 2010-10-14T01:55:02Z | 2010-04-14T05:08:48Z | Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable
Information Criterion in Singular Learning Theory | In regular statistical models, the leave-one-out cross-validation is
asymptotically equivalent to the Akaike information criterion. However, since
many learning machines are singular statistical models, the asymptotic behavior
of the cross-validation remains unknown. In previous studies, we established
the singular learning theory and proposed a widely applicable information
criterion, the expectation value of which is asymptotically equal to the
average Bayes generalization loss. In the present paper, we theoretically
compare the Bayes cross-validation loss and the widely applicable information
criterion and prove two theorems. First, the Bayes cross-validation loss is
asymptotically equivalent to the widely applicable information criterion as a
random variable. Therefore, model selection and hyperparameter optimization
using these two values are asymptotically equivalent. Second, the sum of the
Bayes generalization error and the Bayes cross-validation error is
asymptotically equal to $2\lambda/n$, where $\lambda$ is the real log canonical
threshold and $n$ is the number of training samples. Therefore the relation
between the cross-validation error and the generalization error is determined
by the algebraic geometrical structure of a learning machine. We also clarify
that the deviance information criteria are different from the Bayes
cross-validation and the widely applicable information criterion.
| [
"['Sumio Watanabe']",
"Sumio Watanabe"
] |
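The asymptotic equivalence can be checked numerically on a toy model. The sketch below uses a conjugate normal-mean model (an assumption made here for convenience, not taken from the paper) and compares the widely applicable information criterion, computed from posterior samples, with the exact Bayes leave-one-out cross-validation loss:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, prior_var = 1.0, 10.0          # known noise sd; N(0, prior_var) prior on the mean
x = rng.normal(0.7, sigma, size=50)   # training samples
n = len(x)

def posterior(data):
    """Conjugate posterior N(m, v) for the mean of a normal with known sigma."""
    v = 1.0 / (1.0 / prior_var + len(data) / sigma**2)
    return v * data.sum() / sigma**2, v

def log_pred(xi, mu):
    return -0.5 * np.log(2 * np.pi * sigma**2) - (xi - mu)**2 / (2 * sigma**2)

# --- WAIC from posterior samples (training loss + functional variance / n) ---
m, v = posterior(x)
mus = rng.normal(m, np.sqrt(v), size=20000)      # posterior draws
ll = log_pred(x[:, None], mus[None, :])          # n x S log-likelihood matrix
train_loss = -np.mean(np.log(np.mean(np.exp(ll), axis=1)))
functional_var = np.sum(np.var(ll, axis=1))
waic = train_loss + functional_var / n

# --- exact Bayes leave-one-out cross-validation loss (conjugate refits) ---
cv = 0.0
for i in range(n):
    mi, vi = posterior(np.delete(x, i))
    # posterior predictive for the held-out point is N(mi, vi + sigma^2)
    cv -= (-0.5 * np.log(2 * np.pi * (vi + sigma**2))
           - (x[i] - mi)**2 / (2 * (vi + sigma**2)))
print(waic, cv / n)   # the two agree closely, as the theorems predict
```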
cs.LG | null | 1004.3334 | null | null | http://arxiv.org/pdf/1004.3334v1 | 2010-04-20T02:52:27Z | 2010-04-20T02:52:27Z | Generation and Interpretation of Temporal Decision Rules | We present a solution to the problem of understanding a system that produces
a sequence of temporally ordered observations. Our solution is based on
generating and interpreting a set of temporal decision rules. A temporal
decision rule is a decision rule that can be used to predict or retrodict the
value of a decision attribute, using condition attributes that are observed at
times other than the decision attribute's time of observation. A rule set,
consisting of a set of temporal decision rules with the same decision
attribute, can be interpreted by our Temporal Investigation Method for
Enregistered Record Sequences (TIMERS) to signify an instantaneous, an acausal
or a possibly causal relationship between the condition attributes and the
decision attribute. We show the effectiveness of our method, by describing a
number of experiments with both synthetic and real temporal data.
| [
"['Kamran Karimi' 'Howard J. Hamilton']",
"Kamran Karimi and Howard J. Hamilton"
] |
math.AP cs.LG math-ph math.DS math.MP nlin.CD | 10.1007/s00220-011-1214-0 | 1004.3361 | null | null | http://arxiv.org/abs/1004.3361v3 | 2011-02-28T11:21:36Z | 2010-04-20T07:02:20Z | From open quantum systems to open quantum maps | For a class of quantized open chaotic systems satisfying a natural dynamical
assumption, we show that the study of the resolvent, and hence of scattering
and resonances, can be reduced to the study of a family of open quantum maps,
that is of finite dimensional operators obtained by quantizing the Poincar\'e
map associated with the flow near the set of trapped trajectories.
| [
"['Stéphane Nonnenmacher' 'Johannes Sjoestrand' 'Maciej Zworski']",
"St\\'ephane Nonnenmacher (IPHT), Johannes Sjoestrand (IMB), Maciej\n Zworski (UC Berkeley Maths)"
] |
cs.LG | null | 1004.3814 | null | null | http://arxiv.org/pdf/1004.3814v1 | 2010-04-21T23:09:06Z | 2010-04-21T23:09:06Z | Bregman Distance to L1 Regularized Logistic Regression | In this work we investigate the relationship between Bregman distances and
the L1-regularized logistic regression model. We present a detailed study of
Bregman distance minimization, a family of generalized entropy measures
associated with convex functions. We convert L1-regularized logistic regression
into this more general framework and propose a primal-dual algorithm for
learning the parameters. Casting L1-regularized logistic regression as Bregman
distance minimization lets us apply non-linear constrained optimization
techniques to estimate the parameters of the logistic model.
| [
"Mithun Das Gupta, Thomas S. Huang",
"['Mithun Das Gupta' 'Thomas S. Huang']"
] |
cs.LG cs.DS | null | 1004.4223 | null | null | http://arxiv.org/pdf/1004.4223v1 | 2010-04-23T20:46:26Z | 2010-04-23T20:46:26Z | Settling the Polynomial Learnability of Mixtures of Gaussians | Given data drawn from a mixture of multivariate Gaussians, a basic problem is
to accurately estimate the mixture parameters. We give an algorithm for this
problem that has running time and data requirements polynomial in the
dimension and the inverse of the desired accuracy, with provably minimal
assumptions on the Gaussians. As simple consequences of our learning algorithm,
we can perform near-optimal clustering of the sample points and density
estimation for mixtures of k Gaussians, efficiently. The building blocks of our
algorithm are based on the work of Kalai et al. [STOC 2010], which gives an
efficient algorithm for learning mixtures of two Gaussians by considering a
series of projections down to one dimension, and applying the method of moments
to each univariate projection. A major technical hurdle in Kalai et al. is
showing that one can efficiently learn univariate mixtures of two Gaussians. In
contrast, because pathological scenarios can arise when considering univariate
projections of mixtures of more than two Gaussians, the bulk of the work in
this paper concerns how to leverage an algorithm for learning univariate
mixtures (of many Gaussians) to yield an efficient algorithm for learning in
high dimensions. Our algorithm employs hierarchical clustering and rescaling,
together with delicate methods for backtracking and recovering from failures
that can occur in our univariate algorithm. Finally, while the running time and
data requirements of our algorithm depend exponentially on the number of
Gaussians in the mixture, we prove that such a dependence is necessary.
| [
"['Ankur Moitra' 'Gregory Valiant']",
"Ankur Moitra and Gregory Valiant"
] |
cs.LG | null | 1004.4421 | null | null | http://arxiv.org/pdf/1004.4421v2 | 2010-04-28T14:38:13Z | 2010-04-26T07:41:50Z | Efficient Learning with Partially Observed Attributes | We describe and analyze efficient algorithms for learning a linear predictor
from examples when the learner can only view a few attributes of each training
example. This is the case, for instance, in medical research, where each
patient participating in the experiment is only willing to go through a small
number of tests. Our analysis bounds the number of additional examples
sufficient to compensate for the lack of full information on each training
example. We demonstrate the efficiency of our algorithms by showing that when
running on digit recognition data, they obtain a high prediction accuracy even
when the learner gets to see only four pixels of each image.
| [
"['Nicolò Cesa-Bianchi' 'Shai Shalev-Shwartz' 'Ohad Shamir']",
"Nicol\\`o Cesa-Bianchi, Shai Shalev-Shwartz and Ohad Shamir"
] |
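The key device — unbiased estimates built from a few sampled attributes — can be illustrated for squared loss as follows. This is a simplified sketch, not the paper's algorithm: it omits the variance-control refinements, the projection radius and all other parameters are illustrative, and only the unbiasedness of the gradient estimate is faithful to the idea described above.

```python
import numpy as np

rng = np.random.default_rng(1)
d, T, eta, B = 20, 20000, 0.05, 2.0
w_star = rng.normal(size=d) / np.sqrt(d)

w = np.zeros(d)
for t in range(T):
    x = rng.uniform(-1, 1, size=d)
    y = w_star @ x + 0.1 * rng.normal()
    # The learner looks at only two attributes of x, sampled uniformly.
    i, j = rng.integers(d), rng.integers(d)
    pred_est = d * w[i] * x[i]            # unbiased estimate of w @ x
    x_est = np.zeros(d)
    x_est[j] = d * x[j]                   # unbiased estimate of x
    # i and j are independent, so this is an unbiased squared-loss gradient.
    grad = 2.0 * (pred_est - y) * x_est
    w -= (eta / np.sqrt(t + 1)) * grad    # online gradient step
    norm = np.linalg.norm(w)
    if norm > B:                          # projection onto an L2 ball, as is
        w *= B / norm                     # standard for projected OGD
print("distance to target:", np.linalg.norm(w - w_star))
```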
q-bio.QM cs.LG physics.data-an stat.ML | 10.1098/rsif.2012.0616 | 1004.4668 | null | null | http://arxiv.org/abs/1004.4668v3 | 2012-08-03T07:42:47Z | 2010-04-26T22:22:18Z | Evolutionary Inference for Function-valued Traits: Gaussian Process
Regression on Phylogenies | Biological data objects often have both of the following features: (i) they
are functions rather than single numbers or vectors, and (ii) they are
correlated due to phylogenetic relationships. In this paper we give a flexible
statistical model for such data, by combining assumptions from phylogenetics
with Gaussian processes. We describe its use as a nonparametric Bayesian prior
distribution, both for prediction (placing posterior distributions on ancestral
functions) and model selection (comparing rates of evolution across a
phylogeny, or identifying the most likely phylogenies consistent with the
observed data). Our work is integrative, extending the popular phylogenetic
Brownian Motion and Ornstein-Uhlenbeck models to functional data and Bayesian
inference, and extending Gaussian Process regression to phylogenies. We provide
a brief illustration of the application of our method.
| [
"['Nick S. Jones' 'John Moriarty']",
"Nick S. Jones and John Moriarty"
] |
cs.LG cs.DS | null | 1004.4864 | null | null | http://arxiv.org/pdf/1004.4864v1 | 2010-04-27T16:59:43Z | 2010-04-27T16:59:43Z | Polynomial Learning of Distribution Families | The question of polynomial learnability of probability distributions,
particularly Gaussian mixture distributions, has recently received significant
attention in theoretical computer science and machine learning. However,
despite major progress, the general question of polynomial learnability of
Gaussian mixture distributions still remained open. The current work resolves
the question of polynomial learnability for Gaussian mixtures in high dimension
with an arbitrary fixed number of components. The result on learning Gaussian
mixtures relies on an analysis of distributions belonging to what we call
"polynomial families" in low dimension. These families are characterized by
their moments being polynomial in parameters and include almost all common
probability distributions as well as their mixtures and products. Using tools
from real algebraic geometry, we show that parameters of any distribution
belonging to such a family can be learned in polynomial time and using a
polynomial number of sample points. The result on learning polynomial families
is quite general and is of independent interest. To estimate parameters of a
Gaussian mixture distribution in high dimensions, we provide a deterministic
algorithm for dimensionality reduction. This allows us to reduce learning a
high-dimensional mixture to a polynomial number of parameter estimations in low
dimension. Combining this reduction with the results on polynomial families
yields our result on learning arbitrary Gaussian mixtures in high dimensions.
| [
"Mikhail Belkin and Kaushik Sinha",
"['Mikhail Belkin' 'Kaushik Sinha']"
] |
cs.LG cs.IT math.IT stat.ML | null | 1004.5194 | null | null | http://arxiv.org/pdf/1004.5194v1 | 2010-04-29T06:38:47Z | 2010-04-29T06:38:47Z | Clustering processes | The problem of clustering is considered, for the case when each data point is
a sample generated by a stationary ergodic process. We propose a very natural
asymptotic notion of consistency, and show that simple consistent algorithms
exist, under most general non-parametric assumptions. The notion of consistency
is as follows: two samples should be put into the same cluster if and only if
they were generated by the same distribution. With this notion of consistency,
clustering generalizes such classical statistical problems as homogeneity
testing and process classification. We show that, for the case of a known
number of clusters, consistency can be achieved under the only assumption that
the joint distribution of the data is stationary ergodic (no parametric or
Markovian assumptions, no assumptions of independence, neither between nor
within the samples). If the number of clusters is unknown, consistency can be
achieved under appropriate assumptions on the mixing rates of the processes.
(again, no parametric or independence assumptions). In both cases we give
examples of simple (at most quadratic in each argument) algorithms which are
consistent.
| [
"Daniil Ryabko (INRIA Lille - Nord Europe)",
"['Daniil Ryabko']"
] |
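A toy version of such a consistent procedure can be sketched: estimate a distributional distance from word frequencies of increasing length, then cluster with farthest-point assignment when the number of clusters is known. The weighting scheme and distance below are simplified stand-ins for the paper's distributional distance, and the Markov-chain generator is a hypothetical test harness.

```python
import numpy as np
from collections import Counter

def empirical_distance(x, y, max_word_len=3):
    """Simplified empirical distributional distance between two binary
    sequences: weighted total variation between their word frequencies."""
    dist = 0.0
    for l in range(1, max_word_len + 1):
        fx = Counter(tuple(x[i:i + l]) for i in range(len(x) - l + 1))
        fy = Counter(tuple(y[i:i + l]) for i in range(len(y) - l + 1))
        nx, ny = sum(fx.values()), sum(fy.values())
        words = set(fx) | set(fy)
        dist += 2.0 ** -l * sum(abs(fx[w] / nx - fy[w] / ny) for w in words)
    return dist

def cluster_known_k(samples, k):
    """Farthest-point initialization, then assign each sample to the
    nearest centre under the empirical distance."""
    centres = [0]
    while len(centres) < k:
        dists = [min(empirical_distance(s, samples[c]) for c in centres)
                 for s in samples]
        centres.append(int(np.argmax(dists)))
    return [int(np.argmin([empirical_distance(s, samples[c]) for c in centres]))
            for s in samples]

rng = np.random.default_rng(0)
def chain(p, n):
    # binary Markov chain that stays in its current state with probability p
    s = [0]
    for _ in range(n - 1):
        s.append(s[-1] if rng.random() < p else 1 - s[-1])
    return s

samples = [chain(0.9, 2000) for _ in range(4)] + [chain(0.2, 2000) for _ in range(4)]
print(cluster_known_k(samples, 2))   # separates the first four from the last four
```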
cs.LG math.ST stat.ML stat.TH | 10.1109/ALLERTON.2010.5706896 | 1004.5229 | null | null | http://arxiv.org/abs/1004.5229v3 | 2010-10-13T10:11:39Z | 2010-04-29T09:31:55Z | Optimism in Reinforcement Learning and Kullback-Leibler Divergence | We consider model-based reinforcement learning in finite Markov Decision
Processes (MDPs), focussing on so-called optimistic strategies. In MDPs,
optimism can be implemented by carrying out extended value iterations under a
constraint of consistency with the estimated model transition probabilities.
The UCRL2 algorithm by Auer, Jaksch and Ortner (2009), which follows this
strategy, has recently been shown to guarantee near-optimal regret bounds. In
this paper, we strongly argue in favor of using the Kullback-Leibler (KL)
divergence for this purpose. By studying the linear maximization problem under
KL constraints, we provide an efficient algorithm, termed KL-UCRL, for
solving KL-optimistic extended value iteration. Using recent deviation bounds
on the KL divergence, we prove that KL-UCRL provides the same guarantees as
UCRL2 in terms of regret. However, numerical experiments on classical
benchmarks show a significantly improved behavior, particularly when the MDP
has reduced connectivity. To support this observation, we provide elements of
comparison between the two algorithms based on geometric considerations.
| [
"Sarah Filippi (LTCI), Olivier Capp\\'e (LTCI), Aur\\'elien Garivier\n (LTCI)",
"['Sarah Filippi' 'Olivier Cappé' 'Aurélien Garivier']"
] |
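For concreteness, the inner optimistic step of extended value iteration can be sketched in the L1-ball form used by UCRL2; KL-UCRL's contribution is to replace this L1 constraint with a KL-divergence ball, for which the paper derives an efficient solver (not reproduced here).

```python
import numpy as np

def optimistic_q(p_hat, value, epsilon):
    """Inner step of extended value iteration with an L1 confidence ball
    (the UCRL2 variant):  max_q  q @ value  s.t.  ||q - p_hat||_1 <= epsilon.
    KL-UCRL replaces this L1 ball with a KL-divergence ball around p_hat."""
    q = np.array(p_hat, dtype=float)
    best = int(np.argmax(value))
    # Put as much extra mass as allowed on the best next state...
    q[best] = min(1.0, q[best] + epsilon / 2.0)
    # ...then remove the surplus from the lowest-value states first.
    for s in np.argsort(value):
        if q.sum() <= 1.0:
            break
        q[s] = max(0.0, q[s] - (q.sum() - 1.0))
    return q

p_hat = np.array([0.5, 0.3, 0.2])
value = np.array([1.0, 5.0, 3.0])
print(optimistic_q(p_hat, value, epsilon=0.4))  # shifts mass toward state 1
```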
cond-mat.dis-nn cs.AI cs.LG | null | 1004.5326 | null | null | http://arxiv.org/pdf/1004.5326v1 | 2010-04-29T15:35:32Z | 2010-04-29T15:35:32Z | Designing neural networks that process mean values of random variables | We introduce a class of neural networks derived from probabilistic models in
the form of Bayesian networks. By imposing additional assumptions about the
nature of the probabilistic models represented in the networks, we derive
neural networks with standard dynamics that require no training to determine
the synaptic weights, that perform accurate calculation of the mean values of
the random variables, that can pool multiple sources of evidence, and that deal
cleanly and consistently with inconsistent or contradictory evidence. The
presented neural networks capture many properties of Bayesian networks,
providing distributed versions of probabilistic models.
| [
"Michael J. Barber and John W. Clark",
"['Michael J. Barber' 'John W. Clark']"
] |
cs.LG | null | 1005.0027 | null | null | http://arxiv.org/pdf/1005.0027v2 | 2011-06-14T06:56:25Z | 2010-04-30T21:52:17Z | Learning from Multiple Outlooks | We propose a novel problem formulation of learning a single task when the
data are provided in different feature spaces. Each such space is called an
outlook, and is assumed to contain both labeled and unlabeled data. The
objective is to take advantage of the data from all the outlooks to better
classify each of the outlooks. We devise an algorithm that computes optimal
affine mappings from different outlooks to a target outlook by matching moments
of the empirical distributions. We further derive a probabilistic
interpretation of the resulting algorithm and a sample complexity bound
indicating how many samples are needed to adequately find the mapping. We
report the results of extensive experiments on activity recognition tasks that
show the value of the proposed approach in boosting performance.
| [
"Maayan Harel and Shie Mannor",
"['Maayan Harel' 'Shie Mannor']"
] |
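One way to read "affine mappings that match moments" is a map aligning first and second moments between outlooks. The sketch below makes that reading concrete, assuming equal-dimensional outlooks; it is an illustration under those assumptions, not the paper's algorithm, which also exploits labels and handles differing feature spaces.

```python
import numpy as np

def moment_matching_map(X_src, X_tgt, reg=1e-6):
    """Affine map taking the source outlook's mean and covariance onto the
    target outlook's: z = mu_t + A (x - mu_s) with A = C_t^{1/2} C_s^{-1/2}."""
    mu_s, mu_t = X_src.mean(0), X_tgt.mean(0)
    C_s = np.cov(X_src, rowvar=False) + reg * np.eye(X_src.shape[1])
    C_t = np.cov(X_tgt, rowvar=False) + reg * np.eye(X_tgt.shape[1])
    def sqrtm(C):
        # symmetric matrix square root via eigendecomposition
        w, V = np.linalg.eigh(C)
        return (V * np.sqrt(np.clip(w, 0, None))) @ V.T
    A = sqrtm(C_t) @ np.linalg.inv(sqrtm(C_s))
    return lambda X: mu_t + (X - mu_s) @ A.T

rng = np.random.default_rng(0)
X_src = rng.normal(size=(500, 3)) @ np.diag([1.0, 2.0, 0.5]) + [1, 0, -1]
X_tgt = rng.normal(size=(500, 3)) @ np.diag([0.5, 1.0, 3.0]) + [0, 2, 0]
mapped = moment_matching_map(X_src, X_tgt)(X_src)
# Mapped data now carries the target outlook's mean and covariance.
print(np.round(mapped.mean(0), 2))
print(np.round(np.cov(mapped, rowvar=False), 1))
```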
cs.LG | null | 1005.0047 | null | null | http://arxiv.org/pdf/1005.0047v1 | 2010-05-01T06:06:36Z | 2010-05-01T06:06:36Z | A Geometric View of Conjugate Priors | In Bayesian machine learning, conjugate priors are popular, mostly due to
mathematical convenience. In this paper, we show that there are deeper reasons
for choosing a conjugate prior. Specifically, we formulate the conjugate prior
in the form of Bregman divergence and show that it is the inherent geometry of
conjugate priors that makes them appropriate and intuitive. This geometric
interpretation allows one to view the hyperparameters of conjugate priors as
the {\it effective} sample points, thus providing additional intuition. We use
this geometric understanding of conjugate priors to derive the hyperparameters
and expression of the prior used to couple the generative and discriminative
components of a hybrid model for semi-supervised learning.
| [
"['Arvind Agarwal' 'Hal Daume III']",
"Arvind Agarwal and Hal Daume III"
] |
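The "hyperparameters as effective sample points" reading is easiest to see in the Beta-Bernoulli pair, where the prior's hyperparameters pool with the observed counts exactly like extra data. A standard worked example, with arbitrary toy counts:

```python
from fractions import Fraction

# Beta(a, b) prior on a coin's bias; its hyperparameters behave like a
# previously observed pseudo-heads and b pseudo-tails -- the "effective
# sample points" of the geometric view.
a, b = 3, 1                      # prior: as if 3 heads and 1 tail were seen
heads, tails = 4, 6              # observed data

# Posterior is Beta(a + heads, b + tails); the predictive probability of
# heads simply pools pseudo-counts with real counts.
p_heads = Fraction(a + heads, a + b + heads + tails)
print(p_heads)                   # 7/14 = 1/2
```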
stat.ML cs.CR cs.LG | null | 1005.0063 | null | null | http://arxiv.org/pdf/1005.0063v2 | 2010-07-28T23:24:04Z | 2010-05-01T11:06:12Z | Large Margin Multiclass Gaussian Classification with Differential
Privacy | As increasing amounts of sensitive personal information is aggregated into
data repositories, it has become important to develop mechanisms for processing
the data without revealing information about individual data instances. The
differential privacy model provides a framework for the development and
theoretical analysis of such mechanisms. In this paper, we propose an algorithm
for learning a discriminatively trained multi-class Gaussian classifier that
satisfies differential privacy using a large margin loss function with a
perturbed regularization term. We present a theoretical upper bound on the
excess risk of the classifier introduced by the perturbation.
| [
"Manas A. Pathak and Bhiksha Raj",
"['Manas A. Pathak' 'Bhiksha Raj']"
] |
cs.LG | 10.1109/TSP.2010.2050062 | 1005.0075 | null | null | http://arxiv.org/abs/1005.0075v1 | 2010-05-01T13:57:15Z | 2010-05-01T13:57:15Z | Distributive Stochastic Learning for Delay-Optimal OFDMA Power and
Subband Allocation | In this paper, we consider the distributive queue-aware power and subband
allocation design for a delay-optimal OFDMA uplink system with one base
station, $K$ users and $N_F$ independent subbands. Each mobile has an uplink
queue with heterogeneous packet arrivals and delay requirements. We model the
problem as an infinite horizon average reward Markov Decision Problem (MDP)
where the control actions are functions of the instantaneous Channel State
Information (CSI) as well as the joint Queue State Information (QSI). To
address the distributive requirement and the issue of exponential memory
requirement and computational complexity, we approximate the subband allocation
Q-factor by the sum of the per-user subband allocation Q-factor and derive a
distributive online stochastic learning algorithm to estimate the per-user
Q-factor and the Lagrange multipliers (LM) simultaneously and determine the
control actions using an auction mechanism. We show that under the proposed
auction mechanism, the distributive online learning converges almost surely
(with probability 1). For illustration, we apply the proposed distributive
stochastic learning framework to an application example with exponential packet
size distribution. We show that the delay-optimal power control has the {\em
multi-level water-filling} structure where the CSI determines the instantaneous
power allocation and the QSI determines the water-level. The proposed algorithm
has linear signaling overhead and computational complexity $\mathcal O(KN)$,
which is desirable from an implementation perspective.
| [
"Ying Cui and Vincent K.N.Lau",
"['Ying Cui' 'Vincent K. N. Lau']"
] |
cs.LG cs.AI | null | 1005.0125 | null | null | http://arxiv.org/pdf/1005.0125v1 | 2010-05-02T06:40:21Z | 2010-05-02T06:40:21Z | Adaptive Bases for Reinforcement Learning | We consider the problem of reinforcement learning using function
approximation, where the approximating basis can change dynamically while
interacting with the environment. A motivation for such an approach is
maximizing the value function fitness to the problem faced. Three errors are
considered: approximation square error, Bellman residual, and projected Bellman
residual. Algorithms under the actor-critic framework are presented, and shown
to converge. The advantage of such an adaptive basis is demonstrated in
simulations.
| [
"Dotan Di Castro and Shie Mannor",
"['Dotan Di Castro' 'Shie Mannor']"
] |
cs.LG stat.ML | null | 1005.0188 | null | null | http://arxiv.org/pdf/1005.0188v1 | 2010-05-03T05:59:41Z | 2010-05-03T05:59:41Z | Generative and Latent Mean Map Kernels | We introduce two kernels that extend the mean map, which embeds probability
measures in Hilbert spaces. The generative mean map kernel (GMMK) is a smooth
similarity measure between probabilistic models. The latent mean map kernel
(LMMK) generalizes the non-iid formulation of Hilbert space embeddings of
empirical distributions in order to incorporate latent variable models. When
comparing certain classes of distributions, the GMMK exhibits beneficial
regularization and generalization properties not shown for previous generative
kernels. We present experiments comparing support vector machine performance
using the GMMK and LMMK between hidden Markov models to the performance of
other methods on discrete and continuous observation sequence data. The results
suggest that, in many cases, the GMMK has generalization error competitive with
or better than other methods.
| [
"Nishant A. Mehta and Alexander G. Gray",
"['Nishant A. Mehta' 'Alexander G. Gray']"
] |
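Both kernels build on the mean map, whose empirical form is just the average pairwise base-kernel value between two samples. A minimal sketch of that shared ingredient follows; the generative and latent-variable constructions of the GMMK and LMMK are not included.

```python
import numpy as np

def rbf(X, Y, gamma=1.0):
    """Gaussian RBF base-kernel matrix between two sample sets."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mean_map_kernel(X, Y, gamma=1.0):
    """Empirical mean map kernel between two samples: the inner product of
    their mean embeddings, i.e. the average pairwise base-kernel value."""
    return rbf(X, Y, gamma).mean()

rng = np.random.default_rng(0)
P1, P2 = rng.normal(0, 1, (200, 2)), rng.normal(0, 1, (200, 2))
Q = rng.normal(3, 1, (200, 2))
# Samples from the same distribution embed close together...
print(mean_map_kernel(P1, P2))   # relatively large
print(mean_map_kernel(P1, Q))    # near zero
```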
cs.LG | 10.1109/TVT.2010.2050081 | 1005.0340 | null | null | http://arxiv.org/abs/1005.0340v1 | 2010-05-03T16:35:49Z | 2010-05-03T16:35:49Z | Statistical Learning in Automated Troubleshooting: Application to LTE
Interference Mitigation | This paper presents a method for automated healing as part of off-line
automated troubleshooting. The method combines statistical learning with
constraint optimization. The automated healing aims at locally optimizing radio
resource management (RRM) or system parameters of cells with poor performance
in an iterative manner. The statistical learning processes the data using
Logistic Regression (LR) to extract closed form (functional) relations between
Key Performance Indicators (KPIs) and Radio Resource Management (RRM)
parameters. These functional relations are then processed by an optimization
engine which proposes new parameter values. The advantage of the proposed
formulation is the small number of iterations required by the automated healing
method to converge, making it suitable for off-line implementation. The
proposed method is applied to heal an Inter-Cell Interference Coordination
(ICIC) process in a 3G Long Term Evolution (LTE) network which is based on
soft-frequency reuse scheme. Numerical simulations illustrate the benefits of
the proposed approach.
| [
"Moazzam Islam Tiwana, Berna Sayrac and Zwi Altman",
"['Moazzam Islam Tiwana' 'Berna Sayrac' 'Zwi Altman']"
] |
astro-ph.GA cs.LG | null | 1005.0390 | null | null | http://arxiv.org/pdf/1005.0390v2 | 2010-06-01T07:54:29Z | 2010-05-03T20:01:38Z | Machine Learning for Galaxy Morphology Classification | In this work, decision tree learning algorithms and fuzzy inferencing systems
are applied for galaxy morphology classification. In particular, the CART, the
C4.5, the Random Forest and fuzzy logic algorithms are studied and reliable
classifiers are developed to distinguish between spiral galaxies, elliptical
galaxies or star/unknown galactic objects. Morphology information for the
training and testing datasets is obtained from the Galaxy Zoo project while the
corresponding photometric and spectra parameters are downloaded from the SDSS
DR7 catalogue.
| [
"Adam Gauci, Kristian Zarb Adami, John Abela",
"['Adam Gauci' 'Kristian Zarb Adami' 'John Abela']"
] |
stat.ML cs.LG | null | 1005.0437 | null | null | http://arxiv.org/pdf/1005.0437v1 | 2010-05-04T06:05:51Z | 2010-05-04T06:05:51Z | A Unifying View of Multiple Kernel Learning | Recent research on multiple kernel learning has led to a number of
approaches for combining kernels in regularized risk minimization. The proposed
approaches include different formulations of objectives and varying
regularization strategies. In this paper we present a unifying general
optimization criterion for multiple kernel learning and show how existing
formulations are subsumed as special cases. We also derive the criterion's dual
representation, which is suitable for general smooth optimization algorithms.
Finally, we evaluate multiple kernel learning in this framework analytically
using a Rademacher complexity bound on the generalization error and empirically
in a set of experiments.
| [
"Marius Kloft, Ulrich R\\\"uckert and Peter L. Bartlett",
"['Marius Kloft' 'Ulrich Rückert' 'Peter L. Bartlett']"
] |
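In code, the object being learned is a convex combination of base kernel matrices. The sketch below fixes the combination weights by hand just to show the plumbing; actual MKL would learn the weights by optimizing a criterion of the kind unified in the paper, and the weights and dataset here are arbitrary.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel, linear_kernel
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=10, random_state=0)

# A fixed convex combination of base kernels; proper MKL would learn beta.
betas = [0.5, 0.3, 0.2]
kernels = [rbf_kernel(X, X), polynomial_kernel(X, X, degree=2), linear_kernel(X, X)]
K = sum(b * Km for b, Km in zip(betas, kernels))

clf = SVC(kernel='precomputed').fit(K, y)
print("training accuracy with the combined kernel:", clf.score(K, y))
```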
cs.LG cs.AI stat.ML | null | 1005.0530 | null | null | http://arxiv.org/pdf/1005.0530v1 | 2010-05-04T14:01:10Z | 2010-05-04T14:01:10Z | Feature Selection with Conjunctions of Decision Stumps and Learning from
Microarray Data | One of the objectives of designing feature selection learning algorithms is
to obtain classifiers that depend on a small number of attributes and have
verifiable future performance guarantees. There are few, if any, approaches
that successfully address the two goals simultaneously. Performance guarantees
become crucial for tasks such as microarray data analysis due to very small
sample sizes resulting in limited empirical evaluation. To the best of our
knowledge, such algorithms that give theoretical bounds on the future
performance have not been proposed so far in the context of the classification
of gene expression data. In this work, we investigate the premise of learning a
conjunction (or disjunction) of decision stumps in Occam's Razor, Sample
Compression, and PAC-Bayes learning settings for identifying a small subset of
attributes that can be used to perform reliable classification tasks. We apply
the proposed approaches for gene identification from DNA microarray data and
compare our results to those of well known successful approaches proposed for
the task. We show that our algorithm not only finds hypotheses with a much
smaller number of genes while giving competitive classification accuracy, but
also has tight risk guarantees on future performance, unlike other approaches.
The proposed approaches are general and extensible in terms of both designing
novel algorithms and application to other domains.
| [
"Mohak Shah, Mario Marchand and Jacques Corbeil",
"['Mohak Shah' 'Mario Marchand' 'Jacques Corbeil']"
] |
stat.ML cond-mat.stat-mech cs.IT cs.LG math.IT physics.soc-ph | null | 1005.0794 | null | null | http://arxiv.org/pdf/1005.0794v1 | 2010-05-05T17:11:26Z | 2010-05-05T17:11:26Z | Active Learning for Hidden Attributes in Networks | In many networks, vertices have hidden attributes, or types, that are
correlated with the network's topology. If the topology is known but these
attributes are not, and if learning the attributes is costly, we need a method
for choosing which vertex to query in order to learn as much as possible about
the attributes of the other vertices. We assume the network is generated by a
stochastic block model, but we make no assumptions about its assortativity or
disassortativity. We choose which vertex to query using two methods: 1)
maximizing the mutual information between its attributes and those of the
others (a well-known approach in active learning) and 2) maximizing the average
agreement between two independent samples of the conditional Gibbs
distribution. Experimental results show that both these methods do much better
than simple heuristics. They also consistently identify certain vertices as
important by querying them early on.
| [
"Xiaoran Yan, Yaojia Zhu, Jean-Baptiste Rouquier, and Cristopher Moore",
"['Xiaoran Yan' 'Yaojia Zhu' 'Jean-Baptiste Rouquier' 'Cristopher Moore']"
] |
null | null | 1005.0826 | null | null | http://arxiv.org/pdf/1005.0826v2 | 2013-04-30T09:32:30Z | 2010-05-05T19:41:55Z | Clustering processes | The problem of clustering is considered, for the case when each data point is a sample generated by a stationary ergodic process. We propose a very natural asymptotic notion of consistency, and show that simple consistent algorithms exist, under most general non-parametric assumptions. The notion of consistency is as follows: two samples should be put into the same cluster if and only if they were generated by the same distribution. With this notion of consistency, clustering generalizes such classical statistical problems as homogeneity testing and process classification. We show that, for the case of a known number of clusters, consistency can be achieved under the only assumption that the joint distribution of the data is stationary ergodic (no parametric or Markovian assumptions, no assumptions of independence, neither between nor within the samples). If the number of clusters is unknown, consistency can be achieved under appropriate assumptions on the mixing rates of the processes (again, no parametric or independence assumptions). In both cases we give examples of simple (at most quadratic in each argument) algorithms which are consistent. | [
"['Daniil Ryabko']"
] |
cs.LG | null | 1005.0897 | null | null | http://arxiv.org/pdf/1005.0897v1 | 2010-05-06T06:42:51Z | 2010-05-06T06:42:51Z | The Complex Gaussian Kernel LMS algorithm | Although the real reproducing kernels are used in an increasing number of
machine learning problems, complex kernels have not yet been used, in spite
of their potential interest in applications such as communications. In this
work, we focus our attention on the complex Gaussian kernel and its possible
application in the complex Kernel LMS algorithm. In order to derive the
gradients needed to develop the complex kernel LMS (CKLMS), we employ the
powerful tool of Wirtinger's Calculus, which has recently attracted much
attention in the signal processing community. Wirtinger's calculus simplifies
computations and offers an elegant tool for treating complex signals. To this
end, the notion of Wirtinger's calculus is extended to include complex RKHSs.
Experiments verify that the CKLMS offers significant performance improvements
over the traditional complex LMS or Widely Linear complex LMS (WL-LMS)
algorithms, when dealing with nonlinearities.
| [
"Pantelis Bouboulis and Sergios Theodoridis",
"['Pantelis Bouboulis' 'Sergios Theodoridis']"
] |
cs.LG | null | 1005.0902 | null | null | http://arxiv.org/pdf/1005.0902v2 | 2010-05-25T17:57:00Z | 2010-05-06T06:59:22Z | Extension of Wirtinger Calculus in RKH Spaces and the Complex Kernel LMS | Over the last decade, kernel methods for nonlinear processing have
successfully been used in the machine learning community. However, so far, the
emphasis has been on batch techniques. It is only recently, that online
adaptive techniques have been considered in the context of signal processing
tasks. To the best of our knowledge, no kernel-based strategy has been
developed, so far, that is able to deal with complex valued signals. In this
paper, we take advantage of a technique called complexification of real RKHSs
to attack this problem. In order to derive gradients and subgradients of
operators that need to be defined on the associated complex RKHSs, we employ
the powerful tool of Wirtinger's Calculus, which has recently attracted much
attention in the signal processing community. Wirtinger's calculus simplifies
computations and offers an elegant tool for treating complex signals. To this
end, in this paper, the notion of Wirtinger's calculus is extended, for the
first time, to include complex RKHSs, and we use it to derive the Complex Kernel
Least-Mean-Square (CKLMS) algorithm. Experiments verify that the CKLMS can be
used to derive nonlinear stable algorithms, which offer significant performance
improvements over the traditional complex LMS or Widely Linear complex LMS
(WL-LMS) algorithms, when dealing with nonlinearities.
| [
"Pantelis Bouboulis, Sergios Theodoridis",
"['Pantelis Bouboulis' 'Sergios Theodoridis']"
] |
cs.DS cs.LG | null | 1005.1120 | null | null | http://arxiv.org/pdf/1005.1120v2 | 2010-06-18T17:39:04Z | 2010-05-07T02:29:27Z | Estimating small moments of data stream in nearly optimal space-time | For each $p \in (0,2]$, we present a randomized algorithm that returns an
$\epsilon$-approximation of the $p$th frequency moment of a data stream, $F_p =
\sum_{i = 1}^n |f_i|^p$. The algorithm requires space $O(\epsilon^{-2} \log
(mM)(\log n))$ and processes each stream update using time $O((\log n) (\log
\epsilon^{-1}))$. It is nearly optimal in terms of space (matching the lower
bound $\Omega(\epsilon^{-2} \log (mM))$) as well as time, and is the first
algorithm with these properties. The technique separates heavy hitters from the
remaining
items in the stream using an appropriate threshold and estimates the
contribution of the heavy hitters and the light elements to $F_p$ separately. A
key component is the design of an unbiased estimator for $|f_i|^p$ whose
data structure has low update time and low variance.
| [
"Sumit Ganguly",
"['Sumit Ganguly']"
] |
cs.LG | null | 1005.1545 | null | null | http://arxiv.org/pdf/1005.1545v2 | 2011-05-09T13:08:45Z | 2010-05-10T13:49:01Z | Improving Semi-Supervised Support Vector Machines Through Unlabeled
Instances Selection | Semi-supervised support vector machines (S3VMs) are a popular kind of
approach that tries to improve learning performance by exploiting unlabeled
data. Though S3VMs have been found helpful in many situations, they may
degenerate performance, and the resultant generalization ability may be even
worse than using the labeled data only. In this paper, we try to reduce the
chance of performance degeneration of S3VMs. Our basic idea is that, rather
than exploiting all unlabeled data, the unlabeled instances should be selected
such that only the ones which are very likely to be helpful are exploited,
while some highly risky unlabeled instances are avoided. We propose the
S3VM-\emph{us} method by using hierarchical clustering to select the unlabeled
instances. Experiments on a broad range of data sets over eighty-eight
different settings show that the chance of performance degeneration of
S3VM-\emph{us} is much smaller than that of existing S3VMs.
| [
"Yu-Feng Li, Zhi-Hua Zhou",
"['Yu-Feng Li' 'Zhi-Hua Zhou']"
] |
cs.LG | null | 1005.1918 | null | null | http://arxiv.org/pdf/1005.1918v2 | 2010-06-04T19:13:37Z | 2010-05-11T19:27:35Z | Prediction with Expert Advice under Discounted Loss | We study prediction with expert advice in the setting where the losses are
accumulated with some discounting---the impact of old losses may gradually
vanish. We generalize the Aggregating Algorithm and the Aggregating Algorithm
for Regression to this case, propose a suitable new variant of exponential
weights algorithm, and prove respective loss bounds.
| [
"['Alexey Chernov' 'Fedor Zhdanov']",
"Alexey Chernov and Fedor Zhdanov"
] |
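A minimal sketch of one natural discounted-loss variant of exponential weights follows; it illustrates the setting, not the Aggregating Algorithm analyzed in the paper, and the parameter values and toy loss sequence are arbitrary.

```python
import numpy as np

def discounted_exp_weights(losses, eta=2.0, beta=0.9):
    """Exponential weights over experts when past losses are discounted:
    the weight of expert i tracks exp(-eta * L_i) with
    L_i(t) = sum_s beta^(t-s) * loss_i(s), so old losses gradually vanish."""
    n_rounds, n_experts = losses.shape
    L = np.zeros(n_experts)
    total = 0.0
    for t in range(n_rounds):
        w = np.exp(-eta * (L - L.min()))   # shift for numerical stability
        p = w / w.sum()
        total += p @ losses[t]             # learner's expected loss this round
        L = beta * L + losses[t]           # discounted accumulation
    return total

rng = np.random.default_rng(0)
# Expert 0 is best early on, expert 1 becomes best later; discounting
# lets the algorithm follow the switch.
losses = np.vstack([np.column_stack([rng.random(500) * 0.2, rng.random(500)]),
                    np.column_stack([rng.random(500), rng.random(500) * 0.2])])
print(discounted_exp_weights(losses))
```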
cs.LG cs.NA | null | 1005.2146 | null | null | http://arxiv.org/pdf/1005.2146v1 | 2010-05-12T16:25:46Z | 2010-05-12T16:25:46Z | On the Finite Time Convergence of Cyclic Coordinate Descent Methods | Cyclic coordinate descent is a classic optimization method that has witnessed
a resurgence of interest in machine learning. Reasons for this include its
simplicity, speed and stability, as well as its competitive performance on
$\ell_1$ regularized smooth optimization problems. Surprisingly, very little is
known about its finite time convergence behavior on these problems. Most
existing results either just prove convergence or provide asymptotic rates. We
fill this gap in the literature by proving $O(1/k)$ convergence rates (where
$k$ is the iteration counter) for two variants of cyclic coordinate descent
under an isotonicity assumption. Our analysis proceeds by comparing the
objective values attained by the two variants with each other, as well as with
the gradient descent algorithm. We show that the iterates generated by the
cyclic coordinate descent methods remain better than those of gradient descent
uniformly over time.
| [
"['Ankan Saha' 'Ambuj Tewari']",
"Ankan Saha and Ambuj Tewari"
] |
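For reference, here is the classic cyclic coordinate descent loop for an $\ell_1$-regularized least squares problem of the kind the rates above concern, with the standard soft-thresholding coordinate update. This is a textbook sketch, not the paper's specific variants, and all problem sizes are illustrative.

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_ccd(X, y, lam, n_passes=100):
    """Cyclic coordinate descent for 0.5*||y - Xw||^2 + lam*||w||_1.
    Each pass updates every coordinate once, in a fixed cyclic order."""
    n, d = X.shape
    w = np.zeros(d)
    col_sq = (X ** 2).sum(axis=0)
    r = y - X @ w                      # running residual
    for _ in range(n_passes):
        for j in range(d):
            r += X[:, j] * w[j]        # remove coordinate j's contribution
            w[j] = soft_threshold(X[:, j] @ r, lam) / col_sq[j]
            r -= X[:, j] * w[j]        # add the updated contribution back
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
w_true = np.zeros(10)
w_true[:3] = [2.0, -1.0, 0.5]
y = X @ w_true + 0.1 * rng.normal(size=200)
print(np.round(lasso_ccd(X, y, lam=5.0), 2))   # sparse, close to w_true
```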
cs.LG | null | 1005.2179 | null | null | http://arxiv.org/pdf/1005.2179v1 | 2010-05-12T19:53:29Z | 2010-05-12T19:53:29Z | Detecting Blackholes and Volcanoes in Directed Networks | In this paper, we formulate a novel problem for finding blackhole and volcano
patterns in a large directed graph. Specifically, a blackhole pattern is a
group made of a set of nodes such that there are only inlinks to this group
from the rest of the nodes in the graph. In contrast, a volcano pattern is a
group which only has outlinks to the rest of the nodes in the graph. Both
patterns can be observed in the real world. For instance, in a trading network,
a blackhole pattern may represent a group of traders who are manipulating the
market. In the paper, we first prove that the blackhole mining problem is a
dual problem of finding volcanoes. Therefore, we focus on finding the blackhole
patterns. Along this line, we design two pruning schemes to guide the blackhole
finding process. In the first pruning scheme, we strategically prune the search
space based on a set of pattern-size-independent pruning rules and develop an
iBlackhole algorithm. The second pruning scheme follows a divide-and-conquer
strategy to further exploit the pruning results from the first pruning scheme.
Indeed, a target directed graph can be divided into several disconnected
subgraphs by the first pruning scheme, and thus blackhole finding can be
conducted in each disconnected subgraph rather than in one large graph. Based on
these two pruning schemes, we also develop an iBlackhole-DC algorithm. Finally,
experimental results on real-world data show that the iBlackhole-DC algorithm
can be several orders of magnitude faster than the iBlackhole algorithm, which
has a huge computational advantage over a brute-force method.
| [
"['Zhongmou Li' 'Hui Xiong' 'Yanchi Liu']",
"Zhongmou Li, Hui Xiong, Yanchi Liu"
] |
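Under the definition above (no outlinks from the group to the rest of the
nodes), the brute-force baseline the authors compare against can be sketched
directly; the pruning schemes exist precisely because this enumeration is
exponential in the pattern size. The function names and edge-list
representation here are illustrative assumptions.

```python
from itertools import combinations

def is_blackhole(group, edges):
    """A blackhole group has no edge (u, v) with u inside and v outside."""
    g = set(group)
    return all(not (u in g and v not in g) for (u, v) in edges)

def brute_force_blackholes(nodes, edges, max_size=3):
    """Exponential-time baseline: test every node subset up to max_size."""
    found = []
    for k in range(1, max_size + 1):
        for group in combinations(nodes, k):
            if is_blackhole(group, edges):
                found.append(group)
    return found
```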
cs.LG | null | 1005.2243 | null | null | http://arxiv.org/pdf/1005.2243v1 | 2010-05-13T01:59:57Z | 2010-05-13T01:59:57Z | Robustness and Generalization | We derive generalization bounds for learning algorithms based on their
robustness: the property that if a testing sample is "similar" to a training
sample, then the testing error is close to the training error. This provides a
novel approach, different from the complexity or stability arguments, to study
generalization of learning algorithms. We further show that a weak notion of
robustness is both sufficient and necessary for generalizability, which implies
that robustness is a fundamental property for learning algorithms to work.
| [
"['Huan Xu' 'Shie Mannor']",
"Huan Xu, Shie Mannor"
] |
stat.ML cs.LG | null | 1005.2263 | null | null | http://arxiv.org/pdf/1005.2263v2 | 2011-05-30T11:22:44Z | 2010-05-13T07:02:54Z | Context models on sequences of covers | We present a class of models that, via a simple construction, enables exact,
incremental, non-parametric, polynomial-time Bayesian inference of conditional
measures. The approach relies upon creating a sequence of covers on the
conditioning variable and maintaining a different model for each set within a
cover. Inference remains tractable by specifying the probabilistic model in
terms of a random walk within the sequence of covers. We demonstrate the
approach on problems of conditional density estimation; to our knowledge, this
is the first closed-form, non-parametric Bayesian approach to the problem.
| [
"['Christos Dimitrakakis']",
"Christos Dimitrakakis"
] |
cs.LG | null | 1005.2296 | null | null | http://arxiv.org/pdf/1005.2296v2 | 2010-05-20T12:43:57Z | 2010-05-13T10:56:01Z | Online Learning of Noisy Data with Kernels | We study online learning when individual instances are corrupted by
adversarially chosen random noise. We assume the noise distribution is unknown,
and may change over time with no restriction other than having zero mean and
bounded variance. Our technique relies on a family of unbiased estimators for
non-linear functions, which may be of independent interest. We show that a
variant of online gradient descent can learn functions in any dot-product
(e.g., polynomial) or Gaussian kernel space with any analytic convex loss
function. Our variant uses randomized estimates that need to query a random
number of noisy copies of each instance, where with high probability this
number is upper bounded by a constant. Allowing such multiple queries cannot be
avoided: indeed, we show that online learning is in general impossible when
only one noisy copy of each instance can be accessed.
| [
"['Nicolò Cesa-Bianchi' 'Shai Shalev-Shwartz' 'Ohad Shamir']",
"Nicol\\`o Cesa-Bianchi, Shai Shalev-Shwartz and Ohad Shamir"
] |
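The need for more than one noisy copy per instance is easiest to see for a
linear predictor with squared loss: the gradient $(w \cdot x - y)x$ is
quadratic in $x$, so reusing a single noisy copy biases the estimate, while
two independent copies make it unbiased. The sketch below shows only this
simplified linear case, assuming noise-free labels; it is not the paper's
kernel construction, which queries a random number of copies.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_copy(x, sigma=0.1):
    """Query one independently corrupted copy of the instance."""
    return x + rng.normal(0.0, sigma, size=x.shape)

def ogd_two_query(stream, dim, eta=0.1):
    """Online gradient descent for squared loss with a linear predictor.
    Two independent noisy copies give an unbiased gradient estimate:
    E[(w . x2 - y) * x1] = (w . x - y) * x for zero-mean, independent noise."""
    w = np.zeros(dim)
    for x, y in stream:
        x1, x2 = noisy_copy(x), noisy_copy(x)
        w -= eta * (w @ x2 - y) * x1
    return w
```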
cs.LG cs.CC | null | 1005.2364 | null | null | http://arxiv.org/pdf/1005.2364v2 | 2010-05-14T11:28:03Z | 2010-05-13T15:59:01Z | A Short Introduction to Model Selection, Kolmogorov Complexity and
Minimum Description Length (MDL) | The concept of overfitting in model selection is explained and demonstrated
with an example. After providing some background information on information
theory and Kolmogorov complexity, we provide a short explanation of Minimum
Description Length and error minimization. We conclude with a discussion of the
typical features of overfitting in model selection.
| [
"Volker Nannen",
"['Volker Nannen']"
] |
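A crude way to see the trade-off the abstract describes is a two-part code
length: bits for the model parameters plus bits for the data given the model.
The sketch below uses a BIC-style approximation (not an exact MDL code) to
pick a polynomial degree; the penalty term is what curbs overfitting.

```python
import numpy as np

def crude_mdl_score(x, y, degree):
    """Two-part code length, approximated BIC-style:
    (n/2) log(RSS/n) for data given model + (k/2) log n for k parameters."""
    n = len(x)
    coeffs = np.polyfit(x, y, degree)
    rss = float(np.sum((np.polyval(coeffs, x) - y) ** 2))
    return 0.5 * n * np.log(rss / n) + 0.5 * (degree + 1) * np.log(n)

# Higher degrees keep shrinking the residual, but the parameter cost
# eventually dominates and the score turns back up.
x = np.linspace(0.0, 1.0, 50)
y = np.sin(2 * np.pi * x) + np.random.default_rng(1).normal(0.0, 0.2, 50)
best_degree = min(range(1, 10), key=lambda d: crude_mdl_score(x, y, d))
```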
cs.LG math.SP | null | 1005.2603 | null | null | http://arxiv.org/pdf/1005.2603v5 | 2010-07-12T13:38:57Z | 2010-05-14T18:52:24Z | Eigenvectors for clustering: Unipartite, bipartite, and directed graph
cases | This paper presents a concise tutorial on spectral clustering for a
broad spectrum of graphs, including unipartite (undirected), bipartite, and
directed graphs. We show how to transform bipartite and directed graphs into
corresponding unipartite graphs, thereby allowing a unified treatment of all
cases. For bipartite graphs, we show that the relaxed solution to the $K$-way
co-clustering can be found by computing the left and right eigenvectors of the
data matrix. This gives a theoretical basis for $K$-way spectral co-clustering
algorithms proposed in the literature. We also show that solving row and
column co-clustering is equivalent to solving row and column clustering
separately, thus giving theoretical support for the claim: ``column
clustering implies row clustering and vice versa''. In the last part, we
generalize the Ky Fan theorem---which is the central theorem for explaining
spectral clustering---to rectangular complex matrices, motivated by the results
from bipartite graph analysis.
| [
"['Andri Mirzal' 'Masashi Furukawa']",
"Andri Mirzal and Masashi Furukawa"
] |
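As a concrete illustration of the co-clustering result above, a common recipe
is to take the left and right singular vectors of a degree-normalized data
matrix and run k-means on the stacked row and column embeddings. Normalization
details and the number of singular vectors vary across papers; the sketch
below is one standard choice, not necessarily the exact algorithm discussed
here.

```python
import numpy as np
from sklearn.cluster import KMeans

def spectral_cocluster(A, k):
    """Relaxed K-way co-clustering of a nonnegative (rows x cols) matrix A
    via the left/right singular vectors of the degree-normalized matrix."""
    Dr = np.diag(1.0 / np.sqrt(np.maximum(A.sum(axis=1), 1e-12)))
    Dc = np.diag(1.0 / np.sqrt(np.maximum(A.sum(axis=0), 1e-12)))
    U, s, Vt = np.linalg.svd(Dr @ A @ Dc, full_matrices=False)
    # Skip the trivial leading singular pair; embed rows and columns jointly.
    Z = np.vstack([Dr @ U[:, 1:k], Dc @ Vt.T[:, 1:k]])
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(Z)
    return labels[:A.shape[0]], labels[A.shape[0]:]
```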
stat.ML cs.CV cs.LG | null | 1005.2638 | null | null | http://arxiv.org/pdf/1005.2638v1 | 2010-05-14T23:12:03Z | 2010-05-14T23:12:03Z | Hierarchical Clustering for Finding Symmetries and Other Patterns in
Massive, High Dimensional Datasets | Data analysis and data mining are concerned with unsupervised pattern finding
and structure determination in data sets. "Structure" can be understood as
symmetry and a range of symmetries are expressed by hierarchy. Such symmetries
directly point to invariants that pinpoint intrinsic properties of the data
and of the background empirical domain of interest. We review many aspects of
hierarchy here, including ultrametric topology, generalized ultrametric,
linkages with lattices and other discrete algebraic structures and with p-adic
number representations. By focusing on symmetries in data we have a powerful
means of structuring and analyzing massive, high dimensional data stores. We
illustrate the power of hierarchical clustering in case studies in
chemistry and finance, and we provide pointers to other published case studies.
| [
"Fionn Murtagh and Pedro Contreras",
"['Fionn Murtagh' 'Pedro Contreras']"
] |
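The link between hierarchy and ultrametricity mentioned above can be checked
numerically: cophenetic distances read off a dendrogram satisfy the strong
triangle inequality $d(x,z) \le \max(d(x,y), d(y,z))$. A minimal verification
sketch on synthetic data:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, cophenet
from scipy.spatial.distance import squareform

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 5))

Z = linkage(X, method="average")
D = squareform(cophenet(Z))          # cophenetic (dendrogram) distances

# Ultrametric check on random triples of points.
for _ in range(1000):
    i, j, k = rng.choice(len(X), size=3, replace=False)
    assert D[i, k] <= max(D[i, j], D[j, k]) + 1e-9
```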
q-bio.PE cs.LG | null | 1005.2714 | null | null | http://arxiv.org/pdf/1005.2714v2 | 2012-02-27T18:09:51Z | 2010-05-15T23:50:50Z | Structural Drift: The Population Dynamics of Sequential Learning | We introduce a theory of sequential causal inference in which learners in a
chain estimate a structural model from their upstream teacher and then pass
samples from the model to their downstream student. It extends the population
dynamics of genetic drift, recasting Kimura's selectively neutral theory as a
special case of a generalized drift process using structured populations with
memory. We examine the diffusion and fixation properties of several drift
processes and propose applications to learning, inference, and evolution. We
also demonstrate how the organization of drift process space controls fidelity,
facilitates innovations, and leads to information loss in sequential learning
with and without memory.
| [
"James P. Crutchfield and Sean Whalen",
"['James P. Crutchfield' 'Sean Whalen']"
] |
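The selectively neutral special case the abstract recasts is the classical
Wright-Fisher drift process, which is simple to simulate; the generalization
to structured populations with memory is the paper's contribution and is not
reproduced here.

```python
import numpy as np

def wright_fisher(p0=0.5, N=100, max_gens=10_000, seed=None):
    """Neutral drift of one allele's frequency in a population of size N:
    each generation resamples N individuals binomially. The frequency path
    eventually fixes at 0 or 1, eliminating variation."""
    rng = np.random.default_rng(seed)
    p, path = p0, [p0]
    for _ in range(max_gens):
        p = rng.binomial(N, p) / N
        path.append(p)
        if p in (0.0, 1.0):          # fixation reached
            break
    return np.array(path)
```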
cs.LG | null | 1005.3566 | null | null | http://arxiv.org/pdf/1005.3566v1 | 2010-05-19T22:58:53Z | 2010-05-19T22:58:53Z | Evolution with Drifting Targets | We consider the question of the stability of evolutionary algorithms to
gradual changes, or drift, in the target concept. We define an algorithm to be
resistant to drift if, for some inverse polynomial drift rate in the target
function, it converges to accuracy $1 - \epsilon$, with polynomial resources,
and then stays within that accuracy indefinitely, except with probability
$\epsilon$, at any one time. We show that every evolution algorithm, in the
sense of Valiant (2007; 2009), can be converted, using the Correlational Query
technique of Feldman (2008), into such a drift-resistant algorithm. For certain
evolutionary algorithms, such as for Boolean conjunctions, we give bounds on
the rates of drift that they can resist. We develop some new evolution
algorithms that are resistant to significant drift. In particular, we give an
algorithm for evolving linear separators over the spherically symmetric
distribution that is resistant to a drift rate of $O(\epsilon/n)$, and another
algorithm over the more general product normal distributions that resists a
smaller drift rate.
The above translation result can also be interpreted as one on the robustness
of the notion of evolvability itself under changes of definition. As a second
result in that direction we show that every evolution algorithm can be
converted to a quasi-monotonic one that can evolve from any starting point
without the performance ever dipping significantly below that of the starting
point. This permits the somewhat unnatural feature of arbitrary performance
degradations to be removed from several known robustness translations.
| [
"['Varun Kanade' 'Leslie G. Valiant' 'Jennifer Wortman Vaughan']",
"Varun Kanade, Leslie G. Valiant and Jennifer Wortman Vaughan"
] |
null | null | 1005.3579 | null | null | http://arxiv.org/pdf/1005.3579v1 | 2010-05-20T01:59:42Z | 2010-05-20T01:59:42Z | Graph-Structured Multi-task Regression and an Efficient Optimization
Method for General Fused Lasso | We consider the problem of learning a
structured multi-task regression, where the output consists of multiple
responses that are related by a graph and the correlated response variables
depend on the common inputs in a sparse but synergistic manner. Previous
methods such as $\ell_1/\ell_2$-regularized multi-task regression assume that
all of the output variables are equally related to the inputs, although in
many real-world problems outputs are related in a complex manner. In this
paper, we propose graph-guided fused lasso (GFlasso) for structured multi-task
regression that exploits the graph structure over the output variables. We
introduce a novel penalty function based on the fusion penalty to encourage
highly correlated outputs to share a common set of relevant inputs. In
addition, we propose a simple yet efficient proximal-gradient method for
optimizing GFlasso that can also be applied to any optimization problem with a
convex smooth loss and the general class of fusion penalties defined on
arbitrary graph structures. By exploiting the structure of the non-smooth
``fusion penalty'', our method achieves a faster convergence rate than the
standard first-order method, the subgradient method, and is significantly more
scalable than the widely adopted second-order cone-programming and
quadratic-programming formulations. In addition, we provide an analysis of the
consistency property of the GFlasso model. Experimental results not only
demonstrate the superiority of GFlasso over the standard lasso but also show
the efficiency and scalability of our proximal-gradient method. | [
"['Xi Chen' 'Seyoung Kim' 'Qihang Lin' 'Jaime G. Carbonell' 'Eric P. Xing']"
] |
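To make the proximal-gradient component concrete, here is a generic skeleton.
The paper's contribution lies in handling the graph fusion penalty
efficiently; the sketch below instead plugs in the plain $\ell_1$ prox
(soft-thresholding), so it solves the standard lasso the abstract uses as a
baseline, not GFlasso itself.

```python
import numpy as np

def prox_l1(v, t):
    """Prox of t*||.||_1, i.e., soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(grad_f, prox, w0, step, n_iters=500):
    """Generic loop: w <- prox(w - step * grad_f(w), step). GFlasso would
    supply a prox (or smoothed surrogate) for its fusion penalty."""
    w = w0.copy()
    for _ in range(n_iters):
        w = prox(w - step * grad_f(w), step)
    return w

# Example instance: 0.5*||Xw - y||^2 + lam*||w||_1.
rng = np.random.default_rng(0)
X, y, lam = rng.normal(size=(50, 20)), rng.normal(size=50), 0.5
L = np.linalg.norm(X, 2) ** 2        # Lipschitz constant of the gradient
w = proximal_gradient(lambda w: X.T @ (X @ w - y),
                      lambda v, t: prox_l1(v, lam * t),
                      np.zeros(20), 1.0 / L)
```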
cs.LG | null | 1005.3681 | null | null | http://arxiv.org/pdf/1005.3681v2 | 2010-08-01T08:31:29Z | 2010-05-20T12:39:56Z | Learning Kernel-Based Halfspaces with the Zero-One Loss | We describe and analyze a new algorithm for agnostically learning
kernel-based halfspaces with respect to the \emph{zero-one} loss function.
Unlike most previous formulations which rely on surrogate convex loss functions
(e.g. hinge-loss in SVM and log-loss in logistic regression), we provide finite
time/sample guarantees with respect to the more natural zero-one loss function.
The proposed algorithm can learn kernel-based halfspaces in worst-case time
$\mathrm{poly}(\exp(L\log(L/\epsilon)))$, for \emph{any} distribution, where $L$ is a
Lipschitz constant (which can be thought of as the reciprocal of the margin),
and the learned classifier is worse than the optimal halfspace by at most
$\epsilon$. We also prove a hardness result, showing that under a certain
cryptographic assumption, no algorithm can learn kernel-based halfspaces in
time polynomial in $L$.
| [
"['Shai Shalev-Shwartz' 'Ohad Shamir' 'Karthik Sridharan']",
"Shai Shalev-Shwartz, Ohad Shamir and Karthik Sridharan"
] |
cs.AI cs.IR cs.LG | null | 1005.4298 | null | null | http://arxiv.org/pdf/1005.4298v1 | 2010-05-24T10:35:50Z | 2010-05-24T10:35:50Z | Distantly Labeling Data for Large Scale Cross-Document Coreference | Cross-document coreference, the problem of resolving entity mentions across
multi-document collections, is crucial to automated knowledge base construction
and data mining tasks. However, the scarcity of large labeled data sets has
hindered supervised machine learning research for this task. In this paper we
develop and demonstrate an approach based on ``distantly-labeling'' a data set
from which we can train a discriminative cross-document coreference model. In
particular, we build a dataset of more than a million person mentions extracted
from 3.5 years of New York Times articles, leverage Wikipedia for distant
labeling with a generative model (and measure the reliability of such
labeling); then we train and evaluate a conditional random field coreference
model that has factors on cross-document entities as well as mention-pairs.
This coreference model obtains high accuracy in resolving mentions and entities
that are not present in the training data, indicating applicability to
non-Wikipedia data. Given the large amount of data, our work is also an
exercise demonstrating the scalability of our approach.
| [
"Sameer Singh and Michael Wick and Andrew McCallum",
"['Sameer Singh' 'Michael Wick' 'Andrew McCallum']"
] |