categories (string) | doi (string) | id (string) | year (float64) | venue (string) | link (string) | updated (string) | published (string) | title (string) | abstract (string) | authors (sequence) |
---|---|---|---|---|---|---|---|---|---|---|
null | null | 0611006 | null | null | http://arxiv.org/pdf/cs/0611006v1 | 2006-11-02T00:47:57Z | 2006-11-02T00:47:57Z | Evolving controllers for simulated car racing | This paper describes the evolution of controllers for racing a simulated radio-controlled car around a track, modelled on a real physical track. Five different controller architectures were compared, based on neural networks, force fields and action sequences. The controllers use either egocentric (first person), Newtonian (third person) or no information about the state of the car (open-loop controller). The only controller that was able to evolve good racing behaviour was based on a neural network acting on egocentric inputs. | [
"['Julian Togelius' 'Simon M. Lucas']"
] |
null | null | 0611011 | null | null | http://arxiv.org/abs/cs/0611011v1 | 2006-11-02T18:44:49Z | 2006-11-02T18:44:49Z | Hedging predictions in machine learning | Recent advances in machine learning make it possible to design efficient prediction algorithms for data sets with huge numbers of parameters. This paper describes a new technique for "hedging" the predictions output by many such algorithms, including support vector machines, kernel ridge regression, kernel nearest neighbours, and by many other state-of-the-art methods. The hedged predictions for the labels of new objects include quantitative measures of their own accuracy and reliability. These measures are provably valid under the assumption of randomness, traditional in machine learning: the objects and their labels are assumed to be generated independently from the same probability distribution. In particular, it becomes possible to control (up to statistical fluctuations) the number of erroneous predictions by selecting a suitable confidence level. Validity being achieved automatically, the remaining goal of hedged prediction is efficiency: taking full account of the new objects' features and other available information to produce as accurate predictions as possible. This can be done successfully using the powerful machinery of modern machine learning. | [
"['Alexander Gammerman' 'Vladimir Vovk']"
] |
null | null | 0611024 | null | null | http://arxiv.org/pdf/cs/0611024v1 | 2006-11-06T08:43:02Z | 2006-11-06T08:43:02Z | A Relational Approach to Functional Decomposition of Logic Circuits | Functional decomposition of logic circuits has profound influence on all quality aspects of the cost-effective implementation of modern digital systems. In this paper, a relational approach to the decomposition of logic circuits is proposed. This approach parallels the normalization of relational databases; both are governed by the same concepts of functional dependency (FD) and multi-valued dependency (MVD). It is manifest that the functional decomposition of switching functions actually exploits the same idea and serves a similar purpose as database normalization. Partitions play an important role in the decomposition. The interdependency of two partitions can be represented by a bipartite graph. We demonstrate that both FD and MVD can be represented by bipartite graphs with specific topological properties, which are delineated by partitions of minterms. It follows that our algorithms are procedures for constructing those specific bipartite graphs of interest to meet the information-lossless criteria of functional decomposition. | [
"['Tony T. Lee' 'Tong Ye']"
] |
null | null | 0611042 | null | null | http://arxiv.org/pdf/cs/0611042v1 | 2006-11-09T21:10:32Z | 2006-11-09T21:10:32Z | CSCR:Computer Supported Collaborative Research | It is suggested that a new area of CSCR (Computer Supported Collaborative Research) is distinguished from CSCW and CSCL and that the demarcation between the three areas could do with greater clarification and prescription. | [
"['Vita Hinze-Hoare']"
] |
null | null | 0611054 | null | null | http://arxiv.org/pdf/cs/0611054v1 | 2006-11-13T23:28:12Z | 2006-11-13T23:28:12Z | How Random is a Coin Toss? Bayesian Inference and the Symbolic Dynamics
of Deterministic Chaos | Symbolic dynamics has proven to be an invaluable tool in analyzing the mechanisms that lead to unpredictability and random behavior in nonlinear dynamical systems. Surprisingly, a discrete partition of continuous state space can produce a coarse-grained description of the behavior that accurately describes the invariant properties of an underlying chaotic attractor. In particular, measures of the rate of information production--the topological and metric entropy rates--can be estimated from the outputs of Markov or generating partitions. Here we develop Bayesian inference for k-th order Markov chains as a method for finding generating partitions and estimating entropy rates from finite samples of discretized data produced by coarse-grained dynamical systems. | [
"['Christopher C. Strelioff' 'James P. Crutchfield']"
] |
null | null | 0611114 | null | null | http://arxiv.org/pdf/cs/0611114v2 | 2006-12-01T14:55:06Z | 2006-11-22T11:38:25Z | Very Sparse Stable Random Projections, Estimators and Tail Bounds for
Stable Random Projections | This paper will focus on three different aspects in improving the current practice of stable random projections. Firstly, we propose \emph{very sparse stable random projections} to significantly reduce the processing and storage cost, by replacing the $\alpha$-stable distribution with a mixture of a symmetric $\alpha$-Pareto distribution (with probability $\beta$, $0<\beta\leq1$) and a point mass at the origin (with a probability $1-\beta$). This leads to a significant $\frac{1}{\beta}$-fold speedup for small $\beta$. Secondly, we provide an improved estimator for recovering the original $l_\alpha$ norms from the projected data. The standard estimator is based on the (absolute) sample median, while we suggest using the geometric mean. The geometric mean estimator we propose is strictly unbiased and is easier to study. Moreover, the geometric mean estimator is more accurate, especially non-asymptotically. Thirdly, we provide an adequate answer to the basic question of how many projections (samples) are needed for achieving some pre-specified level of accuracy. \cite{Proc:Indyk_FOCS00,Article:Indyk_TKDE03} did not provide a criterion that can be used in practice. The geometric mean estimator we propose allows us to derive sharp tail bounds which can be expressed in exponential forms with constants explicitly given. | [
"['Ping Li']"
] |
null | null | 0611123 | null | null | http://arxiv.org/pdf/cs/0611123v1 | 2006-11-23T19:12:30Z | 2006-11-23T19:12:30Z | Functional Bregman Divergence and Bayesian Estimation of Distributions | A class of distortions termed functional Bregman divergences is defined, which includes squared error and relative entropy. A functional Bregman divergence acts on functions or distributions, and generalizes the standard Bregman divergence for vectors and a previous pointwise Bregman divergence that was defined for functions. A recently published result showed that the mean minimizes the expected Bregman divergence. The new functional definition enables the extension of this result to the continuous case to show that the mean minimizes the expected functional Bregman divergence over a set of functions or distributions. It is shown how this theorem applies to the Bayesian estimation of distributions. Estimation of the uniform distribution from independent and identically drawn samples is used as a case study. | [
"['B. A. Frigyik' 'S. Srivastava' 'M. R. Gupta']"
] |
null | null | 0611124 | null | null | http://arxiv.org/pdf/cs/0611124v1 | 2006-11-24T08:49:30Z | 2006-11-24T08:49:30Z | Low-rank matrix factorization with attributes | We develop a new collaborative filtering (CF) method that combines both previously known users' preferences, i.e. standard CF, as well as product/user attributes, i.e. classical function approximation, to predict a given user's interest in a particular product. Our method is a generalized low rank matrix completion problem, where we learn a function whose inputs are pairs of vectors -- the standard low rank matrix completion problem being a special case where the inputs to the function are the row and column indices of the matrix. We solve this generalized matrix completion problem using tensor product kernels for which we also formally generalize standard kernel properties. Benchmark experiments on movie ratings show the advantages of our generalized matrix completion method over the standard matrix completion one with no information about movies or people, as well as over standard multi-task or single task learning methods. | [
"['Jacob Abernethy' 'Francis Bach' 'Theodoros Evgeniou'\n 'Jean-Philippe Vert']"
] |
null | null | 0611145 | null | null | http://arxiv.org/pdf/cs/0611145v1 | 2006-11-29T00:00:57Z | 2006-11-29T00:00:57Z | A Unified View of TD Algorithms; Introducing Full-Gradient TD and
Equi-Gradient Descent TD | This paper addresses the issue of policy evaluation in Markov Decision Processes, using linear function approximation. It provides a unified view of algorithms such as TD(lambda), LSTD(lambda), iLSTD, residual-gradient TD. It is asserted that they all consist in minimizing a gradient function and differ by the form of this function and their means of minimizing it. Two new schemes are introduced in that framework: Full-gradient TD which uses a generalization of the principle introduced in iLSTD, and EGD TD, which reduces the gradient by successive equi-gradient descents. These three algorithms form a new intermediate family with the interesting property of making much better use of the samples than TD while keeping a gradient descent scheme, which is useful for complexity issues and optimistic policy iteration. | [
"['Manuel Loth' 'Philippe Preux']"
] |
null | null | 0611150 | null | null | http://arxiv.org/pdf/cs/0611150v3 | 2006-12-05T15:04:52Z | 2006-11-29T13:59:31Z | A Novel Bayesian Classifier using Copula Functions | A useful method for representing Bayesian classifiers is through \emph{discriminant functions}. Here, using copula functions, we propose a new model for discriminants. This model provides a rich and generalized class of decision boundaries. These decision boundaries significantly boost the classification accuracy especially for high dimensional feature spaces. We strengthen our analysis through simulation results. | [
"['Saket Sathe']"
] |
null | null | 0611164 | null | null | http://arxiv.org/pdf/cs/0611164v1 | 2006-11-30T15:36:27Z | 2006-11-30T15:36:27Z | Player co-modelling in a strategy board game: discovering how to play
fast | In this paper we experiment with a 2-player strategy board game where playing models are evolved using reinforcement learning and neural networks. The models are evolved to speed up automatic game development based on human involvement at varying levels of sophistication and density when compared to fully autonomous playing. The experimental results suggest a clear and measurable association between the ability to win games and the ability to do that fast, while at the same time demonstrating that there is a minimum level of human involvement beyond which no learning really occurs. | [
"['Dimitris Kalles']"
] |
null | null | 0612030 | null | null | http://arxiv.org/pdf/cs/0612030v1 | 2006-12-05T15:57:44Z | 2006-12-05T15:57:44Z | Loop corrections for approximate inference | We propose a method for improving approximate inference methods that corrects for the influence of loops in the graphical model. The method is applicable to arbitrary factor graphs, provided that the size of the Markov blankets is not too large. It is an alternative implementation of an idea introduced recently by Montanari and Rizzo (2005). In its simplest form, which amounts to the assumption that no loops are present, the method reduces to the minimal Cluster Variation Method approximation (which uses maximal factors as outer clusters). On the other hand, using estimates of the effect of loops (obtained by some approximate inference algorithm) and applying the Loop Correcting (LC) method usually gives significantly better results than applying the approximate inference algorithm directly without loop corrections. Indeed, we often observe that the loop corrected error is approximately the square of the error of the approximate inference method used to estimate the effect of loops. We compare different variants of the Loop Correcting method with other approximate inference methods on a variety of graphical models, including "real world" networks, and conclude that the LC approach generally obtains the most accurate results. | [
"['Joris Mooij' 'Bert Kappen']"
] |
null | null | 0612095 | null | null | http://arxiv.org/pdf/cs/0612095v2 | 2008-09-15T15:51:15Z | 2006-12-19T16:18:30Z | Approximation of the Two-Part MDL Code | Approximation of the optimal two-part MDL code for given data, through successive monotonically length-decreasing two-part MDL codes, has the following properties: (i) computation of each step may take arbitrarily long; (ii) we may not know when we reach the optimum, or whether we will reach the optimum at all; (iii) the sequence of models generated may not monotonically improve the goodness of fit; but (iv) the model associated with the optimum has (almost) the best goodness of fit. To express the practically interesting goodness of fit of individual models for individual data sets we have to rely on Kolmogorov complexity. | [
"['Pieter Adriaans' 'Paul Vitanyi']"
] |
null | null | 0612096 | null | null | http://arxiv.org/abs/cs/0612096v1 | 2006-12-19T22:32:18Z | 2006-12-19T22:32:18Z | Using state space differential geometry for nonlinear blind source
separation | Given a time series of multicomponent measurements of an evolving stimulus, nonlinear blind source separation (BSS) seeks to find a "source" time series, comprised of statistically independent combinations of the measured components. In this paper, we seek a source time series with local velocity cross correlations that vanish everywhere in stimulus state space. However, in an earlier paper the local velocity correlation matrix was shown to constitute a metric on state space. Therefore, nonlinear BSS maps onto a problem of differential geometry: given the metric observed in the measurement coordinate system, find another coordinate system in which the metric is diagonal everywhere. We show how to determine if the observed data are separable in this way, and, if they are, we show how to construct the required transformation to the source coordinate system, which is essentially unique except for an unknown rotation that can be found by applying the methods of linear BSS. Thus, the proposed technique solves nonlinear BSS in many situations or, at least, reduces it to linear BSS, without the use of probabilistic, parametric, or iterative procedures. This paper also describes a generalization of this methodology that performs nonlinear independent subspace separation. In every case, the resulting decomposition of the observed data is an intrinsic property of the stimulus' evolution in the sense that it does not depend on the way the observer chooses to view it (e.g., the choice of the observing machine's sensors). In other words, the decomposition is a property of the evolution of the "real" stimulus that is "out there" broadcasting energy to the observer. The technique is illustrated with analytic and numerical examples. | [
"['David N. Levin']"
] |
null | null | 0612117 | null | null | http://arxiv.org/abs/cs/0612117v1 | 2006-12-22T00:25:04Z | 2006-12-22T00:25:04Z | Statistical Mechanics of On-line Learning when a Moving Teacher Goes
around an Unlearnable True Teacher | In the framework of on-line learning, a learning machine might move around a teacher due to the differences in structures or output functions between the teacher and the learning machine. In this paper we analyze the generalization performance of a new student supervised by a moving machine. A model composed of a fixed true teacher, a moving teacher, and a student is treated theoretically using statistical mechanics, where the true teacher is a nonmonotonic perceptron and the others are simple perceptrons. Calculating the generalization errors numerically, we show that the generalization error of a student can temporarily become smaller than that of a moving teacher, even if the student only uses examples from the moving teacher. However, the generalization error of the student eventually converges to that of the moving teacher. This behavior is qualitatively different from that of a linear model. | [
"['Masahiro Urakami' 'Seiji Miyoshi' 'Masato Okada']"
] |
null | null | 0701052 | null | null | http://arxiv.org/pdf/cs/0701052v1 | 2007-01-08T17:03:31Z | 2007-01-08T17:03:31Z | Time Series Forecasting: Obtaining Long Term Trends with Self-Organizing
Maps | Kohonen self-organising maps are a well-known classification tool, commonly used in a wide variety of problems, but with limited applications in the time series forecasting context. In this paper, we propose a forecasting method specifically designed for multi-dimensional long-term trend prediction, with a double application of the Kohonen algorithm. Practical applications of the method are also presented. | [
"['Geoffroy Simon' 'Amaury Lendasse' 'Marie Cottrell' 'Jean-Claude Fort'\n 'Michel Verleysen']"
] |
null | null | 0701105 | null | null | http://arxiv.org/pdf/cs/0701105v1 | 2007-01-17T13:35:35Z | 2007-01-17T13:35:35Z | A Delta Debugger for ILP Query Execution | Because query execution is the most crucial part of Inductive Logic Programming (ILP) algorithms, a lot of effort is invested in developing faster execution mechanisms. These execution mechanisms typically have a low-level implementation, making them hard to debug. Moreover, other factors such as the complexity of the problems handled by ILP algorithms and the size of the code base of ILP data mining systems make debugging at this level a very difficult job. In this work, we present the trace-based debugging approach currently used in the development of new execution mechanisms in hipP, the engine underlying the ACE Data Mining system. This debugger uses the delta debugging algorithm to automatically reduce the total time needed to expose bugs in ILP execution, thus making the manual debugging step much lighter. | [
"['Remko Troncon' 'Gerda Janssens']"
] |
null | null | 0701120 | null | null | http://arxiv.org/abs/cs/0701120v1 | 2007-01-19T07:36:40Z | 2007-01-19T07:36:40Z | Algorithmic Complexity Bounds on Future Prediction Errors | We bound the future loss when predicting any (computably) stochastic sequence online. Solomonoff finitely bounded the total deviation of his universal predictor $M$ from the true distribution $\mu$ by the algorithmic complexity of $\mu$. Here we assume we are at a time $t>1$ and already observed $x=x_1...x_t$. We bound the future prediction performance on $x_{t+1}x_{t+2}...$ by a new variant of algorithmic complexity of $\mu$ given $x$, plus the complexity of the randomness deficiency of $x$. The new complexity is monotone in its condition in the sense that this complexity can only decrease if the condition is prolonged. We also briefly discuss potential generalizations to Bayesian model classes and to classification problems. | [
"['A. Chernov' 'M. Hutter' 'J. Schmidhuber']"
] |
null | null | 0701125 | null | null | http://arxiv.org/pdf/cs/0701125v1 | 2007-01-20T00:18:06Z | 2007-01-20T00:18:06Z | Universal Algorithmic Intelligence: A mathematical top->down approach | Sequential decision theory formally solves the problem of rational agents in uncertain worlds if the true environmental prior probability distribution is known. Solomonoff's theory of universal induction formally solves the problem of sequence prediction for unknown prior distribution. We combine both ideas and get a parameter-free theory of universal Artificial Intelligence. We give strong arguments that the resulting AIXI model is the most intelligent unbiased agent possible. We outline how the AIXI model can formally solve a number of problem classes, including sequence prediction, strategic games, function minimization, reinforcement and supervised learning. The major drawback of the AIXI model is that it is uncomputable. To overcome this problem, we construct a modified algorithm AIXItl that is still effectively more intelligent than any other time t and length l bounded agent. The computation time of AIXItl is of the order t x 2^l. The discussion includes formal definitions of intelligence order relations, the horizon problem and relations of the AIXI theory to other AI approaches. | [
"['Marcus Hutter']"
] |
null | null | 0701419 | null | null | http://arxiv.org/pdf/math/0701419v4 | 2008-01-07T14:21:58Z | 2007-01-15T16:45:38Z | Strategies for prediction under imperfect monitoring | We propose simple randomized strategies for sequential prediction under imperfect monitoring, that is, when the forecaster does not have access to the past outcomes but rather to a feedback signal. The proposed strategies are consistent in the sense that they achieve, asymptotically, the best possible average reward. It was Rustichini (1999) who first proved the existence of such consistent predictors. The forecasters presented here offer the first constructive proof of consistency. Moreover, the proposed algorithms are computationally efficient. We also establish upper bounds for the rates of convergence. In the case of deterministic feedback, these rates are optimal up to logarithmic terms. | [
"['Gabor Lugosi' 'Shie Mannor' 'Gilles Stoltz']"
] |
null | null | 0702804 | null | null | http://arxiv.org/abs/math/0702804v1 | 2007-02-27T03:31:34Z | 2007-02-27T03:31:34Z | The Loss Rank Principle for Model Selection | We introduce a new principle for model selection in regression and classification. Many regression models are controlled by some smoothness or flexibility or complexity parameter c, e.g. the number of neighbors to be averaged over in k nearest neighbor (kNN) regression or the polynomial degree in regression with polynomials. Let f_D^c be the (best) regressor of complexity c on data D. A more flexible regressor can fit more data D' well than a more rigid one. If something (here small loss) is easy to achieve it's typically worth less. We define the loss rank of f_D^c as the number of other (fictitious) data D' that are fitted better by f_D'^c than D is fitted by f_D^c. We suggest selecting the model complexity c that has minimal loss rank (LoRP). Unlike most penalized maximum likelihood variants (AIC,BIC,MDL), LoRP only depends on the regression function and loss function. It works without a stochastic noise model, and is directly applicable to any non-parametric regressor, like kNN. In this paper we formalize, discuss, and motivate LoRP, study it for specific regression problems, in particular linear ones, and compare it to other model selection schemes. | [
"['Marcus Hutter']"
] |
null | null | 0703055 | null | null | http://arxiv.org/pdf/cs/0703055v1 | 2007-03-12T19:14:23Z | 2007-03-12T19:14:23Z | Support and Quantile Tubes | This correspondence studies an estimator of the conditional support of a distribution underlying a set of i.i.d. observations. The relation with mutual information is shown via an extension of Fano's theorem in combination with a generalization bound based on a compression argument. Extensions to estimating the conditional quantile interval, and statistical guarantees on the minimal convex hull are given. | [
"['Kristiaan Pelckmans' 'Jos De Brabanter' 'Johan A. K. Suykens'\n 'Bart De Moor']"
] |
null | null | 0703062 | null | null | http://arxiv.org/pdf/cs/0703062v1 | 2007-03-13T08:53:41Z | 2007-03-13T08:53:41Z | Bandit Algorithms for Tree Search | Bandit based methods for tree search have recently gained popularity when applied to huge trees, e.g. in the game of go (Gelly et al., 2006). The UCT algorithm (Kocsis and Szepesvari, 2006), a tree search method based on Upper Confidence Bounds (UCB) (Auer et al., 2002), is believed to adapt locally to the effective smoothness of the tree. However, we show that UCT is too ``optimistic'' in some cases, leading to a regret O(exp(exp(D))) where D is the depth of the tree. We propose alternative bandit algorithms for tree search. First, a modification of UCT using a confidence sequence that scales exponentially with the horizon depth is proven to have a regret O(2^D \sqrt{n}), but does not adapt to possible smoothness in the tree. We then analyze Flat-UCB performed on the leaves and provide a finite regret bound with high probability. Then, we introduce a UCB-based Bandit Algorithm for Smooth Trees which takes into account actual smoothness of the rewards for performing efficient ``cuts'' of sub-optimal branches with high confidence. Finally, we present an incremental tree search version which applies when the full tree is too big (possibly infinite) to be entirely represented and show that with high probability, essentially only the optimal branches are indefinitely developed. We illustrate these methods on a global optimization problem of a Lipschitz function, given noisy data. | [
"['Pierre-Arnaud Coquelin' 'Rémi Munos']"
] |
null | null | 0703125 | null | null | http://arxiv.org/abs/cs/0703125v1 | 2007-03-25T01:19:14Z | 2007-03-25T01:19:14Z | Intrinsic dimension of a dataset: what properties does one expect? | We propose an axiomatic approach to the concept of an intrinsic dimension of a dataset, based on a viewpoint of geometry of high-dimensional structures. Our first axiom postulates that high values of dimension be indicative of the presence of the curse of dimensionality (in a certain precise mathematical sense). The second axiom requires the dimension to depend smoothly on a distance between datasets (so that the dimension of a dataset and that of an approximating principal manifold would be close to each other). The third axiom is a normalization condition: the dimension of the Euclidean $n$-sphere $s^n$ is $\Theta(n)$. We give an example of a dimension function satisfying our axioms, even though it is in general computationally unfeasible, and discuss a computationally cheap function satisfying most but not all of our axioms (the ``intrinsic dimensionality'' of Chávez et al.) | [
"['Vladimir Pestov']"
] |
null | null | 0703132 | null | null | http://arxiv.org/abs/cs/0703132v1 | 2007-03-27T05:46:31Z | 2007-03-27T05:46:31Z | Structure induction by lossless graph compression | This work is motivated by the necessity to automate the discovery of structure in vast and ever-growing collections of relational data commonly represented as graphs, for example genomic networks. A novel algorithm, dubbed Graphitour, for structure induction by lossless graph compression is presented and illustrated by a clear and broadly known case of nested structure in a DNA molecule. This work extends to graphs some well-established approaches to grammatical inference previously applied only to strings. The bottom-up graph compression problem is related to the maximum cardinality (non-bipartite) matching problem. The algorithm accepts a variety of graph types including directed graphs and graphs with labeled nodes and arcs. The resulting structure could be used for representation and classification of graphs. | [
"['Leonid Peshkin']"
] |
null | null | 0703138 | null | null | http://arxiv.org/pdf/cs/0703138v1 | 2007-03-28T04:41:54Z | 2007-03-28T04:41:54Z | Reinforcement Learning for Adaptive Routing | Reinforcement learning means learning a policy--a mapping of observations into actions--based on feedback from the environment. The learning can be viewed as browsing a set of policies while evaluating them by trial through interaction with the environment. We present an application of gradient ascent algorithm for reinforcement learning to a complex domain of packet routing in network communication and compare the performance of this algorithm to other routing methods on a benchmark problem. | [
"['Leonid Peshkin' 'Virginia Savova']"
] |
cs.IT cs.LG math.IT | 10.1109/ITW.2007.4313111 | 0704.0671 | null | null | http://arxiv.org/abs/0704.0671v1 | 2007-04-05T02:57:15Z | 2007-04-05T02:57:15Z | Learning from compressed observations | The problem of statistical learning is to construct a predictor of a random
variable $Y$ as a function of a related random variable $X$ on the basis of an
i.i.d. training sample from the joint distribution of $(X,Y)$. Allowable
predictors are drawn from some specified class, and the goal is to approach
asymptotically the performance (expected loss) of the best predictor in the
class. We consider the setting in which one has perfect observation of the
$X$-part of the sample, while the $Y$-part has to be communicated at some
finite bit rate. The encoding of the $Y$-values is allowed to depend on the
$X$-values. Under suitable regularity conditions on the admissible predictors,
the underlying family of probability distributions and the loss function, we
give an information-theoretic characterization of achievable predictor
performance in terms of conditional distortion-rate functions. The ideas are
illustrated on the example of nonparametric regression in Gaussian noise.
| [
"Maxim Raginsky",
"['Maxim Raginsky']"
] |
cs.IT cs.LG math.IT | 10.1109/TSP.2008.920143 | 0704.0954 | null | null | http://arxiv.org/abs/0704.0954v1 | 2007-04-06T21:58:52Z | 2007-04-06T21:58:52Z | Sensor Networks with Random Links: Topology Design for Distributed
Consensus | In a sensor network, in practice, the communication among sensors is subject
to: (1) errors or failures at random times; (2) costs; and (3) constraints since
sensors and networks operate under scarce resources, such as power, data rate,
or communication. The signal-to-noise ratio (SNR) is usually a main factor in
determining the probability of error (or of communication failure) in a link.
These probabilities are then a proxy for the SNR under which the links operate.
The paper studies the problem of designing the topology, i.e., assigning the
probabilities of reliable communication among sensors (or of link failures) to
maximize the rate of convergence of average consensus, when the link
communication costs are taken into account, and there is an overall
communication budget constraint. To consider this problem, we address a number
of preliminary issues: (1) model the network as a random topology; (2)
establish necessary and sufficient conditions for mean square sense (mss) and
almost sure (a.s.) convergence of average consensus when network links fail;
and, in particular, (3) show that a necessary and sufficient condition for both
mss and a.s. convergence is for the algebraic connectivity of the mean graph
describing the network topology to be strictly positive. With these results, we
formulate topology design, subject to random link failures and to a
communication cost constraint, as a constrained convex optimization problem to
which we apply semidefinite programming techniques. We show by an extensive
numerical study that the optimal design improves significantly the convergence
speed of the consensus algorithm and can achieve the asymptotic performance of
a non-random network at a fraction of the communication cost.
| [
"Soummya Kar and Jose M. F. Moura",
"['Soummya Kar' 'Jose M. F. Moura']"
] |
cs.LG cs.SC | null | 0704.1020 | null | null | http://arxiv.org/pdf/0704.1020v1 | 2007-04-08T10:15:54Z | 2007-04-08T10:15:54Z | The on-line shortest path problem under partial monitoring | The on-line shortest path problem is considered under various models of
partial monitoring. Given a weighted directed acyclic graph whose edge weights
can change in an arbitrary (adversarial) way, a decision maker has to choose in
each round of a game a path between two distinguished vertices such that the
loss of the chosen path (defined as the sum of the weights of its composing
edges) be as small as possible. In a setting generalizing the multi-armed
bandit problem, after choosing a path, the decision maker learns only the
weights of those edges that belong to the chosen path. For this problem, an
algorithm is given whose average cumulative loss in n rounds exceeds that of
the best path, matched off-line to the entire sequence of the edge weights, by
a quantity that is proportional to 1/\sqrt{n} and depends only polynomially on
the number of edges of the graph. The algorithm can be implemented with linear
complexity in the number of rounds n and in the number of edges. An extension
to the so-called label efficient setting is also given, in which the decision
maker is informed about the weights of the edges corresponding to the chosen
path at a total of m << n time instances. Another extension is shown where the
decision maker competes against a time-varying path, a generalization of the
problem of tracking the best expert. A version of the multi-armed bandit
setting for shortest path is also discussed where the decision maker learns
only the total weight of the chosen path but not the weights of the individual
edges on the path. Applications to routing in packet switched networks along
with simulation results are also presented.
| [
"['Andras Gyorgy' 'Tamas Linder' 'Gabor Lugosi' 'Gyorgy Ottucsak']",
"Andras Gyorgy, Tamas Linder, Gabor Lugosi, Gyorgy Ottucsak"
] |
cs.LG cs.AI cs.NE | null | 0704.1028 | null | null | http://arxiv.org/pdf/0704.1028v1 | 2007-04-08T17:36:00Z | 2007-04-08T17:36:00Z | A neural network approach to ordinal regression | Ordinal regression is an important type of learning, which has properties of
both classification and regression. Here we describe a simple and effective
approach to adapt a traditional neural network to learn ordinal categories. Our
approach is a generalization of the perceptron method for ordinal regression.
On several benchmark datasets, our method (NNRank) outperforms a neural network
classification method. Compared with the ordinal regression methods using
Gaussian processes and support vector machines, NNRank achieves comparable
performance. Moreover, NNRank has the advantages of traditional neural
networks: learning in both online and batch modes, handling very large training
datasets, and making rapid predictions. These features make NNRank a useful and
complementary tool for large-scale data processing tasks such as information
retrieval, web page ranking, collaborative filtering, and protein ranking in
Bioinformatics.
| [
"['Jianlin Cheng']",
"Jianlin Cheng"
] |
cs.LG | null | 0704.1274 | null | null | http://arxiv.org/pdf/0704.1274v1 | 2007-04-10T17:01:07Z | 2007-04-10T17:01:07Z | Parametric Learning and Monte Carlo Optimization | This paper uncovers and explores the close relationship between Monte Carlo
Optimization of a parametrized integral (MCO), Parametric machine-Learning
(PL), and `blackbox' or `oracle'-based optimization (BO). We make four
contributions. First, we prove that MCO is mathematically identical to a broad
class of PL problems. This identity potentially provides a new application
domain for all broadly applicable PL techniques: MCO. Second, we introduce
immediate sampling, a new version of the Probability Collectives (PC) algorithm
for blackbox optimization. Immediate sampling transforms the original BO
problem into an MCO problem. Accordingly, by combining these first two
contributions, we can apply all PL techniques to BO. In our third contribution
we validate this way of improving BO by demonstrating that cross-validation and
bagging improve immediate sampling. Finally, conventional MC and MCO procedures
ignore the relationship between the sample point locations and the associated
values of the integrand; only the values of the integrand at those locations
are considered. We demonstrate that one can exploit the sample location
information using PL techniques, for example by forming a fit of the sample
locations to the associated values of the integrand. This provides an
additional way to apply PL techniques to improve MCO.
| [
"['David H. Wolpert' 'Dev G. Rajnarayan']",
"David H. Wolpert and Dev G. Rajnarayan"
] |
cs.LG cs.AI | null | 0704.1409 | null | null | http://arxiv.org/pdf/0704.1409v3 | 2012-06-08T14:08:19Z | 2007-04-11T13:17:01Z | Preconditioned Temporal Difference Learning | This paper has been withdrawn by the author. This draft is withdrawn for its
poor quality in english, unfortunately produced by the author when he was just
starting his science route. Look at the ICML version instead:
http://icml2008.cs.helsinki.fi/papers/111.pdf
| [
"Yao HengShuai",
"['Yao HengShuai']"
] |
cs.LG cs.DS | null | 0704.2092 | null | null | http://arxiv.org/pdf/0704.2092v2 | 2009-03-23T03:22:02Z | 2007-04-17T03:52:41Z | A Note on the Inapproximability of Correlation Clustering | We consider inapproximability of the correlation clustering problem defined
as follows: Given a graph $G = (V,E)$ where each edge is labeled either "+"
(similar) or "-" (dissimilar), correlation clustering seeks to partition the
vertices into clusters so that the number of pairs correctly (resp.
incorrectly) classified with respect to the labels is maximized (resp.
minimized). The two complementary problems are called MaxAgree and MinDisagree,
respectively, and have been studied on complete graphs, where every edge is
labeled, and general graphs, where some edge might not have been labeled.
Natural edge-weighted versions of both problems have been studied as well. Let
S-MaxAgree denote the weighted problem where all weights are taken from set S;
we show that S-MaxAgree with weights bounded by $O(|V|^{1/2-\delta})$
essentially belongs to the same hardness class in the following sense: if there
is a polynomial time algorithm that approximates S-MaxAgree within a factor of
$\lambda = O(\log{|V|})$ with high probability, then for any choice of S',
S'-MaxAgree can be approximated in polynomial time within a factor of $(\lambda
+ \epsilon)$, where $\epsilon > 0$ can be arbitrarily small, with high
probability. A similar statement also holds for S-MinDisagree. This result
implies it is hard (assuming $NP \neq RP$) to approximate unweighted MaxAgree
within a factor of $80/79-\epsilon$, improving upon a previous known factor of
$116/115-\epsilon$ by Charikar et al. \cite{Chari05}.
| [
"Jinsong Tan",
"['Jinsong Tan']"
] |
cs.IT cs.LG math.IT | null | 0704.2644 | null | null | http://arxiv.org/pdf/0704.2644v1 | 2007-04-20T01:25:22Z | 2007-04-20T01:25:22Z | Joint universal lossy coding and identification of stationary mixing
sources | The problem of joint universal source coding and modeling, treated in the
context of lossless codes by Rissanen, was recently generalized to fixed-rate
lossy coding of finitely parametrized continuous-alphabet i.i.d. sources. We
extend these results to variable-rate lossy block coding of stationary ergodic
sources and show that, for bounded metric distortion measures, any finitely
parametrized family of stationary sources satisfying suitable mixing,
smoothness and Vapnik-Chervonenkis learnability conditions admits universal
schemes for joint lossy source coding and identification. We also give several
explicit examples of parametric sources satisfying the regularity conditions.
| [
"Maxim Raginsky",
"['Maxim Raginsky']"
] |
cs.LG | null | 0704.2668 | null | null | http://arxiv.org/pdf/0704.2668v1 | 2007-04-20T08:26:29Z | 2007-04-20T08:26:29Z | Supervised Feature Selection via Dependence Estimation | We introduce a framework for filtering features that employs the
Hilbert-Schmidt Independence Criterion (HSIC) as a measure of dependence
between the features and the labels. The key idea is that good features should
maximise such dependence. Feature selection for various supervised learning
problems (including classification and regression) is unified under this
framework, and the solutions can be approximated using a backward-elimination
algorithm. We demonstrate the usefulness of our method on both artificial and
real world datasets.
| [
"Le Song, Alex Smola, Arthur Gretton, Karsten Borgwardt, Justin Bedo",
"['Le Song' 'Alex Smola' 'Arthur Gretton' 'Karsten Borgwardt' 'Justin Bedo']"
] |
cs.IT cs.AI cs.LG cs.NI math.IT | null | 0705.0760 | null | null | http://arxiv.org/pdf/0705.0760v1 | 2007-05-05T18:57:47Z | 2007-05-05T18:57:47Z | Equivalence of LP Relaxation and Max-Product for Weighted Matching in
General Graphs | Max-product belief propagation is a local, iterative algorithm to find the
mode/MAP estimate of a probability distribution. While it has been successfully
employed in a wide variety of applications, there are relatively few
theoretical guarantees of convergence and correctness for general loopy graphs
that may have many short cycles. Of these, even fewer provide exact ``necessary
and sufficient'' characterizations.
In this paper we investigate the problem of using max-product to find the
maximum weight matching in an arbitrary graph with edge weights. This is done
by first constructing a probability distribution whose mode corresponds to the
optimal matching, and then running max-product. Weighted matching can also be
posed as an integer program, for which there is an LP relaxation. This
relaxation is not always tight. In this paper we show that \begin{enumerate}
\item If the LP relaxation is tight, then max-product always converges, and
that too to the correct answer. \item If the LP relaxation is loose, then
max-product does not converge. \end{enumerate} This provides an exact,
data-dependent characterization of max-product performance, and a precise
connection to LP relaxation, which is a well-studied optimization technique.
Also, since LP relaxation is known to be tight for bipartite graphs, our
results generalize other recent results on using max-product to find weighted
matchings in bipartite graphs.
| [
"['Sujay Sanghavi']",
"Sujay Sanghavi"
] |
cs.LG | null | 0705.1585 | null | null | http://arxiv.org/pdf/0705.1585v1 | 2007-05-11T04:54:54Z | 2007-05-11T04:54:54Z | HMM Speaker Identification Using Linear and Non-linear Merging
Techniques | Speaker identification is a powerful, non-invasive and inexpensive biometric
technique. The recognition accuracy, however, deteriorates when noise levels
affect a specific band of frequency. In this paper, we present a sub-band based
speaker identification that intends to improve the live testing performance.
Each frequency sub-band is processed and classified independently. We also
compare the linear and non-linear merging techniques for the sub-bands
recognizer. Support vector machines and Gaussian Mixture models are the
non-linear merging techniques that are investigated. Results showed that the
sub-band based method used with linear merging techniques enormously improved
the performance of the speaker identification over the performance of wide-band
recognizers when tested live. A live testing improvement of 9.78% was achieved.
| [
"Unathi Mahola, Fulufhelo V. Nelwamondo, Tshilidzi Marwala",
"['Unathi Mahola' 'Fulufhelo V. Nelwamondo' 'Tshilidzi Marwala']"
] |
cs.LG cond-mat.dis-nn | 10.1143/JPSJ.76.114001 | 0705.2318 | null | null | http://arxiv.org/abs/0705.2318v1 | 2007-05-16T09:58:39Z | 2007-05-16T09:58:39Z | Statistical Mechanics of Nonlinear On-line Learning for Ensemble
Teachers | We analyze the generalization performance of a student in a model composed of
nonlinear perceptrons: a true teacher, ensemble teachers, and the student. We
calculate the generalization error of the student analytically or numerically
using statistical mechanics in the framework of on-line learning. We treat two
well-known learning rules: Hebbian learning and perceptron learning. As a
result, it is proven that the nonlinear model shows qualitatively different
behaviors from the linear model. Moreover, it is clarified that Hebbian
learning and perceptron learning show qualitatively different behaviors from
each other. In Hebbian learning, we can analytically obtain the solutions. In
this case, the generalization error monotonically decreases. The steady value
of the generalization error is independent of the learning rate. The larger the
number of teachers is and the more variety the ensemble teachers have, the
smaller the generalization error is. In perceptron learning, we have to
numerically obtain the solutions. In this case, the dynamical behaviors of the
generalization error are non-monotonic. The smaller the learning rate is, the
larger the number of teachers is; and the more variety the ensemble teachers
have, the smaller the minimum value of the generalization error is.
| [
"['Hideto Utsumi' 'Seiji Miyoshi' 'Masato Okada']",
"Hideto Utsumi, Seiji Miyoshi, Masato Okada"
] |
cs.LG cs.AI | null | 0705.2765 | null | null | http://arxiv.org/pdf/0705.2765v1 | 2007-05-18T19:44:19Z | 2007-05-18T19:44:19Z | On the monotonization of the training set | We consider the problem of minimal correction of the training set to make it
consistent with monotonic constraints. This problem arises during analysis of
data sets via techniques that require monotone data. We show that this problem
is NP-hard in general and is equivalent to finding a maximal independent set in
special orgraphs. Practically important cases of that problem are considered in
detail. These are the cases when a partial order given on the replies set is a
total order or has dimension 2. We show that the second case can be reduced
to maximization of a quadratic convex function on a convex set. For this case
we construct an approximate polynomial algorithm based on convex optimization.
| [
"['Rustem Takhanov']",
"Rustem Takhanov"
] |
stat.ME cs.LG math.ST physics.soc-ph stat.ML stat.TH | null | 0705.4485 | null | null | http://arxiv.org/pdf/0705.4485v1 | 2007-05-30T23:22:59Z | 2007-05-30T23:22:59Z | Mixed membership stochastic blockmodels | Observations consisting of measurements on relationships for pairs of objects
arise in many settings, such as protein interaction and gene regulatory
networks, collections of author-recipient email, and social networks. Analyzing
such data with probabilistic models can be delicate because the simple
exchangeability assumptions underlying many boilerplate models no longer hold.
In this paper, we describe a latent variable model of such data called the
mixed membership stochastic blockmodel. This model extends blockmodels for
relational data to ones which capture mixed membership latent relational
structure, thus providing an object-specific low-dimensional representation. We
develop a general variational inference algorithm for fast approximate
posterior inference. We explore applications to social and protein interaction
networks.
| [
"Edoardo M Airoldi, David M Blei, Stephen E Fienberg, Eric P Xing",
"['Edoardo M Airoldi' 'David M Blei' 'Stephen E Fienberg' 'Eric P Xing']"
] |
cs.AI cs.LG | null | 0705.4566 | null | null | http://arxiv.org/pdf/0705.4566v1 | 2007-05-31T10:35:07Z | 2007-05-31T10:35:07Z | Loop corrections for message passing algorithms in continuous variable
models | In this paper we derive the equations for Loop Corrected Belief Propagation
on a continuous variable Gaussian model. Using the exactness of the averages
for belief propagation for Gaussian models, a different way of obtaining the
covariances is found, based on Belief Propagation on cavity graphs. We discuss
the relation of this loop correction algorithm to Expectation Propagation
algorithms for the case in which the model is no longer Gaussian, but slightly
perturbed by nonlinear terms.
| [
"Bastian Wemmenhove and Bert Kappen",
"['Bastian Wemmenhove' 'Bert Kappen']"
] |
cs.LG cs.AI | 10.1109/ICTAI.2007.99 | 0706.0585 | null | null | http://arxiv.org/abs/0706.0585v1 | 2007-06-05T05:55:07Z | 2007-06-05T05:55:07Z | A Novel Model of Working Set Selection for SMO Decomposition Methods | In the process of training Support Vector Machines (SVMs) by decomposition
methods, working set selection is an important technique, and some exciting
schemes were employed in this field. To improve working set selection, we
propose a new model for working set selection in sequential minimal
optimization (SMO) decomposition methods. In this model, it selects B as
working set without reselection. Some properties are given by simple proof, and
experiments demonstrate that the proposed method is in general faster than
existing methods.
| [
"['Zhendong Zhao' 'Lei Yuan' 'Yuxuan Wang' 'Forrest Sheng Bao'\n 'Shunyi Zhang Yanfei Sun']",
"Zhendong Zhao, Lei Yuan, Yuxuan Wang, Forrest Sheng Bao, Shunyi Zhang\n Yanfei Sun"
] |
q-bio.QM cs.LG physics.soc-ph stat.ME stat.ML | 10.1371/journal.pcbi.0030252 | 0706.2040 | null | null | http://arxiv.org/abs/0706.2040v2 | 2007-11-10T19:25:59Z | 2007-06-14T14:52:06Z | Getting started in probabilistic graphical models | Probabilistic graphical models (PGMs) have become a popular tool for
computational analysis of biological data in a variety of domains. But, what
exactly are they and how do they work? How can we use PGMs to discover patterns
that are biologically relevant? And to what extent can PGMs help us formulate
new hypotheses that are testable at the bench? This note sketches out some
answers and illustrates the main ideas behind the statistical approach to
biological pattern discovery.
| [
"Edoardo M Airoldi",
"['Edoardo M Airoldi']"
] |
cs.LG stat.ML | null | 0706.3188 | null | null | http://arxiv.org/pdf/0706.3188v1 | 2007-06-21T16:40:06Z | 2007-06-21T16:40:06Z | A tutorial on conformal prediction | Conformal prediction uses past experience to determine precise levels of
confidence in new predictions. Given an error probability $\epsilon$, together
with a method that makes a prediction $\hat{y}$ of a label $y$, it produces a
set of labels, typically containing $\hat{y}$, that also contains $y$ with
probability $1-\epsilon$. Conformal prediction can be applied to any method for
producing $\hat{y}$: a nearest-neighbor method, a support-vector machine, ridge
regression, etc.
Conformal prediction is designed for an on-line setting in which labels are
predicted successively, each one being revealed before the next is predicted.
The most novel and valuable feature of conformal prediction is that if the
successive examples are sampled independently from the same distribution, then
the successive predictions will be right $1-\epsilon$ of the time, even though
they are based on an accumulating dataset rather than on independent datasets.
In addition to the model under which successive examples are sampled
independently, other on-line compression models can also use conformal
prediction. The widely used Gaussian linear model is one of these.
This tutorial presents a self-contained account of the theory of conformal
prediction and works through several numerical examples. A more comprehensive
treatment of the topic is provided in "Algorithmic Learning in a Random World",
by Vladimir Vovk, Alex Gammerman, and Glenn Shafer (Springer, 2005).
| [
"Glenn Shafer and Vladimir Vovk",
"['Glenn Shafer' 'Vladimir Vovk']"
] |
null | null | 0706.3679 | null | null | http://arxiv.org/pdf/0706.3679v1 | 2007-06-25T17:28:57Z | 2007-06-25T17:28:57Z | Scale-sensitive Psi-dimensions: the Capacity Measures for Classifiers
Taking Values in R^Q | Bounds on the risk play a crucial role in statistical learning theory. They usually involve, as a capacity measure of the model studied, the VC dimension or one of its extensions. In classification, such "VC dimensions" exist for models taking values in {0, 1}, {1,..., Q} and R. We introduce the generalizations appropriate for the missing case, that of models with values in R^Q. This provides us with a new guaranteed risk for M-SVMs which appears superior to the existing one. | [
"['Yann Guermeur']"
] |
cs.LG cs.AI cs.IT math.IT | null | 0707.0498 | null | null | http://arxiv.org/pdf/0707.0498v1 | 2007-07-03T20:43:43Z | 2007-07-03T20:43:43Z | The Role of Time in the Creation of Knowledge | In this paper I assume that in humans the creation of knowledge depends on a
discrete time, or stage, sequential decision-making process subjected to a
stochastic, information transmitting environment. For each time-stage, this
environment randomly transmits Shannon type information-packets to the
decision-maker, who examines each of them for relevancy and then determines his
optimal choices. Using this set of relevant information-packets, the
decision-maker adapts, over time, to the stochastic nature of his environment,
and optimizes the subjective expected rate-of-growth of knowledge. The
decision-maker's optimal actions, lead to a decision function that involves,
over time, his view of the subjective entropy of the environmental process and
other important parameters at each time-stage of the process. Using this model
of human behavior, one could create psychometric experiments using computer
simulation and real decision-makers, to play programmed games to measure the
resulting human performance.
| [
"Roy E. Murphy",
"['Roy E. Murphy']"
] |
cs.AI cs.LG cs.MS | null | 0707.0701 | null | null | http://arxiv.org/pdf/0707.0701v2 | 2008-10-08T18:41:53Z | 2007-07-04T21:53:11Z | Clustering and Feature Selection using Sparse Principal Component
Analysis | In this paper, we study the application of sparse principal component
analysis (PCA) to clustering and feature selection problems. Sparse PCA seeks
sparse factors, or linear combinations of the data variables, explaining a
maximum amount of variance in the data while having only a limited number of
nonzero coefficients. PCA is often used as a simple clustering technique and
sparse factors allow us here to interpret the clusters in terms of a reduced
set of variables. We begin with a brief introduction and motivation on sparse
PCA and detail our implementation of the algorithm in d'Aspremont et al.
(2005). We then apply these results to some classic clustering and feature
selection problems arising in biology.
| [
"Ronny Luss, Alexandre d'Aspremont",
"['Ronny Luss' \"Alexandre d'Aspremont\"]"
] |
cs.AI cs.LG | null | 0707.0704 | null | null | http://arxiv.org/pdf/0707.0704v1 | 2007-07-04T22:13:42Z | 2007-07-04T22:13:42Z | Model Selection Through Sparse Maximum Likelihood Estimation | We consider the problem of estimating the parameters of a Gaussian or binary
distribution in such a way that the resulting undirected graphical model is
sparse. Our approach is to solve a maximum likelihood problem with an added
l_1-norm penalty term. The problem as formulated is convex but the memory
requirements and complexity of existing interior point methods are prohibitive
for problems with more than tens of nodes. We present two new algorithms for
solving problems with at least a thousand nodes in the Gaussian case. Our first
algorithm uses block coordinate descent, and can be interpreted as recursive
l_1-norm penalized regression. Our second algorithm, based on Nesterov's first
order method, yields a complexity estimate with a better dependence on problem
size than existing interior point methods. Using a log determinant relaxation
of the log partition function (Wainwright & Jordan (2006)), we show that these
same algorithms can be used to solve an approximate sparse maximum likelihood
problem for the binary case. We test our algorithms on synthetic data, as well
as on gene expression and senate voting records data.
| [
"['Onureena Banerjee' 'Laurent El Ghaoui' \"Alexandre d'Aspremont\"]",
"Onureena Banerjee, Laurent El Ghaoui, Alexandre d'Aspremont"
] |
cs.AI cs.LG | null | 0707.0705 | null | null | http://arxiv.org/pdf/0707.0705v4 | 2007-11-09T17:27:11Z | 2007-07-04T22:28:28Z | Optimal Solutions for Sparse Principal Component Analysis | Given a sample covariance matrix, we examine the problem of maximizing the
variance explained by a linear combination of the input variables while
constraining the number of nonzero coefficients in this combination. This is
known as sparse principal component analysis and has a wide array of
applications in machine learning and engineering. We formulate a new
semidefinite relaxation to this problem and derive a greedy algorithm that
computes a full set of good solutions for all target numbers of non zero
coefficients, with total complexity O(n^3), where n is the number of variables.
We then use the same relaxation to derive sufficient conditions for global
optimality of a solution, which can be tested in O(n^3) per pattern. We discuss
applications in subset selection and sparse recovery and show on artificial
examples and biological data that our algorithm does provide globally optimal
solutions in many cases.
| [
"Alexandre d'Aspremont, Francis Bach, Laurent El Ghaoui",
"[\"Alexandre d'Aspremont\" 'Francis Bach' 'Laurent El Ghaoui']"
] |
math.ST cs.LG math.PR stat.AP stat.TH | null | 0707.0805 | null | null | http://arxiv.org/pdf/0707.0805v2 | 2011-06-24T15:08:17Z | 2007-07-05T15:28:05Z | A New Generalization of Chebyshev Inequality for Random Vectors | In this article, we derive a new generalization of Chebyshev inequality for
random vectors. We demonstrate that the new generalization is much less
conservative than the classical generalization.
| [
"['Xinjia Chen']",
"Xinjia Chen"
] |
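For context, one commonly cited classical generalization of Chebyshev's inequality to a random vector $X \in \mathbb{R}^n$ with mean $\mu$ and covariance matrix $\Sigma$ is recalled below; the paper's new, less conservative bound is not reproduced here.

```latex
% Classical multivariate Chebyshev bounds (the conservative baseline the abstract
% refers to); the paper's sharper generalization is not shown.
\[
  \Pr\bigl(\|X - \mu\| \ge \varepsilon\bigr) \le \frac{\operatorname{tr}(\Sigma)}{\varepsilon^{2}},
  \qquad
  \Pr\bigl((X-\mu)^{\top}\Sigma^{-1}(X-\mu) \ge t\bigr) \le \frac{n}{t}.
\]
```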
cs.AI cs.LG | null | 0707.1452 | null | null | http://arxiv.org/pdf/0707.1452v1 | 2007-07-10T13:47:32Z | 2007-07-10T13:47:32Z | Clusters, Graphs, and Networks for Analysing Internet-Web-Supported
Communication within a Virtual Community | The proposal is to use clusters, graphs and networks as models in order to
analyse the Web structure. Clusters, graphs and networks provide knowledge
representation and organization. Clusters were generated by co-site analysis.
The sample is a set of academic Web sites from the countries belonging to the
European Union. These clusters are here revisited from the point of view of
graph theory and social network analysis. This is a quantitative and structural
analysis. In fact, the Internet is a computer network that connects people and
organizations. Thus we may consider it to be a social network. The set of Web
academic sites represents an empirical social network, and is viewed as a
virtual community. The network structural properties are here analysed applying
together cluster analysis, graph theory and social network analysis.
| [
"['Xavier Polanco']",
"Xavier Polanco (INIST)"
] |
cs.IT cs.LG math.IT | null | 0707.3087 | null | null | http://arxiv.org/pdf/0707.3087v3 | 2009-07-22T00:58:34Z | 2007-07-20T14:51:39Z | Universal Reinforcement Learning | We consider an agent interacting with an unmodeled environment. At each time,
the agent makes an observation, takes an action, and incurs a cost. Its actions
can influence future observations and costs. The goal is to minimize the
long-term average cost. We propose a novel algorithm, known as the active LZ
algorithm, for optimal control based on ideas from the Lempel-Ziv scheme for
universal data compression and prediction. We establish that, under the active
LZ algorithm, if there exists an integer $K$ such that the future is
conditionally independent of the past given a window of $K$ consecutive actions
and observations, then the average cost converges to the optimum. Experimental
results involving the game of Rock-Paper-Scissors illustrate the merits of the
algorithm.
| [
"Vivek F. Farias, Ciamac C. Moallemi, Tsachy Weissman, Benjamin Van Roy",
"['Vivek F. Farias' 'Ciamac C. Moallemi' 'Tsachy Weissman'\n 'Benjamin Van Roy']"
] |
cs.LG | null | 0707.3390 | null | null | http://arxiv.org/pdf/0707.3390v2 | 2008-01-28T10:10:31Z | 2007-07-23T14:35:20Z | Consistency of the group Lasso and multiple kernel learning | We consider the least-square regression problem with regularization by a
block 1-norm, i.e., a sum of Euclidean norms over spaces of dimensions larger
than one. This problem, referred to as the group Lasso, extends the usual
regularization by the 1-norm where all spaces have dimension one, where it is
commonly referred to as the Lasso. In this paper, we study the asymptotic model
consistency of the group Lasso. We derive necessary and sufficient conditions
for the consistency of group Lasso under practical assumptions, such as model
misspecification. When the linear predictors and Euclidean norms are replaced
by functions and reproducing kernel Hilbert norms, the problem is usually
referred to as multiple kernel learning and is commonly used for learning from
heterogeneous data sources and for nonlinear variable selection. Using tools
from functional analysis, and in particular covariance operators, we extend the
consistency results to this infinite dimensional case and also propose an
adaptive scheme to obtain a consistent model estimate, even when the necessary
condition required for the non adaptive scheme is not satisfied.
| [
"Francis Bach (WILLOW Project - Inria/Ens)",
"['Francis Bach']"
] |
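The block 1-norm (group Lasso) penalty discussed above is a sum of Euclidean norms over groups of coefficients. A minimal sketch of its proximal operator, group-wise soft-thresholding, follows; the grouping and the threshold lam are made-up example values, not taken from the paper.

```python
# Minimal sketch: the proximal operator of the block 1-norm (group Lasso penalty),
# i.e. group-wise soft-thresholding. Groups and the threshold are illustrative only.
import numpy as np

def group_soft_threshold(v, groups, lam):
    """Return argmin_x 0.5*||x - v||^2 + lam * sum_g ||x_g||_2."""
    x = np.zeros_like(v)
    for g in groups:
        norm = np.linalg.norm(v[g])
        if norm > lam:                        # shrink the whole block toward zero
            x[g] = (1.0 - lam / norm) * v[g]
    return x

v = np.array([3.0, 4.0, 0.1, -0.2, 2.0])
groups = [np.array([0, 1]), np.array([2, 3]), np.array([4])]
print(group_soft_threshold(v, groups, lam=1.0))
# first group shrunk but kept, second group zeroed out entirely, third shrunk
```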
quant-ph cs.LG | 10.1007/s11128-007-0061-6 | 0707.3479 | null | null | http://arxiv.org/abs/0707.3479v1 | 2007-07-24T13:17:55Z | 2007-07-24T13:17:55Z | Quantum Algorithms for Learning and Testing Juntas | In this article we develop quantum algorithms for learning and testing
juntas, i.e. Boolean functions which depend only on an unknown set of k out of
n input variables. Our aim is to develop efficient algorithms:
- whose sample complexity has no dependence on n, the dimension of the domain
the Boolean functions are defined over;
- with no access to any classical or quantum membership ("black-box")
queries. Instead, our algorithms use only classical examples generated
uniformly at random and fixed quantum superpositions of such classical
examples;
- which require only a few quantum examples but possibly many classical
random examples (which are considered quite "cheap" relative to quantum
examples).
Our quantum algorithms are based on a subroutine FS which enables sampling
according to the Fourier spectrum of f; the FS subroutine was used in earlier
work of Bshouty and Jackson on quantum learning. Our results are as follows:
- We give an algorithm for testing k-juntas to accuracy $\epsilon$ that uses
$O(k/\epsilon)$ quantum examples. This improves on the number of examples used
by the best known classical algorithm.
- We establish the following lower bound: any FS-based k-junta testing
algorithm requires $\Omega(\sqrt{k})$ queries.
- We give an algorithm for learning $k$-juntas to accuracy $\epsilon$ that
uses $O(\epsilon^{-1} k\log k)$ quantum examples and $O(2^k \log(1/\epsilon))$
random examples. We show that this learning algorithm is close to optimal by
giving a related lower bound.
| [
"['Alp Atici' 'Rocco A. Servedio']",
"Alp Atici, Rocco A. Servedio"
] |
q-bio.QM cs.LG | null | 0708.0171 | null | null | http://arxiv.org/pdf/0708.0171v1 | 2007-08-01T19:13:52Z | 2007-08-01T19:13:52Z | Virtual screening with support vector machines and structure kernels | Support vector machines and kernel methods have recently gained considerable
attention in chemoinformatics. They offer generally good performance for
problems of supervised classification or regression, and provide a flexible and
computationally efficient framework to include relevant information and prior
knowledge about the data and problems to be handled. In particular, with kernel
methods molecules do not need to be represented and stored explicitly as
vectors or fingerprints, but only to be compared to each other through a
comparison function technically called a kernel. While classical kernels can be
used to compare vector or fingerprint representations of molecules, completely
new kernels have been developed in recent years to directly compare the 2D or 3D
structures of molecules, without the need for an explicit vectorization step
through the extraction of molecular descriptors. While still in their infancy,
these approaches have already demonstrated their relevance on several toxicity
prediction and structure-activity relationship problems.
| [
"Pierre Mah\\'e (XRCE), Jean-Philippe Vert (CB)",
"['Pierre Mahé' 'Jean-Philippe Vert']"
] |
physics.data-an cond-mat.stat-mech cs.IT cs.LG math-ph math.IT math.MP math.ST nlin.CD stat.TH | null | 0708.0654 | null | null | http://arxiv.org/pdf/0708.0654v2 | 2008-06-29T23:52:26Z | 2007-08-05T01:37:53Z | Structure or Noise? | We show how rate-distortion theory provides a mechanism for automated theory
building by naturally distinguishing between regularity and randomness. We
start from the simple principle that model variables should, as much as
possible, render the future and past conditionally independent. From this, we
construct an objective function for model making whose extrema embody the
trade-off between a model's structural complexity and its predictive power. The
solutions correspond to a hierarchy of models that, at each level of
complexity, achieve optimal predictive power at minimal cost. In the limit of
maximal prediction the resulting optimal model identifies a process's intrinsic
organization by extracting the underlying causal states. In this limit, the
model's complexity is given by the statistical complexity, which is known to be
minimal for achieving maximum prediction. Examples show how theory building can
profit from analyzing a process's causal compressibility, which is reflected in
the optimal models' rate-distortion curve--the process's characteristic for
optimally balancing structure and noise at different levels of representation.
| [
"Susanne Still, James P. Crutchfield",
"['Susanne Still' 'James P. Crutchfield']"
] |
cs.LG | null | 0708.1242 | null | null | http://arxiv.org/pdf/0708.1242v3 | 2007-11-15T16:37:51Z | 2007-08-09T10:21:34Z | Cost-minimising strategies for data labelling : optimal stopping and
active learning | Supervised learning deals with the inference of a distribution over an output
or label space $\mathcal{Y}$ conditioned on points in an observation space $\mathcal{X}$, given
a training dataset $D$ of pairs in $\mathcal{X} \times \mathcal{Y}$. However, in a lot of
applications of interest, acquisition of large amounts of observations is easy,
while the process of generating labels is time-consuming or costly. One way to
deal with this problem is {\em active} learning, where points to be labelled
are selected with the aim of creating a model with better performance than that
of a model trained on an equal number of randomly sampled points. In this
paper, we instead propose to deal with the labelling cost directly: The
learning goal is defined as the minimisation of a cost which is a function of
the expected model performance and the total cost of the labels used. This
allows the development of general strategies and specific algorithms for (a)
optimal stopping, where the expected cost dictates whether label acquisition
should continue (b) empirical evaluation, where the cost is used as a
performance metric for a given combination of inference, stopping and sampling
methods. Though the main focus of the paper is optimal stopping, we also aim to
provide the background for further developments and discussion in the related
field of active learning.
| [
"Christos Dimitrakakis and Christian Savu-Krohn",
"['Christos Dimitrakakis' 'Christian Savu-Krohn']"
] |
cs.LG | null | 0708.1503 | null | null | http://arxiv.org/pdf/0708.1503v1 | 2007-08-10T19:19:54Z | 2007-08-10T19:19:54Z | Defensive forecasting for optimal prediction with expert advice | The method of defensive forecasting is applied to the problem of prediction
with expert advice for binary outcomes. It turns out that defensive forecasting
is not only competitive with the Aggregating Algorithm but also handles the
case of "second-guessing" experts, whose advice depends on the learner's
prediction; this paper assumes that the dependence on the learner's prediction
is continuous.
| [
"Vladimir Vovk",
"['Vladimir Vovk']"
] |
cs.IT cond-mat.stat-mech cs.LG math.IT math.ST stat.TH | null | 0708.1580 | null | null | http://arxiv.org/pdf/0708.1580v2 | 2010-08-19T23:46:24Z | 2007-08-11T19:13:29Z | Optimal Causal Inference: Estimating Stored Information and
Approximating Causal Architecture | We introduce an approach to inferring the causal architecture of stochastic
dynamical systems that extends rate distortion theory to use causal
shielding---a natural principle of learning. We study two distinct cases of
causal inference: optimal causal filtering and optimal causal estimation.
Filtering corresponds to the ideal case in which the probability distribution
of measurement sequences is known, giving a principled method to approximate a
system's causal structure at a desired level of representation. We show that,
in the limit in which a model complexity constraint is relaxed, filtering finds
the exact causal architecture of a stochastic dynamical system, known as the
causal-state partition. From this, one can estimate the amount of historical
information the process stores. More generally, causal filtering finds a graded
model-complexity hierarchy of approximations to the causal architecture. Abrupt
changes in the hierarchy, as a function of approximation, capture distinct
scales of structural organization.
For nonideal cases with finite data, we show how the correct number of
underlying causal states can be found by optimal causal estimation. A
previously derived model complexity control term allows us to correct for the
effect of statistical fluctuations in probability estimates and thereby avoid
over-fitting.
| [
"['Susanne Still' 'James P. Crutchfield' 'Christopher J. Ellison']",
"Susanne Still, James P. Crutchfield, Christopher J. Ellison"
] |
cs.IT cs.LG math.IT math.PR | null | 0708.2319 | null | null | http://arxiv.org/pdf/0708.2319v1 | 2007-08-17T06:39:11Z | 2007-08-17T06:39:11Z | On Semimeasures Predicting Martin-Loef Random Sequences | Solomonoff's central result on induction is that the posterior of a universal
semimeasure M converges rapidly and with probability 1 to the true sequence
generating posterior mu, if the latter is computable. Hence, M is eligible as a
universal sequence predictor in case of unknown mu. Despite some nearby results
and proofs in the literature, the stronger result of convergence for all
(Martin-Loef) random sequences remained open. Such a convergence result would
be particularly interesting and natural, since randomness can be defined in
terms of M itself. We show that there are universal semimeasures M which do not
converge for all random sequences, i.e. we give a partial negative answer to
the open problem. We also provide a positive answer for some non-universal
semimeasures. We define the incomputable measure D as a mixture over all
computable measures and the enumerable semimeasure W as a mixture over all
enumerable nearly-measures. We show that W converges to D and D to mu on all
random sequences. The Hellinger distance measuring closeness of two
distributions plays a central role.
| [
"['Marcus Hutter' 'Andrej Muchnik']",
"Marcus Hutter and Andrej Muchnik"
] |
cs.LG | null | 0708.2353 | null | null | http://arxiv.org/pdf/0708.2353v2 | 2007-08-23T12:44:34Z | 2007-08-17T12:18:24Z | Continuous and randomized defensive forecasting: unified view | Defensive forecasting is a method of transforming laws of probability (stated
in game-theoretic terms as strategies for Sceptic) into forecasting algorithms.
There are two known varieties of defensive forecasting: "continuous", in which
Sceptic's moves are assumed to depend on the forecasts in a (semi)continuous
manner and which produces deterministic forecasts, and "randomized", in which
the dependence of Sceptic's moves on the forecasts is arbitrary and
Forecaster's moves are allowed to be randomized. This note shows that the
randomized variety can be obtained from the continuous variety by smearing
Sceptic's moves to make them continuous.
| [
"Vladimir Vovk",
"['Vladimir Vovk']"
] |
cs.LG cs.CC | null | 0708.3226 | null | null | http://arxiv.org/pdf/0708.3226v7 | 2010-04-04T20:39:03Z | 2007-08-23T18:26:21Z | A Dichotomy Theorem for General Minimum Cost Homomorphism Problem | In the constraint satisfaction problem ($CSP$), the aim is to find an
assignment of values to a set of variables subject to specified constraints. In
the minimum cost homomorphism problem ($MinHom$), one is additionally given
weights $c_{va}$ for every variable $v$ and value $a$, and the aim is to find
an assignment $f$ to the variables that minimizes $\sum_{v} c_{vf(v)}$. Let
$MinHom(\Gamma)$ denote the $MinHom$ problem parameterized by the set of
predicates allowed for constraints. $MinHom(\Gamma)$ is related to many
well-studied combinatorial optimization problems, and concrete applications can
be found in, for instance, defence logistics and machine learning. We show that
$MinHom(\Gamma)$ can be studied by using algebraic methods similar to those
used for CSPs. With the aid of algebraic techniques, we classify the
computational complexity of $MinHom(\Gamma)$ for all choices of $\Gamma$. Our
result settles a general dichotomy conjecture previously resolved only for
certain classes of directed graphs, [Gutin, Hell, Rafiey, Yeo, European J. of
Combinatorics, 2008].
| [
"['Rustem Takhanov']",
"Rustem Takhanov"
] |
cs.LG | null | 0709.0509 | null | null | http://arxiv.org/pdf/0709.0509v1 | 2007-09-04T19:36:22Z | 2007-09-04T19:36:22Z | Filtering Additive Measurement Noise with Maximum Entropy in the Mean | The purpose of this note is to show how the method of maximum entropy in the
mean (MEM) may be used to improve parametric estimation when the measurements
are corrupted by a large level of noise. The method is developed in the context
of a concrete example: that of estimating the parameter of an exponential
distribution. We compare the performance of our method with the Bayesian and
maximum likelihood approaches.
| [
"Henryk Gzyl and Enrique ter Horst",
"['Henryk Gzyl' 'Enrique ter Horst']"
] |
math.ST cs.IT cs.LG math.IT stat.ML stat.TH | null | 0709.1516 | null | null | http://arxiv.org/pdf/0709.1516v1 | 2007-09-11T01:39:20Z | 2007-09-11T01:39:20Z | On Universal Prediction and Bayesian Confirmation | The Bayesian framework is a well-studied and successful framework for
inductive reasoning, which includes hypothesis testing and confirmation,
parameter estimation, sequence prediction, classification, and regression. But
standard statistical guidelines for choosing the model class and prior are not
always available or fail, in particular in complex situations. Solomonoff
completed the Bayesian framework by providing a rigorous, unique, formal, and
universal choice for the model class and the prior. We discuss in breadth how
and in which sense universal (non-i.i.d.) sequence prediction solves various
(philosophical) problems of traditional Bayesian sequence prediction. We show
that Solomonoff's model possesses many desirable properties: Strong total and
weak instantaneous bounds, and in contrast to most classical continuous prior
densities has no zero p(oste)rior problem, i.e. can confirm universal
hypotheses, is reparametrization and regrouping invariant, and avoids the
old-evidence and updating problem. It even performs well (actually better) in
non-computable environments.
| [
"Marcus Hutter",
"['Marcus Hutter']"
] |
cs.LG cs.GT | null | 0709.2446 | null | null | http://arxiv.org/pdf/0709.2446v1 | 2007-09-15T20:48:57Z | 2007-09-15T20:48:57Z | Learning for Dynamic Bidding in Cognitive Radio Resources | In this paper, we model the various wireless users in a cognitive radio
network as a collection of selfish, autonomous agents that strategically
interact in order to acquire the dynamically available spectrum opportunities.
Our main focus is on developing solutions for wireless users to successfully
compete with each other for the limited and time-varying spectrum
opportunities, given the experienced dynamics in the wireless network. We
categorize these dynamics into two types: one is the disturbance due to the
environment (e.g. wireless channel conditions, source traffic characteristics,
etc.) and the other is the impact caused by competing users. To analyze the
interactions among users given the environment disturbance, we propose a
general stochastic framework for modeling how the competition among users for
spectrum opportunities evolves over time. At each stage of the dynamic resource
allocation, a central spectrum moderator auctions the available resources and
the users strategically bid for the required resources. The joint bid actions
affect the resource allocation and hence, the rewards and future strategies of
all users. Based on the observed resource allocation and corresponding rewards
from previous allocations, we propose a best response learning algorithm that
can be deployed by wireless users to improve their bidding policy at each
stage. The simulation results show that by deploying the proposed best response
learning algorithm, the wireless users can significantly improve their own
performance in terms of both the packet loss rate and the incurred cost for the
used resources.
| [
"Fangwen Fu, Mihaela van der Schaar",
"['Fangwen Fu' 'Mihaela van der Schaar']"
] |
cs.LG cs.NE stat.AP | 10.1016/j.chemolab.2005.06.010 | 0709.3427 | null | null | http://arxiv.org/abs/0709.3427v1 | 2007-09-21T12:49:47Z | 2007-09-21T12:49:47Z | Mutual information for the selection of relevant variables in
spectrometric nonlinear modelling | Data from spectrophotometers form vectors of a large number of exploitable
variables. Building quantitative models using these variables most often
requires using a smaller set of variables than the initial one. Indeed, too
many input variables to a model result in too many parameters, leading to
overfitting and poor generalization abilities. In this
paper, we suggest the use of the mutual information measure to select variables
from the initial set. The mutual information measures the information content
in input variables with respect to the model output, without making any
assumption on the model that will be used; it is thus suitable for nonlinear
modelling. In addition, it leads to the selection of variables among the
initial set, and not to linear or nonlinear combinations of them. Without
decreasing the model performance compared to other variable projection
methods, it therefore allows a greater interpretability of the results.
| [
"['Fabrice Rossi' 'Amaury Lendasse' 'Damien François' 'Vincent Wertz'\n 'Michel Verleysen']",
"Fabrice Rossi (INRIA Rocquencourt / INRIA Sophia Antipolis), Amaury\n Lendasse (CIS), Damien Fran\\c{c}ois (CESAME), Vincent Wertz (CESAME), Michel\n Verleysen (DICE - MLG)"
] |
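As a hedged illustration of mutual-information-based variable ranking in the spirit of the abstract above, the sketch below uses scikit-learn's nonparametric estimator mutual_info_regression on synthetic data; the spectrophotometric data and the exact selection procedure of the paper are not reproduced.

```python
# Sketch of mutual-information-based variable ranking, in the spirit of the abstract
# above; uses scikit-learn's nonparametric MI estimator on synthetic data.
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 10))
y = np.sin(X[:, 0]) + 0.5 * X[:, 3] ** 2 + 0.1 * rng.normal(size=300)  # nonlinear in x0, x3

mi = mutual_info_regression(X, y, random_state=0)
ranking = np.argsort(mi)[::-1]
print("variables ranked by mutual information:", ranking)  # x0 and x3 should come first
```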
cs.NE cs.LG | 10.1016/j.neunet.2006.05.002 | 0709.3461 | null | null | http://arxiv.org/abs/0709.3461v1 | 2007-09-21T15:20:07Z | 2007-09-21T15:20:07Z | Fast Algorithm and Implementation of Dissimilarity Self-Organizing Maps | In many real world applications, data cannot be accurately represented by
vectors. In those situations, one possible solution is to rely on dissimilarity
measures that enable sensible comparison between observations. Kohonen's
Self-Organizing Map (SOM) has been adapted to data described only through their
dissimilarity matrix. This algorithm provides both nonlinear projection and
clustering of non-vector data. Unfortunately, the algorithm suffers from a high
cost that makes it quite difficult to use with voluminous data sets. In this
paper, we propose a new algorithm that provides an important reduction of the
theoretical cost of the dissimilarity SOM without changing its outcome (the
results are exactly the same as the ones obtained with the original algorithm).
Moreover, we introduce implementation methods that result in very short running
times. Improvements deduced from the theoretical cost model are validated on
simulated and real world data (a word list clustering problem). We also
demonstrate that the proposed implementation methods reduce by a factor up to 3
the running time of the fast algorithm over a standard implementation.
| [
"['Brieuc Conan-Guez' 'Fabrice Rossi' 'Aïcha El Golli']",
"Brieuc Conan-Guez (LITA), Fabrice Rossi (INRIA Rocquencourt / INRIA\n Sophia Antipolis), A\\\"icha El Golli (INRIA Rocquencourt / INRIA Sophia\n Antipolis)"
] |
cs.NE cs.LG | null | 0709.3586 | null | null | http://arxiv.org/pdf/0709.3586v1 | 2007-09-22T15:53:54Z | 2007-09-22T15:53:54Z | An adaptation of self-organizing maps for data described
by a dissimilarity table | Many data analysis methods cannot be applied to data that are not represented
by a fixed number of real values, whereas most of real world observations are
not readily available in such a format. Vector based data analysis methods have
therefore to be adapted in order to be used with non standard complex data. A
flexible and general solution for this adaptation is to use a (dis)similarity
measure. Indeed, thanks to expert knowledge on the studied data, it is
generally possible to define a measure that can be used to make pairwise
comparison between observations. General data analysis methods are then
obtained by adapting existing methods to (dis)similarity matrices. In this
article, we propose an adaptation of Kohonen's Self Organizing Map (SOM) to
(dis)similarity data. The proposed algorithm is an adapted version of the
vector based batch SOM. The method is validated on real world data: we provide
an analysis of the usage patterns of the web site of the Institut National de
Recherche en Informatique et Automatique, constructed thanks to web log mining
method.
| [
"['Aïcha El Golli' 'Fabrice Rossi' 'Brieuc Conan-Guez' 'Yves Lechevallier']",
"A\\\"icha El Golli (INRIA Rocquencourt / INRIA Sophia Antipolis),\n Fabrice Rossi (INRIA Rocquencourt / INRIA Sophia Antipolis), Brieuc\n Conan-Guez (LITA), Yves Lechevallier (INRIA Rocquencourt / INRIA Sophia\n Antipolis)"
] |
cs.NE cs.LG | null | 0709.3587 | null | null | http://arxiv.org/pdf/0709.3587v1 | 2007-09-22T15:54:37Z | 2007-09-22T15:54:37Z | Self-organizing maps and symbolic data | In data analysis, new forms of complex data have to be considered, for
example symbolic data, functional data, web data, trees, SQL queries and
multimedia data. In this context, classical data analysis for knowledge
discovery based on calculating the center of gravity cannot be used because
inputs are not $\mathbb{R}^p$ vectors. In this paper, we present an application
to real world symbolic data using the self-organizing map. To this end, we
propose an extension of the self-organizing map that can handle symbolic data.
| [
"A\\\"icha El Golli (INRIA Rocquencourt / INRIA Sophia Antipolis), Brieuc\n Conan-Guez (INRIA Rocquencourt / INRIA Sophia Antipolis), Fabrice Rossi\n (INRIA Rocquencourt / INRIA Sophia Antipolis)",
"['Aïcha El Golli' 'Brieuc Conan-Guez' 'Fabrice Rossi']"
] |
cs.LG stat.AP | 10.1016/j.chemolab.2006.06.007 | 0709.3639 | null | null | http://arxiv.org/abs/0709.3639v1 | 2007-09-23T14:08:51Z | 2007-09-23T14:08:51Z | Fast Selection of Spectral Variables with B-Spline Compression | The large number of spectral variables in most data sets encountered in
spectral chemometrics often renders the prediction of a dependent variable
uneasy. The number of variables hopefully can be reduced, by using either
projection techniques or selection methods; the latter allow for the
interpretation of the selected variables. Since the optimal approach of testing
all possible subsets of variables with the prediction model is intractable, an
incremental selection approach using a nonparametric statistic is a good
option, as it avoids the computationally intensive use of the model itself. It
has two drawbacks however: the number of groups of variables to test is still
huge, and colinearities can make the results unstable. To overcome these
limitations, this paper presents a method to select groups of spectral
variables. It consists in a forward-backward procedure applied to the
coefficients of a B-Spline representation of the spectra. The criterion used in
the forward-backward procedure is the mutual information, which allows
nonlinear dependencies between variables to be found, unlike the commonly used
correlation. The spline representation is used to get interpretability of the
results, as groups of consecutive spectral variables will be selected. The
experiments conducted on NIR spectra from fescue grass and diesel fuels show
that the method provides clearly identified groups of selected variables,
making interpretation easy, while keeping a low computational load. The
prediction performances obtained using the selected coefficients are higher
than those obtained by the same method applied directly to the original
variables and similar to those obtained using traditional models, although
using significantly less spectral variables.
| [
"Fabrice Rossi (INRIA Rocquencourt / INRIA Sophia Antipolis), Damien\n Fran\\c{c}ois (CESAME), Vincent Wertz (CESAME), Marc Meurens (BNUT), Michel\n Verleysen (DICE - MLG)",
"['Fabrice Rossi' 'Damien François' 'Vincent Wertz' 'Marc Meurens'\n 'Michel Verleysen']"
] |
cs.LG stat.AP | 10.1016/j.neucom.2006.11.019 | 0709.3640 | null | null | http://arxiv.org/abs/0709.3640v1 | 2007-09-23T14:09:28Z | 2007-09-23T14:09:28Z | Resampling methods for parameter-free and robust feature selection with
mutual information | Combining the mutual information criterion with a forward feature selection
strategy offers a good trade-off between optimality of the selected feature
subset and computation time. However, it requires to set the parameter(s) of
the mutual information estimator and to determine when to halt the forward
procedure. These two choices are difficult to make because, as the
dimensionality of the subset increases, the estimation of the mutual
information becomes less and less reliable. This paper proposes to use
resampling methods, a K-fold cross-validation and the permutation test, to
address both issues. The resampling methods bring information about the
variance of the estimator, information which can then be used to automatically
set the parameter and to calculate a threshold to stop the forward procedure.
The procedure is illustrated on a synthetic dataset as well as on real-world
examples.
| [
"Damien Fran\\c{c}ois (CESAME), Fabrice Rossi (INRIA Rocquencourt /\n INRIA Sophia Antipolis), Vincent Wertz (CESAME), Michel Verleysen (DICE -\n MLG)",
"['Damien François' 'Fabrice Rossi' 'Vincent Wertz' 'Michel Verleysen']"
] |
cs.LG cs.AI cs.NE | null | 0709.3965 | null | null | http://arxiv.org/pdf/0709.3965v2 | 2007-09-26T10:37:00Z | 2007-09-25T14:28:32Z | Evolving Classifiers: Methods for Incremental Learning | The ability of a classifier to take on new information and classes by
evolving the classifier without it having to be fully retrained is known as
incremental learning. Incremental learning has been successfully applied to
many classification problems, where the data is changing and is not all
available at once. In this paper there is a comparison between Learn++, which
is one of the most recent incremental learning algorithms, and the new proposed
method of Incremental Learning Using Genetic Algorithm (ILUGA). Learn++ has
shown good incremental learning capabilities on benchmark datasets on which the
new ILUGA method has been tested. ILUGA has also shown good incremental
learning ability using only a few classifiers and does not suffer from
catastrophic forgetting. The results obtained for ILUGA on the Optical
Character Recognition (OCR) and Wine datasets are good, with an overall
accuracy of 93% and 94% respectively, showing a 4% improvement over Learn++.MT
for the difficult multi-class OCR dataset.
| [
"Greg Hulley and Tshilidzi Marwala",
"['Greg Hulley' 'Tshilidzi Marwala']"
] |
cs.LG cs.AI | null | 0709.3967 | null | null | http://arxiv.org/pdf/0709.3967v1 | 2007-09-25T14:37:40Z | 2007-09-25T14:37:40Z | Classification of Images Using Support Vector Machines | Support Vector Machines (SVMs) are a relatively new supervised classification
technique to the land cover mapping community. They have their roots in
Statistical Learning Theory and have gained prominence because they are robust,
accurate and are effective even when using a small training sample. By their
nature SVMs are essentially binary classifiers, however, they can be adopted to
handle the multiple classification tasks common in remote sensing studies. The
two approaches commonly used are the One-Against-One (1A1) and One-Against-All
(1AA) techniques. In this paper, these approaches are evaluated in as far as
their impact and implication for land cover mapping. The main finding from this
research is that whereas the 1AA technique is more predisposed to yielding
unclassified and mixed pixels, the resulting classification accuracy is not
significantly different from the 1A1 approach. It is the authors' conclusion that
ultimately the choice of technique adopted boils down to personal preference
and the uniqueness of the dataset at hand.
| [
"['Gidudu Anthony' 'Hulley Greg' 'Marwala Tshilidzi']",
"Gidudu Anthony, Hulley Greg and Marwala Tshilidzi"
] |
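A minimal, hedged sketch of the One-Against-One (1A1) and One-Against-All (1AA) strategies compared above, built with scikit-learn SVMs; a toy dataset stands in for the land cover imagery and the kernel settings are illustrative.

```python
# Sketch of the two multi-class strategies discussed above, using scikit-learn;
# a toy dataset stands in for the land cover imagery.
from sklearn.datasets import load_iris
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
one_vs_one = OneVsOneClassifier(SVC(kernel="rbf", gamma="scale"))     # 1A1
one_vs_all = OneVsRestClassifier(SVC(kernel="rbf", gamma="scale"))    # 1AA

for name, clf in [("1A1", one_vs_one), ("1AA", one_vs_all)]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(name, scores.mean())
```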
cs.LG | null | 0710.0485 | null | null | http://arxiv.org/pdf/0710.0485v2 | 2008-06-27T18:45:01Z | 2007-10-02T10:08:41Z | Prediction with expert advice for the Brier game | We show that the Brier game of prediction is mixable and find the optimal
learning rate and substitution function for it. The resulting prediction
algorithm is applied to predict results of football and tennis matches. The
theoretical performance guarantee turns out to be rather tight on these data
sets, especially in the case of the more extensive tennis data.
| [
"['Vladimir Vovk' 'Fedor Zhdanov']",
"Vladimir Vovk and Fedor Zhdanov"
] |
cs.DB cs.LG cs.LO | null | 0710.2083 | null | null | http://arxiv.org/pdf/0710.2083v1 | 2007-10-10T18:00:44Z | 2007-10-10T18:00:44Z | Association Rules in the Relational Calculus | One of the most utilized data mining tasks is the search for association
rules. Association rules represent significant relationships between items in
transactions. We extend the concept of association rule to represent a much
broader class of associations, which we refer to as \emph{entity-relationship
rules.} Semantically, entity-relationship rules express associations between
properties of related objects. Syntactically, these rules are based on a broad
subclass of safe domain relational calculus queries. We propose a new
definition of support and confidence for entity-relationship rules and for the
frequency of entity-relationship queries. We prove that the definition of
frequency satisfies standard probability axioms and the Apriori property.
| [
"['Oliver Schulte' 'Flavia Moser' 'Martin Ester' 'Zhiyong Lu']",
"Oliver Schulte, Flavia Moser, Martin Ester and Zhiyong Lu"
] |
cs.CL cs.AI cs.LG | null | 0710.2446 | null | null | http://arxiv.org/pdf/0710.2446v1 | 2007-10-12T12:44:11Z | 2007-10-12T12:44:11Z | The structure of verbal sequences analyzed with unsupervised learning
techniques | Data mining allows the exploration of sequences of phenomena, whereas one
usually tends to focus on isolated phenomena or on the relation between two
phenomena. It offers invaluable tools for theoretical analyses and exploration
of the structure of sentences, texts, dialogues, and speech. We report here the
results of an attempt at using it for inspecting sequences of verbs from French
accounts of road accidents. This analysis comes from an original approach of
unsupervised training allowing the discovery of the structure of sequential
data. The entries of the analyzer were only made of the verbs appearing in the
sentences. It provided a classification of the links between two successive
verbs into four distinct clusters, allowing thus text segmentation. We give
here an interpretation of these clusters by applying a statistical analysis to
independent semantic annotations.
| [
"['Catherine Recanati' 'Nicoleta Rogovschi' 'Younès Bennani']",
"Catherine Recanati (LIPN), Nicoleta Rogovschi (LIPN), Youn\\`es Bennani\n (LIPN)"
] |
cs.LG | null | 0710.2848 | null | null | http://arxiv.org/pdf/0710.2848v1 | 2007-10-15T15:38:33Z | 2007-10-15T15:38:33Z | Consistency of trace norm minimization | Regularization by the sum of singular values, also referred to as the trace
norm, is a popular technique for estimating low rank rectangular matrices. In
this paper, we extend some of the consistency results of the Lasso to provide
necessary and sufficient conditions for rank consistency of trace norm
minimization with the square loss. We also provide an adaptive version that is
rank consistent even when the necessary condition for the non adaptive version
is not fulfilled.
| [
"Francis Bach (WILLOW Project - Inria/Ens)",
"['Francis Bach']"
] |
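The trace norm regularizer discussed above has a simple proximal operator, soft-thresholding of the singular values, which is the basic building block of many trace norm minimization algorithms. The sketch below is illustrative only; the matrix and the threshold are made up and it is not the paper's estimation procedure.

```python
# Minimal sketch: proximal operator of the trace (nuclear) norm, i.e. soft-thresholding
# of singular values, a basic building block of trace norm minimization.
import numpy as np

def prox_trace_norm(M, lam):
    """Return argmin_X 0.5*||X - M||_F^2 + lam * ||X||_*."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_shrunk = np.maximum(s - lam, 0.0)
    return U @ np.diag(s_shrunk) @ Vt

rng = np.random.default_rng(0)
M = rng.normal(size=(6, 2)) @ rng.normal(size=(2, 5)) + 0.01 * rng.normal(size=(6, 5))
X = prox_trace_norm(M, lam=0.5)
print(np.linalg.matrix_rank(X))   # typically low rank after shrinkage
```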
cs.LG cs.IR | null | 0710.2889 | null | null | http://arxiv.org/pdf/0710.2889v2 | 2007-12-07T00:02:44Z | 2007-10-15T18:25:15Z | An efficient reduction of ranking to classification | This paper describes an efficient reduction of the learning problem of
ranking to binary classification. The reduction guarantees an average pairwise
misranking regret of at most that of the binary classifier, improving a
recent result of Balcan et al. which only guarantees a factor of 2. Moreover,
our reduction applies to a broader class of ranking loss functions, admits a
simpler proof, and the expected running time complexity of our algorithm in
terms of the number of calls to a classifier or preference function is improved
from $\Omega(n^2)$ to $O(n \log n)$. In addition, when the top $k$ ranked
elements only are required ($k \ll n$), as in many applications in information
extraction or search engines, the time complexity of our algorithm can be
further reduced to $O(k \log k + n)$. Our reduction and algorithm are thus
practical for realistic applications where the number of points to rank exceeds
several thousand. Many of our results also extend beyond the bipartite case
previously studied.
Our reduction is a randomized one. To complement our result, we also derive
lower bounds on any deterministic reduction from binary (preference)
classification to ranking, implying that our use of a randomized reduction is
essentially necessary for the guarantees we provide.
| [
"Nir Ailon and Mehryar Mohri",
"['Nir Ailon' 'Mehryar Mohri']"
] |
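The O(n log n) running time mentioned above comes from driving a randomized QuickSort with a pairwise preference function. The sketch below illustrates that mechanism with a hypothetical prefers callable; it is not the authors' exact reduction and says nothing about the regret guarantees.

```python
# Illustrative sketch: ranking items with a randomized QuickSort driven by a
# pairwise preference function, the mechanism behind the O(n log n) claim above.
# `prefers` is a hypothetical stand-in for a learned preference/classifier.
import random

def rank(items, prefers, rng=random.Random(0)):
    if len(items) <= 1:
        return list(items)
    pivot = rng.choice(items)
    left = [x for x in items if x != pivot and prefers(x, pivot)]
    right = [x for x in items if x != pivot and not prefers(x, pivot)]
    return rank(left, prefers, rng) + [pivot] + rank(right, prefers, rng)

# Toy preference: smaller numbers should be ranked first.
print(rank([5, 3, 8, 1, 4], prefers=lambda a, b: a < b))   # -> [1, 3, 4, 5, 8]
```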
cs.LG cs.CE q-bio.QM | null | 0710.5116 | null | null | http://arxiv.org/pdf/0710.5116v1 | 2007-10-26T15:13:21Z | 2007-10-26T15:13:21Z | Combining haplotypers | Statistically resolving the underlying haplotype pair for a genotype
measurement is an important intermediate step in gene mapping studies, and has
received much attention recently. Consequently, a variety of methods for this
problem have been developed. Different methods employ different statistical
models, and thus implicitly encode different assumptions about the nature of
the underlying haplotype structure. Depending on the population sample in
question, their relative performance can vary greatly, and it is unclear which
method to choose for a particular sample. Instead of choosing a single method,
we explore combining predictions returned by different methods in a principled
way, and thereby circumvent the problem of method selection.
We propose several techniques for combining haplotype reconstructions and
analyze their computational properties. In an experimental study on real-world
haplotype data we show that such techniques can provide more accurate and
robust reconstructions, and are useful for outlier detection. Typically, the
combined prediction is at least as accurate as or even more accurate than the
best individual method, effectively circumventing the method selection problem.
| [
"['Matti Kääriäinen' 'Niels Landwehr' 'Sampsa Lappalainen'\n 'Taneli Mielikäinen']",
"Matti K\\\"a\\\"ari\\\"ainen, Niels Landwehr, Sampsa Lappalainen and Taneli\n Mielik\\\"ainen"
] |
cs.DS cs.LG | null | 0711.0189 | null | null | http://arxiv.org/pdf/0711.0189v1 | 2007-11-01T19:04:43Z | 2007-11-01T19:04:43Z | A Tutorial on Spectral Clustering | In recent years, spectral clustering has become one of the most popular
modern clustering algorithms. It is simple to implement, can be solved
efficiently by standard linear algebra software, and very often outperforms
traditional clustering algorithms such as the k-means algorithm. At first
glance spectral clustering appears slightly mysterious, and it is not obvious
to see why it works at all and what it really does. The goal of this tutorial
is to give some intuition on those questions. We describe different graph
Laplacians and their basic properties, present the most common spectral
clustering algorithms, and derive those algorithms from scratch by several
different approaches. Advantages and disadvantages of the different spectral
clustering algorithms are discussed.
| [
"Ulrike von Luxburg",
"['Ulrike von Luxburg']"
] |
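A minimal sketch of the unnormalized spectral clustering recipe described in the tutorial above (similarity graph, graph Laplacian, bottom eigenvectors, k-means); the Gaussian affinity, the sigma value, and the toy data are illustrative assumptions.

```python
# Minimal sketch of unnormalized spectral clustering as described in the tutorial:
# similarity graph -> graph Laplacian -> bottom eigenvectors -> k-means.
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.cluster import KMeans

def spectral_clustering(X, n_clusters, sigma=1.0):
    W = np.exp(-cdist(X, X, "sqeuclidean") / (2 * sigma ** 2))   # Gaussian affinity
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(axis=1)) - W                                # unnormalized Laplacian
    eigvals, eigvecs = np.linalg.eigh(L)
    U = eigvecs[:, :n_clusters]                                   # first k eigenvectors
    return KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(U)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
labels = spectral_clustering(X, n_clusters=2)
print(labels[:5], labels[-5:])    # the two blobs receive different labels
```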
cs.AI cs.LG | null | 0711.1814 | null | null | http://arxiv.org/pdf/0711.1814v1 | 2007-11-12T17:15:34Z | 2007-11-12T17:15:34Z | Building Rules on Top of Ontologies for the Semantic Web with Inductive
Logic Programming | Building rules on top of ontologies is the ultimate goal of the logical layer
of the Semantic Web. To this aim an ad-hoc mark-up language for this layer is
currently under discussion. It is intended to follow the tradition of hybrid
knowledge representation and reasoning systems such as $\mathcal{AL}$-log that
integrates the description logic $\mathcal{ALC}$ and the function-free Horn
clausal language \textsc{Datalog}. In this paper we consider the problem of
automating the acquisition of these rules for the Semantic Web. We propose a
general framework for rule induction that adopts the methodological apparatus
of Inductive Logic Programming and relies on the expressive and deductive power
of $\mathcal{AL}$-log. The framework is valid whatever the scope of induction
(description vs. prediction) is. Yet, for illustrative purposes, we also
discuss an instantiation of the framework which aims at description and turns
out to be useful in Ontology Refinement.
Keywords: Inductive Logic Programming, Hybrid Knowledge Representation and
Reasoning Systems, Ontologies, Semantic Web.
Note: To appear in Theory and Practice of Logic Programming (TPLP)
| [
"['Francesca A. Lisi']",
"Francesca A. Lisi"
] |
cs.LG cs.CL cs.IR | null | 0711.2023 | null | null | http://arxiv.org/pdf/0711.2023v1 | 2007-11-13T16:28:47Z | 2007-11-13T16:28:47Z | Empirical Evaluation of Four Tensor Decomposition Algorithms | Higher-order tensor decompositions are analogous to the familiar Singular
Value Decomposition (SVD), but they transcend the limitations of matrices
(second-order tensors). SVD is a powerful tool that has achieved impressive
results in information retrieval, collaborative filtering, computational
linguistics, computational vision, and other fields. However, SVD is limited to
two-dimensional arrays of data (two modes), and many potential applications
have three or more modes, which require higher-order tensor decompositions.
This paper evaluates four algorithms for higher-order tensor decomposition:
Higher-Order Singular Value Decomposition (HO-SVD), Higher-Order Orthogonal
Iteration (HOOI), Slice Projection (SP), and Multislice Projection (MP). We
measure the time (elapsed run time), space (RAM and disk space requirements),
and fit (tensor reconstruction accuracy) of the four algorithms, under a
variety of conditions. We find that standard implementations of HO-SVD and HOOI
do not scale up to larger tensors, due to increasing RAM requirements. We
recommend HOOI for tensors that are small enough for the available RAM and MP
for larger tensors.
| [
"['Peter D. Turney']",
"Peter D. Turney (National Research Council of Canada)"
] |
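A compact numpy sketch of one of the four algorithms compared above, the Higher-Order SVD: a truncated SVD of each mode unfolding gives a factor matrix, and the core tensor is the multilinear projection onto those factors. The ranks and the random tensor are illustrative, and the HOOI, SP, and MP variants are not shown.

```python
# Compact sketch of Higher-Order SVD (HO-SVD): an SVD of each mode unfolding gives
# a factor matrix; the core tensor is the data projected onto those factors.
import numpy as np

def unfold(T, mode):
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_dot(T, M, mode):
    """Multiply tensor T by matrix M along the given mode."""
    Tm = np.moveaxis(T, mode, 0)
    out = np.tensordot(M, Tm, axes=(1, 0))
    return np.moveaxis(out, 0, mode)

def hosvd(T, ranks):
    factors = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
               for m, r in enumerate(ranks)]
    core = T
    for m, U in enumerate(factors):
        core = mode_dot(core, U.T, m)
    return core, factors

rng = np.random.default_rng(0)
T = rng.normal(size=(8, 9, 10))
core, factors = hosvd(T, ranks=(3, 3, 3))
approx = core
for m, U in enumerate(factors):
    approx = mode_dot(approx, U, m)
print("relative fit:", 1 - np.linalg.norm(T - approx) / np.linalg.norm(T))
```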
math.ST cs.LG math.PR stat.TH | null | 0711.2801 | null | null | http://arxiv.org/pdf/0711.2801v2 | 2007-12-02T21:59:44Z | 2007-11-18T17:28:23Z | Inverse Sampling for Nonasymptotic Sequential Estimation of Bounded
Variable Means | In this paper, we consider the nonasymptotic sequential estimation of means
of random variables bounded in between zero and one. We have rigorously
demonstrated that, in order to guarantee prescribed relative precision and
confidence level, it suffices to continue sampling until the sample sum is no
less than a certain bound and then take the average of samples as an estimate
for the mean of the bounded random variable. We have developed an explicit
formula and a bisection search method for the determination of such bound of
sample sum, without any knowledge of the bounded variable. Moreover, we have
derived bounds for the distribution of sample size. In the special case of
Bernoulli random variables, we have established analytical and numerical
methods to further reduce the bound of sample sum and thus improve the
efficiency of sampling. Furthermore, fallacies in existing results are
detected and analyzed.
| [
"['Xinjia Chen']",
"Xinjia Chen"
] |
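A hedged sketch of the sequential rule described above: keep sampling until the running sum reaches a prescribed bound, then return the sample average. The bound itself is determined in the paper by an explicit formula and a bisection search, so sum_bound below is only a placeholder parameter.

```python
# Sketch of the sequential rule described above: sample until the running sum
# reaches a prescribed bound, then return the average. `sum_bound` is a placeholder;
# in the paper it is computed from the target relative precision and confidence level.
import random

def inverse_sampling_estimate(draw_sample, sum_bound):
    total, n = 0.0, 0
    while total < sum_bound:
        total += draw_sample()          # samples assumed to lie in [0, 1]
        n += 1
    return total / n, n

rng = random.Random(0)
mean_hat, n_used = inverse_sampling_estimate(lambda: rng.random() < 0.3, sum_bound=50)
print(mean_hat, n_used)                 # estimate of the Bernoulli mean 0.3
```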
cs.LG cs.AI cs.CV | null | 0711.2914 | null | null | http://arxiv.org/pdf/0711.2914v1 | 2007-11-19T12:25:00Z | 2007-11-19T12:25:00Z | Image Classification Using SVMs: One-against-One Vs One-against-All | Support Vector Machines (SVMs) are a relatively new supervised classification
technique to the land cover mapping community. They have their roots in
Statistical Learning Theory and have gained prominence because they are robust,
accurate and are effective even when using a small training sample. By their
nature SVMs are essentially binary classifiers, however, they can be adopted to
handle the multiple classification tasks common in remote sensing studies. The
two approaches commonly used are the One-Against-One (1A1) and One-Against-All
(1AA) techniques. In this paper, these approaches are evaluated in as far as
their impact and implication for land cover mapping. The main finding from this
research is that whereas the 1AA technique is more predisposed to yielding
unclassified and mixed pixels, the resulting classification accuracy is not
significantly different from the 1A1 approach. It is the authors' conclusion
therefore that ultimately the choice of technique adopted boils down to
personal preference and the uniqueness of the dataset at hand.
| [
"['Gidudu Anthony' 'Hulley Gregg' 'Marwala Tshilidzi']",
"Gidudu Anthony, Hulley Gregg and Marwala Tshilidzi"
] |
cs.LG | null | 0711.3594 | null | null | http://arxiv.org/pdf/0711.3594v1 | 2007-11-22T15:05:35Z | 2007-11-22T15:05:35Z | Clustering with Transitive Distance and K-Means Duality | Recent spectral clustering methods are popular and powerful techniques for
data clustering. These methods need to solve the eigenproblem whose
computational complexity is $O(n^3)$, where $n$ is the number of data samples.
In this paper, a non-eigenproblem based clustering method is proposed to deal
with the clustering problem. Its performance is comparable to the spectral
clustering algorithms but it is more efficient with computational complexity
$O(n^2)$. We show that with a transitive distance and an observed property,
called K-means duality, our algorithm can be used to handle data sets with
complex cluster shapes, multi-scale clusters, and noise. Moreover, no
parameters except the number of clusters need to be set in our algorithm.
| [
"['Chunjing Xu' 'Jianzhuang Liu' 'Xiaoou Tang']",
"Chunjing Xu, Jianzhuang Liu, Xiaoou Tang"
] |
cs.LG cs.IT math.IT | null | 0711.3675 | null | null | http://arxiv.org/pdf/0711.3675v1 | 2007-11-23T07:45:52Z | 2007-11-23T07:45:52Z | Derivations of Normalized Mutual Information in Binary Classifications | This correspondence studies the basic problem of classifications - how to
evaluate different classifiers. Although the conventional performance indexes,
such as accuracy, are commonly used in classifier selection or evaluation,
information-based criteria, such as mutual information, are becoming popular in
feature/model selections. In this work, we propose to assess classifiers in
terms of normalized mutual information (NI), which is novel and well defined in
a compact range for classifier evaluation. We derive closed-form relations of
normalized mutual information with respect to accuracy, precision, and recall
in binary classifications. By exploring the relations among them, we reveal
that NI is actually a set of nonlinear functions, with a concordant
power-exponent form, to each performance index. The relations can also be
expressed with respect to precision and recall, or to false alarm and hitting
rate (recall).
| [
"['Yong Wang' 'Bao-Gang Hu']",
"Yong Wang, Bao-Gang Hu"
] |
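A hedged sketch of computing a normalized mutual information score for a binary classifier from confusion-matrix counts; here MI is normalized by the entropy of the true labels, which may differ from the exact normalization derived in the correspondence above.

```python
# Sketch: mutual information between true and predicted binary labels computed from
# confusion-matrix counts, normalized here by the entropy of the true labels.
# The exact normalization used in the correspondence above may differ.
import numpy as np

def normalized_mutual_information(tp, fp, fn, tn):
    joint = np.array([[tp, fn], [fp, tn]], dtype=float)   # rows: true label, cols: prediction
    joint /= joint.sum()
    pt = joint.sum(axis=1)              # marginal of true labels
    pp = joint.sum(axis=0)              # marginal of predicted labels
    mi = sum(joint[i, j] * np.log(joint[i, j] / (pt[i] * pp[j]))
             for i in range(2) for j in range(2) if joint[i, j] > 0)
    h_true = -sum(p * np.log(p) for p in pt if p > 0)
    return mi / h_true

print(normalized_mutual_information(tp=40, fp=10, fn=5, tn=45))
```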
cs.LG | null | 0711.4452 | null | null | http://arxiv.org/pdf/0711.4452v1 | 2007-11-28T12:05:47Z | 2007-11-28T12:05:47Z | Covariance and PCA for Categorical Variables | Covariances from categorical variables are defined using a regular simplex
expression for categories. The method follows the variance definition by Gini,
and it gives the covariance as a solution of simultaneous equations. The
calculated results give reasonable values for test data. A method of principal
component analysis (RS-PCA) is also proposed using regular simplex expressions,
which allows easy interpretation of the principal components. The proposed
methods are applied to the variable selection problem on the USCensus1990
categorical data set, and they give an appropriate criterion for the variable
selection problem with categorical data.
| [
"Hirotaka Niitsuma and Takashi Okada",
"['Hirotaka Niitsuma' 'Takashi Okada']"
] |
cs.LG | null | 0712.0130 | null | null | http://arxiv.org/pdf/0712.0130v1 | 2007-12-02T09:38:26Z | 2007-12-02T09:38:26Z | On the Relationship between the Posterior and Optimal Similarity | For a classification problem described by the joint density $P(\omega,x)$,
models of $P(\omega=\omega'|x,x')$ (the ``Bayesian similarity measure'') have
been shown to be an optimal similarity measure for nearest neighbor
classification. This paper demonstrates several additional properties
of that conditional distribution. The paper first shows that we can
reconstruct, up to class labels, the class posterior distribution $P(\omega|x)$
given $P(\omega=\omega'|x,x')$, gives a procedure for recovering the class
labels, and gives an asymptotically Bayes-optimal classification procedure. It
also shows, given such an optimal similarity measure, how to construct a
classifier that outperforms the nearest neighbor classifier and achieves
Bayes-optimal classification rates. The paper then analyzes Bayesian similarity
in a framework where a classifier faces a number of related classification
tasks (multitask learning) and illustrates that reconstruction of the class
posterior distribution is not possible in general. Finally, the paper
identifies a distinct class of classification problems using
$P(\omega=\omega'|x,x')$ and shows that using $P(\omega=\omega'|x,x')$ to
solve those problems is the Bayes optimal solution.
| [
"Thomas M. Breuel",
"['Thomas M. Breuel']"
] |
cs.AI cs.CC cs.DM cs.LG | null | 0712.0451 | null | null | http://arxiv.org/pdf/0712.0451v1 | 2007-12-04T08:52:46Z | 2007-12-04T08:52:46Z | A Reactive Tabu Search Algorithm for Stimuli Generation in
Psycholinguistics | The generation of meaningless "words" matching certain statistical and/or
linguistic criteria is frequently needed for experimental purposes in
Psycholinguistics. Such stimuli receive the name of pseudowords or nonwords in
the Cognitive Neuroscience literatue. The process for building nonwords
sometimes has to be based on linguistic units such as syllables or morphemes,
resulting in a numerical explosion of combinations when the size of the
nonwords is increased. In this paper, a reactive tabu search scheme is proposed
to generate nonwords of variables size. The approach builds pseudowords by
using a modified Metaheuristic algorithm based on a local search procedure
enhanced by a feedback-based scheme. Experimental results show that the new
algorithm is a practical and effective tool for nonword generation.
| [
"Alejandro Chinea Manrique De Lara",
"['Alejandro Chinea Manrique De Lara']"
] |
cs.LG | null | 0712.0653 | null | null | http://arxiv.org/pdf/0712.0653v2 | 2009-05-11T05:49:09Z | 2007-12-05T05:39:07Z | Equations of States in Singular Statistical Estimation | Learning machines which have hierarchical structures or hidden variables are
singular statistical models because they are nonidentifiable and their Fisher
information matrices are singular. In singular statistical models, neither the
Bayes a posteriori distribution converges to the normal distribution nor the
maximum likelihood estimator satisfies asymptotic normality. This is the main
reason why it has been difficult to predict their generalization performances
from trained states. In this paper, we study four errors, (1) Bayes
generalization error, (2) Bayes training error, (3) Gibbs generalization error,
and (4) Gibbs training error, and prove that there are mathematical relations
among these errors. The formulas proved in this paper are equations of states
in statistical estimation because they hold for any true distribution, any
parametric model, and any a priori distribution. Also we show that Bayes and
Gibbs generalization errors are estimated by Bayes and Gibbs training errors,
and propose widely applicable information criteria which can be applied to both
regular and singular statistical models.
| [
"['Sumio Watanabe']",
"Sumio Watanabe"
] |
cs.LG cs.DM | null | 0712.0840 | null | null | http://arxiv.org/pdf/0712.0840v1 | 2007-12-05T22:25:03Z | 2007-12-05T22:25:03Z | A Universal Kernel for Learning Regular Languages | We give a universal kernel that renders all the regular languages linearly
separable. We are not able to compute this kernel efficiently and conjecture
that it is intractable, but we do have an efficient $\epsilon$-approximation.
| [
"Leonid (Aryeh) Kontorovich",
"['Leonid' 'Kontorovich']"
] |
cs.LG cs.AI cs.NE | null | 0712.0938 | null | null | http://arxiv.org/pdf/0712.0938v1 | 2007-12-06T13:52:04Z | 2007-12-06T13:52:04Z | Automatic Pattern Classification by Unsupervised Learning Using
Dimensionality Reduction of Data with Mirroring Neural Networks | This paper proposes an unsupervised learning technique by using Multi-layer
Mirroring Neural Network and Forgy's clustering algorithm. Multi-layer
Mirroring Neural Network is a neural network that can be trained with
generalized data inputs (different categories of image patterns) to perform
non-linear dimensionality reduction and the resultant low-dimensional code is
used for unsupervised pattern classification using Forgy's algorithm. By
adapting the non-linear activation function (modified sigmoidal function) and
initializing the weights and bias terms to small random values, mirroring of
the input pattern is initiated. In training, the weights and bias terms are
changed in such a way that the input presented is reproduced at the output by
back propagating the error. The mirroring neural network is capable of reducing
the input vector to a great degree (approximately 1/30th the original size) and
also able to reconstruct the input pattern at the output layer from this
reduced code units. The feature set (output of central hidden layer) extracted
from this network is fed to Forgy's algorithm, which classify input data
patterns into distinguishable classes. In the implementation of Forgy's
algorithm, initial seed points are selected in such a way that they are distant
enough to be perfectly grouped into different categories. Thus a new method of
unsupervised learning is formulated and demonstrated in this paper. This method
gave impressive results when applied to classification of different image
patterns.
| [
"['Dasika Ratna Deepthi' 'G. R. Aditya Krishna' 'K. Eswaran']",
"Dasika Ratna Deepthi, G.R.Aditya Krishna and K. Eswaran"
] |
cs.CC cs.LG | null | 0712.1402 | null | null | http://arxiv.org/pdf/0712.1402v2 | 2010-03-08T19:30:26Z | 2007-12-10T06:50:36Z | Reconstruction of Markov Random Fields from Samples: Some Easy
Observations and Algorithms | Markov random fields are used to model high dimensional distributions in a
number of applied areas. Much recent interest has been devoted to the
reconstruction of the dependency structure from independent samples from the
Markov random fields. We analyze a simple algorithm for reconstructing the
underlying graph defining a Markov random field on $n$ nodes and maximum degree
$d$ given observations. We show that under mild non-degeneracy conditions it
reconstructs the generating graph with high probability using $\Theta(d
\epsilon^{-2}\delta^{-4} \log n)$ samples where $\epsilon,\delta$ depend on the
local interactions. For most local interactions, $\epsilon,\delta$ are of order
$\exp(-O(d))$.
Our results are optimal as a function of $n$ up to a multiplicative constant
depending on $d$ and the strength of the local interactions. Our results seem
to be the first results for general models that guarantee that {\em the}
generating model is reconstructed. Furthermore, we provide explicit $O(n^{d+2}
\epsilon^{-2}\delta^{-4} \log n)$ running time bound. In cases where the
measure on the graph has correlation decay, the running time is $O(n^2 \log n)$
for all fixed $d$. We also discuss the effect of observing noisy samples and
show that as long as the noise level is low, our algorithm is effective. On the
other hand, we construct an example where large noise implies
non-identifiability even for generic noise and interactions. Finally, we
briefly show that in some simple cases, models with hidden nodes can also be
recovered.
| [
"['Guy Bresler' 'Elchanan Mossel' 'Allan Sly']",
"Guy Bresler, Elchanan Mossel, Allan Sly"
] |
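The algorithm analysed in the record above is an exhaustive neighbourhood search. The sketch below captures only its empirical flavour for binary variables: a node v is declared a neighbour of u when, for some small conditioning set, flipping v noticeably shifts the conditional mean of u. The thresholds `eps` and `min_count` are assumed knobs, not the paper's epsilon/delta.

```python
import itertools
import numpy as np

def reconstruct_mrf_edges(samples, d, eps=0.1, min_count=30):
    """Naive neighbourhood search for a binary MRF from an (m x n) 0/1 sample matrix."""
    n = samples.shape[1]
    edges = set()
    for u, v in itertools.permutations(range(n), 2):
        others = [w for w in range(n) if w not in (u, v)]
        for S in itertools.combinations(others, min(d - 1, len(others))):
            for assign in itertools.product([0, 1], repeat=len(S)):
                in_S = np.all(samples[:, list(S)] == assign, axis=1)
                means = []
                for xv in (0, 1):
                    rows = in_S & (samples[:, v] == xv)
                    if rows.sum() < min_count:
                        break
                    means.append(samples[rows, u].mean())
                # dependence of u on v, given S, marks v as a neighbour of u
                if len(means) == 2 and abs(means[0] - means[1]) > eps:
                    edges.add(tuple(sorted((u, v))))
    return edges

# Toy usage: x2 is a noisy copy of x0, x1 is independent; edge (0, 2) should be found
rng = np.random.default_rng(0)
x0 = rng.integers(0, 2, 20000)
x1 = rng.integers(0, 2, 20000)
x2 = np.where(rng.random(20000) < 0.9, x0, 1 - x0)
print(reconstruct_mrf_edges(np.column_stack([x0, x1, x2]), d=1))
```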
cs.NI cs.LG | null | 0712.2497 | null | null | http://arxiv.org/pdf/0712.2497v1 | 2007-12-15T06:50:43Z | 2007-12-15T06:50:43Z | A New Theoretic Foundation for Cross-Layer Optimization | Cross-layer optimization solutions have been proposed in recent years to
improve the performance of network users operating in a time-varying,
error-prone wireless environment. However, these solutions often rely on ad-hoc
optimization approaches, which ignore the different environmental dynamics
experienced at various layers by a user and violate the layered network
architecture of the protocol stack by requiring layers to provide access to
their internal protocol parameters to other layers. This paper presents a new
theoretic foundation for cross-layer optimization, which allows each layer to
make autonomous decisions individually, while maximizing the utility of the
wireless user by optimally determining what information needs to be exchanged
among layers. Hence, this cross-layer framework does not change the current
layered architecture. Specifically, because the wireless user interacts with
the environment at various layers of the protocol stack, the cross-layer
optimization problem is formulated as a layered Markov decision process (MDP)
in which each layer adapts its own protocol parameters and exchanges
information (messages) with other layers in order to cooperatively maximize the
performance of the wireless user. The message exchange mechanism for
determining the optimal cross-layer transmission strategies has been designed
for both off-line optimization and on-line dynamic adaptation. We also show
that many existing cross-layer optimization algorithms can be formulated as
simplified, sub-optimal versions of our layered MDP framework.
| [
"Fangwen Fu and Mihaela van der Schaar",
"['Fangwen Fu' 'Mihaela van der Schaar']"
] |
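The record above builds on standard MDP machinery. As a reference point only, here is a generic value-iteration sketch for a finite MDP; it is not the paper's layered decomposition or message-exchange mechanism, and the toy transition and reward numbers are arbitrary.

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-8):
    """Value iteration for a finite MDP.

    P[a, s, s'] are transition probabilities and R[a, s] expected rewards;
    returns the optimal state values and a greedy policy.
    """
    V = np.zeros(P.shape[1])
    while True:
        Q = R + gamma * np.einsum("ast,t->as", P, V)   # one-step lookahead per action
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=0)
        V = V_new

# Toy usage with arbitrary numbers (purely illustrative)
rng = np.random.default_rng(1)
P = rng.random((2, 3, 3)); P /= P.sum(axis=2, keepdims=True)
R = rng.random((2, 3))
print(value_iteration(P, R))
```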
cs.LG | null | 0712.2869 | null | null | http://arxiv.org/pdf/0712.2869v1 | 2007-12-18T03:30:05Z | 2007-12-18T03:30:05Z | Density estimation in linear time | We consider the problem of choosing a density estimate from a set of
distributions F, minimizing the L1-distance to an unknown distribution
(Devroye, Lugosi 2001). Devroye and Lugosi analyze two algorithms for the
problem: Scheffe tournament winner and minimum distance estimate. The Scheffe
tournament estimate requires fewer computations than the minimum distance
estimate, but has strictly weaker guarantees than the latter.
We focus on the computational aspect of density estimation. We present two
algorithms, both with the same guarantee as the minimum distance estimate. The
first one, a modification of the minimum distance estimate, uses the same
number (quadratic in |F|) of computations as the Scheffe tournament. The second
one, called ``efficient minimum loss-weight estimate,'' uses only a linear
number of computations, assuming that F is preprocessed.
We also give examples showing that the guarantees of the algorithms cannot be
improved and explore randomized algorithms for density estimation.
| [
"['Satyaki Mahalanabis' 'Daniel Stefankovic']",
"Satyaki Mahalanabis, Daniel Stefankovic"
] |
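For concreteness, here is a one-dimensional sketch of the Devroye-Lugosi minimum distance estimate that the record above takes as its baseline: the quadratic-in-|F| pairwise comparison over Scheffe sets, not the paper's new linear-time "efficient minimum loss-weight estimate". The grid discretisation and the Gaussian candidates in the usage example are assumptions for illustration.

```python
import numpy as np

def minimum_distance_estimate(candidates, data, grid):
    """Pick the density in a finite class F closest to the data in the Scheffe-set sense.

    `candidates` are density functions evaluated on an equally spaced `grid`;
    for each ordered pair (f, g) the Scheffe set is A = {x : f(x) > g(x)}, and the
    winner minimises its worst disagreement between the mass it assigns to A and
    the empirical mass of `data` on A.
    """
    dx = grid[1] - grid[0]
    vals = np.array([f(grid) for f in candidates])       # |F| x len(grid)
    cell = np.clip(np.searchsorted(grid, data), 0, len(grid) - 1)

    def empirical_mass(mask):
        return float(np.mean(mask[cell]))                 # fraction of samples falling in A

    best, best_score = None, np.inf
    for i, fi in enumerate(vals):
        worst = 0.0
        for j, fj in enumerate(vals):
            if i != j:
                A = fi > fj                                # Scheffe set of (f_i, f_j)
                worst = max(worst, abs(float(fi[A].sum()) * dx - empirical_mass(A)))
        if worst < best_score:
            best, best_score = i, worst
    return best

# Toy usage: choose between two Gaussian candidates given samples from one of them
rng = np.random.default_rng(0)
grid = np.linspace(-6, 6, 1200)
gauss = lambda m, s: (lambda x: np.exp(-(x - m) ** 2 / (2 * s ** 2)) / (s * np.sqrt(2 * np.pi)))
F = [gauss(0.0, 1.0), gauss(2.0, 1.0)]
print(minimum_distance_estimate(F, rng.normal(2.0, 1.0, 2000), grid))  # expected: 1
```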
cs.LG | null | 0712.3402 | null | null | http://arxiv.org/pdf/0712.3402v1 | 2007-12-20T13:06:50Z | 2007-12-20T13:06:50Z | Graph kernels between point clouds | Point clouds are sets of points in two or three dimensions. Most kernel
methods for learning on sets of points have not yet dealt with the specific
geometrical invariances and practical constraints associated with point clouds
in computer vision and graphics. In this paper, we present extensions of graph
kernels for point clouds, which allow kernel methods to be used for such objects
as shapes, line drawings, or any three-dimensional point clouds. In order to
design rich and numerically efficient kernels with as few free parameters as
possible, we use kernels between covariance matrices and their factorizations
on graphical models. We derive polynomial time dynamic programming recursions
and present applications to recognition of handwritten digits and Chinese
characters from few training examples.
| [
"Francis Bach (WILLOW Project - Inria/Ens)",
"['Francis Bach']"
] |
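The record above mentions kernels between covariance matrices as a building block. The sketch below is a deliberately crude stand-in, comparing two point clouds only through their empirical covariances with a Gaussian kernel on the Frobenius distance; it is not the paper's graph kernel with graphical-model factorisations, and `sigma` is an assumed bandwidth.

```python
import numpy as np

def covariance_kernel(X, Y, sigma=1.0):
    """Positive-definite kernel between two point clouds (rows are points).

    K(X, Y) = exp(-||cov(X) - cov(Y)||_F^2 / (2 sigma^2)).
    """
    Cx = np.cov(X, rowvar=False)
    Cy = np.cov(Y, rowvar=False)
    return float(np.exp(-np.linalg.norm(Cx - Cy, "fro") ** 2 / (2 * sigma ** 2)))

# Toy usage: an isotropic cloud versus a stretched one
rng = np.random.default_rng(0)
A = rng.normal(size=(100, 2))
B = rng.normal(size=(120, 2)) @ np.array([[2.0, 0.0], [0.0, 0.5]])
print(covariance_kernel(A, A), covariance_kernel(A, B))
```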
cs.NE cs.AI cs.LG | null | 0712.3654 | null | null | http://arxiv.org/pdf/0712.3654v1 | 2007-12-21T10:05:52Z | 2007-12-21T10:05:52Z | Improving the Performance of PieceWise Linear Separation Incremental
Algorithms for Practical Hardware Implementations | In this paper we shall review the common problems associated with Piecewise
Linear Separation incremental algorithms. This kind of neural model yields poor
performance when dealing with some classification problems, due to the
evolving schemes used to construct the resulting networks. To avoid this
undesirable behavior we shall propose a modification criterion. It is based
upon the definition of a function which will provide information about the
quality of the network growth process during the learning phase. This function
is evaluated periodically as the network structure evolves and will permit, as
we shall show through exhaustive benchmarks, a considerable improvement in the
performance (measured in terms of network complexity and generalization
capabilities) offered by the networks generated by these incremental models.
| [
"['Alejandro Chinea Manrique De Lara' 'Juan Manuel Moreno'\n 'Arostegui Jordi Madrenas' 'Joan Cabestany']",
"Alejandro Chinea Manrique De Lara, Juan Manuel Moreno, Arostegui Jordi\n Madrenas, Joan Cabestany"
] |
cs.LG cs.CY | 10.1142/S0129183109013613 | 0712.3807 | null | null | http://arxiv.org/abs/0712.3807v3 | 2009-10-14T15:30:56Z | 2007-12-26T14:25:18Z | Improved Collaborative Filtering Algorithm via Information
Transformation | In this paper, we propose a spreading activation approach for collaborative
filtering (SA-CF). By using the opinion spreading process, the similarity
between any users can be obtained. The algorithm has remarkably higher accuracy
than the standard collaborative filtering (CF) using Pearson correlation.
Furthermore, we introduce a free parameter $\beta$ to regulate the
contributions of objects to user-user correlations. The numerical results
indicate that decreasing the influence of popular objects can further improve
the algorithmic accuracy and personalization. We argue that a better algorithm
should simultaneously require less computation and generate higher accuracy.
Accordingly, we further propose an algorithm involving only the top-$N$ similar
neighbors for each target user, which has both lower computational complexity
and higher algorithmic accuracy.
| [
"Jian-Guo Liu, Bing-Hong Wang, Qiang Guo",
"['Jian-Guo Liu' 'Bing-Hong Wang' 'Qiang Guo']"
] |
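The record above damps the contribution of popular objects with an exponent and restricts predictions to the top-N most similar users. The sketch below implements that general recipe for a binary user-object matrix; the exact spreading-activation normalisation of SA-CF may differ, and the function names, `beta` default, and toy matrix are illustrative assumptions.

```python
import numpy as np

def user_similarity(R, beta=0.8):
    """User-user similarity from a binary user-object matrix R (users x objects).

    Each co-selected object o contributes 1 / k(o)**beta, where k(o) is the
    object's degree, so popular objects are down-weighted.
    """
    k = R.sum(axis=0)                        # object degrees
    w = np.where(k > 0, 1.0 / np.maximum(k, 1) ** beta, 0.0)
    return (R * w) @ R.T                     # sim[u, v] = sum_o R[u,o] R[v,o] / k(o)^beta

def recommend(R, user, beta=0.8, top_n=5, n_items=3):
    """Score unseen objects for `user` using only the top-N most similar users."""
    sim = user_similarity(R, beta)
    np.fill_diagonal(sim, 0.0)
    neighbours = np.argsort(sim[user])[::-1][:top_n]
    scores = sim[user, neighbours] @ R[neighbours]
    scores[R[user] > 0] = -np.inf            # never recommend already-selected objects
    return np.argsort(scores)[::-1][:n_items]

# Toy usage on a 4-user, 5-object selection matrix
R = np.array([[1, 1, 0, 0, 1],
              [1, 0, 1, 0, 1],
              [0, 1, 1, 1, 0],
              [1, 1, 0, 1, 0]], dtype=float)
print(recommend(R, user=0))
```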
stat.CO cs.LG | 10.1111/j.1467-9868.2009.00698.x | 0712.4273 | null | null | http://arxiv.org/abs/0712.4273v4 | 2017-03-01T13:40:32Z | 2007-12-27T19:44:34Z | Online EM Algorithm for Latent Data Models | In this contribution, we propose a generic online (also sometimes called
adaptive or recursive) version of the Expectation-Maximisation (EM) algorithm
applicable to latent variable models of independent observations. Compared to
the algorithm of Titterington (1984), this approach is more directly connected
to the usual EM algorithm and does not rely on integration with respect to the
complete data distribution. The resulting algorithm is usually simpler and is
shown to achieve convergence to the stationary points of the Kullback-Leibler
divergence between the marginal distribution of the observation and the model
distribution at the optimal rate, i.e., that of the maximum likelihood
estimator. In addition, the proposed approach is also suitable for conditional
(or regression) models, as illustrated in the case of the mixture of linear
regressions model.
| [
"['Olivier Cappé' 'Eric Moulines']",
"Olivier Capp\\'e (LTCI), Eric Moulines (LTCI)"
] |
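In the spirit of the record above, here is a minimal online EM sketch for a one-dimensional Gaussian mixture: the expected complete-data sufficient statistics are updated by stochastic approximation with step sizes gamma_t = t^(-0.6), and the parameters are re-estimated from them after a short burn-in. The step-size schedule, burn-in length, and initial values are assumptions, not the paper's recommendations.

```python
import numpy as np

def online_em_gmm(stream, K=2, gamma0=0.6, burn_in=20, seed=0):
    """Online EM for a 1-D K-component Gaussian mixture over a data stream."""
    rng = np.random.default_rng(seed)
    pi, mu, var = np.full(K, 1.0 / K), rng.normal(size=K), np.ones(K)
    # sufficient statistics: responsibility mass, first and second moments
    s0, s1, s2 = pi.copy(), pi * mu, pi * (var + mu ** 2)
    for t, x in enumerate(stream, start=1):
        gamma = t ** (-gamma0)
        # E-step for the single new observation (posterior responsibilities r)
        logp = -0.5 * ((x - mu) ** 2 / var + np.log(2 * np.pi * var)) + np.log(pi)
        r = np.exp(logp - logp.max()); r /= r.sum()
        # stochastic-approximation update of the sufficient statistics
        s0 = (1 - gamma) * s0 + gamma * r
        s1 = (1 - gamma) * s1 + gamma * r * x
        s2 = (1 - gamma) * s2 + gamma * r * x ** 2
        # M-step: parameters as a function of the statistics, after a burn-in
        if t > burn_in:
            pi = s0 / s0.sum()
            mu = s1 / s0
            var = np.maximum(s2 / s0 - mu ** 2, 1e-6)
    return pi, mu, var

# Toy usage: stream drawn from a two-component mixture
rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(-2, 1, 5000), rng.normal(3, 0.5, 5000)])
rng.shuffle(data)
print(online_em_gmm(data))
```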
cs.IT cs.LG math.IT math.OC | null | 0801.0390 | null | null | http://arxiv.org/pdf/0801.0390v1 | 2008-01-02T13:23:04Z | 2008-01-02T13:23:04Z | Staring at Economic Aggregators through Information Lenses | It is hard to exaggerate the role of economic aggregators -- functions that
summarize numerous and/or heterogeneous data -- in economic models since the
early twentieth century. In many cases, as witnessed by the pioneering works of
Cobb and Douglas, these functions were information quantities tailored to
economic theories, i.e. they were built to fit economic phenomena. In this
paper, we look at these functions from the complementary side: information. We
use a recent toolbox built on top of a vast class of distortions coined by
Bregman, whose field of application rivals that of metrics in various subfields of
mathematics. This toolbox makes it possible to assess the quality of an
aggregator (for consumptions, prices, labor, capital, wages, etc.), from the
standpoint of the information it carries. We prove a rather striking result.
From the informational standpoint, well-known economic aggregators do belong
to the \textit{optimal} set. As common economic assumptions enter the analysis,
this large set shrinks, and it essentially ends up \textit{exactly fitting}
either CES, or Cobb-Douglas, or both. To summarize, in the relevant economic
contexts, one could not have crafted a better aggregator from the
information standpoint. We also discuss global economic behaviors of optimal
information aggregators in general, and present a brief panorama of the links
between economic and information aggregators.
Keywords: Economic Aggregators, CES, Cobb-Douglas, Bregman divergences
| [
"['Richard Nock' 'Nicolas Sanz' 'Fred Celimene' 'Frank Nielsen']",
"Richard Nock, Nicolas Sanz, Fred Celimene, Frank Nielsen"
] |
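To make the objects named in the record above concrete, here is a small sketch computing the CES and Cobb-Douglas aggregators and a generic Bregman divergence. It makes no claim about the paper's optimality characterisation; the weights and inputs in the usage lines are arbitrary numbers for illustration.

```python
import numpy as np

def ces(x, w, rho):
    """CES aggregator: (sum_i w_i x_i^rho)^(1/rho); Cobb-Douglas is the rho -> 0 limit."""
    x, w = np.asarray(x, float), np.asarray(w, float)
    return float((w @ x ** rho) ** (1.0 / rho))

def cobb_douglas(x, w):
    """Cobb-Douglas aggregator: prod_i x_i^{w_i}, with weights summing to one."""
    x, w = np.asarray(x, float), np.asarray(w, float)
    return float(np.exp(w @ np.log(x)))

def bregman(F, gradF, p, q):
    """Bregman divergence D_F(p, q) = F(p) - F(q) - <gradF(q), p - q> for convex F."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(F(p) - F(q) - gradF(q) @ (p - q))

# Toy usage; with F(x) = ||x||^2 / 2 the Bregman divergence is half the squared distance
x, w = [2.0, 8.0], [0.5, 0.5]
print(ces(x, w, rho=0.5), cobb_douglas(x, w))
print(bregman(lambda v: 0.5 * v @ v, lambda v: v, [1.0, 2.0], [3.0, 1.0]))
```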